JAMP Vol. 9 No. 4, April 2021
Approximate Analytical Solutions to the Heat and Stokes Equations on the Half-Line Obtained by Fokas’ Transform
Abstract: In this paper, we evaluate, by deriving exact formulas, the integrals that arise as solutions of the heat and Stokes equations under Fokas' transform method. Our method is more accurate and efficient than the contour deformation and parametrization used by Fokas to compute these integrals. In fact, for the heat equation, our solution is exact up to the imaginary error function, and for the Stokes equation, our solution is exact up to the incomplete Airy function. In addition, our solutions extend to the lateral boundary without convergence issues, admit asymptotic expansions, and are much faster to compute than those obtained by other methods.

1. Introduction

Fokas' method (Fokas, 1997 [1] and 2002 [2]) is a recently introduced approach for solving a large class of PDEs. Also known as the unified transform, this method extends classical approaches, such as separation of variables or the scattering transform, contains them as special cases, and gives solutions in situations where the classical methods cannot.

Classical methods perform poorly on spatial domains with a boundary when the boundary conditions are inhomogeneous. In this context, the key contribution of Fokas is the so-called “global relation”, which combines specified and unknown values of the solution and its derivatives on the boundary.

Fokas’ method gives the solutions as integrals on an unbounded contour in the complex plane. For example, for the heat and Stokes (first kind) equations on the half-line

$$q_t(x,t) - q_{xx}(x,t) = 0,\quad 0 < x < \infty,\ 0 < t,\qquad q(x,0) = q_0(x),\ 0 \le x,\qquad q(0,t) = g_0(t),\ 0 \le t, \tag{1.1}$$

and

$$q_t(x,t) + q_{xxx}(x,t) = 0,\quad 0 < x < \infty,\ 0 < t,\qquad q(x,0) = q_0(x),\ 0 \le x,\qquad q(0,t) = g_0(t),\ 0 \le t, \tag{1.2}$$

respectively, this method constructs the solutions q ( x , t ) as integrals (2.1) and (2.2) in the complex plane involving an x-transform of the initial condition q 0 and a t-transform of the boundary condition g 0 .

To evaluate these integrals numerically, Flyer-Fokas (2008, [3]), Fokas et al. (2009, [4]), and Papatheodorou-Kandili (2009, [5]) used the method of steepest descent: they deformed the contour in the complex plane, parametrized the resulting curve, and then applied the simple trapezoidal rule to compute the integral.

Following Flyer-Fokas [3] and for the sake of enabling a comparison, we make the choice

$$q_0(x) = 0, \qquad g_0(t) = \sin(\lambda t).$$

However, any reasonable (e.g., tempered distribution) Dirichlet boundary condition $g_0(t)$ can be decomposed into a superposition of sines using the Fourier sine transform. Thus, we can solve the above equations for general boundary data. Initial data are simpler, to the point that Fokas' method is not even needed to handle them; they will be treated elsewhere.
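As a quick numerical illustration of this decomposition, the following sketch verifies the sine-transform pair for a sample boundary function; the choice $g_0(t) = e^{-t}$ is a hypothetical example (not taken from this paper), whose sine transform is known in closed form.

```python
import numpy as np

def sine_transform(g, t_grid, lam):
    """Forward sine transform sqrt(2/pi) * int_0^inf g(t) sin(lam t) dt (trapezoid)."""
    return np.sqrt(2/np.pi) * np.trapz(g(t_grid) * np.sin(lam * t_grid), t_grid)

# Hypothetical example: g0(t) = exp(-t), whose sine transform is
#   sqrt(2/pi) * lam / (1 + lam^2).
g0 = lambda t: np.exp(-t)
t = np.linspace(0.0, 40.0, 200_001)   # e^-40 ~ 4e-18, so truncation is negligible
lam = 1.7
numeric = sine_transform(g0, t, lam)
exact = np.sqrt(2/np.pi) * lam / (1 + lam**2)
print(numeric, exact)
```

Any boundary datum decomposed this way can then be propagated by superposing the single-frequency solutions derived below.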

In this paper, we evaluate the integrals (2.1) and (2.2) by an alternative approach. First, we obtain a closed formula for the solution q ( x , t ) of the heat Equation (1.1) in terms of elementary functions and one special function, namely the imaginary error function erfi:

$$q(x,t) = \frac{e^{-x^2/4t}}{2}\,\Re\left[e^{-z_1^2}\bigl(i - \operatorname{erfi}(z_1)\bigr) + e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr)\right],$$

where $z_1$ and $z_2$ are defined later. Second, we obtain an exact expression for the solution of Stokes' Equation (1.2) in terms of elementary functions and another special function, the incomplete Airy function:

$$q(x,t) = \frac{1}{2}\sin(\lambda t + \lambda^{1/3}x) + e^{-\frac{\sqrt3}{2}\lambda^{1/3}x}\sin\!\left(\lambda t - \frac{\lambda^{1/3}x}{2}\right) + \frac{1}{4}\sum_{i=1}^{6}\epsilon_i\,F(\alpha_i^*, x^*),$$

where the $F(\alpha_i^*, x^*)$ are expressed through the incomplete Airy functions and will be determined later, the $\alpha_i$ are the six roots of $k^6 = \lambda^2$, $\epsilon_i = +1$ when $\alpha_i^3 = \lambda$ and $\epsilon_i = -1$ when $\alpha_i^3 = -\lambda$, and the rescaled quantities are $x^* = (3t)^{-1/3}x$ and $\alpha_i^* = (3t)^{1/3}\alpha_i$.

The above formulas lend themselves well to applications, since there exist fast and highly accurate methods for computing the imaginary error function and the incomplete Airy functions. For example, the imaginary error function is a standard function in MATLAB, where it is computed to roughly $10^{-20}$ accuracy by means of Padé approximants (Cody, 1969 [6]). Hence, our numerical solution is more accurate than Fokas' solution, and even than that of Papatheodorou-Kandili (2009, [5]), who obtained a solution with $10^{-15}$ accuracy for the heat equation on a finite spatial domain. We also obtained speed improvements of at least an order of magnitude; see Appendix A.

A related problem is that, as stated in Flyer-Fokas: “for $x = 0$, the integration interval is infinite and any truncation will result in the integral not converging exactly to the boundary condition”. By contrast, our exact expressions for the solutions extend all the way to the lateral boundary $x = 0$ without this convergence issue (they are still ill-behaved as $t \to 0$, though).

In this paper, we also derive the asymptotic expansion for the heat equation solution with precise bounds for the error term, which allows one to compute the solution with arbitrarily high precision. This only requires evaluating elementary functions and the imaginary error function erfi.

The paper is organized as follows. In the next section, we briefly describe Fokas’ method as applied to Equations (1.1) and (1.2). In Section 3 we derive the exact formulas for the solutions. In Section 4 we obtain an asymptotic expansion for the heat equation solution, together with an estimate for the error size. Finally, in the Appendix B, we describe the numerical implementation of our scheme and compare the results to those of Flyer-Fokas.

2. Fokas’ Integral Solutions

In this section, we present the solutions of Equations (1.1) and (1.2) given by Fokas, which take the form of integrals. For the heat equation on the half-line with zero initial condition ($q_0(x) = 0$) and Dirichlet boundary condition ($g_0(t) = \sin(\lambda t)$), the solution is:

$$q(x,t) = \frac{1}{2\pi}\int_{\partial D^+}\left(\frac{e^{-i\lambda t} - e^{-k^2 t}}{k^2 - i\lambda} - \frac{e^{i\lambda t} - e^{-k^2 t}}{k^2 + i\lambda}\right) k\,e^{ixk}\,dk \tag{2.1}$$

For the Stokes’ equation of the first kind with the same initial and boundary conditions as above the solution is:

$$q(x,t) = \frac{3}{4\pi}\int_{\partial D^+}\left(\frac{e^{i\lambda t} - e^{itk^3}}{\lambda - k^3} + \frac{e^{-i\lambda t} - e^{itk^3}}{\lambda + k^3}\right) k^2\,e^{ixk}\,dk \tag{2.2}$$

To evaluate these integrals numerically, Flyer-Fokas (2008, [3]) and Fokas et al. (2009, [4]) deformed the contour of integration to a path in the region where the integrand decays exponentially fast for large $k$. Specifically, in order to obtain rapid convergence of the numerical scheme, they deformed the contour of integration to a hyperbola above the real axis and asymptotically below $\partial D^+$. They then mapped the hyperbola from the complex plane to the real line using the following parametrization:

$$k(\theta) = i\gamma\sin(\alpha - i\theta) = \gamma(\cos\alpha\sinh\theta + i\sin\alpha\cosh\theta), \tag{2.3}$$

which maps points θ on the real line to points k ( θ ) on a hyperbola in the complex plane. The integral in Equation (2.1) becomes (also see Formula (3.8) in Flyer-Fokas):

$$q(x,t) = \frac{\gamma}{2\pi}\int_{-\infty}^{+\infty}\left(\frac{e^{-i\lambda t} - e^{-k(\theta)^2 t}}{k(\theta)^2 - i\lambda} - \frac{e^{i\lambda t} - e^{-k(\theta)^2 t}}{k(\theta)^2 + i\lambda}\right) k(\theta)\,e^{ixk(\theta)}\cos(\alpha - i\theta)\,d\theta. \tag{2.4}$$

The parameters were set to $\alpha = \pi/8$ and $\gamma = 1$. After mapping the integral to the real line, Flyer-Fokas truncated the infinite integration interval to a finite one and used the simple trapezoidal rule. The same deformation-mapping procedure was used for the Stokes Equation (2.2), except that $\alpha = \pi/8$ was replaced with $\pi/6$:

$$q(x,t) = \frac{3\gamma}{4\pi}\int_{-\infty}^{+\infty}\left(\frac{e^{i\lambda t} - e^{ik(\theta)^3 t}}{\lambda - k(\theta)^3} + \frac{e^{-i\lambda t} - e^{ik(\theta)^3 t}}{\lambda + k(\theta)^3}\right) k(\theta)^2\,e^{ixk(\theta)}\cos(\alpha - i\theta)\,d\theta. \tag{2.5}$$
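The deformation-parametrization-trapezoid procedure above can be sketched as follows for the heat integral (2.4). This is a minimal illustration, assuming the sign conventions of the reconstruction used in this paper and the parameters $\alpha = \pi/8$, $\gamma = 1$; the sanity check compares two truncation/resolution levels and the (theoretically zero) imaginary part.

```python
import numpy as np

def q_heat_contour(x, t, lam, alpha=np.pi/8, gamma=1.0, L=6.0, N=4001):
    """Trapezoidal evaluation of the heat solution on the hyperbola
    k(theta) = i*gamma*sin(alpha - i*theta), following the Flyer-Fokas scheme.
    Sign conventions of the exponentials are assumptions of this sketch."""
    th = np.linspace(-L, L, N)
    k = 1j * gamma * np.sin(alpha - 1j * th)     # the hyperbola (2.3)
    dk = gamma * np.cos(alpha - 1j * th)         # k'(theta)
    E = np.exp(-k**2 * t)
    integrand = ((np.exp(-1j*lam*t) - E) / (k**2 - 1j*lam)
                 - (np.exp(1j*lam*t) - E) / (k**2 + 1j*lam)) * k * np.exp(1j*x*k) * dk
    return np.trapz(integrand, th) / (2 * np.pi)

q1 = q_heat_contour(1.0, 1.0, 1.0)
q2 = q_heat_contour(1.0, 1.0, 1.0, L=7.0, N=8001)
print(q1)
```

The combined integrand is entire (the apparent poles at $k^2 = \pm i\lambda$ are removable), which is what makes the deformation to the hyperbola legitimate; the trapezoidal rule then converges rapidly because the integrand decays exponentially along the contour for $x > 0$.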

3. Exact Formulas for the Solutions

3.1. The Heat Equation

Using several changes of variables and contours, as well as Cauchy's residue theorem, we obtain a more manageable expression for the solution $q(x,t)$ of the heat Equation (1.1). Our starting point is identity (2.1). Define the following four quantities $z_i$, $1 \le i \le 4$, which will appear in the computation:

$$z_{1,2,3,4} = \frac{ix}{2\sqrt{t}} \pm \sqrt{\pm i\lambda t} = \pm\sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} \pm \sqrt{\frac{\lambda t}{2}}\right).$$

More precisely, let

$$z_1 = -\sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} - \sqrt{\frac{\lambda t}{2}}\right),\qquad z_2 = \sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} + \sqrt{\frac{\lambda t}{2}}\right),$$
$$z_3 = -\sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} + \sqrt{\frac{\lambda t}{2}}\right),\qquad z_4 = \sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} - \sqrt{\frac{\lambda t}{2}}\right).$$

They have the property that $z_1 = -\overline{z_4}$ and $z_2 = -\overline{z_3}$. Consequently, $z_1^2 = \overline{z_4^2}$ and $z_2^2 = \overline{z_3^2}$. We also note for future reference that

$$z_1^2 = -\frac{x^2}{4t} + x\sqrt{\frac{\lambda}{2}} - i\left(x\sqrt{\frac{\lambda}{2}} - \lambda t\right) = \overline{z_4^2},\qquad z_2^2 = -\frac{x^2}{4t} - x\sqrt{\frac{\lambda}{2}} + i\left(x\sqrt{\frac{\lambda}{2}} + \lambda t\right) = \overline{z_3^2}.$$

Lemma 3.1. The solution of Equation (1.1) is given by:

$$q(x,t) = \begin{cases} e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\frac{\lambda}{2}}\right) - \dfrac{e^{-x^2/4t}}{4\pi}\left(\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk - \sum_{i=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk\right), & x < t\sqrt{2\lambda},\\[2mm] -\dfrac{e^{-\lambda t/2}}{4\pi}\left(\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk - \sum_{i=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk\right), & x = t\sqrt{2\lambda},\\[2mm] -\dfrac{e^{-x^2/4t}}{4\pi}\left(\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk - \sum_{i=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk\right), & x > t\sqrt{2\lambda}, \end{cases} \tag{3.1}$$

with the integrals understood in the principal value sense when the pole $-z_i$ is real.

Proof. First, we convert the integral along the infinite contour to an integral on the real line. From (2.1), we have

$$q(x,t) = \frac{1}{2\pi}\int_{\partial D^+}\frac{e^{-i\lambda t}(k^3 + i\lambda k) + e^{i\lambda t}(i\lambda k - k^3) - 2i\lambda k\,e^{-tk^2}}{k^4 + \lambda^2}\,e^{ixk}\,dk. \tag{3.2}$$

We consider the contour $C = [-R, R] \cup C_{R_2} \cup C_{\partial D^+} \cup C_{R_1}$, where $C_{\partial D^+}$ is the part of $C$ on the boundary of the domain $D^+$ and $C_{R_1}$ and $C_{R_2}$ are circular arcs of radius $R$. Let $f(k)$ denote the integrand. On the contour $C$ there are two removable singularities, at $k = \sqrt{i\lambda}$ and at $k = -\sqrt{-i\lambda}$. There are no poles either inside or on the contour. Using the analyticity of the integrand, we have:

$$\oint_C f(k)\,dk = \int_{-R}^{R} f(k)\,dk + \int_{C_{R_2}} f(k)\,dk + \int_{C_{\partial D^+}} f(k)\,dk + \int_{C_{R_1}} f(k)\,dk = 0.$$

The contributions of the integrals along $C_{R_1}$ and $C_{R_2}$ vanish as $R \to \infty$, by Jordan's lemma. Equation (3.2) can be written as:

$$q(x,t) = \frac{e^{-i\lambda t}}{2\pi}\int_{\partial D^+}\frac{k\,e^{ixk}}{k^2 - i\lambda}\,dk - \frac{e^{i\lambda t}}{2\pi}\int_{\partial D^+}\frac{k\,e^{ixk}}{k^2 + i\lambda}\,dk - \frac{i\lambda}{\pi}\int_{\partial D^+}\frac{k\,e^{-tk^2 + ixk}}{k^4 + \lambda^2}\,dk. \tag{3.3}$$

Along $C_{R_1}$, $k = Re^{i\theta}$ for $0 \le \theta \le \frac{\pi}{4}$, and $dk = iRe^{i\theta}\,d\theta$. The first integral in (3.3) becomes

$$\int_0^{\pi/4}\frac{iR^2e^{2i\theta}\,e^{ixRe^{i\theta}}}{R^2e^{2i\theta} - i\lambda}\,d\theta.$$

We will show that the integral of the modulus of the integrand converges to 0 as $R \to \infty$, and therefore the integral itself converges to 0 as well:

$$\left|\int_0^{\pi/4}\frac{iR^2e^{2i\theta}\,e^{ixRe^{i\theta}}}{R^2e^{2i\theta} - i\lambda}\,d\theta\right| \le \int_0^{\pi/4}\frac{R^2e^{-xR\sin\theta}}{|R^2e^{2i\theta} - i\lambda|}\,d\theta \le \frac{R^2}{R^2 - \lambda}\int_0^{\pi/4}e^{-xR\sin\theta}\,d\theta.$$

Therefore, as $R \to \infty$, the integral converges to 0, since $\sin\theta \ge 0$ for $0 \le \theta \le \frac{\pi}{4}$.

For the second integral in (3.3), we have

$$\left|\int_0^{\pi/4}\frac{iR^2e^{2i\theta}\,e^{ixRe^{i\theta}}}{R^2e^{2i\theta} + i\lambda}\,d\theta\right| \le \int_0^{\pi/4}\frac{R^2e^{-xR\sin\theta}}{|R^2e^{2i\theta} + i\lambda|}\,d\theta \le \int_0^{\pi/4}\frac{R^2e^{-xR\sin\theta}}{|R^2e^{2i\theta}|}\,d\theta = \int_0^{\pi/4}e^{-xR\sin\theta}\,d\theta,$$

which also converges to 0 as $R \to \infty$.

For the third integral, we have

$$\left|\int_0^{\pi/4}\frac{Re^{i\theta}\,e^{-tR^2e^{2i\theta}}\,e^{ixRe^{i\theta}}\,iRe^{i\theta}}{R^4e^{4i\theta} + \lambda^2}\,d\theta\right| \le \int_0^{\pi/4}\frac{R^2e^{-tR^2\cos2\theta - xR\sin\theta}}{|R^4e^{4i\theta} + \lambda^2|}\,d\theta \le \frac{R^2}{R^4 - \lambda^2}\int_0^{\pi/4}e^{-tR^2\cos2\theta - xR\sin\theta}\,d\theta,$$

which converges to 0 as well as $R \to \infty$, since $\sin\theta \ge 0$ and $\cos2\theta \ge 0$ for $0 \le \theta \le \frac{\pi}{4}$.

The three integrals also vanish along $C_{R_2}$, since $\sin\theta \ge 0$ and $\cos2\theta \ge 0$ for $\frac{3\pi}{4} \le \theta \le \pi$. Also, as $R \to \infty$, the contour $C_{\partial D^+}$ becomes $\partial D^+$. Therefore, as $R \to \infty$,

$$\int_{\partial D^+} f(k)\,dk = \int_{-\infty}^{+\infty} f(k)\,dk,$$

and

$$q(x,t) = \frac{e^{-i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 - i\lambda}\,dk - \frac{e^{i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 + i\lambda}\,dk - \frac{i\lambda}{\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{-tk^2 + ixk}}{k^4 + \lambda^2}\,dk. \tag{3.4}$$

Now, we evaluate each of these three integrals. The first and second are computed directly using the residue theorem; the third is computed using a different strategy.

For the first integral, we consider a contour that is the boundary of an upper semicircular region of radius large enough to include the pole $\sqrt{i\lambda}$. Using the residue theorem and Jordan's lemma, we get:

$$\frac{e^{-i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 - i\lambda}\,dk = \frac{e^{-i\lambda t}}{2\pi}\,2\pi i\,\operatorname{Res}\bigl(\sqrt{i\lambda}\bigr) = \frac{i}{2}\,e^{-x\sqrt{\lambda/2} + i\left(x\sqrt{\lambda/2} - \lambda t\right)}. \tag{3.5}$$

In this integral, we need not consider the residue at the other pole $-\sqrt{i\lambda}$, since it is not inside the contour. The same holds for the second integral: we consider a contour that is the boundary of an upper semicircular region of radius large enough to include the pole $-\sqrt{-i\lambda}$. Using the residue theorem, we get:

$$-\frac{e^{i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 + i\lambda}\,dk = -\frac{i}{2}\,e^{-x\sqrt{\lambda/2} - i\left(x\sqrt{\lambda/2} - \lambda t\right)}. \tag{3.6}$$

In this integral as well, we need not consider the residue at the other pole $\sqrt{-i\lambda}$, since it is not inside the contour. Summing (3.5) and (3.6) gives:

$$\frac{e^{-i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 - i\lambda}\,dk - \frac{e^{i\lambda t}}{2\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{ixk}}{k^2 + i\lambda}\,dk = e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\frac{\lambda}{2}}\right). \tag{3.7}$$

For the third integral in (3.4), we use the partial fraction decomposition of $\frac{k}{k^4+\lambda^2}$:

$$\frac{k}{k^4 + \lambda^2} = \frac{1}{4i\lambda}\left(\frac{1}{k - \sqrt{i\lambda}} + \frac{1}{k + \sqrt{i\lambda}} - \frac{1}{k - \sqrt{-i\lambda}} - \frac{1}{k + \sqrt{-i\lambda}}\right). \tag{3.8}$$
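The decomposition (3.8) is easy to sanity-check numerically at an arbitrary test point (the values of $\lambda$ and $k$ below are arbitrary illustrations):

```python
import cmath

# Check (3.8): k/(k^4 + lam^2) equals
# (1/(4i*lam)) * ( 1/(k-r) + 1/(k+r) - 1/(k-s) - 1/(k+s) ),
# where r = sqrt(i*lam) and s = sqrt(-i*lam).
lam = 2.3
r = cmath.sqrt(1j * lam)
s = cmath.sqrt(-1j * lam)
k = 0.7 - 1.9j                      # arbitrary test point away from the poles
lhs = k / (k**4 + lam**2)
rhs = (1/(4j*lam)) * (1/(k - r) + 1/(k + r) - 1/(k - s) - 1/(k + s))
print(abs(lhs - rhs))
```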

We get:

$$-\frac{i\lambda}{\pi}\int_{-\infty}^{+\infty}\frac{k\,e^{-tk^2+ixk}}{k^4+\lambda^2}\,dk = -\frac{1}{4\pi}\left(\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{i\lambda}}\,dk + \int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k+\sqrt{i\lambda}}\,dk - \int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{-i\lambda}}\,dk - \int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k+\sqrt{-i\lambda}}\,dk\right). \tag{3.9}$$

We next compute these four integrals. After changing the variable $k$ to $\tilde k = k\sqrt{t}$ and completing the square in $\tilde k$, we have:

• The first integral:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{i\lambda}}\,dk = e^{-\frac{x^2}{4t}}\int_{-\infty}^{+\infty}\frac{e^{-\left(\tilde k - \frac{ix}{2\sqrt{t}}\right)^2}}{\tilde k - \sqrt{i\lambda t}}\,d\tilde k.$$

Another change of variable, $k' = \tilde k - \frac{ix}{2\sqrt{t}}$, gives:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{i\lambda}}\,dk = e^{-\frac{x^2}{4t}}\int_{-\infty - \frac{ix}{2\sqrt{t}}}^{+\infty - \frac{ix}{2\sqrt{t}}}\frac{e^{-k'^2}}{k' + z_1}\,dk', \tag{3.10}$$

where $z_1 = \frac{ix}{2\sqrt{t}} - \sqrt{i\lambda t} = -\sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} - \sqrt{\frac{\lambda t}{2}}\right)$.

The integrand on the right-hand side of (3.10) has $-z_1$ as a pole. Deforming the contour back to the real line picks up a residue at $-z_1$ exactly when $\operatorname{Im}(-z_1) = \sqrt{\frac{\lambda t}{2}} - \frac{x}{2\sqrt{t}}$ is non-positive (Cauchy's theorem). If $x \ge t\sqrt{2\lambda}$, there is a residue contribution; if $x < t\sqrt{2\lambda}$, the pole is not crossed. Therefore, the first integral is:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{i\lambda}}\,dk = \begin{cases} e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_1}\,dk, & x < t\sqrt{2\lambda},\\[2mm] e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_1}\,dk + \pi i\,\operatorname{Res}(z_1), & x = t\sqrt{2\lambda},\\[2mm] e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_1}\,dk + 2\pi i\,\operatorname{Res}(z_1), & x > t\sqrt{2\lambda}, \end{cases}$$

where the integral is interpreted in the principal value sense when $z_1 \in \mathbb{R}$, and

$$\operatorname{Res}(z_1) = \lim_{k \to -z_1} e^{-\frac{x^2}{4t}}\,(k+z_1)\,\frac{e^{-k^2}}{k+z_1} = e^{-\frac{x^2}{4t} - z_1^2} = e^{-x\sqrt{\lambda/2} + i\left(x\sqrt{\lambda/2} - \lambda t\right)}. \tag{3.11}$$

• The second integral:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k+\sqrt{i\lambda}}\,dk = e^{-\frac{x^2}{4t}}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_2}\,dk,$$

where $z_2 = \sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} + \sqrt{\frac{\lambda t}{2}}\right)$. Note that in this case there is no residue associated with the pole $-z_2$, since it is never crossed when deforming the contour to the real line.

• The third integral:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k-\sqrt{-i\lambda}}\,dk = e^{-\frac{x^2}{4t}}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_3}\,dk,$$

where $z_3 = -\sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} + \sqrt{\frac{\lambda t}{2}}\right)$. Note also that deforming the contour to the real line in this case does not produce a residue, since the pole $-z_3$ is never crossed.

• The fourth integral:

$$\int_{-\infty}^{+\infty}\frac{e^{-tk^2+ixk}}{k+\sqrt{-i\lambda}}\,dk = \begin{cases} e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_4}\,dk, & x < t\sqrt{2\lambda},\\[2mm] e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_4}\,dk + \pi i\,\operatorname{Res}(z_4), & x = t\sqrt{2\lambda},\\[2mm] e^{-\frac{x^2}{4t}}\displaystyle\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_4}\,dk + 2\pi i\,\operatorname{Res}(z_4), & x > t\sqrt{2\lambda}, \end{cases}$$

where $z_4 = \sqrt{\frac{\lambda t}{2}} + i\left(\frac{x}{2\sqrt{t}} - \sqrt{\frac{\lambda t}{2}}\right)$. Deforming the contour to the real line produces a residue or not depending on the location of the pole $-z_4$. The residue is:

$$\operatorname{Res}(z_4) = \lim_{k \to -z_4} e^{-\frac{x^2}{4t}}\,(k+z_4)\,\frac{e^{-k^2}}{k+z_4} = e^{-\frac{x^2}{4t} - z_4^2} = e^{-x\sqrt{\lambda/2} - i\left(x\sqrt{\lambda/2} - \lambda t\right)}. \tag{3.12}$$

The difference of the residues at $z_1$ and $z_4$ is

$$\operatorname{Res}(z_1) - \operatorname{Res}(z_4) = e^{-\frac{x^2}{4t}}\left(e^{-z_1^2} - e^{-z_4^2}\right) = 2i\,e^{-x\sqrt{\lambda/2}}\sin\!\left(x\sqrt{\frac{\lambda}{2}} - \lambda t\right),$$

which vanishes in the boundary case $x = t\sqrt{2\lambda}$ and, in the case $x > t\sqrt{2\lambda}$, exactly cancels the sine term (3.7). Therefore, the sum of (3.7) and (3.9) produces (3.1), which is what we wanted to prove.

We note that

$$F(z) = \int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z}\,dk$$

is odd and satisfies $F(\bar z) = \overline{F(z)}$. Since $z_1 = -\overline{z_4}$ and $z_2 = -\overline{z_3}$, it follows that $F(z_4) = -\overline{F(z_1)}$ and $F(z_3) = -\overline{F(z_2)}$. Then

$$\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk - \sum_{i=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk = 2\,\Re\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk. \tag{3.13}$$

Therefore, Equation (3.1) becomes:

$$q(x,t) = \begin{cases} e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\frac{\lambda}{2}}\right) - \dfrac{e^{-x^2/4t}}{2\pi}\,\Re\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk, & x < t\sqrt{2\lambda},\\[2mm] -\dfrac{e^{-\lambda t/2}}{2\pi}\,\Re\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk, & x = t\sqrt{2\lambda},\\[2mm] -\dfrac{e^{-x^2/4t}}{2\pi}\,\Re\displaystyle\sum_{i=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk, & x > t\sqrt{2\lambda}. \end{cases} \tag{3.14}$$

3.2. The Stokes Equation of the First Kind

Lemma 3.2. The solution of Equation (1.2) is given by:

$$q(x,t) = \frac{1}{2}\sin(\lambda t + \lambda^{1/3}x) + e^{-\frac{\sqrt3}{2}\lambda^{1/3}x}\sin\!\left(\lambda t - \frac{\lambda^{1/3}x}{2}\right) + \frac{1}{4\pi}\sum_{i=1}^{6}\epsilon_i\int_{-\infty}^{+\infty}\frac{e^{i(tk^3+xk)}}{k-\alpha_i}\,dk, \tag{3.15}$$

where $\alpha_1 = \lambda^{1/3}$, $\alpha_2 = -\lambda^{1/3}$, $\alpha_3 = \left(\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$, $\alpha_4 = -\left(\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$, $\alpha_5 = \left(\frac{1}{2} - i\frac{\sqrt3}{2}\right)\lambda^{1/3}$, $\alpha_6 = -\left(\frac{1}{2} - i\frac{\sqrt3}{2}\right)\lambda^{1/3}$ are the six roots of $k^6 = \lambda^2$, the integrals are principal values, and $\epsilon_i = +1$ when $\alpha_i^3 = \lambda$ while $\epsilon_i = -1$ when $\alpha_i^3 = -\lambda$.

Proof. From Equation (2.2), we have that

$$q(x,t) = \frac{3}{4\pi}\int_{\partial D^+} e^{ixk}\,\frac{e^{i\lambda t}(\lambda k^2 + k^5) + e^{-i\lambda t}(\lambda k^2 - k^5) - 2\lambda k^2 e^{itk^3}}{\lambda^2 - k^6}\,dk. \tag{3.16}$$

We convert the integral along the infinite contour to an integral on the real line. Consider the contour $C = [-R, R] \cup C_{R_1} \cup C_{\partial D^+} \cup C_{R_2}$, where $C_{\partial D^+}$ is the part of $C$ on the boundary of the domain $D^+$ and $C_{R_1}$ and $C_{R_2}$ are circular arcs of radius $R$. Let $f(k)$ denote the integrand. On the contour $C$, there are four apparent poles: $k_1 = \lambda^{1/3}$, $k_2 = -\lambda^{1/3}$, $k_3 = \left(\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$ and $k_4 = \left(-\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$. Using the analyticity of the integrand, we obtain:

$$\oint_C f(k)\,dk = \int_{-R}^{R} f(k)\,dk + \int_{C_{R_1}} f(k)\,dk + \int_{C_{\partial D^+}} f(k)\,dk + \int_{C_{R_2}} f(k)\,dk = \pi i\left\{\operatorname{Res}(k_1) + \operatorname{Res}(k_2) + 2\operatorname{Res}(k_3) + 2\operatorname{Res}(k_4)\right\} = 0,$$

because the residues at $k_1$, $k_2$, $k_3$ and $k_4$ are all equal to 0: the numerator of (3.16) vanishes whenever $k^3 = \pm\lambda$, so these singularities are removable.

The contributions of the integrals along $C_{R_1}$ and $C_{R_2}$ vanish as $R \to \infty$, as shown next. Hence $q(x,t)$ can be written as:

$$q(x,t) = \frac{3e^{i\lambda t}}{4\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx}}{\lambda - k^3}\,dk + \frac{3e^{-i\lambda t}}{4\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx}}{\lambda + k^3}\,dk + \frac{3\lambda}{2\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx + itk^3}}{k^6 - \lambda^2}\,dk.$$

Along $C_{R_1}$, $k = Re^{i\theta}$ for $0 \le \theta \le \frac{\pi}{3}$, and $dk = iRe^{i\theta}\,d\theta$. Therefore:

• The first integral becomes

$$\int_0^{\pi/3}\frac{iR^3e^{3i\theta}\,e^{ixRe^{i\theta}}}{\lambda - R^3e^{3i\theta}}\,d\theta,$$

so that

$$\left|\int_0^{\pi/3}\frac{iR^3e^{3i\theta}\,e^{ixRe^{i\theta}}}{\lambda - R^3e^{3i\theta}}\,d\theta\right| \le \int_0^{\pi/3}\frac{R^3e^{-xR\sin\theta}}{|\lambda - R^3e^{3i\theta}|}\,d\theta \le \frac{R^3}{R^3 - \lambda}\int_0^{\pi/3}e^{-xR\sin\theta}\,d\theta;$$

as $R \to \infty$, the integral converges to 0, since $\sin\theta \ge 0$ for $0 \le \theta \le \frac{\pi}{3}$.

• For the second integral, we have:

$$\left|\int_0^{\pi/3}\frac{iR^3e^{3i\theta}\,e^{ixRe^{i\theta}}}{\lambda + R^3e^{3i\theta}}\,d\theta\right| \le \int_0^{\pi/3}\frac{R^3e^{-xR\sin\theta}}{|\lambda + R^3e^{3i\theta}|}\,d\theta \le \frac{R^3}{R^3 - \lambda}\int_0^{\pi/3}e^{-xR\sin\theta}\,d\theta,$$

which converges to 0 as $R \to \infty$.

• The third integral satisfies

$$\left|\int_0^{\pi/3}\frac{i\lambda R^3e^{3i\theta}\,e^{itR^3e^{3i\theta}}\,e^{ixRe^{i\theta}}}{R^6e^{6i\theta} - \lambda^2}\,d\theta\right| \le \frac{\lambda R^3}{R^6 - \lambda^2}\int_0^{\pi/3}e^{-xR\sin\theta - tR^3\sin3\theta}\,d\theta,$$

which converges to 0 as $R \to \infty$, since $\sin\theta \ge 0$ and $\sin3\theta \ge 0$ for $0 \le \theta \le \frac{\pi}{3}$.

Along $C_{R_2}$, the three integrals vanish as well, since $\sin\theta \ge 0$ and $\sin3\theta \ge 0$ for $\frac{2\pi}{3} \le \theta \le \pi$.

Also, as $R \to \infty$, the contour $C_{\partial D^+}$ becomes $\partial D^+$, so that $\int_{\partial D^+} f(k)\,dk = \int_{-\infty}^{+\infty} f(k)\,dk$. Therefore, the contour is deformed to the real axis:

$$q(x,t) = \frac{3}{4\pi}\int_{-\infty}^{+\infty} k^2 e^{ikx}\left(\frac{e^{i\lambda t} - e^{ik^3t}}{\lambda - k^3} + \frac{e^{-i\lambda t} - e^{ik^3t}}{\lambda + k^3}\right)dk = I_1 + I_2 + I_3, \tag{3.17}$$

where

$$I_1 = \frac{3e^{i\lambda t}}{4\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx}}{\lambda - k^3}\,dk,\qquad I_2 = \frac{3e^{-i\lambda t}}{4\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx}}{\lambda + k^3}\,dk,\qquad I_3 = \frac{3\lambda}{2\pi}\int_{-\infty}^{+\infty}\frac{k^2 e^{ikx + itk^3}}{k^6 - \lambda^2}\,dk.$$

The first and second integrals are computed directly using the residue theorem. For the first one, the roots of the denominator of the integrand are $k_1 = \lambda^{1/3}$, $k_2 = \left(-\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$ and $k_3 = \left(-\frac{1}{2} - i\frac{\sqrt3}{2}\right)\lambda^{1/3}$. We therefore consider a contour that is the boundary of an upper semicircular region of radius large enough to include the poles $k_1$ (on the real axis, contributing half a residue) and $k_2$. We do not need to consider the residue at the other pole $k_3$, since it is not inside the contour. Therefore:

$$I_1 = \frac{3e^{i\lambda t}}{4\pi}\left\{\pi i\operatorname{Res}(k_1) + 2\pi i\operatorname{Res}(k_2)\right\} = \frac{3i}{4}\,e^{i\lambda t}\left\{\operatorname{Res}(k_1) + 2\operatorname{Res}(k_2)\right\},$$

where (after some computations)

$$\operatorname{Res}(k_1) = -\frac{e^{i\lambda^{1/3}x}}{3},\qquad \operatorname{Res}(k_2) = -\frac{e^{-\left(\frac{\sqrt3}{2} + \frac{i}{2}\right)\lambda^{1/3}x}}{3}.$$

Substitution of $\operatorname{Res}(k_1)$ and $\operatorname{Res}(k_2)$ in $I_1$ gives

$$I_1 = -\frac{i}{4}\,e^{i\lambda t}\left\{e^{i\lambda^{1/3}x} + 2e^{-\left(\frac{\sqrt3}{2} + \frac{i}{2}\right)\lambda^{1/3}x}\right\}.$$

For the second integral, the roots of the denominator of the integrand are $k_1 = -\lambda^{1/3}$, $k_2 = \left(\frac{1}{2} + i\frac{\sqrt3}{2}\right)\lambda^{1/3}$ and $k_3 = \left(\frac{1}{2} - i\frac{\sqrt3}{2}\right)\lambda^{1/3}$. We again consider a contour that is the boundary of an upper semicircular region of radius large enough to include the poles $k_1$ and $k_2$; the residue at the other pole $k_3$ need not be considered, since it is not inside the contour. Therefore:

$$I_2 = \frac{3e^{-i\lambda t}}{4\pi}\left\{\pi i\operatorname{Res}(k_1) + 2\pi i\operatorname{Res}(k_2)\right\} = \frac{3i}{4}\,e^{-i\lambda t}\left\{\operatorname{Res}(k_1) + 2\operatorname{Res}(k_2)\right\},$$

where (after some computations)

$$\operatorname{Res}(k_1) = \frac{e^{-i\lambda^{1/3}x}}{3},\qquad \operatorname{Res}(k_2) = \frac{e^{\left(-\frac{\sqrt3}{2} + \frac{i}{2}\right)\lambda^{1/3}x}}{3}.$$

Substitution of $\operatorname{Res}(k_1)$ and $\operatorname{Res}(k_2)$ in $I_2$ gives

$$I_2 = \frac{i}{4}\,e^{-i\lambda t}\left\{e^{-i\lambda^{1/3}x} + 2e^{\left(-\frac{\sqrt3}{2} + \frac{i}{2}\right)\lambda^{1/3}x}\right\}.$$

Therefore,

$$I_1 + I_2 = \frac{1}{2}\sin(\lambda t + \lambda^{1/3}x) + e^{-\frac{\sqrt3}{2}\lambda^{1/3}x}\sin\!\left(\lambda t - \frac{\lambda^{1/3}x}{2}\right).$$

For the third integral, we use the partial fraction decomposition of $\frac{\lambda k^2}{k^6 - \lambda^2}$:

$$\frac{\lambda k^2}{k^6 - \lambda^2} = \frac{1}{6}\sum_{i=1}^{6}\frac{\epsilon_i}{k - \alpha_i},$$

where the $\alpha_i$ are the six roots of $k^6 = \lambda^2$ and $\epsilon_i = +1$ when $\alpha_i^3 = \lambda$, $\epsilon_i = -1$ when $\alpha_i^3 = -\lambda$.
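This decomposition, including the signs attached to the two families of roots, can be sanity-checked numerically (the values of $\lambda$ and $k$ are arbitrary illustrations):

```python
import cmath

# Check: lam*k^2/(k^6 - lam^2) equals
# (1/6) * [ sum over a^3 = lam of 1/(k-a)  -  sum over a^3 = -lam of 1/(k-a) ].
lam = 1.5
c = lam ** (1/3)
roots_plus = [c, c*cmath.exp(2j*cmath.pi/3), c*cmath.exp(-2j*cmath.pi/3)]  # a^3 = lam
roots_minus = [-a for a in roots_plus]                                     # a^3 = -lam
k = 1.1 + 0.6j                       # arbitrary test point away from the roots
lhs = lam * k**2 / (k**6 - lam**2)
rhs = (sum(1/(k - a) for a in roots_plus) - sum(1/(k - a) for a in roots_minus)) / 6
print(abs(lhs - rhs))
```

The identity follows from $\sum_{a^3 = \pm\lambda} \frac{1}{k-a} = \frac{3k^2}{k^3 \mp \lambda}$, the logarithmic derivative of $k^3 \mp \lambda$.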

Therefore,

$$q(x,t) = I_1 + I_2 + I_3 = \frac{1}{2}\sin(\lambda t + \lambda^{1/3}x) + e^{-\frac{\sqrt3}{2}\lambda^{1/3}x}\sin\!\left(\lambda t - \frac{\lambda^{1/3}x}{2}\right) + \frac{1}{4\pi}\sum_{i=1}^{6}\epsilon_i\int_{-\infty}^{+\infty}\frac{e^{i(tk^3+xk)}}{k-\alpha_i}\,dk,$$

which is (3.15).

In light of identities (3.1) and (3.15), to obtain exact formulas for the solutions of Equations (1.1) and (1.2) we are left with evaluating integrals of the form:

$$F(z_i) = \frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_i}\,dk$$

and

$$F(\alpha_i, x, t) = \frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{e^{i(tk^3+xk)}}{k-\alpha_i}\,dk.$$

These Cauchy integrals can be computed by means of the Hilbert transform. When restricted to the positive half-line $[0, +\infty)$, the first integral is known as the Goodwin-Staton integral, see Abramowitz and Stegun [7]; also see Dawson's integral. We compute both integrals in the following section in terms of special functions.

3.3. The Hilbert Transform

Lemma 3.3. Consider two entire functions $f$ and $g$ such that $f + ig$ is bounded on $\mathbb{C}_+$ and $f - ig$ is bounded on $\mathbb{C}_-$; then $Hf = g$. In particular, consider two entire functions $f$ and $g$ that are real-valued on the real line and such that $f + ig$ is bounded on $\mathbb{C}_+$. Then $Hf = g$.

Proof. Since $f + ig$ is bounded on $\mathbb{C}_+$, its Fourier support is contained in $[0, \infty)$ (that is, its Fourier transform vanishes on $(-\infty, 0)$). Therefore, for $\xi < 0$, $\hat f(\xi) + i\hat g(\xi) = 0$. Likewise, the boundedness of $f - ig$ on $\mathbb{C}_-$ implies that $\hat f(\xi) - i\hat g(\xi) = 0$ for $\xi > 0$. Hence, for $\xi \ne 0$, $\hat g(\xi) = -i\operatorname{sgn}(\xi)\,\hat f(\xi)$, and therefore $Hf = g$. At $\xi = 0$, under our boundedness assumptions, one cannot pick up anything worse than a constant, and the Hilbert transform is only defined up to a constant anyway.

For the second conclusion, by the Schwarz reflection principle $f(\bar z) = \overline{f(z)}$ and $g(\bar z) = \overline{g(z)}$, so $\overline{f(z) + ig(z)} = f(\bar z) - ig(\bar z)$. Hence the boundedness of $f + ig$ in the upper half-plane implies that of $f - ig$ in the lower half-plane, and we can fall back on the previous argument.

Regarding the error function: since $e^{-z^2}$ and $e^{-z^2}\operatorname{erfi}(z)$ are real-valued on the real axis and $e^{-z^2}\bigl(1 + i\operatorname{erfi}(z)\bigr)$ (the Faddeeva function) is bounded in the upper half-plane, it follows that the Hilbert transform of $e^{-k^2}$ is

$$\bigl(H e^{-k^2}\bigr)(z) = e^{-z^2}\operatorname{erfi}(z) = -i\,e^{-z^2}\operatorname{erf}(iz),$$

which is also an entire function, real-valued on the real line. Here the imaginary error function is defined as $\operatorname{erfi}(z) = -i\operatorname{erf}(iz)$, and $\operatorname{erf}(z)$ is the usual error function:

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-s^2}\,ds = 1 - \frac{2}{\sqrt{\pi}}\int_z^{\infty} e^{-s^2}\,ds.$$

For $z \in \mathbb{R}$, $\operatorname{erfi}(z)$ is real-valued too, as seen from the fact that

$$\operatorname{erfi}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{s^2}\,ds.$$

This Hilbert transform is closely related to Dawson's function and to the Mittag-Leffler function $E_{1/2}$: $H e^{-k^2} = \frac{2}{\sqrt{\pi}}D_+(z)$, where $D_+(z) = \frac{\sqrt{\pi}}{2}e^{-z^2}\operatorname{erfi}(z)$ is Dawson's integral; see [7].

Writing the Cauchy kernel as a combination of the Poisson and Hilbert kernels, we get:

$$\frac{1}{\pi}\,PV\int_{-\infty}^{+\infty}\frac{f(k)}{z-k}\,dk = \begin{cases} [Hf](z), & z \in \mathbb{R},\\ -if(z) + [Hf](z), & z \in \mathbb{C}_+,\\ if(z) + [Hf](z), & z \in \mathbb{C}_-. \end{cases} \tag{3.18}$$

Therefore:

$$\frac{1}{\pi}\,PV\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{z-k}\,dk = \begin{cases} \bigl(H e^{-k^2}\bigr)(z) = e^{-z^2}\operatorname{erfi}(z), & z \in \mathbb{R},\\ -ie^{-z^2} + \bigl(H e^{-k^2}\bigr)(z) = e^{-z^2}\bigl(-i + \operatorname{erfi}(z)\bigr), & z \in \mathbb{C}_+,\\ ie^{-z^2} + \bigl(H e^{-k^2}\bigr)(z) = e^{-z^2}\bigl(i + \operatorname{erfi}(z)\bigr), & z \in \mathbb{C}_-, \end{cases} \tag{3.19}$$

where $PV$ denotes the Cauchy principal value, $H$ the Hilbert transform, $\operatorname{erfi}(z)$ the imaginary error function, $\mathbb{C}_+ = \{z = x + iy : x, y \in \mathbb{R},\ y > 0\}$ the upper half complex plane, and $\mathbb{C}_-$ the lower half complex plane.

The expression (3.19) is analytic for $z \in \mathbb{C}_+$ and for $z \in \mathbb{C}_-$ separately, and is bounded on the complex plane, but it is not continuous, due to the jump discontinuity across the real axis. It decays at a rate $O\!\left(\frac{1}{|\operatorname{Im} z|}\right)$ as $|\operatorname{Im} z| \to \infty$. At this stage, we obtain an exact formula for the solution of the heat equation.
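The upper-half-plane case of (3.19) can be verified by direct quadrature at a sample point (no principal value is needed off the real axis; scipy's erfi accepts complex arguments):

```python
import numpy as np
from scipy.special import erfi   # complex-capable imaginary error function

# Check: (1/pi) int e^{-k^2}/(z - k) dk = e^{-z^2} * (-i + erfi(z)) for Im z > 0.
z = 0.4 + 1.0j                       # arbitrary test point in the upper half-plane
k = np.linspace(-15.0, 15.0, 600_001)
numeric = np.trapz(np.exp(-k**2) / (z - k), k) / np.pi
formula = np.exp(-z**2) * (-1j + erfi(z))
print(abs(numeric - formula))
```

For instance, at $z = i$ both sides reduce to the classical value $\frac{1}{\pi}\int\frac{e^{-k^2}}{i-k}\,dk = -i\,e\,\operatorname{erfc}(1)$.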

Proposition 3.4. The exact solution of Equation (1.1) is given by

$$q(x,t) = \frac{e^{-x^2/4t}}{2}\,\Re\left[e^{-z_1^2}\bigl(i - \operatorname{erfi}(z_1)\bigr) + e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr)\right]. \tag{3.20}$$

The above formula is valid for all $(x,t)$ with $x \ge 0$, $t > 0$ (the $z_i$ are not defined when $t = 0$). This agrees with the fact that, although the expression (3.1) is defined piecewise, the solution $q(x,t)$ is continuous up to the boundary and smooth for $t > 0$.

Proof. Of the four values $z_i$, $z_2$ and $z_3$ are always in the upper half-plane, while $z_1$ and $z_4$ can be in $\mathbb{C}_+$, $\mathbb{R}$, or $\mathbb{C}_-$. Thus, we have to distinguish three cases.

• If $x < t\sqrt{2\lambda}$, then $z_1, z_4 \in \mathbb{C}_-$ and $z_2, z_3 \in \mathbb{C}_+$. Applying (3.19) at $-z$ (note $\int\frac{e^{-k^2}}{k+z}\,dk = -\int\frac{e^{-k^2}}{(-z)-k}\,dk$), we obtain

$$PV\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z}\,dk = \begin{cases} \pi e^{-z_1^2}\bigl(i + \operatorname{erfi}(z_1)\bigr), & z = z_1,\\ -\pi e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr), & z = z_2,\\ -\pi e^{-z_3^2}\bigl(i - \operatorname{erfi}(z_3)\bigr), & z = z_3,\\ \pi e^{-z_4^2}\bigl(i + \operatorname{erfi}(z_4)\bigr), & z = z_4, \end{cases}$$

and from Equation (3.1) above,

$$q(x,t) = e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\frac{\lambda}{2}}\right) - \frac{e^{-x^2/4t}}{4\pi}\left[\sum_{j=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_j}\,dk - \sum_{j=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_j}\,dk\right]$$
$$= \frac{e^{-x^2/4t}}{2}\,\Re\left[2i\,e^{-z_1^2}\right] - \frac{e^{-x^2/4t}}{4}\left[e^{-z_1^2}\bigl(i+\operatorname{erfi}(z_1)\bigr) - e^{-z_2^2}\bigl(i-\operatorname{erfi}(z_2)\bigr) + e^{-z_3^2}\bigl(i-\operatorname{erfi}(z_3)\bigr) - e^{-z_4^2}\bigl(i+\operatorname{erfi}(z_4)\bigr)\right]$$
$$= \frac{e^{-x^2/4t}}{4}\left[\sum_{j=1}^{2} e^{-z_j^2}\bigl(i-\operatorname{erfi}(z_j)\bigr) + \overline{\sum_{j=1}^{2} e^{-z_j^2}\bigl(i-\operatorname{erfi}(z_j)\bigr)}\right] = \frac{e^{-x^2/4t}}{2}\,\Re\left[e^{-z_1^2}\bigl(i - \operatorname{erfi}(z_1)\bigr) + e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr)\right]. \tag{3.21}$$

Here we used the facts that $z_1 = -\overline{z_4}$ and $z_2 = -\overline{z_3}$, so that $e^{-z_4^2} = \overline{e^{-z_1^2}}$ and $\operatorname{erfi}(z_4) = -\overline{\operatorname{erfi}(z_1)}$, as well as the identity $e^{-x\sqrt{\lambda/2}}\sin\bigl(\lambda t - x\sqrt{\lambda/2}\bigr) = \frac{e^{-x^2/4t}}{2}\,\Re\left[2i\,e^{-z_1^2}\right]$.

• If $x = t\sqrt{2\lambda}$, then $z_1 = -\sqrt{\frac{\lambda t}{2}}$ and $z_4 = \sqrt{\frac{\lambda t}{2}}$ are real, while $z_2 = \sqrt{\frac{\lambda t}{2}} + i\sqrt{2\lambda t}$ and $z_3 = -\sqrt{\frac{\lambda t}{2}} + i\sqrt{2\lambda t}$ are in $\mathbb{C}_+$. Therefore,

$$PV\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z}\,dk = \begin{cases} \pi e^{-z_1^2}\operatorname{erfi}(z_1), & z = z_1,\\ -\pi e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr), & z = z_2,\\ -\pi e^{-z_3^2}\bigl(i - \operatorname{erfi}(z_3)\bigr), & z = z_3,\\ \pi e^{-z_4^2}\operatorname{erfi}(z_4), & z = z_4, \end{cases}$$

and

$$q(x,t) = -\frac{e^{-\lambda t/2}}{4\pi}\left[\sum_{j=1}^{2}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_j}\,dk - \sum_{j=3}^{4}\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k+z_j}\,dk\right] = \frac{e^{-\lambda t/2}}{2}\,\Re\left[e^{-z_1^2}\bigl(i - \operatorname{erfi}(z_1)\bigr) + e^{-z_2^2}\bigl(i - \operatorname{erfi}(z_2)\bigr)\right]. \tag{3.22}$$

• If x > t 2 λ , then all the roots z i are in + . Therefore,

P V + e k 2 k + z d k = { π e z 1 2 ( i + erfi ( z 1 ) ) , z = z 1 π e z 2 2 ( i + erfi ( z 2 ) ) , z = z 2 π e z 3 2 ( i + erfi ( z 3 ) ) , z = z 3 π e z 4 2 ( i + erfi ( z 4 ) ) , z = z 4

and

q ( x , t ) = e x 2 4 t 4 π [ j = 1 2 + e k 2 k + z j d k j = 3 4 + e k 2 k + z j d k ] = e x 2 4 t 2 Re [ e z 1 2 ( i + erfi ( z 1 ) ) + e z 2 2 ( i + erfi ( z 2 ) ) ] . (3.23)

The Formulas (3.21) and (3.23), which correspond respectively to the regions x < t 2 λ and x > t 2 λ , are the same and reduce to the Formula (3.22) when x approaches t 2 λ from both regions. In all three cases, we have proved (3.20).

For arbitrary boundary data g 0 ( t ) , this leads to the following solution of the heat equation:

q ( x , t ) = e x 2 4 t 2 π 0 Re [ e z 1 2 ( i + erfi ( z 1 ) ) + e z 2 2 ( i + erfi ( z 2 ) ) ] g ^ 0 s ( λ ) d λ ,

where g ^ 0 s is the sine transform of the boundary condition g 0 :

g ^ 0 s ( λ ) = 2 π 0 g 0 ( t ) sin ( λ t ) d t .

Next, we compute the integral F ( α i , x , t ) and obtain an exact expression for the solution of the Stokes equation of the first kind. The change of variable k * = ( 3 t ) 1 3 k in the expression of F ( α i , x , t ) gives:

F ( α i * , x * ) = 1 π e i ( k * 3 3 + x * k * ) k * α i * d k * , (3.24)

where x * = ( 3 t ) 1 3 x and α i * = ( 3 t ) 1 3 α i . This is the Cauchy integral of e i ( k * 3 3 + x * k * ) , which is an analytic function bounded on the real line—so, for fixed x and y , F ( + i y , x ) BMO .

Proposition 3.5. The exact solution of Equation (1.2) is given by:

q ( x , t ) = 1 2 [ sin ( λ t + λ 1 3 x ) + e ( 3 2 + i 2 ) λ 1 3 x sin ( λ t ) ] + 1 4 i = 1 6 F ( α i * , x * ) (3.25)

where F ( α i * , x * ) are computed later.

Proof. By analogy with (3.18) and following Constandinides-Marhefka [8], let the incomplete Airy functions be given by:

Ai k ( x ) = k ( 3 + i ) e i ( t 3 3 + x t ) d t , Bi k ( x ) = k ( 3 + i ) e i ( t 3 3 + x t ) d t , Ci k ( x ) = k i e i ( t 3 3 + x t ) d t . (3.26)

For the integral to converge, the upper integration limit in (3.26) can be taken to be e i θ , for θ ( 0 , π 3 ) ( 2 π 3 , π ) ( 4 π 3 , 5 π 3 ) . Over each range, the integral is constant.

Due to the integrand’s rapid decay, the integral (3.26) converges for any x , k , defining an analytic function of both. Ai k ( x ) is known as the incomplete Airy integral, introduced by Levey-Felsen (1969, [9] ). Also see Michaeli (1996, [10] ) for a so-called “Airy-Fresnel integral”.

When k = 0 , Ai 0 ( x ) , Bi 0 ( x ) , and Ci 0 ( x ) solve the inhomogeneous Airy equation y x y = 1 . More generally, for any k they solve it with a source term of

y x y = e i ( k 3 3 + x k ) .

The three incomplete Airy functions are then related, modulo the usual Airy functions Ai ( x ) and Bi ( x ) :

Bi k ( x ) = Ai k ( x ) 2 π Ai ( x ) , Ci k ( x ) = Ai k ( x ) π ( Ai ( x ) + Bi ( x ) ) .

Since (by a change of variable t = i t )

Ci k ( x ) = k i e i ( t 3 3 + x t ) d t = i i k + e t 3 3 + x t d t , (3.27)

Ci k ( x ) i for k i and x . Hence i Ci ( i k ) ( x ) for k and x .

By the Schwartz reflection principle, it follows that i Ci k ¯ ( x ) = i Ci k ( x ) ¯ , so Ci k ¯ ( x ) = Ci k ( x ) ¯ . In particular, Ci k ( x ) = Ci k ( x ) ¯ for k and x .

For k = 0 and x , Ci 0 ( x ) is purely imaginary:

Ci 0 ( x ) = i C ( x ) , ( x ) = 0 + e t 3 3 + t x d t .

When x , C ( x ) ( 0, ) is a positive real number. C ( 0 ) is an absolute constant that can be computed in terms of the Γ function:

C ( 0 ) = 0 + e t 3 3 d t = 3 2 / 3 Γ ( 1 / 3 ) .

When x = 0 , Ci k ( 0 ) can also be expressed in terms of the incomplete Γ function:

Ci k ( 0 ) = i i k + e t 3 3 d t = 3 2 / 3 i ( 3 i k ) 1 / 3 + e t t 2 / 3 d t = 3 2 / 3 i Γ ( 1 / 3 , ( 3 i k ) 1 / 3 ) .

However, for k 0 and x 0 , the incomplete Airy functions cannot be reduced to a simpler expression.

Lemma 3.6. The function

f ( k , x ) = e k 3 3 x k k + e t 3 3 + x t d t

is uniformly bounded for R ( k ) 0 .

Proof.

• If k + , then the integral converges and gives a continuous function of k and therefore it is uniformly bounded.

• If k + , then we have k + i y and the first part of the above function becomes:

e ( k + i y ) 3 3 ( k + i y ) x = e k 3 3 k ( x + y 2 ) + i ( y k 2 3 y 3 3 x y ) = e k 3 3 k ( x + y 2 ) × p h a s e ,

which is bounded given that e k y 2 1 . The absolute value of the integral of the above function is also bounded. In fact:

e k 3 3 k ( x + y 2 ) 0 y e k 3 3 + k ( x + y ˜ 2 ) d y ˜ = e k y 2 0 y e k y ˜ 2 d y ˜

Now, let k = i k ˜ and t = i t ˜ then R ( k ) 0 implies Im k ˜ 0 and the above function becomes:

f ( k ˜ , x ) = e i ( k ˜ 3 3 + x k ˜ ) i k ˜ + e t 3 3 + x t d t = i e i ( k ˜ 3 3 + x k ˜ ) k ˜ i e i ( t ˜ 3 3 + x t ˜ ) d t ˜ = i e i ( k ˜ 3 3 + x k ˜ ) Ci k ˜ ( x ) ,

which is bounded for Im k ˜ 0 .

Now, let k ˜ ˜ = ε k ˜ , where ε 3 = 1 , then the above function will be expressed in terms of the two other incomplete Airy functions. In fact:

i e i ( k ˜ 3 3 + x k ˜ ) k ˜ ε e i ( t ˜ 3 3 + x t ˜ ) d t ˜ = i e i ( k ˜ 3 3 + x k ˜ ) Bi k ˜ ( x ) = i e i ( k ˜ 3 3 + x k ˜ ) ( Ci k ˜ ( x ) + C 1 ( x ) ) = f ( k ˜ , x ) + i C 1 ( x ) e i ( k ˜ 3 3 + x k ˜ )

where C 1 ( x ) = π ( Ai ( x ) i Bi ( x ) ) , is bounded for k ˜ in some half-plane. On the other half plane, the function becomes:

i e i ( k ˜ 3 3 + x k ˜ ) k ˜ ε 2 e i ( t ˜ 3 3 + x t ˜ ) d t ˜ = i e i ( k ˜ 3 3 + x k ˜ ) Ai k ˜ ( x ) = i e i ( k ˜ 3 3 + x k ˜ ) ( Ci k ˜ ( x ) + C 2 ( x ) ) = f ( k ˜ , x ) + i C 2 ( x ) e i ( k ˜ 3 3 + x k ˜ )

where C 2 ( x ) = π ( A i ( x ) + i B i ( x ) ) , is also bounded.

Therefore:

{ f ^ ( ξ ) = 0 ; ξ ( , 0 ] ( f + i C 1 ( x ) e i ( k ˜ 3 3 + x k ˜ ) ) ^ ( ξ ) = 0 ; ξ on some half-line ( f + i C 2 ( x ) e i ( k ˜ 3 3 + x k ˜ ) ) ^ ( ξ ) = 0 ; ξ on the other half-line (3.28)

Now let us apply the same change of variables to the derivative of the above function with respect to $x$. We have:

$$f'(k,x) = e^{\frac{k^3}{3}-xk}\int_k^{+\infty} t\,e^{-\frac{t^3}{3}+xt}\,dt.$$

Then the change of variables $k = i\tilde k$ and $t = i\tilde t$ gives:

$$f'(\tilde k,x) = -e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\int_{\tilde k}^{-i\infty}\tilde t\,e^{i\left(\frac{\tilde t^3}{3}+x\tilde t\right)}\,d\tilde t = i\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\operatorname{Ci}'_{\tilde k}(x),$$

which is bounded. In terms of the two other incomplete Airy functions, we let $\tilde{\tilde k} = \varepsilon\tilde k$, where $\varepsilon^3 = 1$. Therefore:

$$-e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\int_{\tilde k}^{\varepsilon\infty}\tilde t\,e^{i\left(\frac{\tilde t^3}{3}+x\tilde t\right)}\,d\tilde t = i\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\operatorname{Bi}'_{\tilde k}(x) = i\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\left(\operatorname{Ci}'_{\tilde k}(x)+C_1'(x)\right) = f'(\tilde k,x) + iC_1'(x)\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)},$$

where $C_1'(x) = \pi\left(\operatorname{Ai}'(x) - i\operatorname{Bi}'(x)\right)$, and

$$-e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\int_{\tilde k}^{\varepsilon^2\infty}\tilde t\,e^{i\left(\frac{\tilde t^3}{3}+x\tilde t\right)}\,d\tilde t = i\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\operatorname{Ai}'_{\tilde k}(x) = i\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)}\left(\operatorname{Ci}'_{\tilde k}(x)+C_2'(x)\right) = f'(\tilde k,x) + iC_2'(x)\,e^{-i\left(\frac{\tilde k^3}{3}+x\tilde k\right)},$$

where $C_2'(x) = \pi\left(\operatorname{Ai}'(x) + i\operatorname{Bi}'(x)\right)$; both are bounded.

Now, we write the general form of the Hilbert transform of the function $f = e^{-i\left(\frac{k^3}{3}+xk\right)}$:

$$Hf = \alpha f + \beta f\operatorname{Ci}_k(x) + \gamma f\operatorname{Ci}'_k(x).$$

Taking the Fourier transform and using the above results, we get:

$$\begin{cases} i\hat f = \alpha\hat f + \beta\,\widehat{f\operatorname{Ci}_k(x)} + \gamma\,\widehat{f\operatorname{Ci}'_k(x)}, & \xi \in (-\infty,0],\\ i\xi\hat f = \alpha\hat f + \beta\,\widehat{f\operatorname{Ci}_k(x)} + \gamma\,\widehat{f\operatorname{Ci}'_k(x)}, & \xi \text{ on some half-line},\\ i\xi^2\hat f = \alpha\hat f + \beta\,\widehat{f\operatorname{Ci}_k(x)} + \gamma\,\widehat{f\operatorname{Ci}'_k(x)}, & \xi \text{ on the other half-line}. \end{cases}$$

Using Equation (3.28), the above system becomes:

$$\begin{cases} i = \alpha,\\ i\xi = \alpha - C_1(x)\beta - C_1'(x)\gamma,\\ i\xi^2 = \alpha - C_2(x)\beta - C_2'(x)\gamma. \end{cases}\quad(3.29)$$

Therefore, the solution for $\alpha$, $\beta$, and $\gamma$ is:

$$\begin{cases} \alpha = i,\\[2pt] \beta = \dfrac{i\left[(1-\xi)C_2'(x) - (1-\xi^2)C_1'(x)\right]}{C_1(x)C_2'(x) - C_2(x)C_1'(x)},\\[2pt] \gamma = \dfrac{i\left[(1-\xi^2)C_1(x) - (1-\xi)C_2(x)\right]}{C_1(x)C_2'(x) - C_2(x)C_1'(x)}. \end{cases}\quad(3.30)$$

Using the formulas for $C_1(x)$, $C_2(x)$, $C_1'(x)$, $C_2'(x)$, the values $\xi = \frac{\sqrt{3}+i}{2}$, $\xi^2 = \frac{1+i\sqrt{3}}{2}$, and the Wronskian $W(\operatorname{Ai}(x),\operatorname{Bi}(x)) = 1/\pi$, we get after simplification:

$$\begin{cases} \alpha = i,\\ \beta = (i-1)\operatorname{Ai}'(x) - \sqrt{3}\,i\operatorname{Bi}'(x),\\ \gamma = (1-i)\operatorname{Ai}(x) + \sqrt{3}\,i\operatorname{Bi}(x). \end{cases}\quad(3.31)$$
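The Airy-function Wronskian used in this simplification, $W(\operatorname{Ai},\operatorname{Bi}) = \operatorname{Ai}(x)\operatorname{Bi}'(x) - \operatorname{Ai}'(x)\operatorname{Bi}(x) = 1/\pi$, is easy to spot-check numerically (a quick sketch using SciPy, which returns all four Airy values in one call):

```python
# Check the Airy Wronskian identity Ai(x) Bi'(x) - Ai'(x) Bi(x) = 1/pi at sample points.
import numpy as np
from scipy.special import airy

for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    ai, aip, bi, bip = airy(x)  # airy returns (Ai, Ai', Bi, Bi')
    w = ai * bip - aip * bi
    print(x, w)  # w is 1/pi = 0.31830988... at every x
```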

Now, we have:

$$H e^{-i\left(\frac{k^3}{3}+xk\right)} = e^{-i\left(\frac{k^3}{3}+xk\right)}\Big[i + \big[(i-1)\operatorname{Ai}'(x) - \sqrt{3}\,i\operatorname{Bi}'(x)\big]\operatorname{Ci}_k(x) + \big[(1-i)\operatorname{Ai}(x) + \sqrt{3}\,i\operatorname{Bi}(x)\big]\operatorname{Ci}'_k(x)\Big].$$

Thus,

$$F(\alpha_i^*, x^*) = \begin{cases} \pi e^{-i\left(\frac{(\alpha_i^*)^3}{3}+x^*\alpha_i^*\right)}\Big[i + \big[(i-1)\operatorname{Ai}'(x^*) - \sqrt{3}\,i\operatorname{Bi}'(x^*)\big]\operatorname{Ci}_{\alpha_i^*}(x^*) + \big[(1-i)\operatorname{Ai}(x^*) + \sqrt{3}\,i\operatorname{Bi}(x^*)\big]\operatorname{Ci}'_{\alpha_i^*}(x^*)\Big], & \text{for } \alpha_{1,2}^* = \pm(3\lambda t)^{1/3},\\[4pt] \pi e^{-i\left(\frac{(\alpha_i^*)^3}{3}+x^*\alpha_i^*\right)}\Big[\big[(i-1)\operatorname{Ai}'(x^*) - \sqrt{3}\,i\operatorname{Bi}'(x^*)\big]\operatorname{Ci}_{\alpha_i^*}(x^*) + \big[(1-i)\operatorname{Ai}(x^*) + \sqrt{3}\,i\operatorname{Bi}(x^*)\big]\operatorname{Ci}'_{\alpha_i^*}(x^*)\Big], & \text{for } \alpha_{3,4}^* = \left(\pm\frac12+\frac{\sqrt{3}}{2}i\right)(3\lambda t)^{1/3},\\[4pt] \pi e^{-i\left(\frac{(\alpha_i^*)^3}{3}+x^*\alpha_i^*\right)}\Big[2i + \big[(i-1)\operatorname{Ai}'(x^*) - \sqrt{3}\,i\operatorname{Bi}'(x^*)\big]\operatorname{Ci}_{\alpha_i^*}(x^*) + \big[(1-i)\operatorname{Ai}(x^*) + \sqrt{3}\,i\operatorname{Bi}(x^*)\big]\operatorname{Ci}'_{\alpha_i^*}(x^*)\Big], & \text{for } \alpha_{5,6}^* = \left(\pm\frac12-\frac{\sqrt{3}}{2}i\right)(3\lambda t)^{1/3}, \end{cases}$$

and finally

$$q(x,t) = \frac12\left[\sin\!\left(\lambda t + \lambda^{1/3}x\right) + e^{-\left(\frac{\sqrt{3}}{2}+\frac{i}{2}\right)\lambda^{1/3}x}\sin(\lambda t)\right] + \frac14\sum_{i=1}^{6}F(\alpha_i^*, x^*).$$

4. Asymptotic Expansion of the Heat Equation’s Solution

4.1. Deriving the Asymptotic Expansion

Our exact expression (3.20) for the solution $q$ is not very transparent. In order to better understand its properties, we next derive an asymptotic expansion for $q(x,t)$. It is well known, see Abramowitz and Stegun [7], that the imaginary error function $\operatorname{erfi}(z) = -i\operatorname{erf}(iz)$ has the following asymptotic expansion:

$$\operatorname{erfi}(z) = -i + \frac{e^{z^2}}{\sqrt{\pi}}\left(\frac{1}{z} + \frac{1}{2z^3} + \frac{3}{4z^5} + \frac{15}{8z^7} + \cdots + \frac{(2n-1)!!}{2^n z^{2n+1}} + \cdots\right).\quad(4.1)$$

Here $(2n-1)!! = 1\cdot3\cdot5\cdots(2n-1)$ is the semifactorial (double factorial), with the convention that $(-1)!! = 1$.
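As a quick check (not from the original), the expansion can be compared against SciPy's `erfi` for a moderate real argument; for real $z$ the function is real, so the constant term plays no role:

```python
# Compare the partial sums of the asymptotic expansion (4.1) with scipy's erfi.
import math
from scipy.special import erfi

def semifactorial(n):
    # (2n-1)!! with the convention (-1)!! = 1
    out = 1
    for j in range(1, 2 * n, 2):
        out *= j
    return out

def erfi_series(z, N):
    s = sum(semifactorial(n) / (2 ** n * z ** (2 * n + 1)) for n in range(N))
    return math.exp(z * z) / math.sqrt(math.pi) * s

z = 4.0
print(erfi_series(z, 5), erfi(z))  # agree to better than 0.1% at z = 4
```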

For completeness, we rederive this formula below, together with some precise bounds for the error in this expansion. We start from the definition of the error function:

$$\operatorname{erf}(iz) = \frac{2}{\sqrt{\pi}}\int_0^{iz} e^{-t^2}\,dt = 1 - \frac{2}{\sqrt{\pi}}\int_{iz}^{+\infty} e^{-t^2}\,dt.$$
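This contour rewriting can be spot-checked numerically (a sketch, not from the original): parametrize the path from $iz$ to $+\infty$ as $t = iz + s$ with $s \ge 0$, and use the standard identity $\operatorname{erf}(iz) = i\operatorname{erfi}(z)$, valid for real $z$, for the left-hand side:

```python
# Verify erf(iz) = 1 - (2/sqrt(pi)) * int_{iz}^{+inf} e^{-t^2} dt for a real sample z.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi

z = 1.5
# integrate real and imaginary parts of e^{-(iz+s)^2} separately over s in [0, inf)
re, _ = quad(lambda s: np.real(np.exp(-(1j * z + s) ** 2)), 0, np.inf)
im, _ = quad(lambda s: np.imag(np.exp(-(1j * z + s) ** 2)), 0, np.inf)
rhs = 1 - 2 / np.sqrt(np.pi) * (re + 1j * im)
lhs = 1j * erfi(z)  # erf(iz) = i * erfi(z) for real z
print(lhs, rhs)  # both are approximately 4.58i
```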

By repeated integration by parts, we obtain

$$\int_{iz}^{+\infty} e^{-t^2}\,dt = -i\,e^{z^2}\left(\frac{1}{2z} + \frac{1}{2^2 z^3} + \frac{1\cdot3}{2^3 z^5} + \cdots + \frac{1\cdot3\cdots(2(N-1)-1)}{2^N z^{2N-1}}\right) + R_N(z) = -i\,e^{z^2}\sum_{n=0}^{N-1}\frac{(2n-1)!!}{2^{n+1} z^{2n+1}} + R_N(z),$$

where $R_N(z)$ is the remainder after $N$ terms and is given by

$$R_N(z) = (-1)^N\,\frac{(2N-1)!!}{2^N}\int_{iz}^{+\infty}\frac{e^{-t^2}}{t^{2N}}\,dt.$$

Therefore,

$$\operatorname{erfi}(z) = -i\operatorname{erf}(iz) = -i + \frac{e^{z^2}}{\sqrt{\pi}}\sum_{n=0}^{N-1}\frac{(2n-1)!!}{2^n z^{2n+1}} + \tilde R_N(z),\quad(4.2)$$

where (writing $\tilde R_N$ to distinguish this rescaled remainder from $R_N$ above)

$$\tilde R_N(z) = i\,(-1)^N\,\frac{(2N-1)!!}{2^{N-1}\sqrt{\pi}}\int_{iz}^{+\infty}\frac{e^{-t^2}}{t^{2N}}\,dt.\quad(4.3)$$

Replacing (4.1) in (3.20), we formally get the asymptotic expansion of the solution

$$q(x,t) \approx -\frac{e^{-\frac{x^2}{4t}}}{\sqrt{\pi}}\sum_{n=0}^{\infty}\operatorname{Re}\left(\frac{(2n-1)!!}{2^{n+1} z_1^{2n+1}} + \frac{(2n-1)!!}{2^{n+1} z_2^{2n+1}}\right).\quad(4.4)$$

However, this computation is slightly misleading, because it does not give a good estimate for the size of the error. If we use Formula (4.4), the error will be large in some cases of interest. In fact, Formula (3.1) is better in a certain sense than Formula (3.20), because the sine term in the former, which is absent in the latter, can sometimes be the leading term; see below.

4.2. Estimating the Error in the Asymptotic Expansion

A more straightforward approach also leads to a better error estimate. For this purpose, we discard the exact Formula (3.20), obtained by estimating the Cauchy integral using the Hilbert transform, and go back to (3.1).

Lemma 4.1. The following asymptotic expansion is valid as $|z| \to \infty$ for $\arg z \in [\delta, \pi-\delta] \cup [\pi+\delta, 2\pi-\delta]$, $\delta > 0$:

$$\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k-z}\,dk \sim -\sum_{n=0}^{\infty}\frac{(2n-1)!!\,\sqrt{\pi}}{2^n z^{2n+1}}.$$

More precisely,

$$\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k-z}\,dk = -\sum_{n=0}^{N-1}\frac{(2n-1)!!\,\sqrt{\pi}}{2^n z^{2n+1}} + Q_{2N}(z),\quad(4.5)$$

where the error is bounded by

$$|Q_{2N}(z)| < \frac{(2N-1)!!\,\sqrt{\pi}}{2^N |z|^{2N}\,|\operatorname{Im} z|},\quad(4.6)$$

which converges to $0$ for fixed $N$ as $|\operatorname{Im} z| \to \infty$.

Proof. The fraction $\frac{1}{k-z}$ can be written as:

$$\frac{1}{k-z} = -\frac{1}{z}\cdot\frac{1}{1-\frac{k}{z}} = -\left(\frac{1}{z} + \frac{k}{z^2} + \frac{k^2}{z^3} + \cdots + \frac{k^{N-1}}{z^N}\right) + \frac{k^N}{(k-z)\,z^N}.$$
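This finite expansion with remainder is an exact algebraic identity, which a short numeric check confirms (illustration only; the values of $k$, $z$, $N$ below are arbitrary):

```python
# Check 1/(k-z) = -(1/z + k/z^2 + ... + k^{N-1}/z^N) + k^N / ((k-z) z^N) exactly.
N = 6
k, z = 0.7, 2.0 + 1.0j
series = -sum(k ** j / z ** (j + 1) for j in range(N))
remainder = k ** N / ((k - z) * z ** N)
print(series + remainder, 1 / (k - z))  # identical up to rounding
```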

Therefore,

$$\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k-z}\,dk = -\sum_{m=1}^{2N}\frac{1}{z^m}\int_{-\infty}^{+\infty}k^{m-1}e^{-k^2}\,dk + Q_{2N}(z),\quad(4.7)$$

where

$$Q_{2N}(z) = \int_{-\infty}^{+\infty}\frac{k^{2N}e^{-k^2}}{(k-z)\,z^{2N}}\,dk.$$

The values of the integrals in (4.7) are known: for $m-1$ odd they are zero, because the integrands are odd. For even values $m-1 = 2n$ we obtain, by repeated integration by parts,

$$\int_{-\infty}^{+\infty}k^{2n}e^{-k^2}\,dk = \frac{(2n-1)!!\,\sqrt{\pi}}{2^n}.\quad(4.8)$$

Therefore

$$\int_{-\infty}^{+\infty}\frac{e^{-k^2}}{k-z}\,dk = -\sum_{n=0}^{N-1}\frac{(2n-1)!!\,\sqrt{\pi}}{2^n z^{2n+1}} + Q_{2N}(z).$$

An upper bound for the error term $Q_{2N}$, based on the exact formula above, is

$$|Q_{2N}(z)| < \frac{1}{|z|^{2N}\,|\operatorname{Im} z|}\int_{-\infty}^{+\infty}k^{2N}e^{-k^2}\,dk = \frac{(2N-1)!!\,\sqrt{\pi}}{2^N |z|^{2N}\,|\operatorname{Im} z|}.$$
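The lemma lends itself to a direct numerical test (a sketch, not from the original): compute the integral by quadrature, compare it with the partial sum, and check that the discrepancy respects the bound (4.6):

```python
# Numerical check of (4.5)-(4.6) for a sample z off the real axis.
import numpy as np
from scipy.integrate import quad

def semifactorial(n):
    # (2n-1)!! with the convention (-1)!! = 1
    out = 1
    for j in range(1, 2 * n, 2):
        out *= j
    return out

z, N = 3.0 + 2.0j, 3
# integrate real and imaginary parts of e^{-k^2}/(k - z) separately over the real line
re, _ = quad(lambda k: np.real(np.exp(-k * k) / (k - z)), -np.inf, np.inf)
im, _ = quad(lambda k: np.imag(np.exp(-k * k) / (k - z)), -np.inf, np.inf)
integral = re + 1j * im
partial = -sum(semifactorial(n) * np.sqrt(np.pi) / (2 ** n * z ** (2 * n + 1)) for n in range(N))
bound = semifactorial(N) * np.sqrt(np.pi) / (2 ** N * abs(z) ** (2 * N) * abs(z.imag))
print(abs(integral - partial), bound)  # the error sits below the bound
```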

Substituting (4.7) in Formula (3.1) and taking into account (3.14), we get the following asymptotic expansion of the solution $q(x,t)$:

Proposition 4.2. The solution $q(x,t)$ to Equation (1.1) admits the following asymptotic expansion:

$$q(x,t) = \begin{cases} e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\lambda/2}\right) - \dfrac{e^{-\frac{x^2}{4t}}}{2\sqrt{\pi}}\displaystyle\sum_{n=0}^{N-1}\operatorname{Re}\left(\frac{(2n-1)!!}{2^{n+1} z_1^{2n+1}} + \frac{(2n-1)!!}{2^{n+1} z_2^{2n+1}}\right) + Q_{2N}, & x \le t\sqrt{2\lambda},\\[6pt] -\dfrac{e^{-\frac{x^2}{4t}}}{2\sqrt{\pi}}\displaystyle\sum_{n=0}^{N-1}\operatorname{Re}\left(\frac{(2n-1)!!}{2^{n+1} z_1^{2n+1}} + \frac{(2n-1)!!}{2^{n+1} z_2^{2n+1}}\right) + Q_{2N}, & x \ge t\sqrt{2\lambda}, \end{cases}\quad(4.9)$$

where the error $Q_{2N}$ is bounded by

$$|Q_{2N}| < \frac{2\,e^{-\frac{x^2}{4t}}\sqrt{t}\,(2N-1)!!}{\sqrt{\pi}\,\max\!\left(\lambda t, \frac{x^2}{4t}\right)^{N}\left|x - t\sqrt{2\lambda}\right|}.$$

Proof. From (3.1) and (4.4) we get the following rough bound for the error $Q_{2N}$:

$$|Q_{2N}| < \frac{1}{4\pi}\cdot 4\,e^{-\frac{x^2}{4t}}\cdot\frac{(2N-1)!!\,\sqrt{\pi}}{2^N\left(\frac{\lambda t}{2}+\left(\frac{x}{2\sqrt{t}}-\sqrt{\frac{\lambda t}{2}}\right)^2\right)^N\left|\frac{x}{2\sqrt{t}}-\sqrt{\frac{\lambda t}{2}}\right|} \le \frac{e^{-\frac{x^2}{4t}}\,(2N-1)!!}{\sqrt{\pi}\,\max\!\left(\lambda t,\frac{x^2}{4t}\right)^N\left|\frac{x}{2\sqrt{t}}-\sqrt{\frac{\lambda t}{2}}\right|} \le \frac{2\,e^{-\frac{x^2}{4t}}\sqrt{t}\,(2N-1)!!}{\sqrt{\pi}\,\max\!\left(\lambda t,\frac{x^2}{4t}\right)^N\left|x-t\sqrt{2\lambda}\right|}.$$

The first line can also be used to obtain other bounds, e.g.

$$|Q_{2N}| < \frac{2\,t^{N+1/2}\,e^{-\frac{x^2}{4t}}\,(2N-1)!!}{\left|x-t\sqrt{2\lambda}\right|^{2N+1}\sqrt{\pi}}.$$

Thus, the asymptotic expansion (4.9) may be inaccurate near the diagonal $x = t\sqrt{2\lambda}$, but it covers both the case when $x$ is fixed and $t$ is large and the case when $t$ is fixed and $x$ is large.

5. Discussion

Interestingly, the solution behaves differently in these two regions. In the large-$x$ regime, the leading-order approximation to the solution is given by (after simplification)

$$q(x,t) = q_1(x,t) + Q_2 = \frac{8\lambda t^{5/2}\,x\,e^{-\frac{x^2}{4t}}}{\left(x^4+16\lambda^2 t^4\right)\sqrt{\pi}} + Q_2,\qquad |Q_2| < \frac{2\,t^{3/2}\,e^{-\frac{x^2}{4t}}}{\left(x-t\sqrt{2\lambda}\right)^3\sqrt{\pi}}.\quad(5.1)$$

The leading term $q_1$ and the error are both of size $x^{-3}e^{-\frac{x^2}{4t}}$.

The next term in this expansion is (after simplification)

$$q_3(x,t) = \frac{\left(3\lambda t^{5/2}x^8 + 3\lambda t^{7/2}x^7 - 16\lambda^3 t^{9/2}x^6 - 144\lambda^3 t^{11/2}x^5\right)e^{-\frac{x^2}{4t}}}{16\left(x^4+16\lambda^2 t^4\right)^3\sqrt{\pi}}.\quad(5.2)$$

Explicitly, as $x \to \infty$,

$$q(x,t) = q_1(x,t) + q_3(x,t) + Q_4,\qquad |Q_4| < \frac{6\,t^{5/2}\,e^{-\frac{x^2}{4t}}}{\left(x-t\sqrt{2\lambda}\right)^5\sqrt{\pi}}.$$

The second term $q_3$ is of size $x^{-4}e^{-\frac{x^2}{4t}}$ and the error is of size $x^{-5}e^{-\frac{x^2}{4t}}$.
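These size claims can be checked numerically from the explicit formula for $q_1$ in (5.1) (signs and constants as reconstructed here; $\lambda$, $t$, $x$ below are arbitrary sample values): the product $q_1(x,t)\,x^3 e^{x^2/4t}$ should approach the constant $8\lambda t^{5/2}/\sqrt{\pi}$ as $x \to \infty$:

```python
# Size check for the large-x regime: q1(x,t) * x^3 * e^{x^2/(4t)} -> 8*lam*t^{5/2}/sqrt(pi).
import math

lam, t = 1.0, 1.0

def q1(x, t):
    return (8 * lam * t ** 2.5 * x * math.exp(-x * x / (4 * t))
            / ((x ** 4 + 16 * lam ** 2 * t ** 4) * math.sqrt(math.pi)))

x = 10.0
ratio = q1(x, t) * x ** 3 * math.exp(x * x / (4 * t))
limit = 8 * lam * t ** 2.5 / math.sqrt(math.pi)
print(ratio / limit)  # close to 1 already at x = 10 (equals x^4 / (x^4 + 16) here)
```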

On the other hand, in the large-$t$ regime, the term

$$e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\lambda/2}\right)$$

dominates (at least on average, since the sine can be zero), so as $t \to \infty$ the first two terms are

$$q(x,t) = e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\lambda/2}\right) + \frac{8\lambda t^{5/2}\,x\,e^{-\frac{x^2}{4t}}}{\left(x^4+16\lambda^2 t^4\right)\sqrt{\pi}} + Q_2,\qquad |Q_2| < \frac{2\,e^{-\frac{x^2}{4t}}}{\lambda\sqrt{t}\left(t\sqrt{2\lambda}-x\right)\sqrt{\pi}}.$$

Again, $q_1(x,t)$ has the same size, $t^{-3/2}$, as the remainder. Considering one more term, we have

$$q(x,t) = e^{-x\sqrt{\lambda/2}}\sin\!\left(\lambda t - x\sqrt{\lambda/2}\right) + q_1(x,t) + q_3(x,t) + Q_4,\qquad |Q_4| < \frac{6\,e^{-\frac{x^2}{4t}}}{\lambda^2 t^{3/2}\left(t\sqrt{2\lambda}-x\right)\sqrt{\pi}}.$$

Here $q_3 + Q_4$ has a size of at most $t^{-5/2}$.

Finally, let us mention that along the diagonal $x = t\sqrt{2\lambda}$ a similar analysis implies that

$$q\!\left(t\sqrt{2\lambda}, t\right) \approx \frac{e^{-\frac{\lambda t}{2}}}{2\sqrt{2\pi\lambda t}},\qquad t \to \infty.$$

Acknowledgements

M.B. is partially supported by a grant from the Simons Foundation (No. 429698, Marius Beceanu) and by startup funds provided by the University at Albany, SUNY.

Appendix A. A Comparison with Fokas’ Method

For comparison with Fokas’ method, we repeatedly ran both our code and the Flyer-Fokas code on the same publicly available MATLAB/Octave online interpreter, http://www.tutorialspoint.com/execute_matlab_online.php, and averaged the times we obtained. The running times and averages are listed in Table A1.

We found that it takes much longer to produce the figure below for the heat equation using Fokas’ method and that the difference in running times becomes more pronounced for a bigger grid.

Table A1. Running time difference in seconds between Fokas' method and our method in solving the heat equation on a half-line.

Appendix B. Code

For reference, this is the MATLAB/Octave code we used to compare the running times of Fokas’ method and our method and to generate Figure B1.

Figure B1. The solutions of the heat equation displayed on $x \in [0,1]$ and $t \in [0,4\pi]$ (left) and of the Stokes equation displayed on $x \in [0,1]$ and $t \in [0,1]$ (right).

B.1. Our MATLAB code for the heat equation

B.2. Flyer-Fokas’ code for the heat equation (Adapted from [3] )

Cite this paper: Lachaab, M. and Beceanu, M. (2021) Approximate Analytical Solutions to the Heat and Stokes Equations on the Half-Line Obtained by Fokas’ Transform. Journal of Applied Mathematics and Physics, 9, 809-833. doi: 10.4236/jamp.2021.94055.
References

[1]   Fokas, A.S. (1997) A Unified Transform Method for Solving Linear and Certain Nonlinear PDEs. Proceedings of the Royal Society A, 453, 1411-1443.
https://doi.org/10.1098/rspa.1997.0077

[2]   Fokas, A.S. (2002) A New Transform Method for Evolution PDEs. IMA Journal of Applied Mathematics, 67, 559-590.
https://doi.org/10.1093/imamat/67.6.559

[3]   Flyer, N. and Fokas, A.S. (2008) A Hybrid Analytical-Numerical Method for Solving Evolution Partial Differential Equations in the Half-Line. Proceedings of the Royal Society A, 464, 1823-1849.
https://doi.org/10.1098/rspa.2008.0041

[4]   Fokas, A.S., et al. (2009) A Semi-Analytical Numerical Method for Solving Evolution and Elliptic Partial Differential Equations. Journal of Computational and Applied Mathematics, 227, 59-74.
https://doi.org/10.1016/j.cam.2008.07.036

[5]   Papatheodorou, T.S. and Kandili, A.N. (2009) Novel Numerical Techniques Based on Fokas Transforms for the Solution of Initial Boundary Value Problems. Journal of Computational and Applied Mathematics, 227, 75-82.
https://doi.org/10.1016/j.cam.2008.07.031

[6]   Cody, W.J. (1969) Rational Chebyshev Approximations for the Error Function. Mathematics of Computation, 23, 631-637.
https://doi.org/10.1090/S0025-5718-1969-0247736-4

[7]   Abramowitz, M. and Stegun, I. (1964) Handbook of Mathematical Functions. National Bureau of Standards, Applied Mathematics Series, 55.

[8]   Constantinides, E.D. and Marhefka, R.J. (1993) Efficient and Accurate Computation of the Incomplete Airy Functions. Radio Science, 28, 441-457.
https://doi.org/10.1029/93RS00278

[9]   Levey, L. and Felsen, L.B. (1969) On Incomplete Airy Functions and Their Application to Diffraction Problems. Radio Science, 4, 959-969.
https://doi.org/10.1029/RS004i010p00959

[10]   Michaeli, A. (1996) Asymptotic Analysis of Edge-Excited Currents on a Convex Face of a Perfectly Conducting Wedge under Overlapping Penumbra Region Conditions. IEEE Transactions on Antennas and Propagation, 44, 97-101.
https://doi.org/10.1109/8.477533

 
 