A Local Meshless Method for Two Classes of Parabolic Inverse Problems


1. Introduction

Inverse problems for parabolic equations appear naturally in a wide variety of physical and engineering settings, and many researchers have solved such problems with different methods [1] - [10] . An important class of inverse problems is the reconstruction of the source term in a parabolic equation, which has been discussed in many papers [11] - [19] .

Meshless methods do not require mesh generation on the spatial domain of the problem; this is their main advantage over mesh-dependent methods. The moving least squares method and the radial basis functions method are the two primary techniques for constructing shape functions in meshless methods. The moving least squares method was introduced by Lancaster and Salkauskas [20] for surface construction; it yields a best approximation in a weighted least squares sense and, because the weight function has compact support, it is local in character. The radial basis functions method [21] is a very efficient interpolation technique for scattered data approximation; it has high precision and is well suited to scattered data models. However, it also has drawbacks: the basis functions are globally supported, the full matrix obtained from the discretization scheme becomes increasingly ill-conditioned as the number of collocation points grows, and the method is very sensitive to the choice of the free shape parameter c.

To overcome the ill-conditioning and the shape-parameter sensitivity of the radial basis functions method, the local radial basis function method was introduced by Lee et al. [22] . In contrast to the global radial basis functions method, only the scattered data at neighboring points are used, instead of all the points; the order of the matrix obtained from the discretization is therefore reduced, and the resulting shape-function matrix is sparse. This improves computational accuracy and makes the method suitable for large-scale problems [23] .

A meshless method that couples moving least squares with radial basis functions to construct the shape function was introduced by Mohamed et al. [24] , but that method is global, so the problems of the radial basis functions method persist. A method based on a linear combination of moving least squares and local radial basis functions on the same compact support was introduced by Wang [25] ; it is a local method and is well suited to practical problems.

In this paper, we consider two classes of inverse problems of reconstructing the source term in a parabolic equation from additional measurements, and we solve them with the local meshless method presented in [25] .

This paper is organized as follows. In Section 2, we give an outline of the local meshless method. In Section 3, we solve the inverse problems using the local meshless method. Numerical experiments illustrating the feasibility of the method are given in Section 4.

2. Preliminaries

Let Ω be an open bounded domain in $\mathbb{R}^{d}$, and let data values $\left\{x_{j},u_{j}\right\}, j=1,2,\cdots ,N$ be given, where the $x_{j}$ are distinct scattered points in $\overline{\Omega}$, $u_{j}$ is the value of the function u at the node $x_{j}$, and N is the number of scattered nodes. Throughout this work, $\tilde{u}$ denotes the approximate function of u.

Combining with the collocation method, in [25] the approximate function $\tilde{u}\left(x\right)$ was written as

$\tilde{u}\left(x\right)={\displaystyle \sum_{i=1}^{N}\varsigma_{i}\left(x\right)u_{i}},$ (1)

where $\varsigma_{i}\left(x\right)$ stands for the shape function, which can be written as a linear combination of the shape functions of the moving least squares and local radial basis functions methods,

$\varsigma_{i}\left(x\right)=\nu \varphi_{i}^{M}\left(x\right)+\left(1-\nu \right)\psi_{i}^{L}\left(x\right),$

where $\varphi_{i}^{M}\left(x\right)$ and $\psi_{i}^{L}\left(x\right)$ stand for the shape functions of the moving least squares method and the local radial basis functions method, respectively, and ν is a constant that can take different values in [0, 1].
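To make this construction concrete, the following sketch builds one-dimensional moving least squares shape functions with a linear basis, local multiquadric RBF shape functions on the same stencil, and their blend $\varsigma_{i}(x)$. It is an illustrative stand-in, not the authors' implementation; the Gaussian weight, the multiquadric kernel, and the parameters h, c, and ν are assumed choices.

```python
import numpy as np

def mls_shape(x, nodes, h=0.3):
    # MLS shape functions phi_i^M(x): linear basis [1, x] and a Gaussian
    # weight with (assumed) support parameter h.
    w = np.exp(-((x - nodes) / h) ** 2)
    P = np.column_stack([np.ones_like(nodes), nodes])    # N x 2 basis matrix
    Mmat = P.T @ (w[:, None] * P)                        # moment matrix
    return (w[:, None] * P) @ np.linalg.solve(Mmat, np.array([1.0, x]))

def local_rbf_shape(x, nodes, c=1.0):
    # Local RBF shape functions psi_i^L(x) on the stencil `nodes`, using a
    # multiquadric kernel with (assumed) shape parameter c.
    phi = lambda r: np.sqrt(r ** 2 + c ** 2)
    A = phi(np.abs(nodes[:, None] - nodes[None, :]))     # interpolation matrix
    return np.linalg.solve(A, phi(np.abs(x - nodes)))

def blended_shape(x, nodes, nu=0.5):
    # zeta_i(x) = nu * phi_i^M(x) + (1 - nu) * psi_i^L(x), with nu in [0, 1]
    return nu * mls_shape(x, nodes) + (1.0 - nu) * local_rbf_shape(x, nodes)
```

With a linear basis, the MLS shape functions reproduce constants and linear functions exactly, while the local RBF shape functions satisfy the delta property $\psi_{i}^{L}(x_{j})=\delta_{ij}$ on the stencil.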

3. The Inverse Problem and Its Numerical Solution

Inverse problem I. The problem can be described as follows,

$\frac{\partial u\left(x,t\right)}{\partial t}=\frac{\partial^{2}u\left(x,t\right)}{\partial x^{2}}+f\left(t\right),\quad 0<x<l,\ 0<t<T,$ (2)

with the initial condition

$u\left(x,0\right)=u_{0}\left(x\right),\quad 0<x<l,$ (3)

and the boundary conditions

$u\left(0,t\right)=u_{1}\left(t\right),\quad u\left(l,t\right)=u_{2}\left(t\right),\quad 0\le t\le T.$ (4)

Formulas (2)-(4) constitute the direct problem. In the inverse problem, the functions $u\left(x,t\right)$ and $f\left(t\right)$ are both unknown, and we are given the additional observation of $u\left(x,t\right)$ at some internal point $x_{0}\ \left(0<x_{0}<l\right)$,

$u\left({x}_{0},t\right)=E\left(t\right),$ (5)

according to (5), evaluating (2) at $x=x_{0}$ gives the transformation used in [26] ,

${E}^{\prime}\left(t\right)=\frac{{\partial}^{2}u\left({x}_{0},t\right)}{\partial {x}^{2}}+f\left(t\right),$ (6)

using (6), we get

$f\left(t\right)={E}^{\prime}\left(t\right)-\frac{{\partial}^{2}u\left({x}_{0},t\right)}{\partial {x}^{2}},$ (7)

substituting (7) into (2), we have

$\frac{\partial u\left(x,t\right)}{\partial t}=\frac{\partial^{2}u\left(x,t\right)}{\partial x^{2}}+E^{\prime}\left(t\right)-\frac{\partial^{2}u\left(x_{0},t\right)}{\partial x^{2}},\quad 0<x<l,\ 0<t<T,$ (8)

the initial and boundary conditions are

$u\left(x,0\right)=u_{0}\left(x\right),\quad 0<x<l,$ (9)

$u\left(0,t\right)=u_{1}\left(t\right),\quad u\left(l,t\right)=u_{2}\left(t\right),\quad 0\le t\le T.$ (10)

Thus the inverse problem is transformed into a direct problem, and we use the local meshless method described in Section 2 to solve the problem (8)-(10).

From (1), the approximate function $\tilde{u}\left(x,t\right)$ of $u\left(x,t\right)$ at $t=t_{m}$ can be represented as

$\tilde{u}\left(x,t_{m}\right)={\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x\right)\tilde{u}\left(x_{j},t_{m}\right)},$ (11)

where ${\varsigma}_{j}\left(x\right)$ is the shape function described in Section 2.

Then

$\frac{\partial^{2}u\left(x,t_{m}\right)}{\partial x^{2}}={\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)},\quad \frac{\partial^{2}u\left(x_{0},t_{m}\right)}{\partial x^{2}}={\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{0}\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)},$

for the time derivative, we apply the one-step forward difference formula in t; letting $\Delta t=t_{m+1}-t_{m}, m=1,2,\cdots ,M$, we have

$\frac{\partial u\left(x,t_{m}\right)}{\partial t}=\frac{\tilde{u}\left(x,t_{m+1}\right)-\tilde{u}\left(x,t_{m}\right)}{\Delta t},\quad E^{\prime}\left(t_{m}\right)=\frac{E\left(t_{m+1}\right)-E\left(t_{m}\right)}{\Delta t},$

so Equation (8) can be rewritten as

$\frac{\tilde{u}\left(x,t_{m+1}\right)-\tilde{u}\left(x,t_{m}\right)}{\Delta t}={\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m+1}\right)-E\left(t_{m}\right)}{\Delta t}-{\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{0}\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)},$

that is equivalent to

$\tilde{u}\left(x,t_{m+1}\right)=\tilde{u}\left(x,t_{m}\right)+\Delta t\left({\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m+1}\right)-E\left(t_{m}\right)}{\Delta t}-{\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{0}\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)}\right),$

by substituting each $x_{k}$ for x,

$\tilde{u}\left(x_{k},t_{m+1}\right)=\tilde{u}\left(x_{k},t_{m}\right)+\Delta t\left({\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{k}\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m+1}\right)-E\left(t_{m}\right)}{\Delta t}-{\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{0}\right)}{\partial x^{2}}\tilde{u}\left(x_{j},t_{m}\right)}\right),$ (12)

from (12) and the conditions (9)-(10), we can obtain the numerical solutions $\tilde{u}\left(x_{k},t_{m}\right)$ and, via (7), $\tilde{f}\left(t_{m}\right)$, $k=1,2,\cdots ,N, m=1,2,\cdots ,M$.
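As a concrete check of the marching scheme (12), the following sketch implements it for the data of Example 1 in Section 4, with central finite differences standing in for the meshless second-derivative weights $\partial^{2}\varsigma_{j}/\partial x^{2}$; the measurement $E(t)$ is generated from the exact solution, the grid sizes are illustrative, and the source is recovered from (7) at each step. This is a simplified stand-in for the method of [25], not the authors' code.

```python
import numpy as np

# Example 1 data: l = 2, x0 = 1; exact u = (2 + t + sin x) e^{-t},
# exact f(t) = -(1 + t) e^{-t}.  T and the grid sizes are illustrative.
l, T, x0 = 2.0, 1.0, 1.0
dx, dt = 0.05, 0.0005
x = np.arange(0.0, l + dx / 2, dx)
k0 = int(round(x0 / dx))              # index of the observation point x0
M = int(round(T / dt))

u_exact = lambda x_, t: (2.0 + t + np.sin(x_)) * np.exp(-t)
f_exact = lambda t: -(1.0 + t) * np.exp(-t)
E = lambda t: u_exact(x0, t)          # measurement E(t) = u(x0, t), see (5)

u = u_exact(x, 0.0)                   # initial condition (9)
f_num, f_true = [], []
for m in range(M):
    t = m * dt
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    dE = (E(t + dt) - E(t)) / dt      # one-step forward difference E'(t_m)
    f_num.append(dE - uxx[k0])        # source recovered from (7)
    f_true.append(f_exact(t))
    u = u + dt * (uxx + dE - uxx[k0]) # explicit update (12)
    u[0], u[-1] = u_exact(0.0, t + dt), u_exact(l, t + dt)  # boundary data (10)

err_u = float(np.max(np.abs(u - u_exact(x, T))))
err_f = float(np.max(np.abs(np.array(f_num) - np.array(f_true))))
print(err_u, err_f)
```

The explicit step is stable here because $\Delta t/\Delta x^{2}=0.2\le 1/2$; both maximum errors come out small, consistent with the behavior reported in Section 4.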

Inverse problem II. The problem can be described as follows,

$\frac{\partial u\left(x,t\right)}{\partial t}=\frac{\partial^{2}u\left(x,t\right)}{\partial x^{2}}+f\left(x,t\right),\quad 0<x<l,\ 0<t<T,$ (13)

with the initial condition

$u\left(x,0\right)=u_{0}\left(x\right),\quad 0<x<l,$ (14)

and the boundary conditions

$u\left(0,t\right)=0,\quad u\left(l,t\right)=0,\quad 0\le t\le T.$ (15)

Formulas (13)-(15) constitute the direct problem. In the inverse problem, the functions $u\left(x,t\right)$ and $f\left(x,t\right)$ are both unknown, and we are given the additional observation of $u\left(x,t\right)$ at some internal point $x_{0}\ \left(0<x_{0}<l\right)$,

$u\left({x}_{0},t\right)=E\left(t\right).$ (16)

Assume that the function $f\left(x,t\right)$ can be described as

$f\left(x,t\right)=\eta \left(t\right)\psi \left(x\right),$ (17)

where $\psi \left(x\right)$ is a known function satisfying the following restrictions:

1) $\psi \left({x}_{0}\right)\ne 0,$

2) $\psi \left(x\right)$ is smooth enough,

3) $\psi \left(x\right)=0$ on the boundary of the computational domain.

Let

$u\left(x,t\right)=\theta \left(t\right)\psi \left(x\right)+\omega \left(x,t\right),$ (18)

where

$\theta \left(t\right)={\displaystyle {\int}_{0}^{t}\eta \left(s\right)\text{d}s},$ (19)

substituting (17) and (18) into (13), we have

$\frac{\partial \omega \left(x,t\right)}{\partial t}=\frac{\partial^{2}\omega \left(x,t\right)}{\partial x^{2}}+\theta \left(t\right)\frac{\text{d}^{2}\psi \left(x\right)}{\text{d}x^{2}},\quad 0<x<l,\ 0<t<T,$ (20)

combining (18) with (16), we get

$\theta \left(t\right)=\frac{E\left(t\right)-\omega \left({x}_{0},t\right)}{\psi \left({x}_{0}\right)},$ (21)

then according to (19),

$\eta \left(t\right)={\theta}^{\prime}\left(t\right),$ (22)

substituting (21) into (20),

$\frac{\partial \omega \left(x,t\right)}{\partial t}=\frac{\partial^{2}\omega \left(x,t\right)}{\partial x^{2}}+\frac{E\left(t\right)-\omega \left(x_{0},t\right)}{\psi \left(x_{0}\right)}\frac{\text{d}^{2}\psi \left(x\right)}{\text{d}x^{2}},\quad 0<x<l,\ 0<t<T,$ (23)

the initial and boundary conditions are

$\omega \left(x,0\right)=u_{0}\left(x\right),\quad 0<x<l,$ (24)

$\omega \left(0,t\right)=0,\quad \omega \left(l,t\right)=0,\quad 0\le t\le T.$ (25)

Through the above descriptions, once we have the numerical solution $\tilde{\omega}\left(x,t\right)$ of (23), we can obtain the numerical solutions $\tilde{u}\left(x,t\right)$ and $\tilde{f}\left(x,t\right)$ from (17)-(18) and (21)-(22).

Next, we use the local meshless method described in Section 2 to solve the problem (23)-(25).

From (1), the approximate function $\tilde{\omega}\left(x,t\right)$ of $\omega \left(x,t\right)$ at $t=t_{m}$ can be represented as

$\tilde{\omega}\left(x,t_{m}\right)={\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x\right)\tilde{\omega}\left(x_{j},t_{m}\right)},$

where ${\varsigma}_{j}\left(x\right)$ is the shape function described in Section 2.

Then

$\tilde{\omega}\left(x_{0},t_{m}\right)={\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x_{0}\right)\tilde{\omega}\left(x_{j},t_{m}\right)},\quad \frac{\partial^{2}\tilde{\omega}}{\partial x^{2}}={\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{\omega}\left(x_{j},t_{m}\right)},$

for $\frac{\partial \omega}{\partial t}$, we apply the one-step forward difference formula in t; letting $\Delta t=t_{m+1}-t_{m}, m=1,2,\cdots ,M$, we have

$\frac{\partial \omega}{\partial t}=\frac{\tilde{\omega}\left(x,t_{m+1}\right)-\tilde{\omega}\left(x,t_{m}\right)}{\Delta t},$

so Equation (23) can be rewritten as

$\frac{\tilde{\omega}\left(x,t_{m+1}\right)-\tilde{\omega}\left(x,t_{m}\right)}{\Delta t}={\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{\omega}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m}\right)-{\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x_{0}\right)\tilde{\omega}\left(x_{j},t_{m}\right)}}{\psi \left(x_{0}\right)}\frac{\text{d}^{2}\psi \left(x\right)}{\text{d}x^{2}},$

that is equivalent to

$\tilde{\omega}\left(x,t_{m+1}\right)=\tilde{\omega}\left(x,t_{m}\right)+\Delta t\left[{\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x\right)}{\partial x^{2}}\tilde{\omega}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m}\right)-{\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x_{0}\right)\tilde{\omega}\left(x_{j},t_{m}\right)}}{\psi \left(x_{0}\right)}\frac{\text{d}^{2}\psi \left(x\right)}{\text{d}x^{2}}\right],$

by substituting each $x_{k}$ for x,

$\tilde{\omega}\left(x_{k},t_{m+1}\right)=\tilde{\omega}\left(x_{k},t_{m}\right)+\Delta t\left[{\displaystyle \sum_{j=1}^{N}\frac{\partial^{2}\varsigma_{j}\left(x_{k}\right)}{\partial x^{2}}\tilde{\omega}\left(x_{j},t_{m}\right)}+\frac{E\left(t_{m}\right)-{\displaystyle \sum_{j=1}^{N}\varsigma_{j}\left(x_{0}\right)\tilde{\omega}\left(x_{j},t_{m}\right)}}{\psi \left(x_{0}\right)}\frac{\text{d}^{2}\psi \left(x_{k}\right)}{\text{d}x^{2}}\right],$ (26)

from (26) and the conditions (24)-(25), we can obtain the numerical solutions $\tilde{\omega}\left(x_{k},t_{m}\right)$ and $\tilde{f}\left(x_{k},t_{m}\right)$, $k=1,2,\cdots ,N, m=1,2,\cdots ,M$.
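The marching scheme (26) can be checked in the same spirit. The sketch below uses the data of Example 2 in Section 4, again with central finite differences as an illustrative stand-in for the meshless weights; it recovers $\theta$ from (21), $\eta =\theta^{\prime}$ from (22) by a forward difference, and $u=\theta \psi +\omega$ from (18). The grid sizes are assumed choices, not the authors' settings.

```python
import numpy as np

# Example 2 data: l = 1, x0 = 0.5, u0(x) = 0, E(t) = sin(pi t),
# psi(x) = pi sin(pi x).  Grid sizes are illustrative.
l, T, x0 = 1.0, 1.0, 0.5
dx, dt = 0.025, 0.0002
x = np.arange(0.0, l + dx / 2, dx)
k0 = int(round(x0 / dx))                  # index of the observation point x0
M = int(round(T / dt))

psi = np.pi * np.sin(np.pi * x)           # given psi(x)
psi_xx = -np.pi ** 3 * np.sin(np.pi * x)  # d^2 psi / dx^2, known analytically
E = lambda t: np.sin(np.pi * t)           # measurement u(x0, t), see (16)

w = np.zeros_like(x)                      # omega(x, 0) = u0(x) = 0, see (24)
theta = [(E(0.0) - w[k0]) / psi[k0]]      # theta(t) from (21)
for m in range(M):
    t = m * dt
    wxx = np.zeros_like(w)
    wxx[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx ** 2
    w = w + dt * (wxx + (E(t) - w[k0]) / psi[k0] * psi_xx)  # update (26)
    w[0], w[-1] = 0.0, 0.0                # boundary conditions (25)
    theta.append((E(t + dt) - w[k0]) / psi[k0])

theta = np.array(theta)
eta = (theta[1:] - theta[:-1]) / dt       # eta = theta', see (22)
u_T = theta[-1] * psi + w                 # u = theta*psi + omega, see (18)

# Compare against the exact solution of Example 2: u = sin(pi x) sin(pi t)
# and eta(t) = cos(pi t) + pi sin(pi t).
t_grid = np.arange(M) * dt
eta_exact = np.cos(np.pi * t_grid) + np.pi * np.sin(np.pi * t_grid)
err_u = float(np.max(np.abs(u_T - np.sin(np.pi * x) * np.sin(np.pi * T))))
err_eta = float(np.max(np.abs(eta - eta_exact)))
print(err_u, err_eta)
```

Once $\eta$ is known, the source follows as $\tilde{f}(x_{k},t_{m})=\eta(t_{m})\psi(x_{k})$ by (17).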

4. Numerical Experiments and Discussions

To test the efficiency of the method, in this section we give two examples that illustrate the correctness of the theoretical results and the feasibility of the method.

Example 1. Consider the problem (2)-(5), with the conditions

$u_{0}\left(x\right)=2+\sin x,\quad u_{1}\left(t\right)=\left(2+t\right)\text{e}^{-t},\quad u_{2}\left(t\right)=\left(2+t+\sin l\right)\text{e}^{-t},\quad E\left(t\right)=\left(2+t+\sin x_{0}\right)\text{e}^{-t},$

and we let $l=2,T=2,{x}_{0}=1$ .

The exact solutions are

$u\left(x,t\right)=\left(2+t+\sin x\right)\text{e}^{-t},\quad f\left(t\right)=-\left(1+t\right)\text{e}^{-t}.$

Firstly, we plot the error functions $f\left(t\right)-\tilde{f}\left(t\right)$ and $u\left(x,t\right)-\tilde{u}\left(x,t\right)$ in Figure 1, where $\Delta t=0.0001$ and $\Delta x=0.05$.

From Figure 1, we can see that the approximation effect is good.

Figure 1. The error functions (a) $f\left(t\right)-\tilde{f}\left(t\right)$; (b) $u\left(x,t\right)-\tilde{u}\left(x,t\right)$.

Secondly, in order to test the stability of the numerical solution, we give small perturbations on $E\left(t\right)$; the artificial error is introduced into the additional specification data by defining the function

$E_{\gamma}\left(t\right)=E\left(t\right)\left(1+\gamma \right),$ (27)

where γ is the noise parameter.
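In code, the perturbation (27) simply wraps the measurement function; a minimal sketch (the helper name perturb and the sample measurement are illustrative):

```python
def perturb(E, gamma):
    # Noise model (27): E_gamma(t) = E(t) * (1 + gamma)
    return lambda t: E(t) * (1.0 + gamma)

E = lambda t: 2.0 + t          # an illustrative measurement function
E_noisy = perturb(E, 0.001)    # gamma = 0.001, as in the experiments below
```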

We plot the error functions $f\left(t\right)-\tilde{f}\left(t\right)$ and $u\left(x,t\right)-\tilde{u}\left(x,t\right)$ for $\gamma =0.001$ in Figure 2, where $\Delta t=0.0001$ and $\Delta x=0.05$.

From Figure 2, we see that with noisy data the approximation is somewhat worse, but there is no obvious oscillation in the error graph.

Lastly, we define the following errors for the functions $f\left(t\right)$ and $u\left(x,t\right)$:

$Ef=\sqrt{\frac{{\displaystyle \sum_{j=1}^{N}\left(f\left(t_{j}\right)-\tilde{f}\left(t_{j}\right)\right)^{2}}}{N}},\quad Eu=\sqrt{\frac{{\displaystyle \sum_{i=1}^{M}\sum_{j=1}^{N}\left(u\left(x_{i},t_{j}\right)-\tilde{u}\left(x_{i},t_{j}\right)\right)^{2}}}{MN}},$ (28)

where $f\left(t_{j}\right), u\left(x_{i},t_{j}\right)$ and $\tilde{f}\left(t_{j}\right), \tilde{u}\left(x_{i},t_{j}\right)$ are the exact and numerical solutions at the corresponding nodes, and M and N are the numbers of nodes in x and t, respectively. We give the results for the different cases in Table 1.
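Both quantities in (28) are root-mean-square errors over the node set and can be computed by one helper; a minimal sketch (the name rmse is illustrative):

```python
import numpy as np

def rmse(exact, approx):
    # Root-mean-square error as in (28); works for f (1-D in t)
    # and for u (2-D in x and t).
    exact = np.asarray(exact, dtype=float)
    approx = np.asarray(approx, dtype=float)
    return float(np.sqrt(np.mean((exact - approx) ** 2)))
```

For Eu, pass the M-by-N arrays of exact and numerical values of u; np.mean divides by the total number MN of entries, matching (28).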

From Table 1, we see that when the number of nodes is fixed, the error decreases as Δt decreases; when Δt is fixed, the error decreases as the number of nodes increases; and when Δt and Δx are both fixed, the error decreases as the noise level decreases.

Example 2. Consider the problem (13)-(16), with the conditions

$u_{0}\left(x\right)=0,\quad E\left(t\right)=\sin\left(\pi t\right),$

and we let $l=1,T=1,{x}_{0}=0.5$ .

The exact solutions are

$u\left(x,t\right)=\sin\left(\pi x\right)\sin\left(\pi t\right),\quad f\left(x,t\right)=\pi \sin\left(\pi x\right)\left(\cos\left(\pi t\right)+\pi \sin\left(\pi t\right)\right),$

with

$\psi \left(x\right)=\pi \sin\left(\pi x\right).$

Figure 2. The error functions (a) $f\left(t\right)-\tilde{f}\left(t\right)$; (b) $u\left(x,t\right)-\tilde{u}\left(x,t\right)$.

Table 1. The error under different cases.

Firstly, in order to illustrate the accuracy of the method, we plot the error functions $f\left(x,t\right)-\tilde{f}\left(x,t\right)$ and $u\left(x,t\right)-\tilde{u}\left(x,t\right)$ in Figure 3, where $\Delta t=0.0001$ and $\Delta x=0.05$.

From Figure 3, we see that the approximation effect is good.

Secondly, in order to test the stability of the numerical solution, we give small perturbations on $E\left(t\right)$; the artificial error is defined by (27).

The results for $f\left(x,t\right)-\tilde{f}\left(x,t\right)$ and $u\left(x,t\right)-\tilde{u}\left(x,t\right)$ with $\gamma =0.001$ are shown in Figure 4, where $\Delta t=0.0001$ and $\Delta x=0.05$.

From Figure 4, we see that with noise the approximation is worse than without noise, but the error function is smooth and there is no obvious oscillation in the error graph.

At last, we define Eu by (28), with Ef defined analogously, and give the results for the different cases in Table 2.

From Table 2, we see that when $\gamma =0$ the error decreases as Δt and Δx decrease, and when Δt and Δx are fixed, the error decreases as the noise parameter decreases.

Figure 3. The error functions (a) $f\left(x,t\right)-\tilde{f}\left(x,t\right)$; (b) $u\left(x,t\right)-\tilde{u}\left(x,t\right)$.

Figure 4. The error functions (a) $f\left(x,t\right)-\tilde{f}\left(x,t\right)$; (b) $u\left(x,t\right)-\tilde{u}\left(x,t\right)$.

Table 2. The error under different cases.

5. Conclusion

In this paper, we use the local meshless method based on the moving least squares method and the local radial basis functions method to solve two classes of inverse problems of reconstructing the source term in parabolic equations. From the experiments, we can see that this method is accurate and efficient.

Acknowledgements

This work was supported by the Scientific Research Fund of the Scientific and Technological Project of Changsha City (Grant Nos. ZD1601077 and K1705078).

References

[1] Liu, Z.H. and Wang, B.Y. (2009) Coefficient Identification in Parabolic Equations. Applied Mathematics and Computation, 209, 379-390.

https://doi.org/10.1016/j.amc.2008.12.062

[2] Liu, J.B., Wang, B.Y. and Liu, Z.H. (2010) Determination of a Source Term in a Heat Equation. International Journal of Computer Mathematics, 87, 969-975.

https://doi.org/10.1080/00207160802044126

[3] Hasanov, A. and Liu, Z.H. (2008) An Inverse Coefficient Problem for a Nonlinear Parabolic Variational Inequality. Applied Mathematics Letters, 21, 563-570.

https://doi.org/10.1016/j.aml.2007.06.007

[4] Liu, Z.H. and Tatar, S. (2012) Analytical Solutions of a Class of Inverse Coefficient Problems. Applied Mathematics Letters, 25, 2391-2395.

https://doi.org/10.1016/j.aml.2012.07.010

[5] Wang, B.Y. (2014) Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem. Abstract and Applied Analysis, 2014, Article ID 686020, 12 pages.

[6] Fatullayev, A.G. (2002) Numerical Solution of the Inverse Problem of Determining an Unknown Source Term in a Heat Equation. Mathematics and Computers in Simulation, 58, 247-253.

https://doi.org/10.1016/S0378-4754(01)00365-2

[7] Badia, A.E. and Duong, T.H. (2002) An Inverse Problem in Heat Equation and Application to Pollution Problem. Journal of Inverse and Ill-posed Problems, 10, 585-599.

https://doi.org/10.1515/jiip.2002.10.6.585

[8] Cannon, J.R. (1968) Determination of an Unknown Heat Source from Overspecified Boundary Data. SIAM Journal on Numerical Analysis, 5, 275-286.

https://doi.org/10.1137/0705024

[9] Choulli, M. and Yamamoto, M. (2004) Conditional Stability in Determining a Heat Source. Journal of Inverse and Ill-Posed Problems, 12, 233-243.

https://doi.org/10.1515/1569394042215856

[10] Hussein, M.S. and Lesnic, D. (2016) Simultaneous Determination of Time and Space-Dependent Coefficients in a Parabolic Equation. Communications in Nonlinear Science and Numerical Simulation, 33, 194-217.

https://doi.org/10.1016/j.cnsns.2015.09.008

[11] Cannon, J.R. and Lin, Y. (1990) An Inverse Problem of Finding a Parameter in a Semi-Linear Heat Equation. Journal of Mathematical Analysis and Applications, 145, 470-484.

https://doi.org/10.1016/0022-247X(90)90414-B

[12] Cannon, J.R. and DuChateau, P. (1998) Structural Identification of an Unknown Source Term in a Heat Equation. Inverse Problems, 14, 535-551.

https://doi.org/10.1088/0266-5611/14/3/010

[13] Burykin, A.A. and Denisov, A.M. (1997) Determination of the Unknown Sources in the Heat-Conduction Equation. Computational Mathematics and Modeling, 8, 309-313.

https://doi.org/10.1007/BF02404048

[14] Farcas, A. and Lesnic, D. (2006) The Boundary-Element Method for the Determination of a Heat Source Dependent on One Variable. Journal of Engineering Mathematics, 54, 375-388.

https://doi.org/10.1007/s10665-005-9023-0

[15] Johansson, T. and Lesnic, D. (2007) Determination of a Spacewise Dependent Heat Source. Journal of Computational and Applied Mathematics, 209, 66-80.

https://doi.org/10.1016/j.cam.2006.10.026

[16] Johansson, T. and Lesnic, D. (2007) A Variational Method for Identifying a Spacewise-Dependent Heat Source. IMA Journal of Applied Mathematics, 72, 748-760.

https://doi.org/10.1093/imamat/hxm024

[17] Fatullayev, A.G. and Can, E. (2000) Numerical Procedures for Determining Unknown Source Parameters in Parabolic Equations. Mathematics and Computers in Simulation, 54, 159-167.

https://doi.org/10.1016/S0378-4754(00)00221-4

[18] Borukhov, V.T. and Vabishchevich, P.N. (2000) Numerical Solution of the Inverse Problem of Reconstructing a Distributed Right-Hand Side of a Parabolic Equation. Computer Physics Communications, 126, 32-36.

https://doi.org/10.1016/S0010-4655(99)00416-6

[19] Saadatmandi, A. and Dehghan, M. (2010) Computation of Two Time-Dependent Coefficients in a Parabolic Partial Differential Equation Subject to Additional Specifications. International Journal of Computer Mathematics, 87, 997-1008.

[20] Lancaster, P. and Salkauskas, K. (1981) Surfaces Generated by Moving Least Squares Methods. Mathematics of Computation, 37, 141-158.

https://doi.org/10.1090/S0025-5718-1981-0616367-1

[21] Buhmann, M.D. (2003) Radial Basis Functions: Theory and Implementations. Cambridge University Press, Cambridge.

https://doi.org/10.1017/CBO9780511543241

[22] Lee, C.K., Liu, X. and Fan, S.C. (2003) Local Multiquadric Approximation for Solving Boundary Value Problems. Computational Mechanics, 30, 395-409.

https://doi.org/10.1007/s00466-003-0416-5

[23] Li, M., Chen, W. and Chen, C.S. (2013) The Localized RBFs Collocation Methods for Solving High Dimensional PDEs. Engineering Analysis with Boundary Elements, 37, 1300-1304.

https://doi.org/10.1016/j.enganabound.2013.06.001

[24] Mohamed, H.A., Bakrey, A.E. and Ahmed, S.G. (2012) A Collocation Mesh-Free Method Based on Multiple Basis Functions. Engineering Analysis with Boundary Elements, 36, 446-450.

https://doi.org/10.1016/j.enganabound.2011.09.002

[25] Wang, B. (2015) A Local Meshless Method Based on Moving Least Squares and Local Radial Basis Functions. Engineering Analysis with Boundary Elements, 50, 395-401.

https://doi.org/10.1016/j.enganabound.2014.10.001

[26] Colton, D., Ewing, R. and Rundell, W. (1990) Inverse Problems in Partial Differential Equations. SIAM, Philadelphia.