Application of the Economization of Power Series to Solving the Schrödinger Equation for the Gaussian Potential via the Asymptotic Iteration Method
Abstract: This paper presents economized power series for the Gaussian function. The economization is accomplished by utilizing the “usual” and the “shifted” Chebyshev polynomials of the first kind. The resulting economized series are applied to the solution of the radial Schrödinger equation with the attractive Gaussian potential via the asymptotic iteration method (AIM). The obtained bound state energies are compared with those given by the same method when the Taylor expansion is used to approximate the Gaussian potential. We also compare them with those obtained from the exact Hamiltonian diagonalization on a finite basis of Coulomb Sturmian functions.

1. Introduction

The easiest and most obvious way to obtain a polynomial approximation to a given function $f\left(x\right)$ is to use a truncated Taylor series of the form $\underset{j=0}{\overset{N}{\sum }}{a}_{j}{x}^{j}$ or, more generally, $\underset{j=0}{\overset{N}{\sum }}{a}_{j}{\left(x-{x}_{0}\right)}^{j}$ [1] [2]. In this truncation, the more terms are retained, the higher the accuracy of the approximation.

However, this method suffers from the uneven distribution of errors in the approximation. The closer the evaluated point to the origin of expansion, the higher the accuracy and vice versa. This means that for a desired level of accuracy, the points far from the origin will need substantially more terms than those close to the origin of expansion. For computational purposes, however, it may be undesirable to require as many as $N+1$ terms when N is large. Indeed, it may be unnecessary to use more than a few terms, especially if interest in the function $f\left(x\right)$ is restricted to a small range ${x}_{0}\le x\le {x}_{1}$ of the argument.

The powers of a variable x appeared originally purely in algebraic problems [2]. With the development of calculus, the great importance of power expansions became evident. The expansion discovered by Taylor in 1715 and by Maclaurin in 1742 allows predicting the evolution of a function from its value and all its derivatives at one particular point [3]. The “Taylor series” thus became one of the cornerstones of analytical research and was particularly useful in establishing the existence of solutions of differential equations [2]. It should be recalled, however, that the Taylor expansion suffers from slow convergence for points far from the origin of expansion. This problem can be alleviated by using minimization methods such as the least-squares (LS) algorithm [3] [4]. In this case, the function $f\left(x\right)$ is approximated with a finite-degree polynomial $\underset{k=0}{\overset{N}{\sum }}{c}_{k}{x}^{k}$ whose coefficients ${c}_{k}$ are selected such that

$J\equiv \underset{a}{\overset{b}{\int }}w\left(x\right){\left(f\left(x\right)-\underset{k=0}{\overset{N}{\sum }}{c}_{k}{x}^{k}\right)}^{\text{2}}\text{d}x$ (1)

is a minimum, where $w\left(x\right)$ is an arbitrary weighting function and $\left[a,b\right]$ is the interval in which the function is approximated. The minimization in Equation (1) yields [1]

$\underset{k=0}{\overset{N}{\sum }}{c}_{k}\underset{a}{\overset{b}{\int }}w\left(x\right){x}^{k}{x}^{n}\text{d}x=\underset{a}{\overset{b}{\int }}w\left(x\right)f\left(x\right){x}^{n}\text{d}x,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}n=0,1,2,\cdots$ (2)

We have to indicate that the system of Equations (2) is cumbersome to solve because it involves a full (dense) matrix of moments. The reason is that the function $f\left(x\right)$ is approximated in the non-orthogonal power basis $\left(1,x,{x}^{2},\cdots \right)$. This difficulty can be avoided if the function is approximated in an orthogonal basis. That is, if the orthogonal basis is given by ${P}_{0}\left(x\right),{P}_{1}\left(x\right),{P}_{2}\left(x\right),\cdots$, then the coefficients ${c}_{k}$ are determined by

${c}_{k}\underset{a}{\overset{b}{\int }}w\left(x\right){\left[{P}_{k}\left(x\right)\right]}^{2}\text{d}x=\underset{a}{\overset{b}{\int }}w\left(x\right)f\left(x\right){P}_{k}\left(x\right)\text{d}x,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots$ (3)

Using an orthogonal basis causes the off-diagonal terms to vanish, and can occasionally lead to the so-called “economized power series”. As a side note, we should indicate that much attention has also been paid to the problem of inventing methods of summing a series in such a way that it becomes convergent, although the original series, if added term by term, increases to infinity [5].
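As an illustration of Equation (3), the coefficients of a function in the Chebyshev basis (weight $w\left(x\right)=1/\sqrt{1-{x}^{2}}$ on [−1, 1]) can be computed numerically: the integrals then reduce to discrete cosine sums (Gauss–Chebyshev quadrature). The following sketch (NumPy assumed; not part of the original paper) applies this to $\mathrm{exp}\left(-{x}^{2}\right)$:

```python
import numpy as np

def chebyshev_coeffs(f, n_terms, n_quad=200):
    """Coefficients c_k of f in the Chebyshev basis via Eq. (3).

    With the weight w(x) = 1/sqrt(1 - x^2) on [-1, 1], the integrals in
    Eq. (3) reduce to sums over the Chebyshev nodes (Gauss-Chebyshev
    quadrature): c_k = (2/M) * sum_i f(cos t_i) cos(k t_i).
    """
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad   # quadrature angles
    fx = f(np.cos(theta))
    c = np.array([2.0 / n_quad * np.sum(fx * np.cos(k * theta))
                  for k in range(n_terms)])
    c[0] /= 2.0                                          # T_0 normalization
    return c

c = chebyshev_coeffs(lambda x: np.exp(-x**2), 15)
x = np.linspace(-1.0, 1.0, 1001)
err = np.exp(-x**2) - np.polynomial.chebyshev.chebval(x, c)
print(c[2])                 # close to -205029/655360 appearing in Eq. (28)
print(np.abs(err).max())    # the 15-term Chebyshev series is very accurate
```

The coefficient of ${T}_{2}\left(x\right)$ obtained this way agrees to about four decimal places with the value $-205029/655360$ appearing later in Equation (28), which is based on the truncated Maclaurin series rather than the exact function.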

Economization of power series is a procedure that replaces a very accurate (or even exact) polynomial approximation $\underset{j=0}{\overset{N}{\sum }}{a}_{j}{x}^{j}$ of degree N by an “economized” polynomial $\underset{j=0}{\overset{n}{\sum }}{e}_{j}{x}^{j}$ of a smaller degree n such that, in the range of interest, the absolute error introduced by the replacement is less than some acceptable value E [6]:

$|\underset{j=0}{\overset{N}{\sum }}{a}_{j}{x}^{j}-\underset{j=0}{\overset{n}{\sum }}{e}_{j}{x}^{j}|\le E$ (4)

The procedure of economization, or telescoping as it is sometimes called [2] [6], is accomplished by utilizing the properties of Chebyshev polynomials of the first kind [2] [6] [7] [8], notably the minimax property [9] [10]. According to the minimax principle, the Chebyshev approximation is the polynomial approximation that minimizes the maximum error over the interval.

We have to emphasize that the economization algorithm comprises several distinct phases [6] [11] [12]. More precisely, the economization of power series has four basic steps:

Step 1. Expand $f\left(x\right)$ in a Taylor series valid on the interval $\left[-1,1\right]$. Truncate this series to obtain a polynomial

${P}_{N}\left(x\right)={a}_{0}+{a}_{1}x+\cdots +{a}_{N}{x}^{N}$, (5)

which approximates $f\left(x\right)$ within a prescribed tolerance error E for all x in $\left[-1,1\right]$.

Step 2. Expand ${P}_{N}\left(x\right)$ in a Chebyshev series,

${P}_{N}\left(x\right)=\frac{1}{2}{c}_{0}+{c}_{1}{T}_{1}\left(x\right)+\cdots +{c}_{N}{T}_{N}\left(x\right)$, (6)

making use of the following matrix equation [8]:

$\left[\begin{array}{c}1/2\\ {2}^{0}x\\ {2}^{1}{x}^{2}\\ {2}^{2}{x}^{3}\\ {2}^{3}{x}^{4}\\ {2}^{4}{x}^{5}\\ {2}^{5}{x}^{6}\\ {2}^{6}{x}^{7}\\ {2}^{7}{x}^{8}\\ ⋮\end{array}\right]=\left[\begin{array}{cccccccccc}1& & & & & & & & & \\ 0& 1& & & & & & & & \\ 2& 0& 1& & & & & & & \\ 0& 3& 0& 1& & & & & & \\ 6& 0& 4& 0& 1& & & & & \\ 0& 10& 0& 5& 0& 1& & & & \\ 20& 0& 15& 0& 6& 0& 1& & & \\ 0& 35& 0& 21& 0& 7& 0& 1& & \\ 70& 0& 56& 0& 28& 0& 8& 0& 1& \\ ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& \ddots \end{array}\right]\left[\begin{array}{c}1/2{T}_{0}\left(x\right)\\ {T}_{1}\left(x\right)\\ {T}_{2}\left(x\right)\\ {T}_{3}\left(x\right)\\ {T}_{4}\left(x\right)\\ {T}_{5}\left(x\right)\\ {T}_{6}\left(x\right)\\ {T}_{7}\left(x\right)\\ {T}_{8}\left(x\right)\\ ⋮\end{array}\right]$ (7)

Step 3. Truncate this Chebyshev series to a smaller number of terms by retaining the first $n+1$ terms, choosing n so that the maximum error given by

$|f\left(x\right)-{M}_{n}\left(x\right)|\le E+|{c}_{n+1}|+\cdots +|{c}_{N}|$ (8)

is acceptable, where ${M}_{n}\left(x\right)$ denotes the resulting shorter Chebyshev series:

${M}_{n}\left(x\right)=\frac{1}{2}{c}_{0}+{c}_{1}{T}_{1}+\cdots +{c}_{n}{T}_{n}$. (9)

Step 4. Replace ${T}_{j}\left(x\right)\text{\hspace{0.17em}}\left(j=0,1,\cdots ,n\right)$ by its polynomial form, which leads to

$f\left(x\right)\approx {e}_{0}+{e}_{1}x+\cdots +{e}_{n}{x}^{n}$, (10)

using the following matrix equation [8]:

$\left[\begin{array}{c}\frac{1}{2}{T}_{0}\\ {T}_{1}\left(x\right)\\ {T}_{2}\left(x\right)\\ {T}_{3}\left(x\right)\\ {T}_{4}\left(x\right)\\ {T}_{5}\left(x\right)\\ {T}_{6}\left(x\right)\\ {T}_{7}\left(x\right)\\ {T}_{8}\left(x\right)\\ ⋮\end{array}\right]=\left[\begin{array}{cccccccccc}1& & & & & & & & & \\ 0& 1& & & & & & & & \\ -2& 0& 1& & & & & & & \\ 0& -3& 0& 1& & & & & & \\ 2& 0& -4& 0& 1& & & & & \\ 0& 5& 0& -5& 0& 1& & & & \\ -2& 0& 9& 0& -6& 0& 1& & & \\ 0& -7& 0& 14& 0& -7& 0& 1& & \\ 2& 0& -16& 0& 20& 0& -8& 0& 1& \\ ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& ⋮& \ddots \end{array}\right]\left[\begin{array}{c}{2}^{-1}\\ {2}^{0}x\\ {2}^{1}{x}^{2}\\ {2}^{2}{x}^{3}\\ {2}^{3}{x}^{4}\\ {2}^{4}{x}^{5}\\ {2}^{5}{x}^{6}\\ {2}^{6}{x}^{7}\\ {2}^{7}{x}^{8}\\ ⋮\end{array}\right]$ (11)
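As a quick sanity check (not part of the original derivation), individual rows of the conversion matrices in Equations (7) and (11) can be compared against NumPy's built-in power-to-Chebyshev conversions:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cb

# Row "2^3 x^4" of Eq. (7):  8x^4 = 6*(T0/2) + 4*T2 + T4,
# i.e. x^4 = (3*T0 + 4*T2 + T4)/8.
c = Cb.poly2cheb([0, 0, 0, 0, 1])          # Chebyshev coefficients of x^4
print(c)                                    # [3/8, 0, 1/2, 0, 1/8]

# Row "T6" of Eq. (11): T6 = -2*(1/2) + 9*(2x^2) - 6*(8x^4) + 32x^6
p = Cb.cheb2poly([0, 0, 0, 0, 0, 0, 1])    # power-basis coefficients of T6
print(p)                                    # [-1, 0, 18, 0, -48, 0, 32]
```

Both outputs match the corresponding matrix rows once the scaled basis vectors $\left(1/2,x,2{x}^{2},\cdots \right)$ of Equations (7) and (11) are taken into account.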

If necessary in Step 1, i.e., when the interval $\left[a,b\right]$ of interest differs from $\left[-1,1\right]$, make a transformation of the independent variable so that the expansion is valid on that interval, by means of the expression [6] [11]

$y=\frac{x-\left(b+a\right)/2}{\left(b-a\right)/2}$. (12)

In this case, it is necessary to change variable back to x after step 4, making use of the expression [13]

$x=\frac{1}{2}\left(b-a\right)y+\frac{1}{2}\left(b+a\right).$ (13)

For the special domain $0\le x\le 1$, we can write

$x=\left(y+1\right)/2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y=2x-1$ (14)

In this domain, the Chebyshev polynomials are denoted ${T}_{n}^{\ast }\left(x\right)$ and defined by [14]: ${T}_{0}^{\ast }\left(x\right)=\frac{1}{2}$ and ${T}_{n}^{*}\left(x\right)={T}_{n}\left(2x-1\right)$ for $n\ge 1$, $0\le x\le 1$. They are called shifted Chebyshev polynomials of the first kind.

Note that Equations (7) and (11) can, in general, be summarized as [8]

$\underset{˜}{x}=C\underset{˜}{T}$ (15)

and

$\underset{˜}{T}=P\underset{˜}{x}$ (16)

respectively, where:

$\underset{˜}{T}$ and $\underset{˜}{x}$ are $\left(n+1\right)$-element vectors, i.e.,

${\underset{˜}{T}}^{t}=\left[1/2{T}_{0}\left(x\right),{T}_{1}\left(x\right),{T}_{2}\left(x\right),\cdots ,{T}_{n}\left(x\right)\right]$ (17)

and

${\underset{˜}{x}}^{t}=\left[1/2,x,2{x}^{2},\cdots ,{2}^{n-1}{x}^{n}\right]$. (18)

P and C are lower triangular matrices such that [8]

$P={\left[{P}_{ij}\right]}_{i,j=0}^{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}C={\left[{C}_{ij}\right]}_{i,j=0}^{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{00}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{10}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{11}=1;$ (19)

${P}_{20}=-2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{21}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{22}=1;$ (20)

$\begin{array}{l}{P}_{i,0}=-{P}_{i-2,0}\\ {P}_{i,j}={P}_{i-1,j-1}-{P}_{i-2,j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,\cdots ,i\end{array}\right\}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=3,\cdots ,n;$ (21)

${C}_{ii}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,\cdots ,n;$ (22)

$\begin{array}{l}{C}_{i,0}=2{C}_{i-1,1}\\ {C}_{i,j}={C}_{i-1,j-1}+{C}_{i-1,j+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,\cdots ,i\end{array}\right\}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\cdots ,n-1;$ (23)

${C}_{n,0}=2{C}_{n-1,1};\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{C}_{n,\text{\hspace{0.17em}}j}={C}_{n-1,j-1}+{C}_{n-1,j+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,\cdots ,n-1.$ (24)
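The recursions (19)-(24) are straightforward to implement. The sketch below (an illustrative implementation; the spare matrix column is a coding convenience, not from the paper) builds P and C and checks that they are mutually inverse, as Equations (15)-(16) require, and that the row for ${T}_{8}\left(x\right)$ agrees with NumPy's conversion once the scaled basis of Equation (18) is accounted for:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cb

def build_P_C(n):
    """P and C of Eqs. (19)-(24); a spare column makes Eq. (23) uniform."""
    P = np.zeros((n + 1, n + 1))
    C = np.zeros((n + 2, n + 2))
    P[0, 0] = C[0, 0] = 1.0
    if n >= 1:
        P[1, 1] = 1.0
    if n >= 2:
        P[2, 0], P[2, 2] = -2.0, 1.0
    for i in range(3, n + 1):                       # Eq. (21)
        P[i, 0] = -P[i - 2, 0]
        for j in range(1, i + 1):
            P[i, j] = P[i - 1, j - 1] - P[i - 2, j]
    for i in range(1, n + 1):                       # Eqs. (23)-(24)
        C[i, 0] = 2.0 * C[i - 1, 1]
        for j in range(1, i + 1):
            C[i, j] = C[i - 1, j - 1] + C[i - 1, j + 1]
    return P, C[:n + 1, :n + 1]

P, C = build_P_C(8)
print(np.allclose(C @ P, np.eye(9)))   # Eqs. (15)-(16) imply C P = I

# Cross-check row "T8" of P: in the scaled power basis
# (1/2, x, 2x^2, ..., 2^7 x^8) it must reproduce cheb2poly's coefficients.
scale = np.array([0.5] + [2.0**(k - 1) for k in range(1, 9)])
print(np.allclose(P[8] * scale, Cb.cheb2poly([0]*8 + [1])))
```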

The main purpose of this paper is to develop a technique for generating a polynomial approximation for the Gaussian function which, among all polynomial approximations with the same degree, has a very small maximum error. This technique is based on the telescoping procedure of power series proposed by Lanczos [2], and the polynomial fitting of the error in the approximation, with the objective of economizing a sufficiently accurate truncated Maclaurin series of the Gaussian function. The resulting economized series will be used to compute bound state energies associated with the attractive Gaussian potential via the Asymptotic Iteration Method (AIM). Similar computations have been made by Mutuk [15] who applied the AIM to the Gaussian potential using a truncated Maclaurin series to approximate the function $\mathrm{exp}\left(-{r}^{2}\right)$.

The rest of this paper is organized as follows. In Section 2, we apply the procedure of economization to the Gaussian function by using firstly Chebyshev polynomials of the first kind, and secondly the ${T}_{j}^{\ast }\left(x\right)$ polynomials. For each economized series, the exact error is calculated and fitted by a power series having the same degree as the initial non-economized finite power series. The new finite series obtained by adding the approximate error to the associated economized series in turn undergoes the procedure of economization, which leads to a much more efficient economized power series. The originality of our work is precisely the multiple application of the economization method, which alleviates one of the most harmful aspects of the telescoping method, i.e., the low accuracy of the economized series around the origin of expansion [16]. Section 3 contains a brief introduction to the AIM for the Gaussian potential using the economized series obtained in Section 2 to approximate the Gaussian function. We also present and comment on our results concerning bound state energies of the attractive Gaussian potential for a given well depth. We compare them with those given by the exact Hamiltonian diagonalization on a finite basis of Coulomb Sturmian functions. The conclusion is given in Section 4.

2. Gaussian Function Economization

We here consider the Gaussian function of the form

${f}_{G}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)$ (25)

and the interval [−1, 1] for the independent variable x. The Maclaurin series expansion of this function is given by

$\mathrm{exp}\left(-{x}^{2}\right)=\underset{j=0}{\overset{\infty }{\sum }}\frac{{\left(-1\right)}^{j}}{j!}{x}^{2j}.$ (26)

We denote by ${f}_{G}^{\left\{N\right\}}\left(x\right)$ the Nth-degree truncated Maclaurin series of ${f}_{G}\left(x\right)$ and we choose $N=14$. We have:

${f}_{G}^{\left\{14\right\}}\left(x\right)=\underset{j=0}{\overset{7}{\sum }}\frac{{\left(-1\right)}^{j}}{j!}{x}^{2j}$ (27)

Expanding ${f}_{G}^{\left\{14\right\}}\left(x\right)$ in a Chebyshev series, we obtain:

$\begin{array}{c}{f}_{G}^{\left\{14\right\}}\left(x\right)=\frac{739773}{1146880}-\frac{205029}{655360}{T}_{2}\left(x\right)+\frac{114127}{2949120}{T}_{4}\left(x\right)-\frac{18943}{5898240}{T}_{6}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{293}{1474560}{T}_{8}\left(x\right)-\frac{61}{5898240}{T}_{10}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2949120}{T}_{12}\left(x\right)-\frac{1}{41287680}{T}_{14}\left(x\right)\end{array}$ (28)

where we have used the relations given in Equation (7). Of course, if we expand the Chebyshev polynomials back in terms of powers of x, we recover the same polynomial.

Let us truncate the Chebyshev series (28) by neglecting the last two terms, and denote by ${\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)$ the resulting expression, i.e.,

$\begin{array}{c}{\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)=\frac{739773}{1146880}-\frac{205029}{655360}{T}_{2}\left(x\right)+\frac{114127}{2949120}{T}_{4}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{18943}{5898240}{T}_{6}\left(x\right)+\frac{293}{1474560}{T}_{8}\left(x\right)-\frac{61}{5898240}{T}_{10}\left(x\right)\end{array}$ (29)

Replacing each ${T}_{j}\left(x\right)\text{\hspace{0.17em}}\left(j=2,4,\cdots ,10\right)$ by its polynomial form (see Equation (11)), we obtain the tenth-degree economized power series of $\mathrm{exp}\left(-{x}^{2}\right)$ associated with the finite series ${f}_{G}^{\left\{14\right\}}\left(x\right)$, which is a polynomial of degree 14:

$\mathrm{exp}\left(-{x}^{2}\right)\approx {\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)=\frac{2752511}{2752512}-\frac{2949041}{2949120}{x}^{2}+\frac{184201}{368640}{x}^{4}-\frac{15227}{92160}{x}^{6}+\frac{99}{2560}{x}^{8}-\frac{61}{11520}{x}^{10}$ (30)
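Steps 1-4 can be replayed numerically. The following sketch (NumPy assumed; an independent check, not the paper's computation) reproduces the coefficients of Equation (30) and the maximum error quoted below:

```python
import numpy as np
from math import factorial
from numpy.polynomial import chebyshev as Cb

# Step 1: degree-14 Maclaurin series of exp(-x^2)  (Eq. (27))
a = np.zeros(15)
a[0:15:2] = [(-1)**j / factorial(j) for j in range(8)]

# Step 2: expand in a Chebyshev series (Eq. (28))
c = Cb.poly2cheb(a)

# Step 3: drop the T12 and T14 terms (Eq. (29))
c_trunc = c[:11]

# Step 4: back to the power basis (Eq. (30))
e = Cb.cheb2poly(c_trunc)
print(np.allclose(e[10], -61/11520))       # last coefficient of Eq. (30)

x = np.linspace(-1, 1, 2001)
err = np.exp(-x**2) - np.polynomial.polynomial.polyval(x, e)
print(np.abs(err).max())                   # about 2.26e-5, largest at x = +/-1
```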

Figure 1 shows four errors ${E}_{G}^{\left\{10\right\}}\left(x\right)$, ${E}_{G}^{\left\{12\right\}}\left(x\right)$, ${E}_{G}^{\left\{14\right\}}\left(x\right)$ and ${\stackrel{˜}{E}}_{G}^{\left\{10\right\}}\left(x\right)$ in the approximation of $\mathrm{exp}\left(-{x}^{2}\right)$, calculated as the differences between the Gaussian function and the truncated power series ${f}_{G}^{\left\{10\right\}}\left(x\right)$, ${f}_{G}^{\left\{12\right\}}\left(x\right)$, ${f}_{G}^{\left\{14\right\}}\left(x\right)$ and ${\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)$, i.e., ${E}_{G}^{\left\{10\right\}}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)-{f}_{G}^{\left\{10\right\}}\left(x\right)$, ${E}_{G}^{\left\{12\right\}}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)-{f}_{G}^{\left\{12\right\}}\left(x\right)$, ${E}_{G}^{\left\{14\right\}}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)-{f}_{G}^{\left\{14\right\}}\left(x\right)$ and ${\stackrel{˜}{E}}_{G}^{\left\{10\right\}}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)-{\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)$. The definition of the function ${E}_{Ge}^{\left\{10\right\}}\left(x\right)$, whose graph is also shown in Figure 1, will be given below.

Figure 1. Plots of different errors in the approximation of $\mathrm{exp}\left(-{x}^{2}\right)$. (a) Graph of ${E}_{G}^{\left\{10\right\}}\left(x\right)$; (b) Graph of ${E}_{G}^{\left\{12\right\}}\left(x\right)$; (c) Curves of ${\stackrel{˜}{E}}_{G}^{\left\{10\right\}}\left(x\right)$ (solid line) and ${E}_{G}^{\left\{14\right\}}\left(x\right)$ (symbol $\odot$); (d) Graph of ${E}_{Ge}^{\left\{10\right\}}\left(x\right)$.

We see that the tenth-degree economized power series ${\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)$ approximates $\mathrm{exp}\left(-{x}^{2}\right)$ on [−1, 1] better than the tenth-degree Maclaurin series, and nearly as well as the twelfth- and fourteenth-degree Maclaurin series ${f}_{G}^{\left\{12\right\}}\left(x\right)$ and ${f}_{G}^{\left\{14\right\}}\left(x\right)$.

Indeed, its maximum error (at $x=±1$) is $2.26131782×{10}^{-5}$, whereas the error at the same points equals $1.21277450477565×{10}^{-3}$ for ${f}_{G}^{\left\{10\right\}}\left(x\right)$, $-1.7611438411323396×{10}^{-4}$ for ${f}_{G}^{\left\{12\right\}}\left(x\right)$ and $2.229831429946445×{10}^{-5}$ for ${f}_{G}^{\left\{14\right\}}\left(x\right)$. We “economize” in the sense that we get about the same precision with a lower-degree polynomial.

We can obtain a much more efficient economized power series by first adding to the series ${\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)$ the associated error fitted by a high-degree polynomial, and then applying the procedure of economization to the resulting polynomial. To this end, we discretize the problem in the interval [0, 1] and evaluate the function ${\stackrel{˜}{E}}_{G}^{\left\{10\right\}}\left(x\right)$ at the points ${X}_{k}=kh$ (for $k=0,1,\cdots ,p-1$), where p is the number of mesh points and h the step size. This creates two real p-component vectors X and Y such that ${X}_{k}=kh$ and ${Y}_{k}={\stackrel{˜}{E}}_{G}^{\left\{10\right\}}\left({X}_{k}\right)$, $k=0,1,2,\cdots ,p-1$, where ${X}_{k}$ and ${Y}_{k}$ denote the k-th components of the vectors X and Y respectively. We then use the Fit command of the Maple 18 software to construct the (2K)th-degree polynomial ${P}_{2K}\left(x\right)$ of the type $\underset{j=0}{\overset{K}{\sum }}{B}_{j}{x}^{2j}$, $K>5$, that best fits the above set of data points $\left({X}_{k},{Y}_{k}\right)$, $k=0,1,2,\cdots ,p-1$. In Maple, the Fit command fits a model function to given data by minimizing the least-squares error. In the case we are concerned with, the calling sequence is $Fit\left({P}_{2K}\left(x\right),X,Y,x\right)$, where ${P}_{2K}\left(x\right)$ is to be replaced by

$\underset{j=0}{\overset{K}{\sum }}{B}_{j}{x}^{2j}$ and ${B}_{0},{B}_{1},\cdots ,{B}_{K}$ are adjustable parameters to be computed. With K = 7 and p = 101, we find:

$\left\{\begin{array}{l}{B}_{0}=3.629622950984207597588×{10}^{-7}\\ {B}_{1}=-2.673950577493475324288×{10}^{-5}\\ {B}_{2}=3.21731897337007×{10}^{-4}\\ {B}_{3}=-1.4340314605314449260836×{10}^{-3}\\ {B}_{4}=2.95693533428559792089×{10}^{-3}\\ {B}_{5}=-2.95236680605017687949×{10}^{-3}\\ {B}_{6}=1.279508864767511361964×{10}^{-3}\\ {B}_{7}=-1.22788990618675818847685×{10}^{-4}\end{array}$ (31)

Applying the procedure of economization to the fourteenth-degree polynomial ${\stackrel{˜}{f}}_{G}^{\left\{10\right\}}\left(x\right)+{P}_{14}\left(x\right)$, we find a new tenth-degree economized series, which we denote by ${f}_{Ge}^{\left\{10\right\}}\left(x\right)$:

$\begin{array}{c}{f}_{Ge}^{\left\{10\right\}}\left(x\right)=0.9999995697531816685-0.9999686090106535367159{x}^{2}\\ +0.4996268919513314237{x}^{4}-0.1650294823388696094779{x}^{6}\\ +0.03835801149191082515380786{x}^{8}-0.00510734148478025040218{x}^{10}\end{array}$ (32)

The function ${E}_{Ge}^{\left\{10\right\}}\left(x\right)$ defined by the expression

${E}_{Ge}^{\left\{10\right\}}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)-{f}_{Ge}^{\left\{10\right\}}\left(x\right)$ (33)

is shown in Figure 1. It is clear that the series ${f}_{Ge}^{\left\{10\right\}}\left(x\right)$ is more accurate than all the above power series.
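The two-stage procedure can be sketched in Python, with `numpy.linalg.lstsq` standing in for Maple's Fit command (an implementation substitution for illustration, not the paper's actual tool):

```python
import numpy as np
from math import factorial
from numpy.polynomial import chebyshev as Cb, polynomial as Pl

# Stage 1: the economized series of Eq. (30), built as in Steps 1-4
a = np.zeros(15)
a[0:15:2] = [(-1)**j / factorial(j) for j in range(8)]
e1 = Cb.cheb2poly(Cb.poly2cheb(a)[:11])          # degree-10 coefficients

# Fit the residual error on [0, 1] with an even degree-14 polynomial
# (p = 101 mesh points, as in the text)
X = np.linspace(0.0, 1.0, 101)
Y = np.exp(-X**2) - Pl.polyval(X, e1)
design = np.vander(X**2, 8, increasing=True)     # columns 1, x^2, ..., x^14
B = np.linalg.lstsq(design, Y, rcond=None)[0]
print(B[0])                                      # compare with B0 in Eq. (31)

# Stage 2: add the fitted error and economize the degree-14 sum again
total = np.zeros(15)
total[0:15:2] = B
total[:11] += e1
e2 = Cb.cheb2poly(Cb.poly2cheb(total)[:11])      # Eq. (32), numerically

x = np.linspace(-1.0, 1.0, 2001)
err1 = np.abs(np.exp(-x**2) - Pl.polyval(x, e1)).max()
err2 = np.abs(np.exp(-x**2) - Pl.polyval(x, e2)).max()
print(err1, err2)     # the second stage is markedly more accurate
```

The printed errors confirm the claim above: the re-economized degree-10 series is more than an order of magnitude more accurate than the first-stage series of Equation (30).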

So far in this section, we have used the ${T}_{j}\left(x\right)$ polynomials to economize the fourteenth-degree Maclaurin series of the Gaussian function on the domain $-1\le x\le 1$. In what follows, the economization will be done on the interval [0, 1] using the shifted Chebyshev polynomials of the first kind ${T}_{j}^{*}\left(x\right)$. We get

$\begin{array}{l}{f}_{G}^{\left\{14\right\}}\left(x\right)={f}_{G\left(*\right)}^{\left\{14\right\}}\left(x\right)\\ =\frac{24725514565}{33822867546}-\frac{671360027}{2013265920}{T}_{1}^{*}\left(x\right)-\frac{1528406863}{32212254720}{T}_{2}^{*}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{431317481}{24159191040}{T}_{3}^{*}\left(x\right)+\frac{5406881}{16106127360}{T}_{4}^{*}\left(x\right)-\frac{747037}{1610612736}{T}_{5}^{*}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1148881}{96636764160}{T}_{6}^{*}\left(x\right)+\frac{70579}{9395240960}{T}_{7}^{*}\left(x\right)-\frac{4397}{8053063680}{T}_{8}^{*}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1547}{12079595520}{T}_{9}^{*}\left(x\right)-\frac{1}{2147483648}{T}_{10}^{*}\left(x\right)-\frac{7}{8053063680}{T}_{11}^{*}\left(x\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{19}{48318382080}{T}_{12}^{*}\left(x\right)-\frac{1}{24159191040}{T}_{13}^{*}\left(x\right)-\frac{1}{676457349120}{T}_{14}^{*}\left(x\right)\end{array}$ (34)

where the asterisk in brackets in expression ${f}_{G\left(*\right)}^{\left\{14\right\}}\left(x\right)$ refers to the use of the ${T}_{j}^{*}\left(x\right)$ polynomials in the approximation of the Gaussian function ${f}_{G}\left(x\right)$.

It is worth noting that Equation (34) is obtained by using the following expression [14]:

$\left[\begin{array}{c}\frac{1}{2}{\left(4x\right)}^{0}\\ \frac{1}{2}{\left(4x\right)}^{1}\\ ⋮\\ \frac{1}{2}{\left(4x\right)}^{N}\end{array}\right]=\underset{˜}{A}\left[\begin{array}{c}{T}_{0}^{*}\\ {T}_{1}^{*}\\ ⋮\\ {T}_{N}^{*}\end{array}\right]$ (35)

where:

$N=14$

${T}_{0}^{*}=\frac{1}{2}$ and thus the constant 1 is written as $2{T}_{0}^{*}$

$\underset{˜}{A}={\left[{a}_{ij}\right]}_{i,j=0}^{N}$ is an $\left(N+1\right)×\left(N+1\right)$ lower triangular matrix such that [14]

${a}_{ii}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,\cdots ,N;\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{ij}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j>i;$ (36)

${a}_{i+1,0}=2\left({a}_{i0}+{a}_{i1}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,2,\cdots ,N-1$ (37)

and

${a}_{ij}={a}_{i-1,j-1}+2{a}_{i-1,j}+{a}_{i-1,j+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i,j=1,2,\cdots ,N.$ (38)

It follows immediately from Equation (35) that

$\left[\begin{array}{c}{T}_{0}^{*}\\ {T}_{1}^{*}\left(x\right)\\ ⋮\\ {T}_{N}^{*}\left(x\right)\end{array}\right]={\underset{˜}{A}}^{-1}\left[\begin{array}{c}\frac{1}{2}{\left(4x\right)}^{0}\\ \frac{1}{2}{\left(4x\right)}^{1}\\ ⋮\\ \frac{1}{2}{\left(4x\right)}^{N}\end{array}\right]$ (39)
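A small numerical check of Equations (35)-(38) (illustrative; the sample point is arbitrary and the spare matrix column is a coding convenience):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cb

def build_A(N):
    """Matrix of Eqs. (36)-(38) relating (4x)^i/2 to the shifted T_j*."""
    A = np.zeros((N + 2, N + 2))        # spare column simplifies Eq. (38)
    A[0, 0] = 1.0
    for i in range(1, N + 1):
        A[i, 0] = 2.0 * (A[i - 1, 0] + A[i - 1, 1])                  # Eq. (37)
        for j in range(1, i + 1):                                    # Eq. (38)
            A[i, j] = A[i - 1, j - 1] + 2.0 * A[i - 1, j] + A[i - 1, j + 1]
    return A[:N + 1, :N + 1]

N = 8
A = build_A(N)
x = 0.3                                  # any sample point in [0, 1]
# T_j*(x) = T_j(2x - 1), with the paper's convention T_0* = 1/2
Tstar = np.array([0.5] + [Cb.chebval(2*x - 1, [0]*j + [1])
                          for j in range(1, N + 1)])
lhs = np.array([0.5 * (4*x)**i for i in range(N + 1)])
print(np.allclose(A @ Tstar, lhs))       # verifies Eq. (35)
```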

Since $|{T}_{j}^{*}\left(x\right)|\le 1$ $\forall j\in ℕ$ and $x\in \left[0,1\right]$, the last six terms on the right-hand side of Equation (34) are rather tiny in magnitude ( $\approx 1.280672×{10}^{-7}$, $\approx 4.65661287×{10}^{-10}$, $\approx 8.69234403×{10}^{-10}$, $\approx 3.93225087×{10}^{-10}$, $\approx 4.13921144×{10}^{-11}$ and $\approx 1.4782898×{10}^{-12}$ respectively). We can therefore discard these terms (keeping terms up to ${T}_{8}^{*}\left(x\right)$) without risk of appreciable change in the final results, and then re-expand back to a monomial series. Doing this gives the following eighth-degree polynomial:

$\begin{array}{c}{\stackrel{˜}{f}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)=\frac{338228631227}{338228674560}+\frac{10451}{503316480}x-\frac{335730191}{335544320}{x}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1073473}{188743680}{x}^{3}+\frac{1974601}{4194304}{x}^{4}+\frac{330473}{3932160}{x}^{5}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{14501951}{47185920}{x}^{6}+\frac{457969}{3440640}{x}^{7}-\frac{4397}{245760}{x}^{8}\end{array}$ (40)

where the asterisk in brackets refers to the fact that the polynomial ${\stackrel{˜}{f}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)$ results from expanding a series of shifted Chebyshev polynomials of the first kind in a Maclaurin series. We remark that the coefficients of all orders between 0 and 8 are present in the series (40), contrary to the result obtained when economizing with the ${T}_{j}\left(x\right)$ polynomials, which contains only even powers.

Expanding in a shifted Chebyshev series the fourteenth-degree polynomial obtained by adding to the series (40) the associated error, i.e., $\mathrm{exp}\left(-{x}^{2}\right)-{\stackrel{˜}{f}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)$, fitted by a polynomial of degree 14, then truncating the resulting series by keeping terms up to ${T}_{10}^{*}\left(x\right)$ and re-expanding back to a monomial series, we obtain:

$\begin{array}{c}{f}_{Ge\left(*\right)}^{\left\{10\right\}}\left(x\right)=\frac{1921813995872400}{1921813993952917}-\frac{1826255126789}{7435884170023661913}x\\ -\frac{4672693204690532}{4672739946124667}{x}^{2}-\frac{102312372739415}{642352316915332223}{x}^{3}\\ +\frac{2178621924686046}{4345907498707751}{x}^{4}-\frac{256326879832656}{41016598156699859}{x}^{5}\\ -\frac{1351939948987433}{9138237798110217}{x}^{6}-\frac{2017059682950979}{55887252994313927}{x}^{7}\\ +\frac{531462878751634}{6164345560849269}{x}^{8}-\frac{502318618293295}{15052107665351233}{x}^{9}\\ +\frac{246300847073295}{59133395086370608}{x}^{10}\end{array}$ (41)

Note that this expression can be used to approximate $\mathrm{exp}\left(-{x}^{2}\right)$ on $\left[-\text{1},\text{1}\right]$ by multiplying all terms of odd exponents by the sign of the independent variable x.

In Figure 2, we show the plot of the error in the approximation of $\mathrm{exp}\left(-{x}^{2}\right)$ based on the use of Equation (41), together with the plots of ${\stackrel{˜}{E}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)\equiv \mathrm{exp}\left(-{x}^{2}\right)-{\stackrel{˜}{f}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)$, ${E}_{Ge}^{\left\{10\right\}}\left(x\right)$ and ${E}_{G}^{\left\{14\right\}}\left(x\right)$.

Figure 2. Comparison of ${E}_{Ge\left(*\right)}^{\left\{10\right\}}\left(x\right)$ with other errors in the approximation of $\mathrm{exp}\left(-{x}^{2}\right)$. (a) Graph of ${E}_{G}^{\left\{14\right\}}\left(x\right)$; (b) Graph of ${\stackrel{˜}{E}}_{G\left(*\right)}^{\left\{8\right\}}\left(x\right)$; (c) Graph of ${E}_{Ge}^{\left\{10\right\}}\left(x\right)$; (d) Graph of ${E}_{Ge\left(*\right)}^{\left\{10\right\}}\left(x\right)$.

We see that the ${T}_{j}^{*}\left(x\right)$ polynomials economize the Gaussian function $\mathrm{exp}\left(-{x}^{2}\right)$ more efficiently than the ${T}_{j}\left(x\right)$ ones, but there is a price to pay: for a given degree 2p, the ${T}_{j}^{*}\left(x\right)$ polynomials lead to an economized polynomial with 2p + 1 terms, almost twice the $p+1$ terms of the economized power series obtained when the ${T}_{j}\left(x\right)$ polynomials are used.
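Economization with the shifted polynomials can also be replayed numerically. NumPy has no shifted-Chebyshev class, so the sketch below performs the change of variable $x=\left(y+1\right)/2$ of Equation (14) by explicit polynomial composition (the helper `compose` is introduced here for illustration, not taken from the paper):

```python
import numpy as np
from math import factorial
from numpy.polynomial import chebyshev as Cb, polynomial as Pl

def compose(a, q):
    """Coefficients of p(q(x)) for power-basis coefficients a of p (Horner)."""
    out = np.array([a[-1]])
    for coef in a[-2::-1]:
        out = Pl.polyadd(Pl.polymul(out, q), [coef])
    return out

# Degree-14 Maclaurin series of exp(-x^2), to be economized on [0, 1]
a = np.zeros(15)
a[0:15:2] = [(-1)**j / factorial(j) for j in range(8)]

a_y = compose(a, [0.5, 0.5])        # substitute x = (y + 1)/2, y in [-1, 1]
c = Cb.poly2cheb(a_y)               # shifted Chebyshev coefficients, Eq. (34)
e_y = Cb.cheb2poly(c[:9])           # keep terms up to T8*, as in Eq. (40)
e_x = compose(e_y, [-1.0, 2.0])     # back to x via y = 2x - 1

x = np.linspace(0.0, 1.0, 2001)
err = np.abs(np.exp(-x**2) - Pl.polyval(x, e_x)).max()
print(len(e_x), err)   # 9 coefficients: all powers 0..8 are present
```

The leading coefficient agrees with the $-4397/245760$ of Equation (40), and the full set of nine coefficients illustrates the remark above that, unlike the even-only ${T}_{j}\left(x\right)$ result, every power between 0 and 8 appears.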

3. Application to the Asymptotic Iteration Method for the Gaussian Potential

3.1. Basic Equations of the Asymptotic Iteration Method (AIM)

In this subsection, we briefly outline the asymptotic iteration method; the details can be found in [17] and [18].

The AIM was introduced to solve the second-order homogeneous linear differential equations of the form [17] [18] [19]

${y}^{″}\left(x\right)={\lambda }_{0}\left(x\right){y}^{\prime }\left(x\right)+{s}_{0}\left(x\right)y\left(x\right)$ (42)

where ${\lambda }_{0}\left(x\right)\ne 0$ and ${s}_{0}\left(x\right)$ have sufficiently many continuous derivatives in some interval, not necessarily bounded. The differential Equation (42) has the general solution [17] [18]

$y\left(x\right)=\mathrm{exp}\left(-{\int }_{}^{x}\alpha \left(\xi \right)\text{d}\xi \right)\left[{c}_{2}+{c}_{1}{\int }_{}^{x}\mathrm{exp}\left({\int }_{}^{\xi }\left({\lambda }_{0}\left(\eta \right)+2\alpha \left(\eta \right)\right)\text{d}\eta \right)\text{d}\xi \right]$ (43)

where

$\frac{{s}_{k}\left(x\right)}{{\lambda }_{k}\left(x\right)}=\frac{{s}_{k-1}\left(x\right)}{{\lambda }_{k-1}\left(x\right)}\equiv \alpha \left(x\right)$ (44)

for sufficiently large k.

In Equation (44), ${\lambda }_{k}\left(x\right)$ and ${s}_{k}\left(x\right)$ are defined as follows [17] [18]:

${\lambda }_{k}\left(x\right)={{\lambda }^{\prime }}_{k-1}\left(x\right)+{s}_{k-1}\left(x\right)+{\lambda }_{0}\left(x\right){\lambda }_{k-1}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,2,3,\cdots ;$ (45)

${s}_{k}\left(x\right)={{s}^{\prime }}_{k-1}\left(x\right)+{s}_{0}\left(x\right){\lambda }_{k-1}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,2,3,\cdots$ (46)

The convergence (quantization) condition of the method, as given in (44), can also be written as follows [15] [20]:

${\delta }_{k}\left(x\right)={\lambda }_{k}\left(x\right){s}_{k-1}\left(x\right)-{\lambda }_{k-1}\left(x\right){s}_{k}\left(x\right)=0,\quad k=1,2,3,\cdots$ (47)

For a given radial potential such as the Gaussian one, the radial Schrödinger equation is converted to the form of Equation (42). Once this form has been obtained, it is easy to determine ${s}_{0}\left(x\right)$ and ${\lambda }_{0}\left(x\right)$, and to calculate ${s}_{k}\left(x\right)$ and ${\lambda }_{k}\left(x\right)$ by using Equations (45) and (46). The energy eigenvalues are then obtained from the quantization condition given by Equation (47).
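The recursion (45)-(46) and the quantization condition (47) can be checked on a case where the AIM is known to terminate exactly, the one-dimensional harmonic oscillator [17]. The sketch below (using SymPy; the symbol names are our own) transforms $y''+\left(\epsilon -{x}^{2}\right)y=0$ with $y=\mathrm{exp}\left(-{x}^{2}/2\right)f\left(x\right)$ into the form of Equation (42), with ${\lambda }_{0}=2x$ and ${s}_{0}=1-\epsilon$:

```python
import sympy as sp

x, eps = sp.symbols("x epsilon")

# AIM sanity check on the 1D harmonic oscillator y'' + (eps - x^2) y = 0.
# With y = exp(-x^2/2) f(x), the equation takes the form of Eq. (42):
# f'' = 2x f' + (1 - eps) f, i.e. lambda0 = 2x and s0 = 1 - eps.
lam0, s0 = 2 * x, 1 - eps

lam, s = lam0, s0
delta = None
for k in range(1, 6):
    lam_k = sp.expand(sp.diff(lam, x) + s + lam0 * lam)   # Eq. (45)
    s_k = sp.expand(sp.diff(s, x) + s0 * lam)             # Eq. (46)
    delta = sp.expand(lam_k * s - lam * s_k)              # Eq. (47)
    lam, s = lam_k, s_k

# The quantization condition delta_k = 0 reproduces eps_n = 2n + 1.
roots = sorted(sp.solve(delta.subs(x, 1), eps))
print(roots)   # [1, 3, 5, 7, 9, 11]
```

Each further iteration appends the next exact eigenvalue $\epsilon_n = 2n+1$ to the root set of $\delta_k$; for the Gaussian potential, by contrast, the roots of ${\delta }_{k}\left({r}_{0}\right)=0$ only converge towards the eigenvalues as k grows.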

3.2. Asymptotic Iteration Method for Gaussian Potential

We here consider the Gaussian potential of the form

$V\left(r\right)=-A\mathrm{exp}\left(-\lambda {r}^{2}\right),\quad r\in \left[0,+\infty \right)$ (48)

where $A>0$ is the depth of the potential and $\lambda >0$ determines its width. The radial Schrödinger equation (SE) for a particle with mass m that moves in three-dimensional space under the effect of the attractive Gaussian potential (48) can be written as

$\frac{{\text{d}}^{2}{R}_{n\mathcal{l}}\left(r\right)}{\text{d}{r}^{2}}+\left(\epsilon +\stackrel{˜}{A}\mathrm{exp}\left(-{r}^{2}\right)-\frac{\mathcal{l}\left(\mathcal{l}+1\right)}{{r}^{2}}\right){R}_{n\mathcal{l}}\left(r\right)=0$ (49)

where r now denotes the dimensionless variable $\sqrt{\lambda }r$, $\stackrel{˜}{A}=2mA/\left({\hslash }^{2}\lambda \right)$ and $\epsilon =2mE/\left({\hslash }^{2}\lambda \right)$, $E$ being the energy of the particle. Although Equation (49) is linear in ${R}_{n\mathcal{l}}\left(r\right)$, its Gaussian coefficient is not a polynomial, so the equation is not directly amenable to the AIM. In order to solve it via the AIM, we should first replace the Gaussian by a polynomial approximation and then convert the resulting model equation to the form of Equation (42). Mutuk [15] solved Equation (49) for $\lambda =1$ via the AIM by suggesting a wave function of the form

${R}_{n\mathcal{l}}\left(r\right)={r}^{\mathcal{l}+1}\mathrm{exp}\left(-\beta {r}^{2}\right){f}_{n\mathcal{l}}\left(r\right)$ (50)

and making use of the tenth-degree truncated Maclaurin series of $\mathrm{exp}\left(-{r}^{2}\right)$, i.e.,

$\mathrm{exp}\left(-{r}^{2}\right)\approx 1-{r}^{2}+\frac{{r}^{4}}{2}-\frac{{r}^{6}}{6}+\frac{{r}^{8}}{24}-\frac{{r}^{10}}{120}$ (51)

to construct a linear model of this non-linear equation. He obtained a second order linear homogeneous differential equation for the factor ${f}_{n\mathcal{l}}\left(r\right)$ with the general form

$\frac{{\text{d}}^{2}{f}_{n\mathcal{l}}\left(r\right)}{\text{d}{r}^{2}}={\lambda }_{0}\left(r\right)\frac{\text{d}{f}_{n\mathcal{l}}\left(r\right)}{\text{d}r}+{s}_{0}\left(r\right){f}_{n\mathcal{l}}\left(r\right)$ (52)

where

${\lambda }_{0}\left(r\right)=\left(-\frac{2\left(\mathcal{l}+1\right)}{r}+4\beta r\right),$ (53)

${s}_{0}\left(r\right)=\stackrel{˜}{A}\left(\frac{{r}^{10}}{120}-\frac{{r}^{8}}{24}+\frac{{r}^{6}}{6}-\frac{{r}^{4}}{2}+{r}^{2}-1\right)-\epsilon +2\beta \left(2\mathcal{l}-2\beta {r}^{2}+3\right)$ (54)

We emphasize that in the AIM, the energy eigenvalues are calculated from the quantization condition given by Equation (47). At each iteration, this equation depends on two variables, $\epsilon$ and r. The eigenvalues calculated by means of ${\delta }_{k}\left(r\right)=0$ should, however, be independent of the choice of r, and this is indeed the case for most iteration sequences. Nevertheless, the choice of r can be critical to the speed of convergence of the eigenvalues, as well as to the stability of the process [17] [20]. A suitable choice of r is the one that minimizes the potential or maximizes the radial wave function given by Equation (50); for the attractive Gaussian potential we adopt the latter.

In the AIM, the wave function can be written as

$R\left(r\right)=f\left(r\right)g\left(r\right),$ (55)

where $f\left(r\right)$ represents the asymptotic behavior. In our case, $f\left(r\right)={r}^{\mathcal{l}+1}\mathrm{exp}\left(-\beta {r}^{2}\right)$. Hence, we have taken ${r}_{0}=\sqrt{\mathcal{l}+1}/\sqrt{2\beta }$, which is the value of r that maximizes this factor of the wave function. $\beta$ is an arbitrary parameter that controls the convergence.

The convergence of the eigenvalues for $\beta =5$, $\beta =10$, $\beta =15$, $\beta =20$ and $\beta =25$ is reported in Table 1, where we compute the eigenvalue associated with $n=0$ and $\mathcal{l}=0$ by means of the AIM, using the Maple 18 symbolic computation software. The eigenvalues converge for all five values of $\beta$ whatever the method used to approximate the Gaussian potential. This is contrary to the results obtained by Mutuk [15], in which the eigenvalues associated with $\beta =25$ start to diverge when the iteration number exceeds 25. We think that the discrepancy between our results and Mutuk's for large values of $\beta$ is

Table 1. The convergence of the eigenvalues of the attractive Gaussian potential for different β values and various approximations of $\mathrm{exp}\left(-{r}^{2}\right)$ with $n=0$ and $\mathcal{l}=0$. k is the iteration number. Potential parameters are $A=400$ atomic units (a.u.) and $\lambda =1$.

due to the fact that, in our implementation of the AIM, the precision level was set to 50 digits, so that our results were computed with higher precision. It is also clear from Table 1 that the approximation of $\mathrm{exp}\left(-{r}^{2}\right)$ based on the ${T}_{j}^{\ast }\left(r\right)$ polynomials has the advantage that the energies converge significantly faster towards the accurate value of ${E}_{00}$, i.e., −341.8952145612, than those calculated using the two other approximations.
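The reference value of ${E}_{00}$ quoted above can be cross-checked independently of the AIM by a direct finite-difference diagonalization of Equation (49) for $\mathcal{l}=0$ and $\lambda =1$, reading the quoted energy as the dimensionless eigenvalue $\epsilon$ of Equation (49) (i.e., units with $\hslash =2m=1$). The sketch below is not the SGM used in the paper, and the grid parameters are our own assumptions:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference check of Eq. (49) for l = 0, lambda = 1:
# -R'' - Atilde*exp(-r^2) R = eps R, with Atilde = 400 a.u.
Atilde = 400.0
r_max, N = 8.0, 3200                  # assumed box size and grid resolution
h = r_max / N
r = h * np.arange(1, N)               # interior grid; R(0) = R(r_max) = 0
diag = 2.0 / h**2 - Atilde * np.exp(-r**2)
off = -np.ones(N - 2) / h**2
# lowest eigenvalue of the discretized (tridiagonal) Hamiltonian
eps0 = eigh_tridiagonal(diag, off, eigvals_only=True,
                        select="i", select_range=(0, 0))[0]
print(eps0)                           # close to -341.8952145612
```

The second-order finite-difference error here is of order $h^{2}$, so the ground-state eigenvalue agrees with the quoted value to a few parts in $10^{3}$ on this grid.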

Table 2 presents the results for a few values of n and $\mathcal{l}$ computed by means of 50 iterations, using Equations (32) and (41) to approximate $\mathrm{exp}\left(-{r}^{2}\right)$ (third and fourth columns). The energy eigenvalues are obtained with $\beta =10$ because, for this value of $\beta$, the solutions are in many cases very close to their converged values after a few iterations. Our results are compared with those of Mutuk [15] (second column) and with those calculated numerically by the spectral Galerkin method (SGM) [10] [21] [22] [23], based on expanding the radial wave function on a finite basis of Coulomb Sturmian functions defined by [24] [25]:

${S}_{n,l}^{\kappa }\left(r\right)={N}_{n,l}^{\kappa }{r}^{l+1}{\text{e}}^{-\kappa r}{L}_{n-l-1}^{2l+1}\left(2\kappa r\right)$ (56)

Table 2. Comparison of the energy eigenvalues of the Gaussian potential in a.u. obtained by using AIM for various approximations of $\mathrm{exp}\left(-{r}^{2}\right)$ with those calculated by means of the SGM for different values of n and $\mathcal{l}$. We have chosen ${N}_{s}=500$ as the number of Coulomb Sturmian functions and 0.75 as the value of $\kappa$.

where ${L}_{m}^{\alpha }\left(x\right)$ denotes the associated Laguerre polynomial and n the principal quantum number. The normalization constant ${N}_{n,l}^{\kappa }$, given by

${N}_{n,l}^{\kappa }=\sqrt{\frac{\kappa }{n}}{\left(2\kappa \right)}^{l+1}{\left[\frac{\left(n-l-1\right)!}{\left(n+l\right)!}\right]}^{1/2}$ (57)

is obtained from the normalization condition ${\int }_{0}^{\infty }{\left[{S}_{n,l}^{\kappa }\left(r\right)\right]}^{\ast }{S}_{n,l}^{\kappa }\left(r\right)\text{d}r=1$.
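Equations (56) and (57) are straightforward to verify numerically. The following sketch (the function name `sturmian` is our own) checks the normalization integral for a few $\left(n,l\right)$ pairs with $\kappa =0.75$:

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre


def sturmian(n, l, kappa, r):
    """Coulomb Sturmian S_{n,l}^kappa(r) of Eqs. (56)-(57)."""
    norm = (math.sqrt(kappa / n) * (2 * kappa) ** (l + 1)
            * math.sqrt(math.factorial(n - l - 1) / math.factorial(n + l)))
    return (norm * r ** (l + 1) * np.exp(-kappa * r)
            * eval_genlaguerre(n - l - 1, 2 * l + 1, 2 * kappa * r))


# The normalization integral of Eq. (57) should equal 1 for every (n, l).
kappa = 0.75
for n, l in [(1, 0), (3, 1), (5, 2)]:
    integral, _ = quad(lambda r: sturmian(n, l, kappa, r) ** 2, 0, np.inf)
    print(n, l, integral)   # each integral should be 1 to quadrature accuracy
```

The same routine could serve as a starting point for building the SGM basis, although assembling the Hamiltonian matrix elements is beyond this sketch.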

Note that spectral methods have the property of “exponential convergence” with respect to the size of the basis, which makes them more accurate than local methods. Unlike finite difference methods, spectral methods are global methods, where the computation at any given point depends not only on information at neighboring points, but also on information from the entire domain. We note that the results associated with ${f}_{Ge\left(*\right)}^{\left\{10\right\}}\left(r\right)$ approach the numerical eigenvalues reasonably well for all considered values of n and $\mathcal{l}$.

4. Conclusion

In this work, we have applied the procedure of economization to the Gaussian function ${f}_{G}\left(x\right)=\mathrm{exp}\left(-{x}^{2}\right)$ by using Chebyshev polynomials of the first kind on the one hand and the shifted ones on the other, with an application to the solution of the radial Schrödinger equation for the attractive Gaussian well via the asymptotic iteration method (AIM). We have seen that the use of the ${T}_{j}^{\ast }\left(x\right)$ polynomials leads to a more efficient economized power series of $\mathrm{exp}\left(-{x}^{2}\right)$, which can be used to model the radial Schrödinger equation for the Gaussian potential with a second-order linear differential equation with polynomial coefficients that is solvable by means of the AIM.

Cite this paper: Nyengeri, H. , Manariyo, B. , Nizigiyimana, R. and Mugisha, S. (2020) Application of the Economization of Power Series to Solving the Schrödinger Equation for the Gaussian Potential via the Asymptotic Iteration Method. Open Access Library Journal, 7, 1-17. doi: 10.4236/oalib.1106505.
References

[1]   Bekir, E. (2019) Efficient Chebyshev Economization for Elementary Functions. Communications Faculty of Sciences University of Ankara Series A2-A3, 61, 33-56.
http://communications.science.ankara.edu.tr/index.php?series=A2-A3

[2]   Lanczos, C. (1956) Applied Analysis. Prentice-Hall, Inc., Englewood Cliffs.

[3]   Conte, S.D. and de Boor, C. (1980) Elementary Numerical Analysis: An Algorithmic Approach. Third Edition, McGraw-Hill, Inc., New York.

[4]   Atkinson, K. and Han, W. (2004) Elementary Numerical Analysis. Third Edition, John Wiley & Sons, Inc., Iowa City.

[5]   Guilpin, Ch. (1999) Manuel de Calcul Numérique Appliqué. EDP Sciences.

[6]   Spanier, J. and Oldham, K.B. (1987) An Atlas of Functions. Hemisphere Publication Corporation/Springer-Verlag, New York.

[7]   Mason, J.C. and Handscomb, D.C. (2002) Chebyshev Polynomials. First Edition, Chapman and Hall/CRC, London. https://doi.org/10.1201/9781420036114

[8]   Horner, J.S. (1977) Chebyshev Polynomials in the Solution of Ordinary and Partial Differential Equations. Doctor of Philosophy Thesis, Department of Mathematics, University of Wollongong, Wollongong. http://ro.uow.edu.au/theses/1543

[9]   Hamming, R.W. (1973) Numerical Methods for Scientists and Engineers. Second Edition, Dover Publications, Inc., New York.

[10]   Fletcher, C.A.J. (1984) Computational Galerkin Methods. Springer-Verlag, New York.
https://doi.org/10.1007/978-3-642-85949-6_2

[11]   Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007) Numerical Recipes: The Art of Scientific Computing. Third Edition, Cambridge University Press, New York.

[12]   Unruh, P.F. (1968) Chebyshev Approximations. Master’s Report, Kansas State University, Manhattan. https://ia800706.us.archive.org/25/items/chebyshevapproxi00unru/chebyshevapproxi00unru.pdf

[13]   Mudde, M.H. (2017) Chebyshev Approximation. Master Thesis, University of Groningen, Faculty of Science and Engineering, Groningen.

[14]   López-Bonilla, J., Ramírez-García, E. and Sasa-Caraveo, C. (2010) Power Expansion in Terms of Shifted Chebyshev-Lanczos Polynomials. Revista Notas de Matemática, 6, 18-22.

[15]   Mutuk, H. (2019) Asymptotic Iteration and Variational Methods for Gaussian Potential. Pramana—Journal of Physics, 92, 66. https://doi.org/10.1007/s12043-019-1729-z

[16]   Ralston, A. and Rabinowitz, P. (2001) A First Course in Numerical Analysis. Second Edition, Dover Publications, Mineola.

[17]   Ciftci, H., Hall, R.L. and Saad, N. (2003) Asymptotic Iteration Method for Eigenvalue Problems. Journal of Physics A: Mathematical and General, 36, 11807-11816.
https://doi.org/10.1088/0305-4470/36/47/008

[18]   Ciftci, H., Hall, R.L. and Saad, N. (2005) Construction of Exact Solutions to Eigenvalue Problems by Asymptotic Iteration Method. Journal of Physics A: Mathematical and General, 38, 1147-1155.
https://doi.org/10.1088/0305-4470/38/5/015

[19]   Ciftci, H., Hall, R.L. and Saad, N. (2005) Perturbation Theory in a Framework of Iteration Methods. Physics Letters A, 340, 388-396. https://doi.org/10.1016/j.physleta.2005.04.030

[20]   Karakoc, M. and Boztosun, I. (2006) Accurate Iterative and Perturbative Solutions of the Yukawa Potential. International Journal of Modern Physics E, 15, 1253-1262.
https://doi.org/10.1142/S0218301306004806

[21]   Mortensen, M. (2017) Shenfun-Automating the Spectral Galerkin Method. In: Skallerud, B.H. and Anderson, H.I., Eds., Ninth National Conference on Computational Mechanics, International Center for Numerical Methods in Engineering (CIMNE), 273-298.

[22]   Shen, J. (1994) Efficient Spectral Galerkin Method I. Direct Solvers of Second- and Fourth-Order Equations Using Legendre Polynomials. SIAM Journal of Scientific Computing, 15, 1489-1505.
https://doi.org/10.1137/0915089

[23]   Shen, J. (1995) Efficient Spectral-Galerkin Method II. Direct Solvers of Second- and Fourth-Order Equations Using Chebyshev Polynomials. SIAM Journal on Scientific Computing, 16, 74-87.
https://doi.org/10.1137/0916006

[24]   Pont, M., Proulx, D. and Shakeshaft, R. (1991) Numerical Integration of Time-Dependent Schrödinger Equation for an Atom in a Radiation Field. Physical Review A, 44, 4486-4492.
https://doi.org/10.1103/PhysRevA.44.4486

[25]   Nyengeri, H., Nizigiyimana, R., Ndenzako, E., Bigirimana, F., Niyonkuru, D. and Girukwishaka, A. (2018) Application of the Fröbenius Method to the Schrödinger Equation for a Spherically Symmetric Hyperbolic Potential. Open Access Library Journal, 5, e4950.
https://doi.org/10.4236/oalib.1104950
