The easiest and most obvious way to obtain a polynomial approximation to a given function $f(x)$ is to use a truncated Taylor series of the form $f(x) \approx \sum_{k=0}^{N} \frac{f^{(k)}(0)}{k!}\,x^k$ or, more generally, $f(x) \approx \sum_{k=0}^{N} \frac{f^{(k)}(a)}{k!}\,(x-a)^k$. In this truncation method, the more terms are retained, the higher the accuracy of the approximation.
However, this method suffers from an uneven distribution of errors in the approximation: the closer the evaluated point is to the origin of expansion, the higher the accuracy, and vice versa. This means that, for a desired level of accuracy, points far from the origin require substantially more terms than those close to the origin of expansion. For computational purposes, however, it may be undesirable to require as many as $N+1$ terms when N is large. Indeed, it may be unnecessary to use more than a few terms, especially if interest in the function is restricted to a small range of the argument.
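This uneven error distribution is easy to observe numerically. The sketch below (using exp(x) purely as an illustration, not the function studied in this paper) compares the error of one and the same 6-term truncation near and far from the origin:

```python
import math

def taylor_exp(x, n_terms):
    # Partial sum of the Maclaurin series of exp(x): sum_{k=0}^{n_terms-1} x^k / k!
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# Error of the same 6-term truncation at a point near the origin and at x = 1
err_near = abs(math.exp(0.1) - taylor_exp(0.1, 6))
err_far = abs(math.exp(1.0) - taylor_exp(1.0, 6))
print(err_near, err_far)
```

The error at x = 1 is several orders of magnitude larger than the error at x = 0.1, even though the same number of terms is used at both points.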
The powers of a variable x originally appeared in purely algebraic problems. With the development of calculus, the great importance of power expansions became evident. The expansion discovered by Taylor in 1715 and by Maclaurin in 1742 allows the evolution of a function to be predicted from its value and all its derivatives at one particular point. The "Taylor series" thus became one of the cornerstones of analytical research and was particularly useful in establishing the existence of solutions of differential equations. It should be recalled, however, that the Taylor expansion suffers from slow convergence for points far from the origin of expansion. This problem can be alleviated by using minimization methods such as the least-squares (LS) algorithm. In this case, the function is approximated by a finite-degree polynomial $p_n(x) = \sum_{k=0}^{n} c_k x^k$
whose coefficients are selected such that
$$E = \int_a^b w(x)\,[f(x) - p_n(x)]^2\,dx \qquad (1)$$
is minimum, where $w(x)$ is an arbitrary weighting function and $[a, b]$ is the interval in which the function is approximated. The minimization in Equation (1) yields
$$\sum_{k=0}^{n} c_k \int_a^b w(x)\,x^{j+k}\,dx = \int_a^b w(x)\,f(x)\,x^j\,dx, \qquad j = 0, 1, \ldots, n. \qquad (2)$$
We have to point out that the system of Equations (2) is difficult to solve because it requires the computation of a full two-dimensional matrix. The reason is that the function is approximated with a non-orthogonal power-series basis. This can be avoided if the function is approximated with an orthogonal basis. That is, if the orthogonal basis is given by $\{\varphi_k(x)\}$, then the coefficients are determined by
$$c_k = \frac{\int_a^b w(x)\,f(x)\,\varphi_k(x)\,dx}{\int_a^b w(x)\,\varphi_k^2(x)\,dx}.$$
Using an orthogonal basis causes the off-diagonal terms to vanish, and can occasionally lead to the so-called "economized power series". As a side note, we should indicate that much attention has also been paid to the problem of inventing methods of summing a series in such a way that it becomes convergent, although the original series, if added term by term, increases to infinity.
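The practical difference between the two bases can be illustrated as follows (a minimal sketch: with $w(x)=1$ on [0, 1], the Gram matrix of the monomials is the notoriously ill-conditioned Hilbert matrix, whereas shifted Legendre polynomials give a numerically diagonal Gram matrix):

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 8
# Gram matrix of the power basis on [0, 1] with w(x) = 1:
# G[j, k] = integral of x^(j+k) over [0, 1] = 1/(j+k+1), i.e. the Hilbert
# matrix, whose condition number explodes with the degree.
G = np.array([[1.0 / (j + k + 1) for k in range(n)] for j in range(n)])
cond_power = np.linalg.cond(G)

# Gram matrix of the shifted Legendre polynomials P_j(2x - 1) on the same
# interval, computed by trapezoidal quadrature: it is (numerically) diagonal,
# so each coefficient decouples as in the orthogonal-basis formula.
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def trap(f):
    # simple trapezoidal rule on the uniform grid x
    return np.sum(0.5 * (f[:-1] + f[1:])) * dx

P = [L.legval(2 * x - 1, [0] * j + [1]) for j in range(n)]
G_orth = np.array([[trap(P[j] * P[k]) for k in range(n)] for j in range(n)])
off_diag = np.max(np.abs(G_orth - np.diag(np.diag(G_orth))))
print(cond_power, off_diag)
```

The huge condition number of the monomial Gram matrix is precisely why solving Equations (2) directly is numerically fragile, while the orthogonal system is trivially diagonal.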
Economization of power series is a procedure that replaces a very accurate (or even exact) polynomial approximation of degree N by an "economized" polynomial of a smaller degree n such that, in the range of interest, the absolute error introduced by the replacement is less than some acceptable value E:
The procedure of economization, or telescoping as it is sometimes called, is accomplished by exploiting the properties of Chebyshev polynomials of the first kind, among which is the minimax property. According to the minimax principle, Chebyshev approximations are those polynomial approximations which minimize the maximum error.
We have to emphasize that the economization algorithm has several distinct phases. More precisely, the economization of a power series has four basic steps:
Step 1. Expand $f(x)$ in a Taylor series valid on the interval $[-1, 1]$. Truncate this series to obtain a polynomial $f_N(x) = \sum_{k=0}^{N} a_k x^k$ which approximates $f(x)$ within a prescribed tolerance E for all x in $[-1, 1]$.
Step 2. Expand $f_N(x)$ in a Chebyshev series, $f_N(x) = \sum_{k=0}^{N} b_k T_k(x)$, making use of the matrix equation (7), which expresses the powers of x in terms of the Chebyshev polynomials $T_k(x)$.
Step 3. Truncate this Chebyshev series to a smaller number of terms by retaining only the first $n + 1$ terms, choosing n so that the maximum error introduced by the truncation, which is bounded by $\sum_{k=n+1}^{N} |b_k|$ since $|T_k(x)| \le 1$ on $[-1, 1]$, is acceptable; this yields the small Chebyshev series $\sum_{k=0}^{n} b_k T_k(x)$.
Step 4. Replace each Chebyshev polynomial by its polynomial form, making use of the matrix equation (11); this leads to the final economized polynomial of degree n.
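Steps 2-4 can be sketched with NumPy's Chebyshev conversion utilities (the names `economize` and `n_keep`, and the cosine test function, are illustrative assumptions, not from the paper):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as Ch
from numpy.polynomial import polynomial as Pl

def economize(power_coeffs, n_keep):
    # Step 2: convert the power series to a Chebyshev series.
    cheb = Ch.poly2cheb(power_coeffs)
    # Step 3: the dropped tail bounds the extra error, since |T_k(x)| <= 1 on [-1, 1].
    bound = np.sum(np.abs(cheb[n_keep:]))
    # Step 4: convert the retained Chebyshev terms back to a power series.
    return Ch.cheb2poly(cheb[:n_keep]), bound

# Illustration: economize the degree-8 Maclaurin polynomial of cos(x).
a = np.zeros(9)
for n in range(5):
    a[2 * n] = (-1) ** n / math.factorial(2 * n)
econ, bound = economize(a, 7)   # keep T_0 .. T_6: a degree-6 polynomial
x = np.linspace(-1.0, 1.0, 1001)
extra_err = np.max(np.abs(Pl.polyval(x, a) - Pl.polyval(x, econ)))
print(len(econ) - 1, bound, extra_err)
```

The measured extra error never exceeds the tail bound of Step 3, which is what makes the truncation safe.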
If necessary in Step 1, i.e., when we have an interval $[a, b]$ other than $[-1, 1]$, make a transformation of the independent variable so that the expansion is valid on that interval, by means of the expression $t = \frac{2x - (a + b)}{b - a}$, which maps $[a, b]$ onto $[-1, 1]$.
In this case, it is necessary to change the variable back to x after Step 4, making use of the inverse expression $x = \frac{(b - a)t + (a + b)}{2}$.
For the special domain $[0, 1]$, we can write $t = 2x - 1$.
On this domain, the Chebyshev polynomials are denoted $T_n^*(x)$ and defined by $T_n^*(x) = T_n(2x - 1)$ for $x \in [0, 1]$ and $n \ge 0$. They are called shifted Chebyshev polynomials of the first kind.
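This definition is easy to check numerically (a small sketch using NumPy's `chebval` to evaluate $T_n$):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

# Shifted Chebyshev polynomials of the first kind: T*_n(x) = T_n(2x - 1) on [0, 1].
x = np.linspace(0.0, 1.0, 501)

# T*_1(x) = T_1(2x - 1) = 2x - 1
T1_shifted = Ch.chebval(2 * x - 1, [0, 1])
assert np.allclose(T1_shifted, 2 * x - 1)

for n in range(8):
    Tn_shifted = Ch.chebval(2 * x - 1, [0] * n + [1])
    # Like T_n on [-1, 1], each T*_n is bounded by 1 in modulus on [0, 1].
    assert np.max(np.abs(Tn_shifted)) <= 1.0 + 1e-12
print("shifted Chebyshev checks passed")
```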
Note that Equations (7) and (11) can, in general, be summarized as $\mathbf{X} = P\,\mathbf{T}$ and $\mathbf{T} = C\,\mathbf{X}$, where:
・ $\mathbf{X}$ and $\mathbf{T}$ are the $(N+1)$-element vectors $\mathbf{X} = (1, x, x^2, \ldots, x^N)^{\mathrm{T}}$ and $\mathbf{T} = (T_0(x), T_1(x), \ldots, T_N(x))^{\mathrm{T}}$;
・ P and C are lower triangular matrices such that $P = C^{-1}$.
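The matrices P and C can be generated row by row with NumPy's conversion routines (an illustrative sketch, assuming the convention $\mathbf{T} = C\,\mathbf{X}$ and $\mathbf{X} = P\,\mathbf{T}$; the size N is arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

N = 7  # work with polynomials up to degree N - 1
C_mat = np.zeros((N, N))   # row n: power-series coefficients of T_n
P_mat = np.zeros((N, N))   # row n: Chebyshev coefficients of x^n
for n in range(N):
    unit = [0] * n + [1]   # coefficient vector selecting the degree-n basis element
    C_mat[n, : n + 1] = Ch.cheb2poly(unit)
    P_mat[n, : n + 1] = Ch.poly2cheb(unit)
print(C_mat)
```

Both matrices come out lower triangular, and multiplying them recovers the identity, i.e., each is the inverse of the other.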
The main purpose of this paper is to develop a technique for generating a polynomial approximation to the Gaussian function which, among all polynomial approximations of the same degree, has a very small maximum error. This technique is based on the telescoping procedure of power series proposed by Lanczos, and on polynomial fitting of the error in the approximation, with the objective of economizing a sufficiently accurate truncated Maclaurin series of the Gaussian function. The resulting economized series will be used to compute bound-state energies associated with the attractive Gaussian potential via the Asymptotic Iteration Method (AIM). Similar computations have been made by Mutuk, who applied the AIM to the Gaussian potential using a truncated Maclaurin series to approximate the Gaussian function.
The rest of this paper is organized as follows. In Section 2, we apply the procedure of economization to the Gaussian function by using firstly Chebyshev polynomials of the first kind and secondly their shifted counterparts. For each economized series, the exact error is calculated and fitted by a power series having the same degree as the initial non-economized finite power series. The new finite series obtained by adding the approximate error to the associated economized series in turn undergoes the procedure of economization, which leads to a much more efficient economized power series. The originality of our work is precisely this repeated application of the economization method, which alleviates one of the most harmful aspects of the telescoping method, i.e., the low accuracy of the economized series around the origin of expansion. Section 3 contains a brief introduction to the AIM for the Gaussian potential, using the economized series obtained in Section 2 to approximate the Gaussian function. We also present and comment on our results concerning bound-state energies of the attractive Gaussian potential for a given well depth, and compare them with those given by exact Hamiltonian diagonalization on a finite basis of Coulomb Sturmian functions. The conclusion is given in Section 4.
2. Gaussian Function Economization
We here consider the Gaussian function of the form $f(x) = e^{-x^2}$ and the interval [−1, 1] for the independent variable x. The Maclaurin series expansion of this function is given by $e^{-x^2} = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,x^{2n}$.
We denote by $f_N(x)$ the Nth-degree truncated Maclaurin series of $f(x)$, and we choose N = 14. We have:
Expanding this truncated series in a Chebyshev series, we obtain:
where we have used the relations given in Equation (7). Of course, if we expand the Chebyshev polynomials back in terms of powers of x, we recover the same polynomial.
Let us truncate the Chebyshev series (28) by neglecting the last two terms, and denote the resulting expression as follows:
Replacing each Chebyshev polynomial by its polynomial form (see Equation (11)), we obtain the tenth-degree economized power series of the Gaussian function associated with the fourteenth-degree finite series:
Figure 1 shows four errors in the approximation of the Gaussian function, calculated as the differences between the Gaussian function and, respectively, the tenth-, twelfth- and fourteenth-degree truncated Maclaurin series and the tenth-degree economized power series. The definition of the function whose graph is also shown in Figure 1 will be given below.
We see that the tenth-degree economized power series approximates the Gaussian function on [−1, 1] better than the tenth-degree Maclaurin series, and nearly as well as the twelfth- and fourteenth-degree Maclaurin series.
Figure 1. Plots of different errors in the approximation of the Gaussian function: panels (a)-(d) show the errors of the approximations discussed in the text; panel (c) compares two of them (solid line and symbols).
Indeed, its maximum error is much smaller than that of the tenth-degree Maclaurin series and close to those of the twelfth- and fourteenth-degree series. We "economize" in the sense that we obtain about the same precision with a lower-degree polynomial.
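This behavior can be reproduced in a few lines, assuming the Gaussian is $e^{-x^2}$ (the code below is an illustrative NumPy sketch, not the paper's Maple computation):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as Ch
from numpy.polynomial import polynomial as Pl

# Degree-14 Maclaurin polynomial of exp(-x^2) (assumed form of the Gaussian):
# sum over n = 0..7 of (-1)^n x^(2n) / n!
a14 = np.zeros(15)
for n in range(8):
    a14[2 * n] = (-1) ** n / math.factorial(n)

cheb = Ch.poly2cheb(a14)          # expand in a Chebyshev series
econ10 = Ch.cheb2poly(cheb[:11])  # drop the T_12 and T_14 terms, back to powers

x = np.linspace(-1.0, 1.0, 2001)
g = np.exp(-x ** 2)
err10_maclaurin = np.max(np.abs(g - Pl.polyval(x, a14[:11])))
err14_maclaurin = np.max(np.abs(g - Pl.polyval(x, a14)))
err10_econ = np.max(np.abs(g - Pl.polyval(x, econ10)))
print(err10_maclaurin, err14_maclaurin, err10_econ)
```

The degree-10 economized series is dramatically more accurate than the degree-10 Maclaurin series, and nearly matches the degree-14 one.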
We have to add that a much more efficient economized power series can be obtained by first adding to the economized series its associated error, fitted by a high-degree polynomial, and then applying the procedure of economization to the resulting polynomial. To this end, we discretize the problem on the interval [0, 1] and evaluate the function at the points $x_k = (k-1)h$ (for $k = 1, \ldots, p$), where p is the number of mesh points and h the step size, thus creating two p-component real vectors X and Y whose k-th components are $X_k = x_k$ and $Y_k$, the value of the function at $x_k$. We then appeal to the Maple 18 software (the Fit command) to construct the (2K)th-degree polynomial of the form $\sum_{k=0}^{K} c_k x^{2k}$ that best fits the above set of data points. It is worth noting that in Maple, the Fit command fits a model function to given data by minimizing the least-squares error; the coefficients $c_k$ are the adjustable parameters to be computed. With K = 7 and p = 101, we find:
Applying the procedure of economization to this fourteenth-degree polynomial, we find a new tenth-degree economized series:
The error function defined by the expression
is shown in Figure 1. It is clear that this new series is more accurate than all the preceding power series.
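The even-power least-squares fit described above can be mimicked with a plain linear solve (a sketch of the K = 7, p = 101 setup; sampling the Gaussian itself here is an assumption made purely for illustration, in place of the paper's error data):

```python
import numpy as np

# Hypothetical stand-in for the Maple Fit call described in the text:
# fit the even-power model sum_{k=0}^{K} c_k x^(2k) to sampled data by
# linear least squares.
K, p = 7, 101
x = np.linspace(0.0, 1.0, p)        # X_k = (k - 1) h with h = 1 / (p - 1)
y = np.exp(-x ** 2)                 # assumed samples; the paper fits error data
A = np.column_stack([x ** (2 * k) for k in range(K + 1)])   # design matrix
c, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.max(np.abs(A @ c - y))
print(c[:3], residual)
```

The least-squares solve returns the eight adjustable coefficients at once; the residual at the mesh points is far below the accuracy targets discussed in the text.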
So far in this section, we have used the $T_n$ polynomials to economize the fourteenth-degree Maclaurin series of the Gaussian function on the domain $[-1, 1]$. In what follows, the economization will be done on the interval [0, 1] using shifted Chebyshev polynomials of the first kind $T_n^*$. We get
where the asterisk in brackets in the expression refers to the use of the shifted polynomials $T_n^*$ in the approximation of the Gaussian function.
It is worth noting that Equation (34) is obtained by using the matrix relation (35), $\mathbf{X} = P^*\,\mathbf{T}^*$, where:
・ $\mathbf{T}^*$ is the vector $(T_0^*(x), T_1^*(x), \ldots, T_N^*(x))^{\mathrm{T}}$, and thus
・ $P^*$ is a lower triangular matrix playing, for the shifted polynomials, the role that P plays for the $T_n$.
It follows immediately from Equation (35) that
Since $|T_n^*(x)| \le 1$ on [0, 1] and the higher-order coefficients are small, the last six terms on the right-hand side of Equation (34) are rather tiny in magnitude. We can therefore chop off these terms (keeping terms up to $T_8^*$) without risk of appreciable change in the final results, and then re-expand back into a monomial series. Doing this gives the following eighth-degree polynomial:
where the asterisk in brackets refers to the fact that the polynomial results from expanding a series of shifted Chebyshev polynomials of the first kind into a Maclaurin-type series. We remark that coefficients of all orders between 0 and 8 are present in the series (40), contrary to the result obtained by performing the economization with the $T_n$ polynomials.
Expanding in a shifted Chebyshev series the fourteenth-degree polynomial obtained by adding to the series (40) its associated error, fitted by a polynomial of degree 14, and then truncating the resulting series by keeping terms up to $T_8^*$, we obtain, after re-expanding back into a monomial series:
Note that this expression, established on [0, 1], can be used to approximate the Gaussian function on [−1, 1] by multiplying all terms of odd exponent by the sign of the independent variable x.
In Figure 2, we show the plot of the error in the approximation of the Gaussian function based on the use of Equation (41), together with the plots of the errors considered previously. We see that the shifted polynomials $T_n^*$ economize the Gaussian function more efficiently than
Figure 2. Comparison of the error of the approximation (41) with the other errors in the approximation of the Gaussian function (panels (a)-(d)).
the $T_n$ ones, but there is a price to pay: for a given degree 2p, the $T_n^*$ polynomials lead to an economized polynomial whose number of terms, i.e., 2p + 1, is almost twice the number of terms in the economized power series obtained when the $T_n$ polynomials are used, which is exactly p + 1.
3. Application to the Asymptotic Iteration Method for the Gaussian Potential
3.1. Basic Equations of the Asymptotic Iteration Method (AIM)
In this subsection, we briefly outline the asymptotic iteration method; the details can be found in the references.
The AIM was introduced to solve second-order homogeneous linear differential equations of the form
$$y'' = \lambda_0(x)\,y' + s_0(x)\,y, \qquad (42)$$
where $\lambda_0(x)$ and $s_0(x)$ have sufficiently many continuous derivatives in some interval, not necessarily bounded. The differential Equation (42) has a general solution
$$y(x) = \exp\!\left(-\int^x \alpha(t)\,dt\right)\left[C_2 + C_1 \int^x \exp\!\left(\int^t \big(\lambda_0(\tau) + 2\alpha(\tau)\big)\,d\tau\right) dt\right] \qquad (43)$$
provided that
$$\frac{s_k(x)}{\lambda_k(x)} = \frac{s_{k-1}(x)}{\lambda_{k-1}(x)} \equiv \alpha(x) \qquad (44)$$
for sufficiently large k.
In Equation (44), $\lambda_k(x)$ and $s_k(x)$ are defined as follows:
$$\lambda_k(x) = \lambda_{k-1}'(x) + s_{k-1}(x) + \lambda_0(x)\,\lambda_{k-1}(x), \qquad (45)$$
$$s_k(x) = s_{k-1}'(x) + s_0(x)\,\lambda_{k-1}(x). \qquad (46)$$
The convergence (quantization) condition of the method, as given in (44), can also be written as follows:
$$\delta_k(x) = \lambda_k(x)\,s_{k-1}(x) - \lambda_{k-1}(x)\,s_k(x) = 0. \qquad (47)$$
For a given radial potential such as the Gaussian one, the radial Schrödinger equation is converted to the form of Equation (42). Once this form has been obtained, it is easy to determine $\lambda_0(x)$ and $s_0(x)$ and to calculate $\lambda_k(x)$ and $s_k(x)$ by using Equations (45) and (46). The energy eigenvalues are then obtained from the quantization condition given by Equation (47).
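These recurrences are simple to implement symbolically. As a sanity check, the sketch below (an illustration using SymPy, applied to the exactly solvable harmonic oscillator rather than the Gaussian case) recovers the oscillator spectrum from the quantization condition; the substitution $\psi = e^{-x^2/2} f(x)$ gives $f'' = 2x\,f' + (1 - E)\,f$, so $\lambda_0 = 2x$ and $s_0 = 1 - E$:

```python
import sympy as sp

x, E = sp.symbols('x E')

# AIM for y'' = lam0 y' + s0 y, applied to the 1D harmonic oscillator
# (an illustrative exactly solvable check, not the Gaussian potential):
lam0, s0 = 2 * x, 1 - E
lams, ss = [lam0], [s0]
for _ in range(2):
    lam_new = sp.diff(lams[-1], x) + ss[-1] + lam0 * lams[-1]   # recurrence (45)
    s_new = sp.diff(ss[-1], x) + s0 * lams[-1]                  # recurrence (46)
    lams.append(lam_new)
    ss.append(s_new)

# Quantization condition (47): delta_k = lam_k s_{k-1} - lam_{k-1} s_k = 0,
# evaluated here at the convenient point x = 0.
delta2 = sp.expand(lams[2] * ss[1] - lams[1] * ss[2])
energies = sorted(sp.solve(sp.Eq(delta2.subs(x, 0), 0), E))
print(energies)  # -> [1, 3, 5]
```

Each further iteration adds the next eigenvalue, reproducing $E_n = 2n + 1$ in these units, which mirrors how the condition (47) is iterated for the Gaussian potential below.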
3.2. Asymptotic Iteration Method for Gaussian Potential
We here consider the Gaussian potential of the form
$$V(r) = -V_0\,e^{-\lambda r^2}, \qquad (48)$$
where $V_0$ is the depth of the potential and $\lambda$ determines its width. The radial Schrödinger equation (SE) for a particle of mass m moving in three-dimensional space under the effect of the attractive Gaussian potential (48) can be written as
where the reduced quantities are defined in terms of the potential parameters and of the energy E of the particle. Because of the Gaussian factor, the coefficients of this second-order differential equation are not polynomial. In order to solve this equation via the AIM, we should first model it with a second-order linear differential equation with polynomial coefficients and then convert this model equation to the form of Equation (42). Mutuk solved this equation via the AIM by suggesting a wave function of the form
and making use of the tenth-degree truncated Maclaurin series of the Gaussian factor, i.e.,
to construct a linear model of this equation with polynomial coefficients. He obtained a second-order linear homogeneous differential equation for the unknown factor in Equation (50), with the general form
We have to emphasize that in the AIM, energy eigenvalues are calculated from the quantization condition given by Equation (47). At each iteration, this equation depends on two variables, E and r. The eigenvalues calculated by means of Equation (47) should, however, be independent of the choice of r; this is actually the case for most iteration sequences. The choice of r can nevertheless be critical to the speed of convergence of the eigenvalues, as well as to the stability of the process. A suitable choice of r minimizes the potential or maximizes the radial wave function given by Equation (50) in the case of the attractive Gaussian potential.
In the AIM, the wave function can be written as
where the first factor represents the asymptotic behavior. Hence, we have taken for r the value that minimizes the wave function; β is an arbitrary parameter related to the convergence.
The convergence of the eigenvalues for five values of β is reported in Table 1, where we compute a selected eigenvalue by means of the AIM using the Maple 18 software, which is known to be a powerful symbolic computation package. It is clear that the eigenvalues converge for all five values of β, whatever the method used to approximate the Gaussian potential, which is contrary to the results obtained by Mutuk, in which the eigenvalues associated with certain β values start to diverge when the iteration number exceeds 25. We think that the discrepancy between our results and Mutuk's for large values of β is
Table 1. Convergence of the eigenvalues of the attractive Gaussian potential for different β values and various approximations of the Gaussian function. k is the iteration number. The potential parameters are given in atomic units (a.u.).
due to the fact that, during our implementation of the AIM, the precision level was set to 50 digits, which means that our results have been computed with high precision and are more accurate. It is clear from Table 1 that the approximation of the Gaussian function based on the $T_n^*$ polynomials has the advantage that the energies converge significantly faster towards the accurate value, i.e., −341.8952145612, than those calculated using the two other approximations.
Table 2 presents the results for a few values of n and ℓ, computed by means of 50 iterations using Equations (32) and (41) to approximate the Gaussian function (third and fourth columns). The energy eigenvalues are obtained with a fixed value of β, because the solutions are in many cases very close after a few iterations when this value of β is used. Our results are compared with those of Mutuk (second column) and with those calculated numerically by the spectral Galerkin method (SGM), based on expanding the radial wave function on a finite basis of Coulomb Sturmian functions defined by:
Table 2. Comparison of the energy eigenvalues of the Gaussian potential (in a.u.) obtained by using the AIM for various approximations of the Gaussian function with those calculated by means of the SGM, for different values of n and ℓ. We have chosen a fixed number of Coulomb Sturmian functions and 0.75 as the value of the Sturmian parameter.
where $L$ denotes the associated Laguerre polynomial and n the principal quantum number. The normalization constant, given by
is obtained from the normalization condition.
Note that spectral methods have the advantage of the "exponential convergence" property, depending on the size of the basis, which makes them more accurate than local methods. Unlike finite-difference methods, spectral methods are global: the computation at any given point depends not only on information at neighboring points, but also on information from the entire domain. We note that the results associated with the $T_n^*$ approximation approach the numerical eigenvalues reasonably well for all the values of n and ℓ considered.
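The "exponential convergence" statement can be illustrated by the geometric decay of the spectral coefficients of a smooth function (an illustrative NumPy sketch with $e^{-x^2}$ and a Chebyshev basis; the Sturmian basis of the SGM behaves analogously for smooth bound-state wave functions):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

# Chebyshev coefficients of the smooth function exp(-x^2) on [-1, 1],
# obtained by a least-squares fit at Chebyshev nodes: they decay
# geometrically, which is the hallmark of spectral accuracy.
m = 64
nodes = np.cos(np.pi * (np.arange(m) + 0.5) / m)
coeffs = Ch.chebfit(nodes, np.exp(-nodes ** 2), 20)
mag = np.abs(coeffs)
print(mag[:8])
```

A handful of basis functions already reaches errors near machine precision, whereas a local finite-difference scheme would only gain a fixed power of the step size per refinement.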
4. Conclusion
In this work, we have applied the procedure of economization to the Gaussian function by using Chebyshev polynomials of the first kind on the one hand and the shifted ones on the other, with an application to the solution of the radial Schrödinger equation for the attractive Gaussian well via the Asymptotic Iteration Method (AIM). We have seen that the use of shifted Chebyshev polynomials leads to a more efficient economized power series of the Gaussian function, which can be used to model the radial Schrödinger equation for the Gaussian potential by a second-order linear differential equation with polynomial coefficients, solvable by means of the AIM.
 Bekir, E. (2019) Efficient Chebyshev Economization for Elementary Functions. Communications Faculty of Sciences University of Ankara Series A2-A3, 61, 33-56.
 Horner, J.S. (1977) Chebyshev Polynomials in the Solution of Ordinary and Partial Differential Equations. Doctor of Philosophy Thesis, Department of Mathematics, University of Wollongong, Wollongong. http://ro.uow.edu.au/theses/1543
 Ciftci, H., Hall, R.L. and Saad, N. (2003) Asymptotic Iteration Method for Eigenvalue Problems. Journal of Physics A: Mathematical and General, 36, 11807-11816.
 Ciftci, H., Hall, R.L. and Saad, N. (2005) Construction of Exact Solutions to Eigenvalue Problems by Asymptotic Iteration Method. Journal of Physics A: Mathematical and General, 38, 1147-1155.
 Marakoc, M. and Boztosun, I. (2006) Accurate Iteration and Perturbative Solutions of the Yukawa Potential. International Journal of Modern Physics E, 15, 1253-1262.
 Mortensen, M. (2017) Shenfun-Automating the Spectral Galerkin Method. In: Skallerud, B.H. and Anderson, H.I., Eds., Ninth National Conference on Computational Mechanics, International Center for Numerical Methods in Engineering (CIMNE), 273-298.
 Shen, J. (1994) Efficient Spectral-Galerkin Method I. Direct Solvers of Second- and Fourth-Order Equations Using Legendre Polynomials. SIAM Journal on Scientific Computing, 15, 1489-1505.
 Shen, J. (1995) Efficient Spectral-Galerkin Method II. Direct Solvers of Second- and Fourth-Order Equations Using Chebyshev Polynomials. SIAM Journal on Scientific Computing, 16, 74-87.
 Pont, M., Proulx, D. and Shakeshaft, R. (1991) Numerical Integration of Time-Dependent Schrödinger Equation for an Atom in a Radiation Field. Physical Review A, 44, 4486-4492.
 Nyengeri, H., Nizigiyimana, R., Ndenzako, E., Bigirimana, F., Niyonkuru, D. and Girukwishaka, A. (2018) Application of the Fröbenius Method to the Schrödinger Equation for a Spherically Symmetric Hyperbolic Potential. Open Access Library Journal, 5, e4950.