The Explicit Fatunla’s Method for First-Order Stiff Systems of Scalar Ordinary Differential Equations: Application to Robertson Problem

1. Introduction

Numerical solutions for ordinary differential equations (ODEs) are very important in scientific computation, as they are widely used to model real-world problems [1] [2] [3] [4] [5] . Indeed, ODEs are found in studies of electrical circuits, chemical kinetics [2] [6] [7] , vibrations, the simple pendulum [7] , rigid body rotation [7] , atmospheric chemistry problems [3] [8] [9] , motions of planets in a gravity field like the Kepler problem [7] [10] , the biosciences and many more fields. However, most realistic models developed from these equations cannot be solved analytically, especially in the case of nonlinear differential equations. The study of such models therefore requires the use of numerical methods. Nevertheless, there are several classes of ODEs, known as “stiff” equations, which classical numerical methods do not handle very efficiently.

The phenomenon of stiffness was first recognized by Curtiss and Hirschfelder [6] in 1952. Since then, an enormous amount of effort has gone into the analysis of stiff problems and, as a result, a great many numerical methods have been proposed for their solution.

Stiff problems are too important to ignore, and are too expensive to overpower. They are too important to ignore because they occur in many physically important situations. They are too expensive to overpower because of their size and the inherent difficulty they present to classical methods. Even if one can bear the expense, classical methods of solution require so many steps that round off errors may invalidate the solution [11] .

Stiff problems entail rapidly decaying transient solutions, which arise naturally in a wide variety of applications, including the study of spring and damping systems, the analysis of control systems and problems in chemical kinetics [12] [13] . Stiff differential equations also occur in other kinds of studies, such as biochemistry, biomedical systems, weather prediction, mathematical biology and electronics. The importance of stiff equations is discussed by Shampine and Gear [11] and Bjurel et al. [14] , who present a comprehensive survey of application areas in which stiff equations arise.

We have to emphasize that while the intuitive meaning of the term stiff is clear to all specialists, much controversy surrounds its mathematical definition. The main purpose of this paper is to outline some of the important characteristics of stiff problems and to present the explicit one-step algorithm suggested by Fatunla [15] for the numerical integration of stiff and highly oscillatory differential equations, with an application to the Robertson problem (RP) [16] [17] . To the best of our knowledge, there is no mention of this application in the literature. Note that the RP consists of a system of three non-linear ODEs modeling the kinetics of three species. It is very popular in numerical studies [2] [3] [18] [19] and is often used as a test problem in stiff integration comparisons.

The Explicit Fatunla’s method (EFM) has been used by Frapiccini et al. [20] to solve numerically the time-dependent Schrödinger equation describing physical processes whose complexity requires the use of state-of-the-art methods.

The rest of this paper is organized as follows. In Section 2, we concentrate on some basic properties of stiff differential equations. Section 3 contains a brief introduction to the EFM, which has been shown to be suitable for solving stiff ODEs [15] [20] . We show how to use this method when one is confronted with first-order stiff systems of scalar ODEs. In Section 4, we apply the method in question to the RP and compare the obtained results with those given by the solver RADAU [4] , which is based on implicit Runge-Kutta methods. Finally, we present our conclusions in Section 5.

2. Stiff Differential Equations and Stability Analysis

As stated above, stiff differential equations appeared in the early 1950s as a result of some pioneering work by Curtiss and Hirschfelder [6] , and some ten years later, one could say, in the words of Dahlquist [2] [21] , that “… Around 1960, things became completely different and everyone became aware that the world was full of stiff problems”.

Stiffness is a subtle, difficult, and important concept in the numerical solution of ODEs. It depends on the differential equation, the initial conditions, and the numerical methods [22] . Dictionary definitions of the word stiff involve terms like “not easily bent” [23] [24] [25] , “rigid” [23] [24] , “stubborn” [23] [24] [25] and “hard” [25] .

The differential equations we consider in this work are of the form

$\text{d}y/\text{d}x=f\left(x,y\left(x\right)\right)$ (1)

where x is the independent variable which often plays the role of time (t), and $y\left(x\right)$ is an unknown function that is being sought. The given function $f\left(x,y\left(x\right)\right)$ of two variables defines the differential equation. This equation is called a first-order differential equation because it contains a first-order derivative. Sometimes, for convenience, we will omit the x argument in $y\left(x\right)$.

Let us indicate that the general solution of the first-order Equation (1) normally depends on an arbitrary integration constant. To single out a particular solution, we need to specify an additional condition. Usually such a condition is taken to be of the form

$y\left({x}_{0}\right)={y}_{0}.$ (2)

The differential Equation (1) and the initial value condition (2) together form an initial value problem (IVP):

$\text{d}y/\text{d}x=f\left(x,y\left(x\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\left({x}_{0}\right)={y}_{0}.$ (3)

We have to emphasize that many physical applications lead to higher-order systems of ODEs, but there is a simple reformulation that can convert them into equivalent first order systems [1] . Thus, we do not lose any generality by restricting our attention to the first-order case throughout this section.

2.1. Stability Analysis

Examining the stability question for the general problem (3) is too complicated. Instead, we examine the stability of numerical methods for the model problem

$\text{d}y/\text{d}x=\lambda y\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\left({x}_{0}\right)={y}_{0}$ (4)

whose analytical solution is

$y\left(x\right)={y}_{0}\mathrm{exp}\left(\lambda \left(x-{x}_{0}\right)\right)$ (5)

where $\lambda $ is a constant. The equation

$\text{d}y/\text{d}x=\lambda y\left(x\right)$ (6)

is called the “Dahlquist test equation” [26] . The simple test problem (4) has traditionally been used for stability analysis since we can easily obtain analytical expressions describing the solution produced by the numerical method. Studying the behavior of a numerical method in solving this problem is also useful in predicting its behavior in solving the general problem (3). Indeed, if we expand $\text{d}y/\text{d}x=f\left(x,y\left(x\right)\right)$ about $\left({x}_{0},{y}_{0}\right)$, we obtain

$\frac{\text{d}y}{\text{d}x}\approx f\left({x}_{0},{y}_{0}\right)+{\frac{\partial f}{\partial x}|}_{(x={x}_{0},y={y}_{0})}\left(x-{x}_{0}\right)+{\frac{\partial f}{\partial y}|}_{(x={x}_{0},y={y}_{0})}\left(y\left(x\right)-{y}_{0}\right)$ (7)

$=\lambda \left(y\left(x\right)-{y}_{0}\right)+g\left(x\right)$ (8)

with $\lambda ={\partial f/\partial y|}_{\left(x={x}_{0},y={y}_{0}\right)}$ and $g\left(x\right)=f\left({x}_{0},{y}_{0}\right)+{\partial f/\partial x|}_{\left(x={x}_{0},y={y}_{0}\right)}\left(x-{x}_{0}\right)$.

This is a valid approximation if $\left|x-{x}_{0}\right|$ is sufficiently small, i.e., $x\in \left[{x}_{0}-h,{x}_{0}+h\right]$ where h is a small positive real number. Introducing $V\left(x\right)=y\left(x\right)-{y}_{0}$, we obtain

$\text{d}V/\text{d}x\approx \lambda V\left(x\right)+g\left(x\right)$. (9)

The inhomogeneous term $g\left(x\right)$ will drop out of all derivations concerning numerical stability, because we are concerned with differences of solutions of the equation. Dropping $g\left(x\right)$ in Equation (9), we obtain the model Equation (6).

If we are dealing with a set of m equations, i.e.

$\text{d}y/\text{d}x=f\left(x,y\right),$ (10)

where $y\left(x\right)={\left({y}_{1}\left(x\right),\cdots ,{y}_{m}\left(x\right)\right)}^{\text{T}}$ is a vector of m components, and $f\left(x,y\right)={\left({f}_{1}\left(x,{y}_{1},\cdots ,{y}_{m}\right),\cdots ,{f}_{m}\left(x,{y}_{1},\cdots ,{y}_{m}\right)\right)}^{\text{T}}$ is a vector-valued function of $m+1$ variables, the partial derivative $\partial f/\partial y$ intervening in Equation (7) becomes a matrix called the Jacobian of $f$. Thus the model equation becomes

$\text{d}y/\text{d}x=Jy$ (11)

where $J$ is the Jacobian of $f$ and can be regarded as a constant matrix. For a linear system of differential equations, $\text{d}y/\text{d}x=A\left(x\right)y$, determined by the matrix $A\left(x\right)$, the Jacobian $J$ is precisely the matrix $A\left(x\right)$. The differential system (11) reduces to a set of m equations like (6), i.e.

$\frac{\text{d}{Z}_{i}}{\text{d}x}={\lambda}_{i}{Z}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}1\le i\le m$ (12)

with ${\lambda}_{1},\cdots ,{\lambda}_{m}$ the eigenvalues of $J$ and ${Z}_{i}$, $1\le i\le m$, the components of the vector $Z$ defined by

$Z={R}^{-1}y\left(x\right)$ (13)

where $R=\left({R}_{1},\cdots ,{R}_{m}\right)$ is the matrix whose columns ${R}_{i}$ are right eigenvectors of the Jacobian, which means that $J{R}_{i}={\lambda}_{i}{R}_{i},i=1,2,\cdots ,m$.
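As a small numerical illustration (not taken from the paper), the decoupling described by Equations (11)-(13) can be checked with a hypothetical 2 × 2 Jacobian:

```python
import numpy as np

# Hypothetical stiff Jacobian (illustration only, not from the paper).
J = np.array([[-500.5,  499.5],
              [ 499.5, -500.5]])

lam, R = np.linalg.eig(J)          # columns of R are right eigenvectors
print(np.sort(lam))                # eigenvalues -1000 and -1: widely separated

# In the transformed variable Z = R^{-1} y the system decouples into
# dZ_i/dx = lambda_i Z_i; equivalently, R^{-1} J R must be diagonal.
D = np.linalg.inv(R) @ J @ R
print(np.allclose(D, np.diag(np.diag(D))))   # True
```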

We now study the behavior of the solution computed by the explicit Euler method (EEM) [1] [27] for the test problem (4) using a fixed step size h. We assume that we have computed solution values ${y}_{1},{y}_{2},\cdots ,{y}_{n}$ at points ${x}_{1},{x}_{2},\cdots ,{x}_{n}$, respectively, where ${x}_{j}={x}_{0}+jh$ for $j=1,2,\cdots ,n$. The above mentioned numerical method, when applied to the model problem (4), leads to the following equation:

${y}_{n+1}={y}_{n}+\lambda h{y}_{n}.$ (14)

The ratio of the computed solutions at ${x}_{n+1}$ and ${x}_{n}$ is given by

$\frac{{y}_{n+1}}{{y}_{n}}=1+\lambda h.$ (15)

We compare this with the ratio of the true solutions at the same points, which is

$\frac{y\left({x}_{n+1}\right)}{y\left({x}_{n}\right)}=\mathrm{exp}\left(\lambda {x}_{n+1}\right)/\mathrm{exp}\left(\lambda {x}_{n}\right)=\mathrm{exp}\left(\lambda h\right).$ (16)

It is clear that when $\lambda $ is real, $1+\lambda h$ is a reasonable approximation to $\mathrm{exp}\left(\lambda h\right)$ except if $\lambda h<-2$. For large negative $\lambda h$, $\mathrm{exp}\left(\lambda h\right)$ is much smaller than 1, while $1+\lambda h$ is (in magnitude) greater than 1. This means that the numerical solution is growing while the true solution is decaying. We say that the numerical solution produced by the EEM for $\lambda h<-2$ is unstable.

Another way of looking at stability is to look at the propagation of local errors. For the EEM we have [1] [13] [27] :

${e}_{n+1}=\left(1+h\frac{\partial f}{\partial y}\right){e}_{n}=\left(1+h\lambda \right){e}_{n},$ (17)

where ${e}_{n}=y\left({x}_{n}\right)-{y}_{n}$. If $\partial f/\partial y=\lambda $, we say that the errors are growing if $\left|1+\lambda h\right|>1$. In other words, the errors grow if $\lambda h<-2$ and, for such step sizes, the method is called unstable.

To sum up, the requirement $\left|{y}_{n}\right|\to 0$ as $n\to \infty $, which, for $\lambda <0$, mirrors the asymptotic behavior of the analytical solution (5), implies that the inequality

$0<h<-\frac{2}{\lambda}$ (18)

must be satisfied. If $\lambda \ll 0$, then the step size of integration is severely restricted by Equation (18) even though the true solution ${y}_{0}\mathrm{exp}\left(\lambda \left(x-{x}_{0}\right)\right)$ becomes negligible very quickly.
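A minimal sketch of this restriction, applying the explicit Euler recursion (14) to the test problem with an assumed value $\lambda =-100$, so that stability requires $h<0.02$:

```python
import numpy as np

# Explicit Euler for y' = lam*y, y(0) = 1 (sketch; lam = -100 is an
# arbitrary illustrative choice, so stability needs h < -2/lam = 0.02).
def explicit_euler(lam, h, n_steps, y0=1.0):
    y = y0
    for _ in range(n_steps):
        y = y + h * lam * y        # y_{n+1} = (1 + lam*h) * y_n
    return y

print(explicit_euler(-100.0, 0.005, 100))  # |1+lam*h| = 0.5: decays, as y(x) does
print(explicit_euler(-100.0, 0.05, 100))   # |1+lam*h| = 4: explodes
```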

Before finishing this subsection, let us define some stability concepts for one-step numerical methods. For this purpose, we consider the model problem (4) where $\lambda $ is a constant (possibly complex) and ${x}_{0}=0$. If we assume that $y\left(0\right)=\eta $, the exact solution of this problem is

$y\left(x\right)=\eta \mathrm{exp}\left(\lambda x\right).$ (19)

It is clear that $\underset{x\to \infty}{\mathrm{lim}}y\left(x\right)=0$ if and only if $\mathrm{Re}\left(\lambda \right)<0$. We have therefore to look for the conditions that have to be imposed on the above mentioned numerical methods in order that the numerical solution ${y}_{n}=y\left(nh\right)\to 0$ as $n\to \infty $ where h is the step size of integration. By applying any one-step numerical method to the model problem in question, we obtain [20] [28]

${y}_{n+1}=R\left(z\right){y}_{n}$ (20)

where $z=\lambda h$ and $R\left(z\right)$ is the so-called stability function. In order that ${y}_{n}$ tends to zero as $n\to \infty $, we must impose $\left|R\left(z\right)\right|<1$, thereby implying some constraints on the step size h. The set $\left\{z\in \mathbb{C};\left|R\left(z\right)\right|<1\right\}$ is called the stability domain of the numerical method. The latter is said to be A-stable (absolutely stable) [1] [29] if its stability domain includes the left half-plane ${\mathbb{C}}^{-}=\left\{z;\mathrm{Re}\left(z\right)<0\right\}$. It is L-stable [1] if, apart from being A-stable, the stability function has the property $\underset{\mathrm{Re}\left(\lambda h\right)\to -\infty}{\mathrm{lim}}\left|R\left(\lambda h\right)\right|=0$. A numerical scheme is called $A\left(\alpha \right)$ -stable [30] [31] for $\alpha \in \left]0,\text{\pi}/2\right[$ if its region of absolute stability includes the infinite wedge

${S}_{\alpha}=\left\{z=\lambda h\in \u2102;\left|\text{\pi}-\mathrm{arg}\left(z\right)\right|<\alpha \right\}$. (21)

This definition calls for a numerical method to have a stability region which is unbounded but which does not include the whole complex left hand half-plane. One of the reasons why this is such a useful definition is that many problems have eigenvalues of the Jacobian that lie in a sector ${S}_{\alpha}$.

L-stable methods are the most stable ones [32] . The backward Euler method [1] [27] , whose stability function is $R\left(z\right)=1/\left(1-z\right)$, is an example of an A-stable (in fact, L-stable) method.
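For instance, the stability functions of the explicit Euler method, $R\left(z\right)=1+z$, and of the backward Euler method, $R\left(z\right)=1/\left(1-z\right)$ (standard expressions, quoted here only for illustration), can be compared along the negative real axis:

```python
import numpy as np

# Stability functions (sketch): explicit Euler vs. backward Euler.
R_explicit = lambda z: 1 + z          # explicit Euler
R_backward = lambda z: 1 / (1 - z)    # backward Euler

for z in (-1.0, -10.0, -1000.0):      # z = lambda*h on the negative real axis
    print(z, abs(R_explicit(z)), abs(R_backward(z)))

# |R_backward(z)| < 1 on the whole left half-plane (A-stability) and
# |R_backward(z)| -> 0 as Re(z) -> -infinity (L-stability), whereas
# |R_explicit(z)| > 1 as soon as z < -2.
```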

2.2. Stiffness

We now briefly discuss stiffness and the difficulties involved in solving stiff equations. As discussed earlier, a system of ODEs of the form $\text{d}y/\text{d}x=f\left(x,y\right)$ may be approximated by $\text{d}y/\text{d}x=Jy$ over a small interval of the independent variable x. If $J$ is diagonalizable, this new system may be transformed into

$\text{d}Z/\text{d}x=DZ,$ (22)

where $D$ is a diagonal matrix with eigenvalues ${\lambda}_{i}$ of $J$ (possibly complex) on its diagonal, and $Z$ is related to $y$ by Equation (13).

Many differential equation systems of practical importance in scientific modeling exhibit a distressing behavior when solved by classical numerical methods. This behavior is distressing because these systems, classified as stiff, are characterized by very high instability when approximated by standard numerical methods.

An intuitive idea of what stiff equations are is that they are problems with some smooth and some transient solutions, where all solutions reach the smooth one after a short time (after the transient phase has finished) [31] .

Since stiffness is closely related to the behavior of perturbations to a given solution, it is important to study the effect of a small perturbation $\epsilon Y\left(x\right)$ on a solution $y\left(x\right)$. The parameter $\epsilon $ is small, in the sense that we are interested only in the asymptotic behavior of the perturbed solution as this quantity approaches zero. If $y\left(x\right)$ is replaced by $y\left(x\right)+\epsilon Y\left(x\right)$ in the differential equation and the solution expanded in a series in powers of $\epsilon $, with ${\epsilon}^{2}$ and higher powers neglected, we obtain the system

${y}^{\prime}\left(x\right)+\epsilon {Y}^{\prime}\left(x\right)=f\left(x,y\left(x\right)\right)+\epsilon JY\left(x\right)$ (23)

where $J\left(x\right)$ is the Jacobian of $f$, ${y}^{\prime}\left(x\right)=\text{d}y/\text{d}x$ and ${Y}^{\prime}\left(x\right)=\text{d}Y/\text{d}x$. Subtracting Equation (10) from Equation (23), we obtain the equation

${Y}^{\prime}\left(x\right)=J\left(x\right)Y\left(x\right)$ (24)

which controls the behavior of the perturbation. The Jacobian matrix $J\left(x\right)$ plays a crucial role in the understanding of stiff problems. In fact, its spectrum is sometimes used to characterize stiffness [7] [33] [34] . In an interval $\Delta x$, chosen so that there is a moderate change in the value of the solution to ${y}^{\prime}=f\left(x,y\left(x\right)\right)$, and very little change in $J\left(x\right)$, the eigenvalues of $J\left(x\right)$ determine the growth rate of components of the perturbation. The existence of one or more large and negative values of $\lambda \Delta x$, for $\lambda \in \sigma \left(J\left(x\right)\right)$, the spectrum of $J\left(x\right)$, indicates that stiffness is almost certainly present. If $J\left(x\right)$ possesses complex eigenvalues, then we interpret this test for stiffness as the existence of a $\lambda =\mathrm{Re}\left(\lambda \right)+i\mathrm{Im}\left(\lambda \right)\in \sigma \left(J\left(x\right)\right)$ such that $\mathrm{Re}\left(\lambda \Delta x\right)$ is negative with large magnitude. A definition of stiffness in terms of the spectrum of the Jacobian matrix has been proposed by Gaffney [35] [36] who, in 1984, discussed the concept of stiffness for oscillatory systems. This definition states that the IVP

${y}^{\prime}\left(x\right)=f\left(x,y\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\left({x}_{0}\right)={y}_{0}$ (25)

is considered to be stiff oscillatory over the interval S if for every $x\in S$ the eigenvalues $\left\{{\lambda}_{j}={u}_{j}+i{v}_{j},j=1,2,\cdots ,m\right\}$ of the Jacobian $J=\left(\partial f/\partial y\right)$ possess the following properties:

${u}_{j}<0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,m,$ (26)

$\underset{1\le j\le m}{\mathrm{max}}\left|{u}_{j}\right|\gg \underset{1\le j\le m}{\mathrm{min}}\left|{u}_{j}\right|$ (27)

or if the stiffness ratio $S$ satisfies

$S=\underset{j,k}{\mathrm{max}}\left|\frac{{u}_{j}}{{u}_{k}}\right|\gg 1$ (28)

and $\left|{u}_{j}\right|\ll \left|{v}_{j}\right|$ holds for at least one index j with $1\le j\le m$.
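A short sketch of the test (28) on a hypothetical Jacobian whose eigenvalues have real parts of very different magnitudes:

```python
import numpy as np

# Stiffness ratio (28) from the real parts u_j of the eigenvalues of a
# hypothetical Jacobian (illustration only).
J = np.array([[-1.0,    0.5],
              [ 0.0, -500.0]])

u = np.linalg.eigvals(J).real
S_ratio = np.max(np.abs(u)) / np.min(np.abs(u))
print(S_ratio)       # 500 >> 1: stiff according to this criterion
```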

If an explicit method like Euler's is used to solve the differential system ${y}^{\prime}\left(x\right)=f\left(x,y\left(x\right)\right)$ when $J$ has a real eigenvalue $\lambda \ll 0$, the step size of integration is controlled by a transient solution (which quickly becomes negligible), whereas outside the transient phase we wish the step size to be controlled by accuracy alone. This suggests a rather more pragmatic definition of stiffness, where the definition is not based on an analysis of the actual differential equation to be solved but is instead based on the relative performance of implicit and explicit integration methods. Perhaps the best and the oldest definition of this type is due to Curtiss and Hirschfelder [6] who, in 1952, said: “Stiff equations are equations where certain implicit methods, and in particular backward-differentiation formulae (BDFs), perform better, usually tremendously better, than explicit methods”.

The same situation can occur for complex eigenvalues with negative real parts. The problem is that the EEM is not A-stable or even $A\left(\alpha \right)$ -stable for any $\alpha <\text{\pi}/2$. The same is true for all explicit methods like Euler's: no such explicit method can be $A\left(\alpha \right)$ -stable. We are therefore forced to use implicit methods, like the backward Euler method, to solve stiff systems. These methods require more work per step than explicit methods, however, since a system of nonlinear algebraic equations must be solved at each step. For further details of stiff problems, the reader is referred to Shampine and Gear [11] , Lambert [37] and Söderlind et al. [38] .
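To illustrate the Curtiss-Hirschfelder point of view, here is a sketch of the backward Euler method applied to the test problem of Section 2.1; for this linear problem the implicit equation can be solved in closed form:

```python
# Backward Euler for y' = lam*y (sketch).  The implicit relation
# y_{n+1} = y_n + h*lam*y_{n+1} gives y_{n+1} = y_n / (1 - h*lam).
def backward_euler(lam, h, n_steps, y0=1.0):
    y = y0
    for _ in range(n_steps):
        y = y / (1 - h * lam)
    return y

# Step size far beyond the explicit-Euler limit -2/lam = 0.02, yet the
# numerical solution still decays, just like the true solution.
print(backward_euler(-100.0, 0.05, 100))
```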

Before finishing this section, let us introduce a definition of stiffness which involves a certain norm that depends on the differential system to be solved. If $J\left(x,y\right)$, the Jacobian matrix of the system, is diagonalizable, this definition is as follows. A system of ODEs is stiff if its stiff indicator, defined by

$\sigma \left(J\left(x,y\right)\right)=\frac{m\left[J\left(x,y\right)\right]+M\left[J\left(x,y\right)\right]}{2}$ (29)

is “large” and negative. Here, $m\left[J\right]$ and $M\left[J\right]$ are the greatest lower bound (glb) and the least upper bound (lub) logarithmic Lipschitz constants, respectively [38] [39] . Note that $M\left[J\right]$ corresponds to the usual logarithmic norm [39] .

The stiffness indicator is easily computed. Let $\lambda \left[A\right]$ denote the eigenvalues of a matrix $A$. Then, for the Euclidean norm, it holds that

${M}_{2}\left[A\right]=\mathrm{max}\lambda \left[He\left(A\right)\right];\text{\hspace{0.17em}}\text{\hspace{0.17em}}{m}_{2}\left[A\right]=\mathrm{min}\lambda \left[He\left(A\right)\right]$ (30)

where $He\left(A\right)=\left(A+{A}^{\text{T}}\right)/2$ denotes the Hermitian (symmetric) part of the matrix $A$. Other norms can also be used, provided that the corresponding $m\left[J\right]$ and $M\left[J\right]$ are computed using the appropriate well-known expressions.
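A minimal sketch of this computation (the Jacobian below is hypothetical, chosen only to produce a clearly stiff case):

```python
import numpy as np

# Stiffness indicator (29) for the Euclidean norm, using Equation (30).
def stiffness_indicator(J):
    He = (J + J.T) / 2                 # symmetric (Hermitian) part of J
    eigs = np.linalg.eigvalsh(He)      # real eigenvalues in ascending order
    m2, M2 = eigs[0], eigs[-1]
    return (m2 + M2) / 2

J = np.array([[-2.0,     1.0],
              [ 0.0, -2000.0]])        # hypothetical stiff Jacobian
print(stiffness_indicator(J))          # "large" and negative: stiff
```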

3. Explicit Fatunla’s Method

The idea behind the EFM is to take into account the stiffness of the differential system by introducing the stiffness parameters in interpolating functions that approximate the solution. This allows one to deal with differential systems where the Jacobian matrix displays eigenvalues (possibly complex with large negative real part) that differ by many orders of magnitude. That explains why Fatunla’s method has the capacity to solve stiff equations.

In his paper [15] , Fatunla considers initial value problems of type

${y}^{\prime}=f\left(x,y\left(x\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\left(0\right)={y}_{0},$ (31)

with $y\left(x\right)\in {\mathbb{R}}^{m}$ in the finite interval $S=\left[0,{x}_{f}\right]\subset \mathbb{R}$, where ${x}_{f}=Nh$ for some positive integer N. It is assumed that $y\left(x\right)$ is sufficiently differentiable. Throughout this section, we use the same vector notation as in reference [15] . More precisely, we write $y={\left({}^{1}y,{}^{2}y,\cdots ,{}^{m}y\right)}^{\text{T}}$ and $f={\left({}^{1}f,{}^{2}f,\cdots ,{}^{m}f\right)}^{\text{T}}$. The numerical estimates ${y}_{n}$ to the theoretical solution $y\left({x}_{n}\right)$ at the points ${x}_{n}=nh,n=1,2,\cdots ,N$, are to be generated.

According to the EFM, the theoretical solution $y\left(x\right)$ is, on every subinterval $\left[{x}_{n},{x}_{n}+h\right]$, approximated by the interpolating function

$\stackrel{\u02dc}{F}\left(x\right)=\left(I-\mathrm{exp}\left({\Omega}_{1}x\right)\right)a+\left(I-\mathrm{exp}\left(-{\Omega}_{2}x\right)\right)b+c$, (32)

$a,b$ and $c$ being m-tuples with real entries, $I$ is the identity matrix, whilst ${\Omega}_{1}$ and ${\Omega}_{2}$ are diagonal matrices, usually called the stiffness matrices; or

$\stackrel{\u02dc}{\stackrel{\u02dc}{F}}\left(x\right)=\left(I-\mathrm{exp}\left({\Omega}_{1}x\right)\right)a+\left(I-\mathrm{exp}\left({\Omega}_{1}^{*}x\right)\right){a}^{*}+b$, (33)

where $a$ and $b$ are m-tuples with complex entries, and (*) denotes complex conjugate. The choice of interpolation is determined by Equation (39).

3.1. Case of Real Interpolation Formula

By demanding that the interpolating function (32) coincides with the theoretical solution at the endpoints of the interval $\left[{x}_{n},{x}_{n+1}\right]$, the following recursion formula is readily obtainable [15] [20] [40] :

${y}_{n+1}={y}_{n}+R{f}_{n}+S{f}_{n}^{\left(1\right)}$ (34)

where we use the notations ${f}_{n}=f\left({x}_{n},{y}_{n}\right)$, ${f}_{n}^{\left(1\right)}={\text{d}f\left(x,y\right)/\text{d}x|}_{x={x}_{n}}$. $R$ and $S$ represent diagonal matrices defined by

$R={\Omega}_{2}\Phi -{\Omega}_{1}\Xi $, $S=\Phi +\Xi $ (35)

where $\Phi $ and $\Xi $ are diagonal matrices with non zero entries in the main diagonal given by

${}^{j}\Phi =\frac{\mathrm{exp}\left({}^{j}\Omega {}_{1}h\right)-1}{{}^{j}\Omega {}_{1}\left({}^{j}\Omega {}_{1}+{}^{j}\Omega {}_{2}\right)}$, ${}^{j}\Xi =\frac{\mathrm{exp}\left(-{}^{j}\Omega {}_{2}h\right)-1}{{}^{j}\Omega {}_{2}\left({}^{j}\Omega {}_{1}+{}^{j}\Omega {}_{2}\right)}$ (36)

The components of the stiffness matrices are given by [15] [20] :

${}^{j}\Omega {}_{1}=\frac{1}{2}\left(-{}^{j}D+\sqrt{{}^{j}D{}^{2}+4{}^{j}E}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{}^{j}\Omega {}_{2}={}^{j}\Omega {}_{1}+{}^{j}D$, (37)

where ${}^{j}D$ and ${}^{j}E$, ( $j=1,\cdots ,m$ ) are expressed in terms of the components of the derivatives ${f}_{n}^{\left(k\right)}$, $k=0,1,2,3$ :

${}^{j}D=\frac{{}^{j}f{}_{n}^{\left(0\right)}{}^{j}f{}_{n}^{\left(3\right)}-{}^{j}f{}_{n}^{\left(1\right)}{}^{j}f{}_{n}^{\left(2\right)}}{{}^{j}f{}_{n}^{\left(1\right)}{}^{j}f{}_{n}^{\left(1\right)}-{}^{j}f{}_{n}^{\left(0\right)}{}^{j}f{}_{n}^{\left(2\right)}}$, ${}^{j}E=\frac{{}^{j}f{}_{n}^{\left(1\right)}{}^{j}f{}_{n}^{\left(3\right)}-{}^{j}f{}_{n}^{\left(2\right)}{}^{j}f{}_{n}^{\left(2\right)}}{{}^{j}f{}_{n}^{\left(1\right)}{}^{j}f{}_{n}^{\left(1\right)}-{}^{j}f{}_{n}^{\left(0\right)}{}^{j}f{}_{n}^{\left(2\right)}}$ (38)

provided that the denominator in Equation (38) is not zero.
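The formulas above can be checked on a scalar function lying exactly in the span $\left\{1,\mathrm{exp}\left({\Omega}_{1}x\right),\mathrm{exp}\left(-{\Omega}_{2}x\right)\right\}$ of the interpolant (32): the fit is then exact, so one EFM step must reproduce the exact value. A Python sketch (the expression $R={\Omega}_{2}\Phi -{\Omega}_{1}\Xi $ used below can be rederived by requiring $\stackrel{\u02dc}{F}$, ${\stackrel{\u02dc}{F}}^{\prime}$ and ${\stackrel{\u02dc}{F}}^{\u2033}$ to match ${y}_{n}$, ${f}_{n}$ and ${f}_{n}^{\left(1\right)}$ at ${x}_{n}$):

```python
import numpy as np

def efm_step(y, f, h):
    """One real-case EFM step; f[k] is the k-th derivative of f at x_n."""
    den = f[1] * f[1] - f[0] * f[2]
    D = (f[0] * f[3] - f[1] * f[2]) / den            # Equation (38)
    E = (f[1] * f[3] - f[2] * f[2]) / den
    O1 = (-D + np.sqrt(D * D + 4 * E)) / 2           # Equation (37)
    O2 = O1 + D
    Phi = (np.exp(O1 * h) - 1) / (O1 * (O1 + O2))    # Equation (36)
    Xi = (np.exp(-O2 * h) - 1) / (O2 * (O1 + O2))
    R = O2 * Phi - O1 * Xi                           # cf. Equation (35)
    S = Phi + Xi
    return y + R * f[0] + S * f[1]                   # Equation (34)

# Test data from y(x) = 2*exp(-x) + exp(-100*x), so f = y' and
# f[k] = y^(k+1)(0); the fitted exponents are then O1 = -1, O2 = 100.
f = [-102.0, 10002.0, -1000002.0, 100000002.0]
h = 0.01
y1 = efm_step(3.0, f, h)
exact = 2 * np.exp(-h) + np.exp(-100 * h)
print(abs(y1 - exact))     # roundoff-level difference: the step is exact here
```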

3.2. Case of the Complex Interpolation Formula

The complex interpolation formula (see Equation (33)) is adopted if in Equation (37), the following relationship holds:

${\left({}^{j}D\right)}^{2}<-4{}^{j}E.$ (39)

The components of the stiffness matrices, which, in this case, are ${\Omega}_{1}$ and ${\Omega}_{2}=-{\Omega}_{1}^{*}$, can be written as

${}^{j}\Omega {}_{1}={\lambda}_{j}+i{u}_{j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{}^{j}\Omega {}_{2}=-{}^{j}\Omega {}_{1}^{*}=-{\lambda}_{j}+i{u}_{j}$, (40)

for some real numbers ${\lambda}_{j}$ and ${u}_{j}$. By imposing the same restrictions on (33) as on (32), we again obtain the recursion formula (34), but now with components ${}^{j}R$ and ${}^{j}S$ of $R$ and $S$, respectively, given by [15]

${}^{j}R\left({\lambda}_{j},{u}_{j}\right)=\frac{-{\text{e}}^{{\lambda}_{j}h}\left[\left({\lambda}_{j}^{2}-{u}_{j}^{2}\right)\mathrm{sin}\left(h{u}_{j}\right)-2{\lambda}_{j}{u}_{j}\mathrm{cos}\left(h{u}_{j}\right)\right]-2{\lambda}_{j}{u}_{j}}{{u}_{j}\left({\lambda}_{j}^{2}+{u}_{j}^{2}\right)}$, (41)

${}^{j}S\left({\lambda}_{j},{u}_{j}\right)=\frac{{\text{e}}^{{\lambda}_{j}h}\left[{\lambda}_{j}\mathrm{sin}\left(h{u}_{j}\right)-{u}_{j}\mathrm{cos}\left(h{u}_{j}\right)\right]+{u}_{j}}{{u}_{j}\left({\lambda}_{j}^{2}+{u}_{j}^{2}\right)}.$ (42)
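These formulas can be checked in the same way, on a function of the form ${\text{e}}^{\lambda x}\left(A\mathrm{cos}\left(ux\right)+B\mathrm{sin}\left(ux\right)\right)+C$, which the complex interpolant reproduces exactly. A sketch ($\lambda =-1$, $u=10$ and the coefficients are arbitrary illustrative choices):

```python
import numpy as np

# Derivative data f[k] = y^(k+1)(0) for y(x) = Re(cc*exp(w*x)) + 1 with
# w = -1 + 10j and cc = 2 - 1j, i.e. y(x) = e^{-x}(2cos(10x) + sin(10x)) + 1.
w, cc = -1.0 + 10.0j, 2.0 - 1.0j
f = [np.real(cc * w ** (k + 1)) for k in range(4)]

den = f[1] * f[1] - f[0] * f[2]
D = (f[0] * f[3] - f[1] * f[2]) / den              # Equation (38)
E = (f[1] * f[3] - f[2] * f[2]) / den
assert D * D < -4 * E                              # complex case, Equation (39)
lam = -D / 2                                       # Omega_1 = lam + i*u
u = np.sqrt(-(D * D + 4 * E)) / 2

h = 0.01
q = u * (lam ** 2 + u ** 2)
R = (np.exp(lam * h) * (2 * lam * u * np.cos(h * u)
     - (lam ** 2 - u ** 2) * np.sin(h * u)) - 2 * lam * u) / q   # Equation (41)
S = (np.exp(lam * h) * (lam * np.sin(h * u)
     - u * np.cos(h * u)) + u) / q                               # Equation (42)

y1 = 3.0 + R * f[0] + S * f[1]                     # one step of Equation (34)
exact = np.exp(-h) * (2 * np.cos(10 * h) + np.sin(10 * h)) + 1.0
print(abs(y1 - exact))                             # roundoff-level difference
```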

3.3. Local Truncation Error

Fatunla [15] has shown that his method, when applied to the scalar test problem ${y}^{\prime}=\lambda y$, is L-stable and exponentially fitted for any complex value $\lambda $ with negative real part. This means that the corresponding stability function $R\left(\lambda h\right)=\mathrm{exp}\left(\lambda h\right)$ gives the optimum stability properties. Furthermore, it can be shown that the jth component of the local truncation error at $t={t}_{n+1}$ is given by [20] [40] :

$\begin{array}{c}{}^{j}T{}_{n+1}=\frac{{h}^{5}}{5!}[{}^{j}f{}_{n}^{\left(4\right)}+\left({}^{j}\Omega {}_{2}^{3}-{}^{j}\Omega {}_{2}^{2}{}^{j}\Omega {}_{1}+{}^{j}\Omega {}_{2}{}^{j}\Omega {}_{1}^{2}-{}^{j}\Omega {}_{1}^{3}\right){}^{j}f{}_{n}^{\left(1\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{}^{j}\Omega {}_{1}{}^{j}\Omega {}_{2}\left({}^{j}\Omega {}_{1}^{2}-{}^{j}\Omega {}_{1}{}^{j}\Omega {}_{2}+{}^{j}\Omega {}_{2}^{2}\right){}^{j}f{}_{n}^{\left(0\right)}]+O\left({h}^{6}\right)\end{array}$ (43)

for the real interpolation formula and by [15]

${}^{j}T{}_{n+1}=\frac{{h}^{5}}{5!}\left[{}^{j}f{}_{n}^{\left(4\right)}+\left({\lambda}_{j}^{2}+{u}_{j}^{2}\right)\left(3{\lambda}_{j}^{2}-{u}_{j}^{2}\right){}^{j}f{}_{n}^{\left(0\right)}+4{\lambda}_{j}\left({u}_{j}^{2}-{\lambda}_{j}^{2}\right){}^{j}f{}_{n}^{\left(1\right)}\right]+O\left({h}^{6}\right)$ (44)

for the complex interpolation formula.

4. Application to the Robertson Problem

As stated earlier, the RP consists of a system of three non-linear ODEs, i.e.

$\{\begin{array}{c}{\stackrel{\dot{}}{y}}_{1}=-{k}_{1}{y}_{1}+{k}_{3}{y}_{2}{y}_{3}\\ {\stackrel{\dot{}}{y}}_{2}={k}_{1}{y}_{1}-{k}_{2}{y}_{2}^{2}-{k}_{3}{y}_{2}{y}_{3}\text{\hspace{0.17em}},\\ {\stackrel{\dot{}}{y}}_{3}={k}_{2}{y}_{2}^{2}\end{array}$ (45)

modeling the kinetics of three species, with $y\left(0\right)={y}_{0}={\left(\begin{array}{ccc}1& 0& 0\end{array}\right)}^{\text{T}}$ and parameters ${k}_{1}=0.04$, ${k}_{2}=3\times {10}^{7}$ and ${k}_{3}={10}^{4}$. ${y}_{1}$, ${y}_{2}$ and ${y}_{3}$ denote the concentrations of the three species. The RP is well-known to be stiff [7] [19] [38] . Note that ${\stackrel{\dot{}}{y}}_{i}$ means $\text{d}{y}_{i}/\text{d}t$. The Jacobian matrix is here given by

$J\left(y\left(t\right)\right)=\left[\begin{array}{ccc}-{k}_{1}& {k}_{3}{y}_{3}& {k}_{3}{y}_{2}\\ {k}_{1}& -2{k}_{2}{y}_{2}-{k}_{3}{y}_{3}& -{k}_{3}{y}_{2}\\ 0& 2{k}_{2}{y}_{2}& 0\end{array}\right].$ (46)

Its Hermitian part is

${H}_{e}\left(J\left(y\left(t\right)\right)\right)=\left[\begin{array}{ccc}-{k}_{1}& \frac{1}{2}\left({k}_{1}+{k}_{3}{y}_{3}\right)& \frac{1}{2}{k}_{3}{y}_{2}\\ \frac{1}{2}\left({k}_{1}+{k}_{3}{y}_{3}\right)& -2{k}_{2}{y}_{2}-{k}_{3}{y}_{3}& \frac{1}{2}\left(2{k}_{2}-{k}_{3}\right){y}_{2}\\ \frac{1}{2}{k}_{3}{y}_{2}& \frac{1}{2}\left(2{k}_{2}-{k}_{3}\right){y}_{2}& 0\end{array}\right]$ (47)

Since it is real and symmetric, this matrix certainly has three real eigenvalues, which are the solutions of the cubic equation

$a{\lambda}^{3}+b{\lambda}^{2}+c\lambda +d=0$ (48)

where $a=1$, $b=2{k}_{2}{y}_{2}+{k}_{3}{y}_{3}+{k}_{1}$, $c=-\left[\left(\frac{1}{4}{k}_{1}-2{k}_{2}{y}_{2}-\frac{1}{2}{k}_{3}{y}_{3}\right){k}_{1}+\left(-{k}_{2}{k}_{3}+{k}_{2}^{2}+\frac{1}{2}{k}_{3}^{2}\right){y}_{2}^{2}+\frac{1}{4}{k}_{3}^{2}{y}_{3}^{2}\right]$ and $d=-\left(-\frac{1}{2}{k}_{1}{k}_{3}+\frac{1}{2}{k}_{3}^{2}{y}_{3}+{k}_{1}{k}_{2}+\frac{1}{2}{k}_{3}^{2}{y}_{2}\right){k}_{2}{y}_{2}^{2}.$

Using Cardano's method for cubic equations in one variable, we obtain

${\lambda}_{1}={\mu}_{0}+\stackrel{\xaf}{{\mu}_{0}}-\frac{b}{3a}$, ${\lambda}_{2}=j{\mu}_{0}+\stackrel{\xaf}{j{\mu}_{0}}-\frac{b}{3a}$, ${\lambda}_{3}={j}^{2}{\mu}_{0}+\stackrel{\xaf}{{j}^{2}{\mu}_{0}}-\frac{b}{3a}$ (49)

with ${\mu}_{0}^{3}=\left(-q+i\sqrt{\Delta /27}\right)/2$, $j=\mathrm{exp}\left(2i\text{\pi}/3\right)=-1/2+i\sqrt{3}/2$ and $\Delta =-\left(4{p}^{3}+27{q}^{2}\right)>0$, where p and q are defined by $p=\left(3ac-{b}^{2}\right)/\left(3{a}^{2}\right)$, $q=\left(2{b}^{3}-9abc+27{a}^{2}d\right)/\left(27{a}^{3}\right)$. We can therefore study the stiffness of the RP by use of the stiffness indicator (see Equation (29))

${\sigma}_{2}\left[J\left(t\right)\right]=\left({m}_{2}\left[J\left(t\right)\right]+{M}_{2}\left[J\left(t\right)\right]\right)/2$ (50)

where ${M}_{2}\left[J\left(t\right)\right]$ and ${m}_{2}\left[J\left(t\right)\right]$ are, respectively, the largest and smallest eigenvalues of ${H}_{e}\left(J\left(t\right)\right)$.
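A sketch that puts these pieces together at a hypothetical state $y=\left(0.7,{10}^{-5},0.3\right)$ (chosen for illustration, not a point of the computed trajectory): build the Jacobian (46), check it against finite differences, evaluate the indicator (50), and compare the Cardano roots (49) with a direct eigenvalue computation:

```python
import numpy as np

# Rate constants of the RP and a hypothetical state (illustration only).
k1, k2, k3 = 0.04, 3e7, 1e4
y = np.array([0.7, 1e-5, 0.3])

def f(y):
    y1, y2, y3 = y
    return np.array([-k1*y1 + k3*y2*y3,
                      k1*y1 - k2*y2**2 - k3*y2*y3,
                      k2*y2**2])

def jac(y):                                # Jacobian of the RP, Equation (46)
    y1, y2, y3 = y
    return np.array([[-k1,  k3*y3,            k3*y2],
                     [ k1, -2*k2*y2 - k3*y3, -k3*y2],
                     [ 0.0, 2*k2*y2,           0.0]])

# Finite-difference check of the Jacobian (f is polynomial, so central
# differences are exact up to roundoff).
eps = 1e-7
J_fd = np.column_stack([(f(y + eps*e) - f(y - eps*e)) / (2*eps)
                        for e in np.eye(3)])
assert np.allclose(jac(y), J_fd, rtol=1e-5, atol=1e-5)

He = (jac(y) + jac(y).T) / 2               # Hermitian (symmetric) part
eigs = np.linalg.eigvalsh(He)              # ascending: m2 = eigs[0], M2 = eigs[-1]
sigma2 = (eigs[0] + eigs[-1]) / 2
print(sigma2)                              # large and negative: the RP is stiff here

# Cardano roots of the characteristic polynomial l^3 + b*l^2 + c*l + d (a = 1):
a = 1.0
b, c, d = np.poly(He)[1:]
p = (3*a*c - b*b) / (3*a*a)
q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
Delta = -(4*p**3 + 27*q*q)                 # > 0: three distinct real roots
mu0 = ((-q + 1j*np.sqrt(Delta/27)) / 2) ** (1/3)
jc = np.exp(2j*np.pi/3)
roots = sorted((z*mu0 + np.conj(z*mu0) - b/(3*a)).real for z in (1, jc, jc**2))
print(np.allclose(roots, eigs))            # True: Cardano agrees with numpy
```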

The components of the solution obtained by means of the EFM are shown in Figure 1. This figure also contains plots of the same components computed by the RADAU solver [4] , which is based on implicit Runge-Kutta methods. In Figure 2, we show the stiffness indicator ${\sigma}_{2}$, together with the variation of the integration step size associated with the EFM.

We want to emphasize that the implementation of the EFM for the RP requires the calculation of the function $f$ and its first four derivatives, i.e. ${f}^{\left(1\right)}$, ${f}^{\left(2\right)}$, ${f}^{\left(3\right)}$ and ${f}^{\left(4\right)}$, at each value ${t}_{n}$. The function $f$ is here given by

$f\left(y\left(t\right)\right)=\left(\begin{array}{c}-{k}_{1}{y}_{1}+{k}_{3}{y}_{2}{y}_{3}\\ {k}_{1}{y}_{1}-{k}_{2}{y}_{2}^{2}-{k}_{3}{y}_{2}{y}_{3}\\ {k}_{2}{y}_{2}^{2}\end{array}\right).$ (51)

We have used the Maple 18 software to establish analytical expressions for $f^{(1)}$, $f^{(2)}$, $f^{(3)}$ and $f^{(4)}$.
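As an illustration, the same derivatives can be generated with SymPy instead of Maple. The sketch below builds the total time derivatives of the autonomous system (51) by repeated application of the chain rule, $f^{(m+1)}=(\partial f^{(m)}/\partial y)\,f$:

```python
import sympy as sp

k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)
y1, y2, y3 = sp.symbols('y1 y2 y3')
y = sp.Matrix([y1, y2, y3])

# Robertson right-hand side, Eq. (51)
f = sp.Matrix([-k1*y1 + k3*y2*y3,
               k1*y1 - k2*y2**2 - k3*y2*y3,
               k2*y2**2])

# Total derivatives along the flow: f^(m+1) = (d f^(m) / dy) * f
derivs = [f]
for _ in range(4):
    derivs.append(derivs[-1].jacobian(y) * f)
f1, f2, f3, f4 = (sp.expand(d) for d in derivs[1:])
```

A useful consistency check is mass conservation: the components of $f$ and of every $f^{(m)}$ sum to zero, since $y_1+y_2+y_3$ is constant for the RP.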

Figure 1. The Robertson equation. Solution components $y_1$ and $y_3$ (top) and the component $y_2$ (bottom).

Figure 2. Stiffness indicator (top) and the integration step size (bottom) for the EFM.

As expected, the plot of the stiffness indicator shows that the RP is very stiff. We also find that our results agree closely with those computed by the RADAU solver, except for the solution component $y_2$ at small times. We think that the observed discrepancy between our results and the RADAU ones for $t$ around $10^{-4}$ atomic units (a.u.) is due to the fact that, during the implementation of the EFM, we have chosen, for $t\le 0.1$ a.u., a very small truncation error ($10^{-12}$ in magnitude) to control the integration step size, which means that our results are probably more accurate. For $t>0.1$ a.u., the time step has been adapted according to the condition $\left|T_{n+1}\right|\le 10^{-9}$. Because we have used two distinct conditions on the truncation error to control the integration step size, the graph of the step size shows a discontinuity at $t=0.1$ a.u.
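The two-regime error control described above can be sketched as follows; the function name and the halving/doubling strategy are our illustrative choices, since the exact controller is not specified here:

```python
def adapt_step(t, h, T_est):
    """Hypothetical step-size controller mimicking the two-regime
    truncation-error bound: |T_{n+1}| <= 1e-12 for t <= 0.1 a.u.,
    |T_{n+1}| <= 1e-9 afterwards. Returns (new step, accepted flag)."""
    tol = 1e-12 if t <= 0.1 else 1e-9
    if abs(T_est) > tol:            # error too large: reject, halve and retry
        return h / 2, False
    if abs(T_est) < tol / 100:      # error well below the bound: grow the step
        return 2 * h, True
    return h, True                  # accept with unchanged step
```

The jump of `tol` from $10^{-12}$ to $10^{-9}$ at $t=0.1$ a.u. is exactly what produces the step-size discontinuity visible in Figure 2.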

5. Conclusions

The object of this paper has been to present some basic characteristics of stiff differential equations, as well as to introduce the EFM, which has been shown to be suitable for solving stiff problems. We have shown that the stiffness indicator of the Jacobian matrix $J$ gives sufficient information to estimate the computational cost of explicit schemes such as Euler's. Numerical results obtained by solving the Robertson problem with the EFM on the one hand and the RADAU solver on the other confirm that the explicit method in question can be a good candidate for solving stiff ODEs.

We have to emphasize that modern codes for solving ODEs automatically vary the step size, estimate the local error, and provide facilities to compute the solution at intermediate points via interpolation [12]. That is why, during the implementation of the EFM, we calculate the truncation error $T_{n+1}$ to control the size of the integration step, imposing a bound on $\left|T_{n+1}\right|$.

References

[1] Atkinson, K.E., Han, W. and Stewart, D. (2009) Numerical Solution of Ordinary Differential Equations. John Wiley & Sons, Hoboken, NJ.

[2] Hairer, E. and Wanner, G. (1996) Solving Ordinary Differential Equations II, Stiff Differential-Algebraic Problems. 2nd Edition, Springer-Verlag, Berlin.

[3] Niemeyer, K.E. and Sung, C.J. (2014) GPU-Based Parallel Integration of Large Numbers of Independent ODE Systems. In: Numerical Computations with GPUs, Chapter 8, 159-188.

[4] Mazzia, F. and Iavernaro, F. (2003) Test Set for Initial Value Problem Solvers, Release 2.4. http://www.dm.uniba.it/~testset

[5] Akinfenwa, O., Jator, S. and Yao, N. (2011) An Eighth Order Backward Differentiation Formula with Continuous Coefficients for Stiff Ordinary Differential Equations. International Journal of Mathematical and Computer Sciences, 7, 171-176.

[6] Curtiss, C.F. and Hirschfelder, J.O. (1952) Integration of Stiff Equations. Proceedings of the National Academy of Sciences of the United States of America, 38, 235-243. https://doi.org/10.1073/pnas.38.3.235

[7] Butcher, J.C. (2016) Numerical Methods for Ordinary Differential Equation. 3rd Edition, John Wiley & Sons, Hoboken, NJ.

[8] Sandu, A., Verwer, J.G., Blom, J.G., Spee, E.J., Carmichael, G.R. and Potra, F.A. (1997) Benchmarking Stiff ODE Solvers for Atmospheric Chemistry Problems II: Rosenbrock Solvers. Atmospheric Environment, 31, 3459-3479. https://doi.org/10.1016/S1352-2310(97)83212-8

[9] Kim, J. and Cho, S.Y. (1997) Computational Accuracy and Efficiency of the Time-Splitting Method in Solving Atmospheric Transport/Chemistry Equations. Atmospheric Environment, 31, 2215-2224. https://doi.org/10.1016/S1352-2310(97)88636-0

[10] Bonnard, B., Faubourg, L. and Trélat, E. (2006) Mécanique Céleste et Controle des Véhicules Spatiaux. Springer, Berlin.

[11] Shampine, L.F. and Gear, C.W. (1979) A User's View of Solving Stiff Ordinary Differential Equations. SIAM Review, 21, 1-17.

[12] Ebady, A.M.N., Habib, H.M. and El-Zahar, E.R. (2012) A Fourth Order A-Stable Explicit One-Step Method for Solving Stiff Differential Systems Arising in Chemical Reactions. International Journal of Pure and Applied Mathematics, 81, 803-812.

[13] Gupta, G.K., Sacks-Davis, R. and Tischer, P.E. (1985) A Review of Recent Developments in Solving ODEs. Computing Surveys, 17, 5-47. https://doi.org/10.1145/4078.4079

[14] Bjurel, G., Dahlquist, G.G., Lindberg, B., Linde, S. and Oden, L. (1970) Survey of Stiff Ordinary Differential Equations. Report NA 70.11, KTH Royal Institute of Technology, Stockholm, Sweden.

[15] Fatunla, S.O. (1980) Numerical Integrators for Stiff and Highly Oscillatory Differential Equations. Mathematics of Computation, 34, 373-390. https://doi.org/10.1090/S0025-5718-1980-0559191-X

[16] Robertson, H.H. (1967) The Solution of a Set of Reaction Rate Equations. In: Walsh, J., Ed., Numerical Analysis: An Introduction, Academic Press, Cambridge, Massachusetts, 178-182.

[17] Shampine, L.F. (1994) Numerical Solution of Ordinary Differential Equations. Chapman & Hall, UK.

[18] Edsberg, L. (1974) Integration Package for Chemical Kinetics. Plenum Press, Heidelberg, Germany, 81-94.

[19] Gobbert, M.K. (1996) Robertson's Example for Stiff Differential Equations. Technical Report, Arizona State University, Tempe, AZ.

[20] Frapiccini, A.L., et al. (2014) Explicit Schemes for Time Propagating Many-Body Wave Functions. Physical Review A, 89, Article ID: 023418.

[21] Aiken, R.C. (1985) Stiff Computation. Oxford University Press, Oxford, England.

[22] Moler, C.B. (2004) Numerical Computing with Matlab. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania.

[23] Merriam-Webster (1989) Webster’s Ninth New Collegiate Dictionary. Merriam-Webster, Springfield, Massachusetts.

[24] Hornby, A.S., Gatenby, E.V. and Wakefield, H. (1963) The Advanced Learner’s Dictionary of Current English. Oxford University Press, Oxford, England.

[25] Little, W., Fowler, H.W., Coulson, J. and Onions, C.T. (1968) The Shorter Oxford English Dictionary on Historical Principles. 3rd Edition, Clarendon Press, Oxford, England.

[26] Dahlquist, G. (1963) A Special Stability Problem for Linear Multistep Methods. BIT Numerical Mathematics, 3, 27-43. https://doi.org/10.1007/BF01963532

[27] Jedrzejewski, F. (2005) Introduction aux Méthodes Numériques. Deuxième Edition, Springer-Verlag, Berlin.

[28] Hairer, E. and Wanner, G. (1999) Stiff Differential Equations Solved by Radau Methods. Journal of Computational and Applied Mathematics, 111, 93-111. https://doi.org/10.1016/S0377-0427(99)00134-X

[29] Iserles, A. (2008) A First Course in the Numerical Analysis of Differential Equations. 2nd Edition, Cambridge University Press, Cambridge, England.

[30] Anake, T.A., Bishop, S.A., Adesanya, A.O. and Agarana, M.C. (2014) An -Stable Method for Solving Initial-Value Problems of Ordinary Differential Equations. Advances in Differential Equations and Control Processes, 13, 21-35.

[31] Cash, J.R. (2003) Review Paper: Efficient Numerical Methods for the Solution of Stiff Initial-Value problems and Differential Algebraic Equations. Proceedings of the Royal Society A, 459, 797-815. https://doi.org/10.1098/rspa.2003.1130

[32] Lambert, J.D. (1991) Numerical Methods for Ordinary Differential Systems: The Initial Value Problem. John Wiley & Sons, Hoboken, NJ.

[33] Sekar, S. and Nalini, M. (2015) Numerical Solution of Linear and Nonlinear Stiff Problems Using Adomian Decomposition Method. IOSR Journal of Mathematics, 11, 14-20.

[34] Cartwright, J.H.E. (1999) Nonlinear Stiffness, Lyapunov Exponents, and Attractor Dimension. Physics Letters A, 264, 298-302. https://doi.org/10.1016/S0375-9601(99)00793-8

[35] Gaffney, P.W. (1984) A Performance Evaluation of Some FORTRAN Subroutines for the Solution of Stiff Oscillatory ODEs. ACM Transactions on Mathematical Softwares, 10, 58-72. https://doi.org/10.1145/356068.356073

[36] Fatunla, S.O. (1988) Numerical Methods for Initial Value Problems in Ordinary Differential Equations. Academic Press, Cambridge, Massachusetts.

[37] Lambert, J.D. (1980) Stiffness. In: Gladwell, I. and Sayers, D.K., Eds., Computational Techniques for Ordinary Differential Equations, Academic Press, Cambridge, Massachusetts.

[38] Söderlind, G., Jay, L. and Calvo, M. (2015) Stiffness 1952-2012: Sixty Years in Search of a Definition. BIT Numerical Mathematics, 55, 531-558. https://doi.org/10.1007/s10543-014-0503-3

[39] Söderlind, G. (2006) The Logarithmic Norm. History and Modern Theory. BIT Numerical Mathematics, 46, 631-652. https://doi.org/10.1007/s10543-006-0069-9

[40] Madroñero, J. and Piraux, B. (2009) Explicit Time-Propagation Method to Treat the Dynamics of Driven Complex Systems. Physical Review A, 80, Article ID: 033409.