Numerical Simulation Using GEM for the Optimization Problem as a System of FDEs

Affiliation(s)

^{1}
Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt.

^{2}
Department of Mathematics, Faculty of Science, Benha University, Benha, Egypt.

Abstract

In this paper, we introduce a numerical treatment using the generalized Euler
method (GEM) for the non-linear programming problem governed
by a system of fractional differential equations (FDEs). The fractional
derivatives appearing in these equations are taken in the Caputo sense. We compare our
numerical solutions with those obtained using the fourth-order Runge-Kutta (RK4) method. The
obtained numerical results for the optimization problem model show the simplicity
and efficiency of the proposed scheme.

Keywords

Nonlinear Programming, Penalty Function, Dynamic System, Caputo Fractional Derivative, Generalized Euler Method, RK4 Method


1. Introduction

Fractional differential equations (FDEs) have recently been applied in various areas of engineering, science, finance, applied mathematics, bio-engineering and others. However, many researchers remain unaware of this field [1]. Consequently, considerable attention has been given to the solutions of FDEs of physical interest. Most FDEs do not have exact solutions, so approximate and numerical techniques ([2], [3]) must be used. Recently, several numerical methods for solving FDEs have been given, such as the collocation method [4].

Definition 1.

The Caputo fractional operator ${D}^{\nu}$ of order $\nu $ is defined in the following form

${D}^{\nu}\psi \left(x\right)=\frac{1}{\Gamma \left(m-\nu \right)}\text{\hspace{0.05em}}{\displaystyle {\int}_{0}^{x}}\frac{{\psi}^{\left(m\right)}\left(t\right)}{{\left(x-t\right)}^{\nu -m+1}}\text{d}t,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\nu >0,\text{\hspace{0.17em}}m-1<\nu \le m,\text{\hspace{0.17em}}m\in \mathbb{N},\text{\hspace{0.17em}}x>0$

For more details on the definitions of fractional derivatives and their properties, see [5].
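As a quick sanity check of Definition 1, the Caputo derivative of a power function can be approximated directly from the integral above and compared with the known closed form ${D}^{\nu}{x}^{k}=\frac{\Gamma \left(k+1\right)}{\Gamma \left(k+1-\nu \right)}{x}^{k-\nu}$. The following sketch is plain Python with midpoint quadrature; the test function $\psi \left(t\right)={t}^{2}$, the order $\nu =0.5$, and the number of quadrature cells are illustrative choices, not taken from the paper.

```python
import math

def caputo_power(nu, k, x, n_cells=100_000):
    """Approximate the Caputo derivative D^nu of psi(t) = t^k at x
    for 0 < nu <= 1 (so m = 1), using the substitution s = x - t:
    D^nu psi(x) = (1/Gamma(1-nu)) * int_0^x psi'(x - s) * s^(-nu) ds.
    Midpoint quadrature keeps the integrable singularity at s = 0
    away from the evaluation points."""
    if nu == 1.0:
        return k * x ** (k - 1)        # classical first derivative
    h = x / n_cells
    total = 0.0
    for j in range(n_cells):
        s = (j + 0.5) * h              # midpoint of the j-th cell
        total += k * (x - s) ** (k - 1) * s ** (-nu) * h
    return total / math.gamma(1.0 - nu)

# closed form: D^0.5 of x^2 at x = 1 is Gamma(3)/Gamma(2.5) * 1^1.5
exact = math.gamma(3.0) / math.gamma(2.5)
approx = caputo_power(0.5, 2, 1.0)
```

The midpoint rule converges slowly near the singularity at $s=0$, so the agreement is only to a few digits; a graded mesh or product quadrature would do better.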

Optimization theory aims to find the optimal solution of problems that are defined mathematically from models arising in a wide range of scientific and engineering disciplines. Many methods and algorithms have been developed for this purpose. Penalty function techniques are classical methods for solving the non-linear programming (NLP) problem [6]. In this type of method, the optimization problem is formulated as a system of FDEs so that the equilibrium point of this system converges to the local minimum of the optimization problem ([7] [8] [9]).

2. Mathematical Formulation

In this section, we reformulate the optimization problem as a system of FDEs. To this end, we consider the non-linear programming problem with equality constraints defined by

$\text{\hspace{0.05em}}\text{minimize}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\psi \left(x\right)\mathrm{,}\text{\hspace{1em}}\text{\hspace{0.05em}}\text{subject}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}x\in \mathbb{D}$ (1)

with

$\mathbb{D}=\left\{x\in {\Re}^{m}:h\left(x\right)=0\right\}$

where $\psi \mathrm{:}{\Re}^{m}\to \Re $ and $h={\left({h}_{1}\mathrm{,}{h}_{2}\mathrm{,}\cdots \mathrm{,}{h}_{p}\right)}^{\text{T}}\mathrm{:}{\Re}^{m}\to {\Re}^{p}$ ( $p\le m$ ). It is assumed that the functions in the problem are at least twice continuously differentiable, that a solution exists, and that $\nabla h\left(x\right)$ has full rank. To obtain a solution of (1), the penalty function method solves a sequence of unconstrained optimization problems. The well-known penalty function for this problem can be defined as follows

$\Psi \left(x;\mu \right)=\psi \left(x\right)+\mu \frac{1}{\theta}{\displaystyle \underset{\mathcal{l}=1}{\overset{p}{\sum}}}{\left({h}_{\mathcal{l}}\left(x\right)\right)}^{\theta}$ (2)

for some constant $\theta >0$, where $\mu >0$ is an auxiliary penalty variable.
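In code, the penalty construction (2) amounts to wrapping $\psi $ and the constraint functions ${h}_{\mathcal{l}}$ into a single unconstrained objective. A minimal sketch in plain Python; the toy objective ${x}^{2}$ with constraint $x-1=0$ and the value $\mu =10$ are illustrative, not from the paper.

```python
def penalty(psi, constraints, theta, mu):
    """Build the penalized objective (2):
    Psi(x; mu) = psi(x) + (mu/theta) * sum_l h_l(x)**theta."""
    def Psi(x):
        return psi(x) + (mu / theta) * sum(h(x) ** theta for h in constraints)
    return Psi

# toy problem: minimize x^2 subject to x - 1 = 0, quadratic penalty (theta = 2)
Psi = penalty(lambda x: x * x, [lambda x: x - 1.0], theta=2, mu=10.0)
```

At the feasible point $x=1$ the penalty term vanishes, so $\Psi \left(1;\mu \right)=\psi \left(1\right)$; infeasible points are charged quadratically.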

The corresponding unconstrained optimization problem of (2) is defined as follows

$\text{minimize}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\Psi \left(x\mathrm{;}\mu \right),\text{\hspace{1em}}\text{\hspace{0.05em}}\text{s}\text{.t}\mathrm{.}\text{\hspace{0.05em}}\text{\hspace{0.17em}}x\in {\Re}^{m}$ (3)

Now, we consider the unconstrained optimization problem (3), an approach based on fractional dynamic system can be described by the following FDEs

${D}^{\nu}x\left(t\right)=-{\nabla}_{x}\Psi \left(x;\mu \right),\text{\hspace{1em}}0<\nu \le 1$ (4)

with the initial conditions ${x}_{i}\left({t}_{0}\right)={a}_{i},\text{\hspace{0.05em}}\text{\hspace{0.05em}}i=1,2,\cdots ,m$.

Note that a point ${x}_{e}$ is called an equilibrium point of (4) if the right-hand side of Equation (4) vanishes at ${x}_{e}$. Also, we can rewrite the fractional dynamic system (4) in a more general form as follows

${D}^{\nu}{x}_{i}\left(t\right)={\chi}_{i}\left(t,\mu ;{x}_{1},{x}_{2},\cdots ,{x}_{m}\right),\text{\hspace{1em}}i=1,2,\cdots ,m$ (5)

The steady-state solution of the non-linear system of FDEs (5) must coincide with a local optimal solution of the NLP problem (1). The existence and uniqueness of solutions of this system of FDEs have been studied in several papers [10]. For more details about the NLP problem, see [11].

3. The Proposed Methods

3.1. Generalized Taylor’s Formula

In this subsection, we give the generalization of Taylor’s formula that involves Caputo fractional derivatives [12] . Suppose that

${D}^{k\nu}\psi \left(x\right)\in C\left(0,a\right],\text{\hspace{0.05em}}\text{\hspace{1em}}\text{\hspace{0.05em}}\text{for}\text{\hspace{0.05em}}\text{\hspace{0.17em}}k=0,1,\cdots ,n+1,\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{where}\text{\hspace{0.05em}}\text{\hspace{0.17em}}0<\nu \le 1$

Then we have

$\psi \left(x\right)={\displaystyle \underset{i=0}{\overset{n}{\sum}}}\frac{{x}^{i\nu}}{\Gamma \left(i\nu +1\right)}{D}^{i\nu}\psi \left({0}^{+}\right)+\frac{\left({D}^{\left(n+1\right)\nu}\psi \right)\left(\xi \right)}{\Gamma \left(\left(n+1\right)\nu +1\right)}{x}^{\left(n+1\right)\nu},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}0\le \xi \le x,\text{\hspace{0.17em}}\forall x\in \left(0,a\right]$ (6)

The generalized Taylor’s Formula (6) reduces to the classical Taylor’s formula in the case $\nu =1$ [13].

3.2. Generalized Euler Method

In this subsection, we will introduce a generalization of the classical Euler’s method with Caputo derivatives. To achieve this aim, we consider the following IVP in its general form:

${D}^{\nu}\Upsilon \left(t\right)=\psi \left(t,\Upsilon \left(t\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Upsilon \left(0\right)={\Upsilon}_{0},\text{\hspace{0.17em}}\text{\hspace{0.17em}}0<\nu \le 1,\text{\hspace{0.17em}}0<t<a$ (7)

In the proposed method we will not find a function $\Upsilon \left(t\right)$ that satisfies IVP (7) but we will find a set of points $\left({t}_{j}\mathrm{,}\Upsilon \left({t}_{j}\right)\right)$ and use it for our approximation. For convenience we divide the interval $\left[\mathrm{0,}a\right]$ into n subintervals $\left[{t}_{j}\mathrm{,}{t}_{j+1}\right]$ of equal width $h=a/n$ by using the nodes ${t}_{j}=jh$ , for $j=0,1,\cdots ,n$ . Assume that $\Upsilon \left(t\right)\mathrm{,}{D}^{\nu}\Upsilon \left(t\right)$ and ${D}^{2\nu}\Upsilon \left(t\right)$ are continuous on $\left[\mathrm{0,}a\right]$ and use the generalized Taylor’s Formula (6) to expand $\Upsilon \left(t\right)$ about $t={t}_{0}=0$ . For each value t there is a value ${c}_{1}$ so that

$\Upsilon \left(t\right)=\Upsilon \left({t}_{0}\right)+\frac{{D}^{\nu}\Upsilon \left({t}_{0}\right)}{\Gamma \left(\nu +1\right)}{t}^{\nu}+\frac{{D}^{2\nu}\Upsilon \left({c}_{1}\right)}{\Gamma \left(2\nu +1\right)}{t}^{2\nu}$ (8)

Now, when ${D}^{\nu}\Upsilon \left({t}_{0}\right)=\psi \left({t}_{0},\Upsilon \left({t}_{0}\right)\right)$ and $h={t}_{1}$ are substituted into Equation (8), the result is an expression for $\Upsilon \left({t}_{1}\right)$

$\Upsilon \left({t}_{1}\right)=\Upsilon \left({t}_{0}\right)+\psi \left({t}_{0},\Upsilon \left({t}_{0}\right)\right)\frac{{h}^{\nu}}{\Gamma \left(\nu +1\right)}+{D}^{2\nu}\Upsilon \left({c}_{1}\right)\frac{{h}^{2\nu}}{\Gamma \left(2\nu +1\right)}$

If the step size h is chosen small enough, then we may neglect the second-order term (involving ${h}^{2\nu}$ ) and get

$\Upsilon \left({t}_{1}\right)=\Upsilon \left({t}_{0}\right)+\psi \left({t}_{0},\Upsilon \left({t}_{0}\right)\right)\frac{{h}^{\nu}}{\Gamma \left(\nu +1\right)}$

The process is repeated to generate a sequence of points that approximates the solution $\Upsilon \left(t\right)$ . The general formula for GEM when ${t}_{j+1}={t}_{j}+h$ is given by:

$\Upsilon \left({t}_{j+1}\right)=\Upsilon \left({t}_{j}\right)+\psi \left({t}_{j},\Upsilon \left({t}_{j}\right)\right)\frac{{h}^{\nu}}{\Gamma \left(\nu +1\right)},\text{\hspace{1em}}j=0,1,\cdots ,n-1$ (9)

Remark:

1) The generalized Euler’s method (9) was derived by Odibat and Momani for the numerical solution of IVPs with Caputo derivatives [12].

2) The generalized Euler’s method (9) reduces to the classical Euler’s method in the case $\nu =1$ [12].
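The recursion (9) and the two remarks above can be checked with a few lines of code. A minimal sketch in plain Python; the test right-hand sides, step size, and horizon are illustrative choices, not taken from the paper.

```python
import math

def gem(f, y0, nu, h, n):
    """Generalized Euler method (9):
    y_{j+1} = y_j + f(t_j, y_j) * h**nu / Gamma(nu + 1),  t_j = j*h."""
    w = h ** nu / math.gamma(nu + 1.0)   # step weight; equals h when nu = 1
    t, y = 0.0, y0
    traj = [(t, y)]
    for j in range(n):
        y = y + f(t, y) * w
        t = (j + 1) * h
        traj.append((t, y))
    return traj

# Remark 2: for nu = 1 this is the classical Euler method, e.g. for
# D^1 y = -y, y(0) = 1 the iterates are exactly (1 - h)^j.
euler = gem(lambda t, y: -y, 1.0, 1.0, 0.1, 10)

# For 0 < nu < 1 only the constant step weight changes, e.g. for
# D^0.5 y = 1, y(0) = 0 the iterates are y_j = j * h**0.5 / Gamma(1.5).
frac = gem(lambda t, y: 1.0, 0.0, 0.5, 0.1, 10)
```

Note that the fractional weight ${h}^{\nu}/\Gamma \left(\nu +1\right)$ is constant in $j$, which is what makes the scheme as cheap as classical Euler.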

4. Numerical Simulation

In this section, we illustrate the effectiveness of the proposed method and validate the solution scheme for solving the system of FDEs generated from the non-linear programming problem. To this end, we consider the following two optimization problems.

Example 1:

Consider the following non-linear programming problem (Optimization problem) ( [14] , [15] )

$\begin{array}{l}\text{minimize}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\psi \left(x\right)=100{\left({u}^{2}-v\right)}^{2}+{\left(u-1\right)}^{2},\\ \text{subject}\text{\hspace{0.05em}}\text{to}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\left(x\right)=u\left(u-4\right)-2v+12=0\end{array}$ (10)

The optimal solution is ${x}^{*}=\left(2,4\right)$, where $x=\left(u,v\right)$. To solve the above problem, we convert it to an unconstrained optimization problem with the quadratic penalty function (2) for $\theta =2$; then we have

$\Psi \left(x;\mu \right)=100{\left({u}^{2}-v\right)}^{2}+{\left(u-1\right)}^{2}+\frac{1}{2}\mu {\left(u\left(u-4\right)-2v+12\right)}^{2}$

where $\mu \in {\Re}^{+}$ is an auxiliary penalty variable. The corresponding non-linear system of FDEs from (5) is defined as

$\begin{array}{l}{D}^{\nu}u\left(t\right)=-400\left({u}^{2}-v\right)u-2\left(u-1\right)-\mu \left(2u-4\right)\left({u}^{2}-4u-2v+12\right),\\ {D}^{\nu}v\left(t\right)=200\left({u}^{2}-v\right)+2\mu \left({u}^{2}-4u-2v+12\right),\text{\hspace{1em}}0<\nu \le 1\end{array}$ (11)

The initial conditions are $u\left(0\right)=0$ and $v\left(0\right)=0$ .

Now, we solve this system of non-linear FDEs numerically using the GEM. In view of the GEM, the numerical scheme for the proposed model (11) takes the following form

$\begin{array}{l}u\left({t}_{j+1}\right)=u\left({t}_{j}\right)+{\psi}_{1}\left({t}_{j},u\left({t}_{j}\right),v\left({t}_{j}\right)\right)\frac{{h}^{\nu}}{\Gamma \left(\nu +1\right)},\\ v\left({t}_{j+1}\right)=v\left({t}_{j}\right)+{\psi}_{2}\left({t}_{j},u\left({t}_{j}\right),v\left({t}_{j}\right)\right)\frac{{h}^{\nu}}{\Gamma \left(\nu +1\right)}\end{array}$ (12)

where the quantities ${\psi}_{1}\left({t}_{j},u\left({t}_{j}\right),v\left({t}_{j}\right)\right)$ and ${\psi}_{2}\left({t}_{j},u\left({t}_{j}\right),v\left({t}_{j}\right)\right)$ are computed from the following functions, respectively, at the points ${t}_{j}=jh,\text{\hspace{0.05em}}\text{\hspace{0.05em}}j=0,1,\cdots ,n$.

$\begin{array}{l}{\psi}_{1}\left(t,u\left(t\right),v\left(t\right)\right)=-400\left({u}^{2}-v\right)u-2\left(u-1\right)-\mu \left(2u-4\right)\left({u}^{2}-4u-2v+12\right),\\ {\psi}_{2}\left(t,u\left(t\right),v\left(t\right)\right)=200\left({u}^{2}-v\right)+2\mu \left({u}^{2}-4u-2v+12\right)\end{array}$
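As an illustration of how scheme (12) advances, one step from the initial point $u\left(0\right)=v\left(0\right)=0$ can be computed by hand and checked in code. A sketch in plain Python; the value $\mu =1$ is an illustrative assumption, since the paper does not report the penalty value used.

```python
import math

MU = 1.0  # illustrative penalty value; not reported in the paper

def psi1(u, v):
    """Right-hand side of the u-equation in system (11)."""
    return (-400.0 * (u * u - v) * u - 2.0 * (u - 1.0)
            - MU * (2.0 * u - 4.0) * (u * u - 4.0 * u - 2.0 * v + 12.0))

def psi2(u, v):
    """Right-hand side of the v-equation in system (11)."""
    return 200.0 * (u * u - v) + 2.0 * MU * (u * u - 4.0 * u - 2.0 * v + 12.0)

def gem_step(u, v, h, nu):
    """One step of the generalized Euler scheme (12)."""
    w = h ** nu / math.gamma(nu + 1.0)
    return u + psi1(u, v) * w, v + psi2(u, v) * w

# By hand: psi1(0,0) = 2 + 48*MU = 50 and psi2(0,0) = 24*MU = 24,
# so with h = 0.1 and nu = 1 the first iterate is (5.0, 2.4).
u1, v1 = gem_step(0.0, 0.0, 0.1, 1.0)
```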

In Figure 1 and Figure 2, we present the behavior of the numerical solution $\left(u\left(t\right),v\left(t\right)\right)$ of problem (11) using the GEM and the RK4 method at $\nu =1$, respectively. From Figure 1, we can see that for $\nu =1$ the solutions obtained using the proposed method are in excellent agreement with the RK4 solution, and from Figure 2 we note that the numerical solution for $\nu =0.9$ exhibits the same behavior as in the case $\nu =1$. All computations in this paper were done using Matlab 8.0.

Figure 1. The numerical solution using GEM with $n=10,h=0.1$ , and RK4 method at $\nu =1$ .

Figure 2. The numerical solution using GEM, with $n=10,h=0.1$ at $\nu =0.9$ .

Example 2:

Consider the equality constrained optimization problem ( [14] , [15] )

$\begin{array}{l}\text{minimize}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\psi \left(x\right)={\left({x}_{1}-1\right)}^{2}+{\left({x}_{1}-{x}_{2}\right)}^{2}+{\left({x}_{2}-{x}_{3}\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\left({x}_{3}-{x}_{4}\right)}^{4}+{\left({x}_{4}-{x}_{5}\right)}^{4},\\ \text{subject}\text{\hspace{0.05em}}\text{to}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}{h}_{1}\left(x\right)={x}_{1}+{x}_{2}^{2}+{x}_{3}^{3}-2-3\sqrt{2}=0,\\ \text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{0.17em}}{h}_{2}\left(x\right)={x}_{2}-{x}_{3}^{2}+{x}_{4}+2-2\sqrt{2}=0,\\ \text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{0.17em}}{h}_{3}\left(x\right)={x}_{1}{x}_{5}-2=0\end{array}$ (13)

The solution of (13) is ${x}^{\mathrm{*}}\approx \left(\mathrm{1.191127,1.362603,1.472818,1.635017,1.679081}\right)$, which is an approximate, not exact, solution. To solve the above problem, we convert it to an unconstrained optimization problem with the quadratic penalty function (2) for $\theta =2$; then we have

$\Psi \left(x;\mu \right)=\psi \left(x\right)+\frac{1}{2}\mu {\displaystyle \underset{\mathcal{l}=1}{\overset{3}{\sum}}}{\left({h}_{\mathcal{l}}\left(x\right)\right)}^{2}$ (14)

where $\mu \in {\Re}^{+}$ is an auxiliary penalty variable. The corresponding non-linear system of FDEs from (5) is defined as

${D}^{\nu}x\left(t\right)=-\nabla \psi \left(x\right)-\mu \nabla h\left(x\right)h\left(x\right),\text{\hspace{1em}}0<\nu \le 1$ (15)

The initial condition is $x\left(0\right)={\left(\mathrm{2,2,2,2,2}\right)}^{\text{T}}$, which is not feasible.
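As a quick consistency check, the reported approximate optimum can be substituted into the three equality constraints of problem (13); all residuals should be near zero. A sketch in plain Python:

```python
import math

SQRT2 = math.sqrt(2.0)

def h(x):
    """Equality constraints h_1, h_2, h_3 of problem (13)."""
    x1, x2, x3, x4, x5 = x
    return (x1 + x2 ** 2 + x3 ** 3 - 2.0 - 3.0 * SQRT2,
            x2 - x3 ** 2 + x4 + 2.0 - 2.0 * SQRT2,
            x1 * x5 - 2.0)

# approximate optimum reported in the paper (six-digit accuracy)
x_star = (1.191127, 1.362603, 1.472818, 1.635017, 1.679081)
residuals = h(x_star)
```

Since ${x}^{\mathrm{*}}$ is quoted to six digits, residuals of order ${10}^{-5}$ are expected rather than exact zeros.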

The numerical solution of the proposed optimization problem (13) can be obtained by following the procedure stated in the previous example and solving the resulting non-linear system of FDEs. The obtained numerical results from the proposed methods are presented in Tables 1-3. In Table 1 and Table 2, we present the numerical solution $x\left(t\right)=\left({x}_{1}\left(t\right),{x}_{2}\left(t\right),\cdots ,{x}_{5}\left(t\right)\right)$ using the GEM and the RK4 method at $\nu =1$, respectively. In Table 3, we present the numerical solution of the same system (15) using the proposed method with $\nu =0.9$. From these tables, we conclude that the solutions of the proposed method are in excellent agreement with those of the RK4 method. In addition, the behavior of the solution is in agreement with the physical meaning of the proposed problem.

Table 1. The numerical solution of Example 2 using GEM at $\nu =1$ .

Table 2. The numerical solution of Example 2 using the RK4 method at $\nu =1$ .

Table 3. The numerical solution of Example 2 using GEM at $\nu =0.9$ .

5. Conclusion

We implemented the GEM to study the numerical solution of the system of FDEs that describes the NLP model. This work is devoted to studying the behavior of the numerical solution of the proposed problem for various values of the fractional order $\nu $ and also for the classical case $\nu =1$. We also compared the obtained numerical solutions with those obtained using the RK4 method. From this study, we see that the numerical solution obtained using the suggested method is in excellent agreement with the RK4 solution, which shows that this approach can solve the problem effectively and illustrates the validity and the great potential of the proposed technique. Finally, the recent appearance of FDEs as models in some fields of applied mathematics makes it necessary to investigate analytical and numerical methods for such equations.

Acknowledgements

We thank the Editor and the Referees for their comments.

Cite this paper

Adel, M. and Khader, M. (2017) Numerical Simulation Using GEM for the Optimization Problem as a System of FDEs. *Applied Mathematics*, **8**, 1761-1768. doi: 10.4236/am.2017.812126.


References

[1] Podlubny, I. (1999) Fractional Differential Equations. Academic Press, New York.

[2] Khader, M.M. and Adel, M. (2016) Numerical Solutions of Fractional Wave Equations Using an Efficient Class of FDM Based on Hermite Formula. Advances in Difference Equations, 2016, 1-10. https://doi.org/10.1186/s13662-015-0731-0

[3] Sweilam, N.H. and Khader, M.M. (2010) A Chebyshev Pseudo-Spectral Method for Solving Fractional Integro-Differential Equations. ANZIAM, 51, 464-475. https://doi.org/10.1017/S1446181110000830

[4] Khader, M.M. (2011) On the Numerical Solutions for the Fractional Diffusion Equation. Communications in Nonlinear Science and Numerical Simulations, 16, 2535-2542. https://doi.org/10.1016/j.cnsns.2010.09.007

[5] Oldham, K.B. and Spanier, J. (1974) The Fractional Calculus. Academic Press, New York.

[6] Luenberger, D.G. (1973) Introduction to Linear and Nonlinear Programming. Addison-Wesley, Boston.

[7] Özdemir, N. and Evirgen, F. (2010) A Dynamic System Approach to Quadratic Programming Problems with Penalty Method. Bulletin of the Malaysian Mathematical Sciences Society, 10, 79-91.

[8] Arrow, K.J., Hurwicz, L. and Uzawa, H. (1958) Studies in Linear and Non-Linear Programming. Stanford University Press, Redwood City.

[9] Yamashita, H. (1976) Differential Equation Approach to Nonlinear Programming. Mathematical Programming, 1, 155-168.

[10] Sun, W. and Yuan, Y. (2006) Optimization Theory and Methods: Nonlinear Programming. Springer-Verlag, New York.

[11] Fiacco, A.V. and Mccormick, G.P. (1968) Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley, New York.

[12] Odibat, Z.M. and Shawagfeh, N.T. (2007) Generalized Taylor’s Formula. Applied Mathematics and Computation, 186, 286-293.
https://doi.org/10.1016/j.amc.2006.07.102

[13] Arafa, A.A.M., Rida, S.Z. and Khalil, M. (2012) Fractional Modeling Dynamics of HIV and CD4+ T-Cells during Primary Infection. Nonlinear Biomedical Physics, 6, 1-7. https://doi.org/10.1186/1753-4631-6-1

[14] Schittkowski, K. (1987) More Test Examples for Nonlinear Programming Codes. Springer-Verlag, Berlin.

[15] Khader, M.M., Sweilam, N.H. and Mahdy, A.M.S. (2013) Numerical Study for the Fractional Differential Equations Generated by Optimization Problem Using Chebyshev Collocation Method and FDM. Applied Mathematics and Information Science, 7, 2011-2018.