Multistep Quadrature Based Methods for Nonlinear System of Equations with Singular Jacobian
Abstract: Methods for approximating the solution of nonlinear systems of equations often fail when the Jacobian of the system is singular at an iteration point. In this paper, multi-step families of quadrature-based iterative methods for approximating the solution of nonlinear systems of equations with singular Jacobian are developed using a decomposition technique. The proposed methods require only the evaluation of the first-order Fréchet derivative per iteration. Comparison of the approximate solutions generated by the proposed iterative methods with those of some existing contemporary methods in the literature shows that the methods developed herein are efficient and adequate for approximating the solution of nonlinear systems of equations whose Jacobians are singular or non-singular at iteration points.

1. Introduction

Systems of equations used to describe real-life phenomena are often nonlinear in nature. Examples of mathematical models formulated using nonlinear systems of equations (NLSE) include models of kinematics, combustion, chemical equilibrium, economic and neurophysiology problems; reactor steering problems; and transportation problems. Indeed, most real-life problems are best described using NLSE.

Consider the NLSE,

$G\left(X\right)=0$ (1)

where $X\in {\Re }^{m}$ , 0 is the null vector of dimension m, and $G:D\subset {\Re }^{m}\to {\Re }^{m}$ is a function defined by

$G\left({X}_{1},{X}_{2},\cdots ,{X}_{m}\right)={\left[{G}_{i}\left({X}_{1},{X}_{2},\cdots ,{X}_{m}\right)\right]}^{\text{T}},$

${G}_{i},i=1,2,\cdots ,m$ are the coordinate functions of G, and D is an open domain in ${\Re }^{m}$ .

The Newton method in m dimensions is a popular iterative method for approximating the solution of the NLSE (1). The sequence of approximations ${\left\{{X}_{k}\right\}}_{k\ge 1}$ generated by the Newton method converges to the solution $\Phi$ of the NLSE (1) with convergence order $\rho =2$ , under the condition that det $\left({G}^{\prime }\left({X}_{k}\right)\right)\ne 0$ . One setback of the Newton method is that it fails if, at any stage of the computation, the Jacobian matrix ${G}^{\prime }\left({X}_{k}\right)$ is singular.
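As a point of reference, the m-dimensional Newton iteration can be sketched as follows (a minimal NumPy implementation; the example system, starting point, and tolerances are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def newton_system(G, J, x0, tol=1e-10, max_iter=50):
    """m-dimensional Newton method for G(X) = 0.

    G : callable returning the residual vector G(X)
    J : callable returning the m x m Jacobian matrix G'(X)
    The linear solve raises LinAlgError when G'(X_k) is singular,
    which is exactly the failure mode discussed above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve G'(X_k) d = -G(X_k) rather than forming the inverse
        d = np.linalg.solve(J(x), -G(x))
        x = x + d
        if np.linalg.norm(d, np.inf) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 4, xy = 1
G = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton_system(G, J, np.array([2.0, 0.5]))
```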

The development of new iterative methods for approximating the solution of (1) has attracted the attention of researchers in recent years, as is evident in the literature. One objective of developing iterative methods for approximating the solutions of the NLSE (1) is to obtain methods with a better convergence rate, better computational efficiency, or modifications suited to particular classes of problems. Recently, a plethora of iterative methods for approximating the solution of (1) have been developed via diverse techniques, including Taylor series and homotopy techniques, decomposition techniques, and quadrature formula techniques.

Quadrature formulas are veritable tools for the evaluation of integrals. The idea used in developing quadrature-based iterative methods is the approximation of the integral in the Taylor expansion of a vector function using quadrature formulas. The quadrature-based iterative functions are of implicit type. To implement an implicit iterative formula derived via a quadrature formula, the predictor-corrector technique is utilized, with the Newton method often used as the predictor and the iterative function derived from the quadrature formula as the corrector. Quadrature-based methods break down in implementation when the Jacobian ${G}^{\prime }\left(X\right)$ of the NLSE (1) is singular at iteration points. The presence of a singular Jacobian ${G}^{\prime }\left(X\right)$ within the domain of evaluation does not, in practice, imply the absence of a solution to (1). In order to circumvent the problem of a singular Jacobian at a point in the vicinity of the solution $\Phi$ of (1), the Newton method has been modified by introducing a perturbation term (a diagonal matrix) into the Jacobian of its corrector factor. This idea was later utilized to develop a Two-step iterative method for approximating the solution of (1) where the Jacobian ${G}^{\prime }\left(X\right)$ is singular at some iteration points, and a similar perturbation term was introduced at every step of the corrector factor in a three-step frozen-Jacobian iterative method for approximating the solution of (1). Other articles have also developed several iterative methods for solving (1) with the help of the same perturbation term introduced into the target Jacobian of (1). It is worth noting that the diagonal matrix used as the perturbation term in the literature has not been significantly modified since it was introduced, and its application has not been extended to quadrature-based iterative methods.
Motivated and inspired by these works, families of multi-step quadrature-based iterative methods with a perturbation term infused into the Jacobian are developed in this paper. It is important to note that the perturbation term developed and used in this work is different from the diagonal matrix formed from the coordinate functions of the target NLSE (1) that is used in the literature. To achieve this target, a continuous and differentiable auxiliary function is directly infused into (1). The resulting NLSE is thereafter expressed as coupled equations using a generic quadrature formula. The decomposition technique is used to resolve the coupled equations, from which some iterative schemes that can be utilized in developing iterative methods for approximating the solution of (1), whose Jacobian may be singular, are proposed.

2. The Proposed Iterative Methods

Let $\beta$ be an initial approximation close to $\Phi$ , a solution of the NLSE (1), and let $\Omega \left(X\right)$ be a function such that

$\Omega \left(X\right)=\psi \left(X\right)\odot G\left(X\right)=0$ (2)

where $\psi \left(X\right)$ is a differentiable nonzero scalar function. The notation $\odot$ denotes the component-wise product, such that

$\psi \left(X\right)\odot G\left(X\right)={\left[\psi \left(X\right){G}_{1}\left(X\right),\psi \left(X\right){G}_{2}\left(X\right),\cdots ,\psi \left(X\right){G}_{m}\left(X\right)\right]}^{\text{T}}$ (3)

The solutions of $\Omega \left(X\right)=0$ and $G\left(X\right)=0$ coincide because $\psi \left(X\right)\ne 0$ for all values of X.

With the aid of the Taylor series expansion of a multi-dimensional function about $\beta$ up to the second term, and using a generic quadrature formula for the derivative term, (2) can be rewritten as

$G\left(\beta \right)+\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left(X-\beta \right)\right)\right]\left(X-\beta \right)+H\left(X\right)=0$ (4)

where $H\left(X\right)$ denotes the higher-order terms of the Taylor expansion, the division operator in $\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)$ is element-wise, and ${\theta }_{i}$ and ${\mu }_{i},i=1,2,\cdots ,q$ are knots and weights, respectively, such that

$\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}=\frac{1}{2},$ (5)

Equation (5) gives the consistency conditions.
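As a quick illustration, the standard one-point midpoint rule and two-point trapezoidal rule both satisfy the conditions in (5); this can be checked programmatically (the rules named here are assumed examples, not prescribed by the text):

```python
def is_consistent(weights, knots, tol=1e-12):
    """Check the consistency conditions (5):
    sum(mu_i) = 1 and sum(mu_i * theta_i) = 1/2."""
    s0 = sum(weights)
    s1 = sum(m * t for m, t in zip(weights, knots))
    return abs(s0 - 1.0) < tol and abs(s1 - 0.5) < tol

midpoint_ok = is_consistent([1.0], [0.5])             # q = 1 midpoint rule
trapezoid_ok = is_consistent([0.5, 0.5], [0.0, 1.0])  # q = 2 trapezoidal rule
```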

Equation (4) is expressed as the coupled equations given in (6) and (7).

$G\left(\beta \right)+\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left(X-\beta \right)\right)\right]\left(X-\beta \right)+H\left(X\right)=0$ (6)

$H\left(X\right)=G\left(X\right)-G\left(\beta \right)-\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left(X-\beta \right)\right)\right]\left(X-\beta \right)$ (7)

In compact form, (6) can be expressed as

$X=\beta +\hslash \left(X\right)$ (8)

where

$\hslash \left(X\right)=-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left(X-\beta \right)\right)\right]}^{-1}\left(G\left(\beta \right)+H\left(X\right)\right)$ (9)

is a nonlinear function.

The decomposition technique is applied to decompose the nonlinear function (9) as

$\hslash \left(X\right)=\hslash \left({X}_{0}\right)+\underset{i=1}{\overset{\infty }{\sum }}\left[\hslash \left(\underset{j=0}{\overset{i}{\sum }}\text{ }{X}_{j}\right)-\hslash \left(\underset{j=0}{\overset{i-1}{\sum }}\text{ }{X}_{j}\right)\right]$ (10)

where ${X}_{0}$ is the initial guess.

The idea here is to find the solution vector X of the NLSE (1) in series form through an iterative scheme, such that the solution X is the sum of the initial guess $\beta$ and the consecutive differences of successive approximations of X; that is,

$X=\underset{i=0}{\overset{\infty }{\sum }}\text{ }{X}_{i}=\beta +\hslash \left({X}_{0}\right)+\underset{i=1}{\overset{\infty }{\sum }}\left[\hslash \left(\underset{j=0}{\overset{i}{\sum }}\text{ }{X}_{j}\right)-\hslash \left(\underset{j=0}{\overset{i-1}{\sum }}\text{ }{X}_{j}\right)\right]$ (11)

Hence the following scheme from (11) is obtained:

$\begin{array}{l}{X}_{0}=\beta \\ {X}_{1}=\hslash \left({X}_{0}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\\ {X}_{i+1}=\hslash \left(\underset{j=0}{\overset{i}{\sum }}\text{ }{X}_{j}\right)-\hslash \left(\underset{j=0}{\overset{i-1}{\sum }}\text{ }{X}_{j}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots \end{array}$ (12)

The sum of the respective sides of (12) is

$\underset{i=0}{\overset{s+1}{\sum }}\text{ }{X}_{i}=\beta +\hslash \left(\underset{i=0}{\overset{s}{\sum }}\text{ }{X}_{i}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}s=1,2,\cdots$ (13)

From (13), the solution X of (1) is approximated as:

$X\approx \underset{i=0}{\overset{s}{\sum }}\text{ }{X}_{i}=\beta +\hslash \left(\underset{i=0}{\overset{s-1}{\sum }}\text{ }{X}_{i}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}s=1,2,\cdots$ (14)

As s becomes large, the approximation of the solution X gets closer to the exact solution of (1).

From Equation (12)

${X}_{0}=\beta$ (15)

Since ${X}_{0}=\beta$ is the initial guess, setting $X={X}_{0}$ in (7) yields

$H\left({X}_{0}\right)=0$ (16)

From (9) and (15),

${X}_{1}=\hslash \left({X}_{0}\right)=-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\right){G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)$ (17)

For $s=1$ in (14), using (17) and the consistency condition $\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}=1$ , the following is obtained.

$X\approx {X}_{0}+{X}_{1}\approx \beta +\hslash \left({X}_{0}\right)\approx \beta -{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)$ (18)

Using the formulation in (18), a One-step family of iterative schemes for approximating the solution of (1) is proposed as Scheme 1.

Scheme 1 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative scheme:

${X}_{k+1}={X}_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+{G}^{\prime }\left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots$ (19)

Scheme 1 is a family of One-step iterative schemes that can be used to propose iterative methods for solving (1) for suitable choices of the function $\psi \left({X}_{k}\right)$ .

For $s=2$ in (14), the solution X of (1) can be approximated as:

$\begin{array}{c}X\approx {X}_{0}+{X}_{1}+{X}_{2}\approx {X}_{0}+\hslash \left({X}_{0}+{X}_{1}\right)\\ \approx \beta -{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}×\left(G\left(\beta \right)+H\left({X}_{0}+{X}_{1}\right)\right)\end{array}$ (20)

Setting $X={X}_{0}+{X}_{1}$ in (7) implies

$\begin{array}{l}H\left({X}_{0}+{X}_{1}\right)=G\left({X}_{0}+{X}_{1}\right)-G\left(\beta \right)\\ -\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]\left({X}_{0}+{X}_{1}-\beta \right)\end{array}$ (21)

From (18),

${X}_{0}+{X}_{1}-\beta =-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)$ (22)

Substituting (22) into (21) yields

$\begin{array}{l}H\left({X}_{0}+{X}_{1}\right)=G\left({X}_{0}+{X}_{1}\right)-G\left(\beta \right)\\ +\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]\\ ×{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)\end{array}$ (23)

Inserting (23) into (20) gives the equation

$\begin{array}{l}X\approx {X}_{0}+{X}_{1}+{X}_{2}\\ \approx \beta -{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]}^{-1}G\left({X}_{0}+{X}_{1}\right)\end{array}$ (24)

Using (24), a Two-step iterative scheme for approximating the solution $\Phi$ of (1) is proposed as Scheme 2.

Scheme 2 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative scheme:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+{G}^{\prime }\left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right)\\ {X}_{k+1}={\nu }_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)\right]}^{-1}G\left({\nu }_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (25)

Scheme 2 is used to propose Two-step iterative methods for approximating the solution $\Phi$ of the NLSE (1).

For $s=3$ in (14), the solution of (1) can be approximated as follows:

$\begin{array}{c}X\approx {X}_{0}+{X}_{1}+{X}_{2}+{X}_{3}\approx {X}_{0}+\hslash \left({X}_{0}+{X}_{1}+{X}_{2}\right)\\ \approx \beta -{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}+{X}_{2}-\beta \right)\right)\right]}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }×\left(G\left(\beta \right)+H\left({X}_{0}+{X}_{1}+{X}_{2}\right)\right)\end{array}$ (26)

Setting $X={X}_{0}+{X}_{1}+{X}_{2}$ in (7) yields

$\begin{array}{l}H\left({X}_{0}+{X}_{1}+{X}_{2}\right)=G\left({X}_{0}+{X}_{1}+{X}_{2}\right)-G\left(\beta \right)\\ -\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}+{X}_{2}-\beta \right)\right)\right]\left({X}_{0}+{X}_{1}+{X}_{2}-\beta \right)\end{array}$ (27)

From (24), (28) is obtained.

$\begin{array}{l}{X}_{0}+{X}_{1}+{X}_{2}-\beta \\ =-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]}^{-1}G\left({X}_{0}+{X}_{1}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)\end{array}$ (28)

Substituting (28) into (27) yields

$\begin{array}{l}H\left({X}_{0}+{X}_{1}+{X}_{2}\right)=G\left({X}_{0}+{X}_{1}+{X}_{2}\right)-G\left(\beta \right)\\ +\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}+{X}_{2}-\beta \right)\right)\right]\\ ×\left({\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]}^{-1}G\left({X}_{0}+{X}_{1}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)\right)\end{array}$ (29)

Substituting (29) into (26) gives

$\begin{array}{l}X\approx {X}_{0}+{X}_{1}+{X}_{2}+{X}_{3}\approx {X}_{0}+\hslash \left({X}_{0}+{X}_{1}+{X}_{2}\right)\\ \approx \beta -{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}+{X}_{2}-\beta \right)\right)\right]}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}×G\left({X}_{0}+{X}_{1}+{X}_{2}\right)-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left(\beta +{\theta }_{i}\left({X}_{0}+{X}_{1}-\beta \right)\right)\right]}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}×G\left({X}_{0}+{X}_{1}\right)-{\left[G\left(\beta \right)\left(\frac{\nabla \psi {\left(X\right)}^{\text{T}}}{\psi \left(X\right)}\right)+{G}^{\prime }\left(\beta \right)\right]}^{-1}G\left(\beta \right)\end{array}$ (30)

The formulation in (30) enables the proposal of a Three-step iterative scheme for the solution of (1).

Scheme 3 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative scheme:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+{G}^{\prime }\left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right)\\ {W}_{k}={\nu }_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)\right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-{\left[G\left({X}_{k}\right)\left(\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}\right)+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)\right)\right]}^{-1}G\left({W}_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (31)

Scheme 3 is used to propose Three-step iterative methods for approximating the solution $\Phi$ of the NLSE (1).

A suitable choice of the function $\psi \left({X}_{k}\right)$ in the proposed Scheme 1, Scheme 2 and Scheme 3 yields families of quadrature-based iterative methods for approximating the solution $\Phi$ of (1). It is worth noting that for $\psi \left({X}_{k}\right)=1$ , Scheme 1 reduces to the classical Newton method, while Scheme 2 and Scheme 3 reduce to known families of approximation methods. One major aim of introducing $\Omega \left(X\right)$ in (2) is to obtain the perturbation function $\psi \left({X}_{k}\right)$ while retaining the solution $\Phi$ of the target NLSE (1). Recall that $\psi \left({X}_{k}\right)$ must be chosen as a nonzero scalar function whose first derivative $\nabla \psi \left({X}_{k}\right)$ does not vanish; this way, the solution of (1) is unperturbed. One function that, together with its first derivative, is nonzero is the exponential function. Suppose $\psi \left({X}_{k}\right)$ is replaced by ${\text{e}}^{-\psi \left({X}_{k}\right)}$ ; then

$\frac{\nabla \psi {\left({X}_{k}\right)}^{\text{T}}}{\psi \left({X}_{k}\right)}=-\nabla \psi {\left({X}_{k}\right)}^{\text{T}}$ (32)
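This identity is a direct consequence of the chain rule; as a short verification of (32):

```latex
\nabla\left(\mathrm{e}^{-\psi(X_k)}\right) = -\,\mathrm{e}^{-\psi(X_k)}\,\nabla\psi(X_k)
\quad\Longrightarrow\quad
\frac{\nabla\left(\mathrm{e}^{-\psi(X_k)}\right)^{\mathrm{T}}}{\mathrm{e}^{-\psi(X_k)}}
= -\nabla\psi(X_k)^{\mathrm{T}}
```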

From (32), a generalization can be made as

$\nabla \psi {\left({X}_{k}\right)}^{\text{T}}=\lambda \left({X}_{k}\right)$ (33)

where $\lambda \left({X}_{k}\right)={\left[{\lambda }_{1}\left({X}_{k}\right),{\lambda }_{2}\left({X}_{k}\right),\cdots ,{\lambda }_{m}\left({X}_{k}\right)\right]}^{\text{T}}$ . Consequently, the following iterative algorithms are obtained from Scheme 1, Scheme 2 and Scheme 3 respectively.

Algorithm 1 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative method:

${X}_{k+1}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right),k=0,1,2,\cdots$ (34)

If the parameter $\lambda \left({X}_{k}\right)=0$ , Algorithm 1 reduces to the classical m-dimensional Newton method. The major difference between Algorithm 1 and the Wu method is the introduction of the dense matrix $G\left({X}_{k}\right)\nabla \psi {\left({X}_{k}\right)}^{\text{T}}$ in place of the diagonal matrix $diag\left({\sigma }_{i}{g}_{i}\left({X}_{k}\right)\right),i=1,2,\cdots ,m$ , in the Jacobian ${G}^{\prime }\left({X}_{k}\right)$ of the target (1).
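A minimal sketch of Algorithm 1 in NumPy follows. The dense perturbation $G\left({X}_{k}\right)\lambda \left({X}_{k}\right)$ is realized as an outer product; the constant choice of $\lambda$ and the test system (whose Jacobian is singular at the chosen starting point) are illustrative assumptions:

```python
import numpy as np

def algorithm1(G, J, lam, x0, tol=1e-10, max_iter=100):
    """One-step method (34):
    X_{k+1} = X_k - [G'(X_k) - G(X_k) lam(X_k)]^{-1} G(X_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = G(x)
        # Dense rank-one perturbation G(X_k) lam(X_k) of the Jacobian
        A = J(x) - np.outer(g, lam(x))
        d = np.linalg.solve(A, -g)
        x = x + d
        if np.linalg.norm(d, np.inf) < tol:
            break
    return x

# Illustrative system: G(x, y) = (x^2 - y, y^2 - x), root at (1, 1);
# its Jacobian is singular wherever 4xy = 1, e.g. at the start (0.5, 0.5)
G = lambda v: np.array([v[0]**2 - v[1], v[1]**2 - v[0]])
J = lambda v: np.array([[2*v[0], -1.0], [-1.0, 2*v[1]]])
lam = lambda v: np.array([0.5, 0.5])  # element magnitudes below one (Remark 1)
root = algorithm1(G, J, lam, np.array([0.5, 0.5]))
```

Plain Newton would fail on the first step of this run, since the unperturbed Jacobian at (0.5, 0.5) is singular.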

Algorithm 2 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right),\\ {X}_{k+1}={\nu }_{k}-{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({\nu }_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (35)

Algorithm 2 is a Two-step family of iterative methods for approximating the solution of (1).

Algorithm 3 Assume ${X}_{0}$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({X}_{k}\right),\\ {W}_{k}={\nu }_{k}-{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \left({X}_{k}\right)\right]}^{-1}G\left({W}_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (36)
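Algorithms 1, 2 and 3 share one pattern: a perturbed-Newton predictor followed by zero, one or two quadrature-type corrector sweeps using the same perturbation term. A sketch parameterized by the number of steps (the one-point midpoint rule $q=1$ , ${\theta }_{1}=1/2$ , ${\mu }_{1}=1$ and the test problem are illustrative assumptions):

```python
import numpy as np

def multistep(G, J, lam, x0, steps=3, weights=(1.0,), knots=(0.5,),
              tol=1e-12, max_iter=100):
    """Algorithms 1-3 in one routine: steps = 1, 2 or 3.

    The predictor is the perturbed Newton step of Algorithm 1; each
    corrector replaces G'(X_k) by the quadrature combination
    sum(mu_i * G'(X_k + theta_i * (Y - X_k))), with the perturbation
    matrix G(X_k) lam(X_k) held fixed over the corrector sweeps.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        P = np.outer(G(x), lam(x))               # perturbation term
        y = x - np.linalg.solve(J(x) - P, G(x))  # predictor step
        for _ in range(steps - 1):               # corrector sweeps
            A = sum(m * J(x + t * (y - x)) for m, t in zip(weights, knots))
            y = y - np.linalg.solve(A - P, G(y))
        if np.linalg.norm(y - x, np.inf) < tol:
            return y
        x = y
    return x

G = lambda v: np.array([v[0]**2 - v[1], v[1]**2 - v[0]])
J = lambda v: np.array([[2*v[0], -1.0], [-1.0, 2*v[1]]])
lam = lambda v: np.array([0.5, 0.5])
root = multistep(G, J, lam, np.array([0.5, 0.5]), steps=3)
```

Setting steps to 1 recovers Algorithm 1, and steps 2 and 3 recover Algorithms 2 and 3 for the chosen quadrature rule.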

Remark 1

For numerical implementation, the choice of $\lambda \left({X}_{k}\right)$ is subjective; however, specific values of $\lambda$ (for reference purposes, $\lambda \left({X}_{k}\right)$ is denoted by $\lambda$ ) are used such that the magnitudes of its elements are less than one, in order to achieve a better convergence rate and accuracy. Similarly, the choice of ${\theta }_{i}$ is subjective but must satisfy the consistency conditions in (5).

2.1. Convergence Analysis of the Proposed Iterative Methods

In this section, the convergence of the iterative methods (Algorithm 1, Algorithm 2 and Algorithm 3) is established using the Taylor series approach. In all the proofs, it is assumed that the function $G\left(\cdot \right)$ is thrice Fréchet differentiable.

2.2. Convergence Analysis of Algorithm 1

To establish the convergence of Algorithm 1, the proof of Theorem 1 is considered.

Theorem 1 Suppose the function $G:D\subset {\Re }^{m}\to {\Re }^{m}$ is continuously differentiable in some neighborhood $D\subset {\Re }^{m}$ of $\Phi$ . If ${X}_{0}$ is an initial guess in the neighborhood of $\Phi$ , then the sequence of approximations ${\left\{{X}_{k}\right\}}_{k\ge 0},\left({X}_{k}\in D\right)$ generated by (34) converges to $\Phi$ with convergence order $\rho =2$ .

Proof. Let ${E}_{k}={X}_{k}-\Phi$ be the error at the kth iteration point. Using the Taylor series expansions of $G\left(X\right)$ and ${G}^{\prime }\left(X\right)$ about $\Phi$ , the following equations are obtained.

$\begin{array}{c}G\left(X\right)=G\left(\Phi \right)+{G}^{\prime }\left(\Phi \right)\left(X-\Phi \right)+\frac{1}{2!}{G}^{″}\left(\Phi \right){\left(X-\Phi \right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{3!}{G}^{‴}\left(\Phi \right){\left(X-\Phi \right)}^{3}+\cdots \end{array}$ (37)

$\begin{array}{c}{G}^{\prime }\left(X\right)={G}^{\prime }\left(\Phi \right)+{G}^{″}\left(\Phi \right)\left(X-\Phi \right)+\frac{1}{2!}{G}^{‴}\left(\Phi \right){\left(X-\Phi \right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{3!}{G}^{\left(4\right)}\left(\Phi \right){\left(X-\Phi \right)}^{3}+\cdots \end{array}$ (38)

Setting $X={X}_{k}$ in (37) and (38) implies

$G\left({X}_{k}\right)=G\left(\Phi +{E}_{k}\right)={G}^{\prime }\left(\Phi \right)\left[{E}_{k}+\underset{n=2}{\overset{4}{\sum }}\text{ }{C}_{n}{E}_{k}^{n}+O\left({E}_{k}^{5}\right)\right],k=0,1,2,\cdots$ (39)

${G}^{\prime }\left({X}_{k}\right)={G}^{\prime }\left(\Phi +{E}_{k}\right)={G}^{\prime }\left(\Phi \right)\left[I+\underset{n=2}{\overset{5}{\sum }}\text{ }n{C}_{n}{E}_{k}^{n-1}+O\left({E}_{k}^{5}\right)\right],k=0,1,2,\cdots$ (40)

where I is the $m×m$ identity matrix and ${C}_{n}=\frac{1}{n!}{\left({G}^{\prime }\left(\Phi \right)\right)}^{-1}{G}^{\left(n\right)}\left(\Phi \right),n\ge 2$ .

Using (39) and (40),

$\begin{array}{l}{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}\\ ={\left({G}^{\prime }\left(\Phi \right)\right)}^{-1}\left[I+\left(-2{C}_{2}+\lambda \right){E}_{k}+\left(4{C}_{2}^{2}-3{C}_{3}-3{C}_{2}\lambda +{\lambda }^{2}\right){E}_{k}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\left(8{C}_{2}^{3}-12{C}_{2}{C}_{3}+4{C}_{4}-8{C}_{2}^{2}\lambda +5{C}_{3}\lambda +4{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right){E}_{k}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(16{C}_{2}^{4}-36{C}_{2}^{2}{C}_{3}+9{C}_{3}^{2}+16{C}_{2}{C}_{4}-5{C}_{5}-20{C}_{2}^{3}\lambda +26{C}_{2}{C}_{3}\lambda -7{C}_{4}\lambda +13{C}_{2}^{2}{\lambda }^{2}-7{C}_{3}{\lambda }^{2}-5{C}_{2}{\lambda }^{3}+{\lambda }^{4}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\right]\end{array}$ (41)

Multiplying (41) and (39) yields

$\begin{array}{l}{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right)\\ ={E}_{k}+\left(\lambda -{C}_{2}\right){E}_{k}^{2}+\left(2{C}_{2}^{2}-2{C}_{3}-2{C}_{2}\lambda +{\lambda }^{2}\right){E}_{k}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\left(4{C}_{2}^{3}-7{C}_{2}{C}_{3}+3{C}_{4}-5{C}_{2}^{2}\lambda +4{C}_{3}\lambda +3{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\end{array}$ (42)

Substituting (42) into (34), the following equation is obtained.

$\begin{array}{l}{X}_{k+1}=\Phi +\left({C}_{2}-\lambda \right){E}_{k}^{2}+\left(-2{C}_{2}^{2}+2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right){E}_{k}^{3}\\ +\left(4{C}_{2}^{3}-7{C}_{2}{C}_{3}+3{C}_{4}-5{C}_{2}^{2}\lambda +4{C}_{3}\lambda +3{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\end{array}$ (43)

Equation (43) implies that the sequence of approximations generated by the iterative method (34) converges to the solution $\Phi$ of (1) with convergence order $\rho =2$ .
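The quadratic order asserted by Theorem 1 can also be checked numerically by tracking the errors of a run of Algorithm 1: with order two, each error is roughly proportional to the square of the previous one. A sketch (the test system, starting point, and the constant $\lambda$ are illustrative assumptions):

```python
import numpy as np

# Errors of Algorithm 1 iterates toward the known root Phi = (1, 1)
G = lambda v: np.array([v[0]**2 - v[1], v[1]**2 - v[0]])
J = lambda v: np.array([[2*v[0], -1.0], [-1.0, 2*v[1]]])
lam = np.array([0.3, 0.3])
phi = np.array([1.0, 1.0])

x = np.array([1.3, 0.8])
errors = []
for _ in range(5):
    errors.append(np.linalg.norm(x - phi, np.inf))
    # One step of (34): X_{k+1} = X_k - [G'(X_k) - G(X_k) lam]^{-1} G(X_k)
    x = x - np.linalg.solve(J(x) - np.outer(G(x), lam), G(x))

# With quadratic convergence, log E_{k+1} / log E_k tends to rho = 2
rates = [np.log(errors[k+1]) / np.log(errors[k]) for k in range(len(errors)-1)]
```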

2.3. Convergence Analysis of the Proposed Algorithm 2

Similar to the proof of Theorem 1, the convergence of Algorithm 2 is established in the proof of Theorem 2.

Theorem 2 Suppose the function $G:D\subset {\Re }^{m}\to {\Re }^{m}$ is continuously differentiable in some neighborhood $D\subset {\Re }^{m}$ of $\Phi$ . If ${X}_{0}$ is an initial guess in the neighborhood of $\Phi$ , then the sequence of approximations ${\left\{{X}_{k}\right\}}_{k\ge 0},\left({X}_{k}\in D\right)$ generated by (35) converges to $\Phi$ with convergence order $\rho =3$ .

Proof. From Equation (35), ${\nu }_{k}$ is defined. Setting $X={\nu }_{k}$ in (37) leads to the following equation.

$G\left({\nu }_{k}\right)={G}^{\prime }\left(\Phi \right)\left[\left({C}_{2}-\lambda \right){E}_{k}^{2}+\left(-2{C}_{2}^{2}+2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right){E}_{k}^{3}+O\left({E}_{k}^{4}\right)\right]$ (44)

Similarly, setting $X={X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)$ in (38) gives

$\begin{array}{l}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)\\ ={G}^{\prime }\left(\Phi \right)\left[I+2{C}_{2}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(1-{\theta }_{i}\right){E}_{k}+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(3{C}_{3}{\left(1-{\theta }_{i}\right)}^{2}+2{C}_{2}\left({C}_{2}-\lambda \right){\theta }_{i}\right){E}_{k}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(4{C}_{4}{\left(1-{\theta }_{i}\right)}^{3}+2{C}_{2}\left(-2{C}_{2}^{2}+2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right){\theta }_{i}+6{C}_{3}\left({C}_{2}-\lambda \right)\left(1-{\theta }_{i}\right){\theta }_{i}\right){E}_{k}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(2{C}_{2}\left(4{C}_{2}^{3}-7{C}_{2}{C}_{3}+3{C}_{4}-5{C}_{2}^{2}\lambda +4{C}_{3}\lambda +3{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right){\theta }_{i}+12{C}_{4}\left({C}_{2}-\lambda \right){\left(1-{\theta }_{i}\right)}^{2}{\theta }_{i}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+3{C}_{3}\left(2\left(-2{C}_{2}^{2}+2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right)\left(1-{\theta }_{i}\right){\theta }_{i}+{\left({C}_{2}-\lambda \right)}^{2}{\theta }_{i}^{2}\right)+5{C}_{5}{\left(1-{\theta }_{i}\right)}^{4}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\right]\end{array}$ (45)

Using (39) and (45);

$\begin{array}{l}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \\ ={G}^{\prime }\left(\Phi \right)\left[I+\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(-\lambda -2{C}_{2}\left({\theta }_{i}-1\right)\right)\right){E}_{k}\end{array}$

$\begin{array}{l}\text{\hspace{0.17em}}+\left(-{C}_{2}\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(3{C}_{3}{\left({\theta }_{i}-1\right)}^{2}+2{C}_{2}\left({C}_{2}-\lambda \right){\theta }_{i}\right)\right){E}_{k}^{2}\\ \text{\hspace{0.17em}}+\left(-{C}_{3}\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(4{C}_{4}{\left({\theta }_{i}-1\right)}^{3}-2{C}_{2}\left(2{C}_{2}^{2}-2{C}_{3}-2{C}_{2}\lambda +{\lambda }^{2}\right){\theta }_{i}\\ \begin{array}{c}\text{ }\\ \text{ }\end{array}-6{C}_{3}\left({C}_{2}-\lambda \right)\left(1-{\theta }_{i}\right){\theta }_{i}\right)\right){E}_{k}^{3}\end{array}$

$\begin{array}{l}\text{\hspace{0.17em}}+\left(-{C}_{4}\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}\left(2{C}_{2}\left(4{C}_{2}^{3}-7{C}_{2}{C}_{3}+3{C}_{4}-5{C}_{2}^{2}\lambda +4{C}_{3}\lambda +3{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right)\\ \text{\hspace{0.17em}}+12{C}_{4}\left({C}_{2}-\lambda \right){\left({\theta }_{i}-1\right)}^{2}+3{C}_{3}\left(2\left(2{C}_{2}^{2}-2{C}_{3}-2{C}_{2}\lambda +{\lambda }^{2}\right)\left(1-{\theta }_{i}\right)\\ \begin{array}{c}\text{ }\\ \text{ }\end{array}+{\left({C}_{2}-\lambda \right)}^{2}{\theta }_{i}\right)\right)\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\right]\end{array}$ (46)

From (46) and (44),

$\begin{array}{l}{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({\nu }_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right)=\left({C}_{2}-\lambda \right){E}_{k}^{2}\\ \text{ }+\left(\left(-2{C}_{2}^{2}-2{C}_{2}-2{C}_{2}\lambda +{\lambda }^{2}\right)+\left({C}_{2}-\lambda \right)\left(\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)\right){E}_{k}^{3}\\ \text{ }+\left(\left(5{C}_{2}^{3}-7{C}_{2}{C}_{3}+3{C}_{4}-7{C}_{2}^{2}\lambda +4{C}_{3}\lambda +4{C}_{2}{\lambda }^{2}-{\lambda }^{3}\right)\\ \text{ }+\left(-2{C}_{2}^{2}\lambda +2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right)\left(\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)\\ \text{ }+\left({C}_{2}-\lambda \right){\left(\lambda +2\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)}^{2}\\ \text{ }-\left(\lambda {C}_{3}+3\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{3}{\left({\theta }_{i}-1\right)}^{2}+2{C}_{2}\left({C}_{2}-\lambda \right){\theta }_{i}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\end{array}$ (47)

Using (47) in the second step of (35), with the expansion of ${\nu }_{k}$ as in (43), the following equation is obtained.

${X}_{k+1}=\Phi +\left(\left({C}_{2}-\lambda \right)\left(\lambda +2\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)\right){E}_{k}^{3}+O\left({E}_{k}^{4}\right)$ (48)

Equation (48) implies that the sequence of approximations generated by the iterative method (35) converges to $\Phi$ with convergence order $\rho =3$ .

2.4. Convergence Analysis of the Proposed Algorithm 3

The convergence of the proposed Algorithm 3 is established by the proof of Theorem 3.

Theorem 3 Suppose the function $G:{\Re }^{m}\to {\Re }^{m}$ is continuously differentiable in some neighborhood $D\subset {\Re }^{m}$ of its solution $\Phi$ . If ${X}_{0}$ is an initial guess in the neighborhood of $\Phi$ , then the sequence of approximations ${\left\{{X}_{k}\right\}}_{k\ge 0},\left({X}_{k}\in D\right)$ generated by (36) converges to $\Phi$ with convergence order $\rho =4$ .

Proof. Set $X={W}_{k}$ and $X={X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)$ in (37) and (38), respectively, where ${W}_{k}$ is the second step of (35); then

$\begin{array}{l}G\left({W}_{k}\right)\\ ={G}^{\prime }\left(\Phi \right)\left[\left(\left({C}_{2}-\lambda \right)\left(\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)\right){E}_{k}^{3}+\left(\left(-{C}_{2}^{3}+2{C}_{2}^{2}-{C}_{2}{\lambda }^{2}+{\lambda }^{3}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(-2{C}_{2}^{2}\lambda +2{C}_{3}+2{C}_{2}\lambda -{\lambda }^{2}\right)\left(\lambda +\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left({C}_{2}-\lambda \right){\left(\lambda +2\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{2}\left({\theta }_{i}-1\right)\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\left(\lambda {C}_{2}+3\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{C}_{3}{\left({\theta }_{i}-1\right)}^{2}+2{C}_{2}\left({C}_{2}-\lambda \right){\theta }_{i}\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\right]\end{array}$ (49)

and

$\begin{array}{l}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)\right)={G}^{\prime }\left(\Phi \right)\left[I+{C}_{2}{E}_{k}+\left(3{C}_{3}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right){E}_{k}^{2}\\ \text{ }+\left(4{C}_{4}\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\left(1-{\theta }_{i}\right)}^{3}-2{C}_{2}\left({C}_{2}-\lambda \right)\left(\lambda +2{C}_{2}\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}-\frac{1}{2}\right)\right)\right){E}_{k}^{3}+O\left({E}_{k}^{4}\right)\right]\end{array}$ (50)

From (50)

$\begin{array}{l}{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}\\ ={\left({G}^{\prime }\left(\Phi \right)\right)}^{-1}\left[I-\left(\lambda -{C}_{2}\right){E}_{k}+\left({\lambda }^{2}+4{C}_{2}^{2}\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}-\frac{1}{2}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-3{C}_{3}\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}-\frac{1}{2}\right)-{C}_{2}\lambda \right){E}_{k}^{2}+\left(4{C}_{4}\left(\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\left({\theta }_{i}-1\right)}^{3}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+2{C}_{2}^{2}\left(\frac{1}{2}+4\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)\right)+2{C}_{2}^{3}-2\left(-\frac{1}{2}-5\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}+2\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{3}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\lambda \left({\lambda }^{2}+{C}_{3}\left(1-6\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)\right){E}_{k}^{3}+O\left({E}_{k}^{4}\right)\right]\end{array}$ (51)

By multiplying (51) by (49), the following equation is obtained.

$\begin{array}{l}{\left[\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{G}^{\prime }\left({X}_{k}+{\theta }_{i}\left({W}_{k}-{X}_{k}\right)\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({W}_{k}\right)\\ ={\left({C}_{2}-\lambda \right)}^{2}{E}_{k}^{3}+\left({C}_{2}^{3}\left(-2-8\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)+{C}_{2}^{2}\lambda \left(7+8\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\lambda \left(3{\lambda }^{2}+{C}_{2}\left(-2-3\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+{C}_{2}\left(-8{\lambda }^{2}-{C}_{3}\left(2+3\underset{i=1}{\overset{q}{\sum }}\text{ }{\mu }_{i}{\theta }_{i}^{2}\right)\right)\right){E}_{k}^{4}+O\left({E}_{k}^{5}\right)\end{array}$ (52)

Substituting (48) and (52) into the third step of (36) yields

${X}_{k+1}=\Phi +{\left({C}_{2}-\lambda \right)}^{3}{E}_{k}^{4}+O\left({E}_{k}^{5}\right)$ (53)

Equation (53) implies that the sequence of approximations generated by the iterative method (36) converges to the solution $\Phi$ of (1) with convergence order $\rho =4$ .

2.5. Particular Forms of the Proposed Iterative Methods

Here, some particular forms of the iterative methods in Algorithm 2 and Algorithm 3 are developed by assigning values to the parameters ${\mu }_{i}$ and ${\theta }_{i},i=1,2,\cdots ,q$ , satisfying the conditions given in (5).

2.6. Particular Forms of Algorithm 2

Setting $q=1,{\mu }_{1}=1,{\theta }_{1}=\frac{1}{2}$ in Algorithm 2 gives rise to the following iterative method for approximating the solution $\Phi$ of (1).

Algorithm 4 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {X}_{k+1}={\nu }_{k}-{\left[{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ k=0,1,2,\cdots \end{array}$ (54)

Algorithm 4 is an iterative method for approximating the solution $\Phi$ of (1) with convergence order $\rho =3$ and error equation satisfying

${E}_{k+1}={\left({C}_{2}-\lambda \right)}^{2}{E}_{k}^{3}+O\left({E}_{k}^{4}\right)$ (55)
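As a concrete illustration, Algorithm 4 can be sketched in Python, the language used for the numerical experiments in Section 4. This sketch, not the authors' implementation, interprets the term $G\left({X}_{k}\right)\lambda$ as the diagonal matrix $diag\left({\lambda }_{i}{G}_{i}\left({X}_{k}\right)\right)$ , in the spirit of the Wu method (70); the function names and tolerances are illustrative assumptions.

```python
import numpy as np

def algorithm_4(G, Gprime, x0, lam=0.5, tol=1e-12, max_iter=50):
    """Two-step method (54). The term G(X_k)λ is read as
    diag(λ_i * G_i(X_k)), analogous to the Wu method (70) -- an
    assumption of this sketch."""
    x = np.asarray(x0, dtype=float)
    lam = np.broadcast_to(np.asarray(lam, dtype=float), x.shape)
    for _ in range(max_iter):
        gx = G(x)
        shift = np.diag(lam * gx)                       # the G(X_k)λ term
        nu = x - np.linalg.solve(Gprime(x) - shift, gx)  # first step
        # second step: Jacobian at the midpoint of X_k and ν_k
        x = nu - np.linalg.solve(Gprime((x + nu) / 2.0) - shift, G(nu))
        if np.linalg.norm(G(x), np.inf) < tol:
            break
    return x
```

Applied to Problem 1 of Section 4 from a starting point near ${\Phi }^{\left(2\right)}={\left(1,-1\right)}^{\text{T}}$ , the iteration reaches the root in a few steps even though the Jacobian of that problem is singular at ${\Phi }^{\left(1\right)}$ .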

Setting $q=2,{\mu }_{1}=\frac{1}{4},{\mu }_{2}=\frac{3}{4},{\theta }_{1}=0,{\theta }_{2}=\frac{2}{3}$ in Algorithm 2 gives rise to the following new iterative method for approximating the solution $\Phi$ of (1).

Algorithm 5 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {X}_{k+1}={\nu }_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+3{G}^{\prime }\left(\frac{{X}_{k}+2{\nu }_{k}}{3}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (56)

Algorithm 5 is an iterative method of convergence order $\rho =3$ for approximating the solution $\Phi$ of (1), with error equation satisfying

${E}_{k+1}={\left({C}_{2}-\lambda \right)}^{2}{E}_{k}^{3}+O\left({E}_{k}^{4}\right)$ (57)

Setting $q=3,{\mu }_{1}=\frac{1}{6},{\mu }_{2}=\frac{2}{3},{\mu }_{3}=\frac{1}{6},{\theta }_{1}=0,{\theta }_{2}=\frac{1}{2}$ , and ${\theta }_{3}=1$ in Algorithm 2 reduces it to the following new iterative method.

Algorithm 6 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {X}_{k+1}={\nu }_{k}-6{\left[{G}^{\prime }\left({X}_{k}\right)+4{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)+{G}^{\prime }\left({\nu }_{k}\right)-6G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (58)

Algorithm 6 is an iterative method of convergence order $\rho =3$ for approximating the solution $\Phi$ of (1), with error equation satisfying

${E}_{k+1}={\left({C}_{2}-\lambda \right)}^{2}{E}_{k}^{3}+O\left({E}_{k}^{4}\right)$ (59)

Setting $q=3,{\mu }_{1}=\frac{1}{4},{\mu }_{2}=\frac{1}{2},{\mu }_{3}=\frac{1}{4},{\theta }_{1}=0,{\theta }_{2}=\frac{1}{2}$ , and ${\theta }_{3}=1$ in Algorithm 2 reduces it to the following new iterative method.

Algorithm 7 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {X}_{k+1}={\nu }_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+2{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)+{G}^{\prime }\left({\nu }_{k}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (60)

Algorithm 7 is an iterative method of convergence order $\rho =3$ for approximating $\Phi$ of (1), with error equation satisfying

${E}_{k+1}={\left({C}_{2}-\lambda \right)}^{2}{E}_{k}^{3}+O\left({E}_{k}^{4}\right)$ (61)

2.7. Particular Forms of Algorithm 3

Now consider some particular forms of Algorithm 3. Setting $q=1,{\mu }_{1}=1,{\theta }_{1}=\frac{1}{2}$ in Algorithm 3 leads to the following iterative method.

Algorithm 8 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {W}_{k}={\nu }_{k}-{\left[{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-{\left[{G}^{\prime }\left(\frac{{X}_{k}+{W}_{k}}{2}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({W}_{k}\right),k=0,1,2,\cdots \end{array}$ (62)

Algorithm 8 is an iterative method for approximating the solution $\Phi$ of (1) with convergence order $\rho =4$ and error equation satisfying

${E}_{k+1}=-{\left(\lambda -{C}_{2}\right)}^{3}{E}_{k}^{4}+O\left({E}_{k}^{5}\right)$ (63)
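A Python sketch of the three-step Algorithm 8 follows the same pattern as the sketch of Algorithm 4 above, appending the third step of (62). As before, the reading of $G\left({X}_{k}\right)\lambda$ as $diag\left({\lambda }_{i}{G}_{i}\left({X}_{k}\right)\right)$ and all names are assumptions of this sketch:

```python
import numpy as np

def algorithm_8(G, Gprime, x0, lam=0.5, tol=1e-12, max_iter=50):
    """Three-step method (62). G(X_k)λ is read as diag(λ_i * G_i(X_k)),
    an assumption of this sketch."""
    x = np.asarray(x0, dtype=float)
    lam = np.broadcast_to(np.asarray(lam, dtype=float), x.shape)
    for _ in range(max_iter):
        gx = G(x)
        shift = np.diag(lam * gx)                       # G(X_k)λ, frozen at X_k
        nu = x - np.linalg.solve(Gprime(x) - shift, gx)
        w = nu - np.linalg.solve(Gprime((x + nu) / 2.0) - shift, G(nu))
        # third step: midpoint of X_k and W_k (old x is still X_k here)
        x = w - np.linalg.solve(Gprime((x + w) / 2.0) - shift, G(w))
        if np.linalg.norm(G(x), np.inf) < tol:
            break
    return x
```

Each iteration reuses the shift built from $G\left({X}_{k}\right)$ in all three linear solves, which is what keeps the method at one first-order Frechet derivative structure per step.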

Setting $q=2,{\mu }_{1}=\frac{1}{4},{\mu }_{2}=\frac{3}{4},{\theta }_{1}=0,{\theta }_{2}=\frac{2}{3}$ in Algorithm 3 leads to the following iterative method.

Algorithm 9 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {W}_{k}={\nu }_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+3{G}^{\prime }\left(\frac{{X}_{k}+2{\nu }_{k}}{3}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+3{G}^{\prime }\left(\frac{{X}_{k}+2{W}_{k}}{3}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({W}_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (64)

Algorithm 9 is an iterative method for approximating the solution $\Phi$ of (1) with convergence order $\rho =4$ . The error equation of Algorithm 9 is

${E}_{k+1}=-{\left(\lambda -{C}_{2}\right)}^{3}{E}_{k}^{4}+O\left({E}_{k}^{5}\right)$ (65)

Setting $q=3,{\mu }_{1}=\frac{1}{6},{\mu }_{2}=\frac{2}{3},{\mu }_{3}=\frac{1}{6},{\theta }_{1}=0,{\theta }_{2}=\frac{1}{2}$ , and ${\theta }_{3}=1$ in Algorithm 3, the following iterative method is proposed:

Algorithm 10 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {W}_{k}={\nu }_{k}-6{\left[{G}^{\prime }\left({X}_{k}\right)+4{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)+{G}^{\prime }\left({\nu }_{k}\right)-6G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-6{\left[{G}^{\prime }\left({X}_{k}\right)+4{G}^{\prime }\left(\frac{{X}_{k}+{W}_{k}}{2}\right)+{G}^{\prime }\left({W}_{k}\right)-6G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({W}_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (66)

Algorithm 10 is an iterative method of convergence order $\rho =4$ for approximating the solution $\Phi$ of (1). Its error equation is

${E}_{k+1}=-{\left(\lambda -{C}_{2}\right)}^{3}{E}_{k}^{4}+O\left({E}_{k}^{5}\right)$ (67)

Setting $q=3,{\mu }_{1}=\frac{1}{4},{\mu }_{2}=\frac{1}{2},{\mu }_{3}=\frac{1}{4},{\theta }_{1}=0,{\theta }_{2}=\frac{1}{2}$ , and ${\theta }_{3}=1$ in Algorithm 3 reduces it to the following new iterative method.

Algorithm 11 Assume ${X}_{0}$ is an initial guess, approximate the solution $\Phi$ of (1) using the iterative method:

$\begin{array}{l}{\nu }_{k}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({X}_{k}\right),\\ {W}_{k}={\nu }_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+2{G}^{\prime }\left(\frac{{X}_{k}+{\nu }_{k}}{2}\right)+{G}^{\prime }\left({\nu }_{k}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({\nu }_{k}\right),\\ {X}_{k+1}={W}_{k}-4{\left[{G}^{\prime }\left({X}_{k}\right)+2{G}^{\prime }\left(\frac{{X}_{k}+{W}_{k}}{2}\right)+{G}^{\prime }\left({W}_{k}\right)-4G\left({X}_{k}\right)\lambda \right]}^{-1}G\left({W}_{k}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots \end{array}$ (68)

Algorithm 11 is an iterative method of convergence order $\rho =4$ for approximating the solution $\Phi$ of (1). Its error equation is

${E}_{k+1}={\left({C}_{2}-\lambda \right)}^{3}{E}_{k}^{4}+O\left({E}_{k}^{5}\right)$ (69)

3. Efficiency Index

In this section, the efficiency index (EI) of the proposed iterative methods is established. Let ${A}_{v}^{\rho }$ represent iterative method v with convergence order $\rho$ . For reference purposes, the proposed iterative methods are denoted as indicated in Table 1. The formula $EI={\rho }^{\frac{1}{T}}$ , where T is the total number of scalar function evaluations required per iteration, is adopted to obtain the efficiency index (EI)

of the iterative methods,  . Assuming the cost of evaluating each scalar component of $G\left(\cdot \right)$ is the same, the computation of $G\left(\cdot \right)$ requires m evaluations of the scalar functions ${G}_{i},i=1,2,\cdots ,m$ . Similarly, the computation of the Jacobian ${G}^{\prime }\left(\cdot \right)$ requires ${m}^{2}$ scalar function evaluations. The method ${A}_{1}^{2}$ requires m evaluations of $G\left(\cdot \right)$ and ${m}^{2}$ evaluations of ${G}^{\prime }\left(\cdot \right)$ per iteration, so its efficiency index is

${2}^{\left(\frac{1}{m+{m}^{2}}\right)}$ , for $m\ge 2$ . This is the same as the efficiency index (EI) of the classical

Newton method $\left({N}_{1}^{2}\right)$ and of the Wu method of convergence order $\rho =2$ developed in  . The efficiency indices of the proposed iterative methods, compared with the Wu method in  (denoted ${W}_{1}^{2}$ ), are presented in Table 2 for $m=10$ and 20, where m is the dimension of (1).
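Under the evaluation-count convention above, the EI values can be reproduced with a short Python sketch; the per-iteration counts passed in for the multi-step methods are assumptions obtained by counting the $G\left(\cdot \right)$ and ${G}^{\prime }\left(\cdot \right)$ evaluations in methods such as (54) and (62):

```python
def efficiency_index(rho, n_func, n_jac, m):
    """EI = rho**(1/T), where T = n_func*m + n_jac*m**2 is the total number
    of scalar evaluations per iteration (n_func evaluations of G, n_jac of G')."""
    T = n_func * m + n_jac * m ** 2
    return rho ** (1.0 / T)

# Assumed counts: A_1^2 uses one G and one G' per iteration;
# the two-step method (54) uses two of each; the three-step (62), three of each.
ei_newton = efficiency_index(2, 1, 1, 10)   # 2**(1/110)
ei_two_step = efficiency_index(3, 2, 2, 10)
ei_three_step = efficiency_index(4, 3, 3, 10)
```

With these counts the EI decreases as steps are added, which is consistent with the monotonicity observed in Table 2.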

The Wu method is given as:

${W}_{1}^{2}:{X}_{k+1}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-diag\left({\sigma }_{i}{G}_{i}\left({X}_{k}\right)\right)\right]}^{-1}G\left({X}_{k}\right)$ (70)

where the parameter ${\sigma }_{i}\in \left[-1,1\right],i=1,2,\cdots ,m$ .
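For comparison purposes, the Wu method (70) admits a direct Python sketch, with the diagonal shift $diag\left({\sigma }_{i}{G}_{i}\left({X}_{k}\right)\right)$ built explicitly (function names and tolerances are illustrative assumptions):

```python
import numpy as np

def wu_method(G, Gprime, x0, sigma=0.5, tol=1e-12, max_iter=100):
    """Wu's method (70): X_{k+1} = X_k - [G'(X_k) - diag(σ_i G_i(X_k))]^{-1} G(X_k)."""
    x = np.asarray(x0, dtype=float)
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), x.shape)
    for _ in range(max_iter):
        gx = G(x)
        if np.linalg.norm(gx, np.inf) < tol:
            break
        # the diagonal shift vanishes at a root, so the fixed point is unchanged
        x = x - np.linalg.solve(Gprime(x) - np.diag(sigma * gx), gx)
    return x
```

Since ${G}_{i}\left(\Phi \right)=0$ , the shift vanishes at the solution, so the modification regularizes a singular Jacobian away from the root without moving the fixed point.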

Table 1. Algorithms and their denotation.

Table 2. Efficiency Index for proposed methods and compared method.

From Table 2, observe that for $m\ge 2$ the EI decreases monotonically as the number of steps of the method and the number of nodes (q) of the quadrature formula increase.

4. Numerical Experimentation

The developed iterative methods are tested on three standard problems from the literature in order to illustrate their performance and confirm the theoretical convergence order ( $\rho$ ). Their computational performance is compared with that of the Wu method in  and the Haijun method proposed in  . The Haijun method is given as

${H}_{2}^{3}:{X}_{k+1}={X}_{k}-{\left[{G}^{\prime }\left({X}_{k}\right)-diag\left({\sigma }_{i}{G}_{i}\left({X}_{k}\right)\right)\right]}^{-1}\left(G\left({X}_{k}\right)+G\left({\eta }_{k}\right)\right)$ (71)

where the parameter ${\sigma }_{i}\in \left[-1,1\right],i=1,2,\cdots ,m$ and ${\eta }_{k}$ is approximated using ${W}_{1}^{2}$ .

For the implementation, programs written in Python 2.7.12 are executed on an Intel Celeron(R) 1.6 GHz CPU with 2 GB of RAM. The stopping criterion used for the computer programs is ${‖G\left({X}_{k+1}\right)‖}_{\infty }<ϵ$ , where $ϵ$ is the error tolerance. The metrics used for comparison are:

number of iterations (IT), Central Processing Unit time or execution time (CPU-Time), the norm of the function at the last iteration ( ${‖G\left({X}_{k+1}\right)‖}_{\infty }$ ), and the computational order of convergence ( ${\rho }_{coc}$ ) given in  as

${\rho }_{coc}=\frac{\mathrm{ln}\left(‖G\left({X}_{k}\right)‖\right)}{\mathrm{ln}\left(‖G\left({X}_{k-1}\right)‖\right)}$ (72)
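Formula (72) is straightforward to evaluate from two successive residual norms. The sketch below assumes both norms are already below 1, so that both logarithms are negative and the ratio is meaningful:

```python
import math

def computational_order(res_prev, res_curr):
    """ρ_coc from (72): ln‖G(X_k)‖ / ln‖G(X_{k-1})‖.
    Assumes 0 < res_curr, res_prev < 1 (both logs negative)."""
    return math.log(res_curr) / math.log(res_prev)
```

For example, if the residual norm drops from ${10}^{-4}$ to ${10}^{-12}$ in one step, (72) reports ${\rho }_{coc}=3$ , as expected of a third-order method.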

To test the performance of the proposed methods, the following problems are solved.

Problem 1 

Consider the NLSE

$G\left(X\right)=0$

where

$G\left({X}_{1},{X}_{2}\right)=\left[\begin{array}{c}{X}_{1}^{3}+{X}_{1}{X}_{2}\\ {X}_{2}+{X}_{2}^{2}\end{array}\right]$

The solutions of Problem 1 in the domain $D=\left(-1.5,1.5\right)×\left(-1.5,1.5\right)$ are ${\Phi }^{\left(1\right)}={\left(0,0\right)}^{\text{T}}$ and ${\Phi }^{\left(2\right)}={\left(1,-1\right)}^{\text{T}}$ . The initial approximation used is ${X}_{0}={\left(0.5,-0.5\right)}^{\text{T}}$ . The numerical results obtained for each method using different values of the parameters ${\lambda }_{i}$ and ${\sigma }_{i}$ are presented in Tables 3-7. All computations are carried out with 200 digit precision and $ϵ={10}^{-15}$ .
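The Jacobian of Problem 1, worked out by hand for this sketch, is singular both at the solution ${\Phi }^{\left(1\right)}$ and at the initial guess ${X}_{0}$ , which is exactly what defeats the plain Newton method here; a quick check:

```python
import numpy as np

def G(x):                        # Problem 1
    return np.array([x[0]**3 + x[0]*x[1], x[1] + x[1]**2])

def Gprime(x):                   # Jacobian, computed by hand for this sketch
    return np.array([[3.0*x[0]**2 + x[1], x[0]],
                     [0.0, 1.0 + 2.0*x[1]]])

det_at_x0 = np.linalg.det(Gprime(np.array([0.5, -0.5])))   # 0: singular at X_0
det_at_phi1 = np.linalg.det(Gprime(np.array([0.0, 0.0])))  # 0: singular at Φ^(1)
```

At ${X}_{0}$ the second row of the Jacobian is identically zero, so any method that inverts ${G}^{\prime }\left({X}_{k}\right)$ alone breaks down at the very first iteration, while the $G\left({X}_{k}\right)\lambda$ shift keeps the linear systems solvable.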

Problem 2 

${X}_{1}^{2}-{X}_{2}+1=0$ ,

${X}_{1}-\mathrm{cos}\left(\frac{\text{π}{X}_{2}}{2}\right)=0$ .
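The stated solutions of Problem 2 can be verified directly. Note that the first root must have a negative first component, ${X}_{1}=\mathrm{cos}\left(3\text{π}/4\right)=-\sqrt{2}/2$ , to lie in the domain $\left(-1,0\right)×\left(0,2\right)$ ; the check below is a sketch:

```python
import numpy as np

def G2(x):                       # Problem 2
    return np.array([x[0]**2 - x[1] + 1.0,
                     x[0] - np.cos(np.pi * x[1] / 2.0)])

roots = [np.array([-np.sqrt(2.0)/2.0, 1.5]),   # cos(3π/4) < 0
         np.array([-1.0, 2.0]),
         np.array([0.0, 1.0])]
residuals = [np.linalg.norm(G2(r), np.inf) for r in roots]
```

All three residuals evaluate to machine-precision zero, confirming the listed roots.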

Table 3. Computational results for Problem 1 using ${\lambda }_{i}={\sigma }_{i}=1/2$ .

Table 4. Computational results for Problem 1 using ${\lambda }_{i}={\sigma }_{i}=1/3$ .

Table 5. Computational results for Problem 1 using ${\lambda }_{i}={\sigma }_{i}=1/5$ .

Table 6. Computational results for Problem 1 using ${\lambda }_{i}={\sigma }_{i}=1/7$ .

Table 7. Computational results for Problem 1 using ${\lambda }_{i}={\sigma }_{i}=1/9$ .

The solutions of Problem 2 within the domain $D=\left(-1,0\right)×\left(0,2\right)$ are

${\Phi }^{\left(1\right)}={\left(-\frac{\sqrt{2}}{2},1.5\right)}^{\text{T}},{\Phi }^{\left(2\right)}={\left(-1,2\right)}^{\text{T}}$ and ${\Phi }^{\left(3\right)}={\left(0,1\right)}^{\text{T}}$ . The numerical solutions to Problem 2 are presented in Table 8 for methods of orders $\rho =2,3$ and 4.

Problem 3 

Consider the chemical equilibrium system modeled in NLSE (1) with

${X}_{1}{X}_{2}+{X}_{1}-3{X}_{5}=0$

$2{X}_{1}{X}_{2}+{X}_{1}+{X}_{2}{X}_{3}^{2}+{R}_{8}{X}_{2}-R{X}_{5}+2{R}_{10}{X}_{2}^{2}+{R}_{7}{X}_{2}{X}_{3}+{R}_{9}{X}_{2}{X}_{4}=0$

$2{X}_{2}{X}_{3}^{2}+2{R}_{5}{X}_{3}^{2}-8{X}_{5}+{R}_{6}{X}_{3}+{R}_{7}{X}_{2}{X}_{3}=0$

${R}_{9}{X}_{2}{X}_{4}+2{X}_{4}^{2}-4R{X}_{5}=0$

$\begin{array}{l}{X}_{1}\left({X}_{2}+1\right)+{R}_{10}{X}_{2}^{2}+{X}_{2}{X}_{3}^{2}+{R}_{8}{X}_{2}+{R}_{5}{X}_{3}^{2}+{X}_{4}^{2}-1\\ \text{ }+{R}_{6}{X}_{3}+{R}_{7}{X}_{2}{X}_{3}+{R}_{9}{X}_{2}{X}_{4}=0\end{array}$

where

$\begin{array}{l}R=10,{R}_{5}=0.193,{R}_{6}=\frac{0.002597}{\sqrt{40}},{R}_{7}=\frac{0.003448}{\sqrt{40}},\\ {R}_{8}=\frac{0.00001799}{40},{R}_{9}=\frac{0.0002155}{\sqrt{40}},{R}_{10}=\frac{0.00003846}{40}\end{array}$

Using ${X}_{0}={\left(0.6,33.2,0.6,1.5,-0.7\right)}^{\text{T}}$ as the initial starting point, 200-digit floating point arithmetic and $‖G\left({X}_{k}\right)‖\le {10}^{-50}$ as the stopping criterion, the solution $\Phi$ in $D=\left(-1,1\right)×\left(33.5,35.5\right)×\left(-1,1\right)×\left(-0.8,1.8\right)×\left(-1,1\right)$ approximated to 20 decimal places is

$\Phi =\left[\begin{array}{c}0.00311410226598496012\\ 34.59792453029012391022\\ 0.06504177869743799154\\ 0.85937805057794058144\\ 0.03695185914804602454\end{array}\right]$
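The printed solution can be checked against the system in double precision. The sketch below assumes the value 0.193 belongs to ${R}_{5}$ and reads the fourth equation's final term as $-4R{X}_{5}$ and the second equation with $2{R}_{10}{X}_{2}^{2}$ and ${R}_{7}{X}_{2}{X}_{3}$ , consistent with the standard chemical equilibrium model in the literature; with these readings the residual at $\Phi$ is at the level of rounding error:

```python
import numpy as np

S = np.sqrt(40.0)
R, R5 = 10.0, 0.193
R6, R7 = 0.002597 / S, 0.003448 / S
R8, R9, R10 = 0.00001799 / 40.0, 0.0002155 / S, 0.00003846 / 40.0

def G3(x):
    """Problem 3 (chemical equilibrium system), as read in this sketch."""
    x1, x2, x3, x4, x5 = x
    return np.array([
        x1*x2 + x1 - 3.0*x5,
        2.0*x1*x2 + x1 + x2*x3**2 + R8*x2 - R*x5
            + 2.0*R10*x2**2 + R7*x2*x3 + R9*x2*x4,
        2.0*x2*x3**2 + 2.0*R5*x3**2 - 8.0*x5 + R6*x3 + R7*x2*x3,
        R9*x2*x4 + 2.0*x4**2 - 4.0*R*x5,
        x1*(x2 + 1.0) + R10*x2**2 + x2*x3**2 + R8*x2 + R5*x3**2
            + x4**2 - 1.0 + R6*x3 + R7*x2*x3 + R9*x2*x4,
    ])

phi = np.array([0.00311410226598496012, 34.59792453029012391022,
                0.06504177869743799154, 0.85937805057794058144,
                0.03695185914804602454])
residual = np.linalg.norm(G3(phi), np.inf)
```

The tiny residual confirms that the 20-decimal approximation above satisfies the system to well beyond the tabulated precision.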

The computational results obtained for different methods are presented in Table 9.

Table 8. Computational results for Problem 2 using ${\lambda }_{i}={\sigma }_{i}=1/8$ .

Table 9. Computational results for Problem 3.

Results Discussion

The numerical results presented in Tables 3-9 lead to the following observations about the effectiveness of the proposed methods in approximating the solution of (1).

・ The proposed methods are effective in approximating the solution of (1).

・ The computational order of convergence ${\rho }_{coc}$ of the proposed methods mostly agrees with the theoretical values.

・ The proposed convergence order $\rho =2$ method ( ${A}_{1}^{2}$ ) produces better precision than the Wu method ( ${W}_{1}^{2}$ ) for small systems. This is expected: since $G\left(X\right)\lambda$ is a dense matrix, more computational cost is incurred as the system becomes large.

・ Observe from Tables 3-8 that the Haijun method ( ${H}_{2}^{3}$ ) failed on Problems 1 and 2, while the proposed methods converged to solutions in a few iterations.

・ The magnitude of $\lambda$ should be chosen less than 1 to obtain better precision and convergence.

5. Conclusion

In this paper, multistep quadrature based methods for approximating the solution of NLSE are proposed. The proposed methods require only first-order Frechet derivatives to attain convergence order $\rho \le 4$ , and they effectively approximate the solution of NLSE with singular Jacobian. The methods are applied to three standard problems from the literature to demonstrate their effectiveness. Judging from the computational results presented in the tables, the proposed methods are competitive with some existing methods.

Cite this paper: Ogbereyivwe, O. and Muka, K. (2019) Multistep Quadrature Based Methods for Nonlinear System of Equations with Singular Jacobian. Journal of Applied Mathematics and Physics, 7, 702-725. doi: 10.4236/jamp.2019.73049.
References

   Moré, J.J. (1990) A Collection of Nonlinear Model Problems. In: Allgower, E.L. and George, K., Eds., Computational Solution of Nonlinear Systems of Equations. Lectures in Applied Mathematics, Vol. 26, American Mathematical Society, Providence, RI, 723-762.

   Grosan, G. and Abraham, A. (2008) A New Approach for Solving Nonlinear Equation Systems. IEEE Transactions on Systems, Man, and Cybernetics. Part A: Systems and Humans, 38, 698-714.
https://doi.org/10.1109/TSMCA.2008.918599

   Awawdeh, F. (2010) On New Iterative Method for Solving Systems of Nonlinear Equations. Numerical Algorithms, 54, 395-409.
https://doi.org/10.1007/s11075-009-9342-8

   Tsoulos, I.G. and Staurakoudis, A. (2010) On Locating All Roots of Systems of Nonlinear Equations inside Bounded Domain Using Global Optimization Methods. Nonlinear Analysis: Real World Applications, 11, 2465-2471.
https://doi.org/10.1016/j.nonrwa.2009.08.003

   Lin, Y., Bao, L. and Jia, X. (2010) Convergence Analysis of a Variant of the Newton Method for Solving Nonlinear Equations. Computers & Mathematics with Applications, 59, 2121-2127.
https://doi.org/10.1016/j.camwa.2009.12.017

   Ortega, J.M. and Rheinboldt, W.C. (1970) Iterative Solution of Nonlinear Equation in Several Variables. Academic Press, Cambridge, MA.

   Babajee, D.K.R., Kalyanasundaram, M. and Jayakumar, J. (2015) On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations. Algorithms, 8, 895-909.
https://doi.org/10.3390/a8040895

   Ahmadabadi, M.N., Ahmad, F., Yuan, G. and Li, X. (2016) Solving Systems of Nonlinear Equations Using Decomposition Techniques. Journal of Linear and Topological Algebra, 5, 187-198.

   Montazeri, H., Soleymani, F., Shateyi, S. and Motsa, S.S. (2012) On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equation. Journal of Applied Mathematics, 2012, Article ID: 751975.
https://doi.org/10.1155/2012/751975

   Xiao, X. and Yin, H. (2015) A New Class of Methods with Higher Order of Convergence for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 264, 300-309.
https://doi.org/10.1016/j.amc.2015.04.094

   Noor, M.A., Waseem, M. and Noor, K.I. (2015) New Iterative Technique for Solving a Nonlinear Equations. Applied Mathematics and Computation, 265, 1115-1125.
https://doi.org/10.1016/j.amc.2015.05.129

   Noor, M.A., Waseem, M. and Noor, K.I. (2015) New Iterative Technique for Solving a System of Nonlinear Equations. Applied Mathematics and Computation, 271, 446-466.
https://doi.org/10.1016/j.amc.2015.08.125

   Chun, C. (2005) Iterative Methods Improving Newton’s Method by the Decomposition Method. Computers & Mathematics with Applications, 50, 1559-1568.
https://doi.org/10.1016/j.camwa.2005.08.022

   Park, C.H. and Shim, H.T. (2005) What Is the Homotopy Method for a System of Nonlinear Equations (Survey)? Journal of Applied Mathematics and Computing, 17, 689-700.

   Golbabai, A. and Javidi, M. (2007) A New Family of Iterative Methods for Solving System of Nonlinear Algebraic Equations. Applied Mathematics and Computation, 190, 1717-1722.
https://doi.org/10.1016/j.amc.2007.02.055

   Golbabai, A. and Javidi, M. (2007) Newton-Like Iterative Methods for Solving System of Nonlinear Equations. Applied Mathematics and Computation, 192, 546-551.
https://doi.org/10.1016/j.amc.2007.03.035

   Jafari, H. and Daftardar-Gejji, V. (2006) Revised Adomian Decomposition Method for Solving System of Nonlinear Equations. Applied Mathematics and Computation, 175, 1-7.
https://doi.org/10.1016/j.amc.2005.07.010

   Noor, M.A., Noor, K.I. and Waseem, M. (2013) Decomposition Method for Solving System of Nonlinear Equations. Engineering Mathematics Letters, 2, 34-41.

   Cordero, A. and Torregrosa, J.R. (2007) Variants of Newton Method Using Fifth-Order Quadrature Formulas. Applied Mathematics and Computation, 190, 686-698.
https://doi.org/10.1016/j.amc.2007.01.062

   Cordero, A., Hueso, J.L., Martinez, E. and Terregrosa, J.R. (2009) Iterative Methods of Order Four and Five for Systems of Nonlinear Equations. Journal of Computational and Applied Mathematics, 231, 541-551.
https://doi.org/10.1016/j.cam.2009.04.015

   Liu, Z. (2015) A New Cubic Convergence Method for Solving Systems of Nonlinear Equations. International Journal of Applied Science and Mathematics, 2, 2394-2894.

   Liu, Z. and Fang, Q. (2015) A New Newton-Type Method with Third-Order for Solving Systems of Nonlinear Equations. Journal of Applied Mathematics and Physics, 3, 1256-1261.
https://doi.org/10.4236/jamp.2015.310154

   Noor, M.A. (2007) New Family of Iterative Methods for Nonlinear Equations. Applied Mathematics and Computation, 190, 553-558.
https://doi.org/10.1016/j.amc.2007.01.045

   Biazar, J. and Ghanbari, B. (2008) A New Technique for Solving Systems of Nonlinear Equations. Applied Mathematical Sciences, 2, 2699-2703.

   Podisuk, M., Chundong, U. and Sanprasert, W. (2007) Single-Step Formulas and Multi-Step Formulas of the Integration Method for Solving the IVP of Ordinary Differential Equation. Applied Mathematics and Computation, 190, 1438-1444.
https://doi.org/10.1016/j.amc.2007.02.024

   Weerakoon, S. and Fernando, T.G.I. (2000) A Variant of Newton’s Method with Accelerated Third Order Convergence. Applied Mathematics Letters, 13, 87-93.
https://doi.org/10.1016/S0893-9659(00)00100-2

   Frontini, M. and Sormani, E. (2004) Third-Order Methods from Quadrature Formulae for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 149, 771-782.
https://doi.org/10.1016/S0096-3003(03)00178-4

   Khirallah, M.Q. and Hafiz, M.A. (2012) Novel Three Order Methods for Solving a System of Nonlinear Equations. Bulletin of Mathematical Sciences and Applications, 2, 1-12.
https://doi.org/10.18052/www.scipress.com/BMSA.2.1

   Hafiz, M.A. and Bahgat, M.S.M. (2012) An Efficient Two-Step Iterative Method for Solving System of Nonlinear Equations. Journal of Mathematics Research, 4, 28-34.

   Wu, X. (2007) Note on the Improvement of Newton Method for System of Nonlinear Equations. Applied Mathematics and Computation, 189, 1476-1479.
https://doi.org/10.1016/j.amc.2006.12.035

   Haijun, W. (2009) New Third-Order Method for Solving Systems of Nonlinear Equations. Numerical Algorithms, 50, 271-282.
https://doi.org/10.1007/s11075-008-9227-2

   Singh, S. (2013) A System of Nonlinear Equations with Singular Jacobian. International Journal of Innovative Research in Science, Engineering and Technology, 2, 2650-2653.

   Ahmad, F., Ullah, M.Z., Ahmad, S., Alshomrani, A.S., Alqahtani, M.A. and Alzaben, L. (2017) Multi-Step Preconditioned Newton Methods for Solving Systems of Nonlinear Equations. SeMA Journal, 75, 127-137.
https://doi.org/10.1007/s40324-017-0120-6

   Argyros, I.K. (2017) Ball Convergence for a Family of Quadrature-Based Methods for Solving Equations in Banach Space. International Journal of Computational Methods, 14, Article ID: 1750017.
https://doi.org/10.1142/S0219876217500177

   Hueso, J.L., Martinez, E. and Torregrosa, J.R. (2009) Modified Newton’s Method for Systems of Nonlinear Equations with Singular Jacobian. Journal of Computational and Applied Mathematics, 224, 77-83.
https://doi.org/10.1016/j.cam.2008.04.013

   Sharma, J.R., Sharma, R. and Bahl, A. (2016) An Improved Newton-Traub Composition for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 290, 98-100.
https://doi.org/10.1016/j.amc.2016.05.051

   Ostrowski, A.M. (1966) Solution of Equations and Systems of Equations. Academic Press, New York.

   Grau-Sanchez, M., Grau, A. and Noguera, M. (2012) On the Computational Efficiency Index and Some Iterative Methods for Solving Systems of Nonlinear Equations. Journal of Computational and Applied Mathematics, 236, 1259-1266.
https://doi.org/10.1016/j.cam.2011.08.008

   Decker, D.W. and Kelley, C.T. (1980) Newton’s Method at Singular Points. SIAM Journal on Numerical Analysis, 17, 465-471.
https://doi.org/10.1137/0717039

   Meintjes, K. and Morgan, A.P. (1990) Chemical Equilibrium Systems as Numerical Test Problems. ACM Transactions on Mathematical Software, 16, 143-151.
https://doi.org/10.1145/78928.78930
