A New Modification of Newton Method with Cubic Convergence
Abstract: Newton’s method is used to find the roots of a system of equations f (x) = 0. It is one of the most important procedures in numerical analysis, and its applicability extends to differential equations and integral equations. Under certain assumptions, the method converges quadratically. Over the years, researchers have improved the method by proposing modified Newton methods. A modification of Newton’s method with order of convergence 1 + √2 was proposed by McDougall and Wotherspoon [1], and a new type of method with cubic convergence was proposed by Homeier [2]. In this article, we present a new modification of Newton’s method based on the secant method. The convergence analysis shows that the new method is cubically convergent, while each iteration requires only one evaluation of the function and one of its derivative.

1. Introduction

Determining the zeros of a scalar function f is among the most important problems in both theory and practice, not only in mathematics but also in many other fields such as engineering, physics, computer science and finance. These problems lead to a rich mixture of mathematics, numerical analysis and computer science. Finding the root of a non-linear equation is one of the most important challenges in science and engineering. The main concept behind all root-finding methods is the recurrence of successive approximations. In practice, an analytical solution of the non-linear Equation (1.1) is hard to obtain or non-existent.

$f\left(x\right)=0\text{ where }f:D\subset ℝ\to ℝ$ (1.1)

In recent years, researchers have been interested in modifying Newton’s method, which is the foundation of all algorithms for solving these problems. Current designs tend to reduce the number of function evaluations and to avoid derivative evaluations. In this study, we consider the non-linear Equation (1.1) and present a new modification of Newton’s method. The convergence analysis shows that the new method is cubically convergent. Per iteration, our method requires one evaluation of the function and one of its derivative.

2. Preliminaries

The idea of an iterative method is to make some initial estimate of the solution and to improve that estimate repeatedly, using well-defined operations, until we end up with an approximation that is sufficiently close to the true answer.

Let $f:D\subset ℝ\to ℝ$ be an r-times Fréchet differentiable function on an open interval $D\subset ℝ$, and let ${x}^{*}$ be a real zero of the non-linear equation

$f\left(x\right)=0$ (2.1)

As well known, roots of Equation (2.1) can be found analytically only in some special cases. We most commonly solve (2.1) approximately, that is, we find an approximation to the zero ${x}^{*}$ by applying some iterative method of the form:

${x}_{n+1}=\phi \left({x}_{n}\right)$ (2.2)

where ${x}_{n}$ is an approximation to the zero ${x}^{*}$. The function $\phi$ is called iteration function.

Definition 2.1. Let $f\left(x\right)$ be a real-valued function with root ${x}^{*}$ and let ${\left({x}_{n}\right)}_{n}$ be the sequence of iterates produced by an iterative method, converging toward ${x}^{*}$. If there exist a real number p and a nonzero constant ${C}_{p}$ such that:

$\underset{n\to +\infty }{\mathrm{lim}}\frac{{x}_{n+1}-{x}^{*}}{{\left({x}_{n}-{x}^{*}\right)}^{p}}={C}_{p}\ne 0.$

Then p is called the order of convergence and ${C}_{p}$ the factor of convergence or the asymptotic error constant.

Definition 2.2. Let ${e}_{n}={x}_{n}-{x}^{*}$ be the error of the approximation in the nth iteration.

${e}_{n+1}={C}_{p}{e}_{n}^{p}+O\left({e}_{n}^{p+1}\right)$ (2.3)

is the error equation. If the error equation exists, then p is the order of convergence of the iterative method.

Theorem 2.3. (Schröder-Traub 1964) Let $\phi$ be an iteration function such that ${\phi }^{\left(r\right)}$ is continuous in a neighborhood of ${x}^{*}$. Then $\phi$ is of order p if and only if

$\phi \left({x}^{*}\right)={x}^{*},{\phi }^{\prime }\left({x}^{*}\right)={\phi }^{\left(2\right)}\left({x}^{*}\right)=\cdots ={\phi }^{\left(p-1\right)}\left({x}^{*}\right)=0,{\phi }^{\left(p\right)}\left({x}^{*}\right)\ne 0$ (2.4)

The asymptotic error constant is given by:

$\underset{n\to +\infty }{\mathrm{lim}}\frac{|{x}_{n+1}-{x}^{*}|}{{|{x}_{n}-{x}^{*}|}^{p}}=|\frac{{\phi }^{\left(p\right)}\left({x}^{*}\right)}{p!}|$ (2.5)

Theorem 2.4 (Traub 1964 [3]) Let ${x}^{*}$ be a simple zero of a function f and let $\phi$ define an iterative method of order p. Then the composite iteration function $\Psi$, obtained by applying a Newton step,

$\Psi \left(x\right)=\phi \left(x\right)-\frac{f\left(\phi \left(x\right)\right)}{{f}^{\prime }\left(x\right)}$ (2.6)

defines an iterative method of order $p+1$.

Theorem 2.5 (Traub 1964 p. 28 [3] ) Let ${\phi }_{1},{\phi }_{2},\cdots ,{\phi }_{s}$ be iteration functions with the orders ${p}_{1},{p}_{2},\cdots ,{p}_{s}$ respectively. Then the composition

$\Psi \left(x\right)={\phi }_{1}\circ {\phi }_{2}\circ \cdots \circ {\phi }_{s}\left(x\right)$ (2.7)

defines the iterative method of order ${p}_{1}{p}_{2}\cdots {p}_{s}$.

Definition 2.6. Let r be the number of function evaluations per iteration of the method. The efficiency index of the method is defined by:

$IE=\sqrt[r]{p}={p}^{\frac{1}{r}}$ (2.8)

where p is the order of convergence of the method.
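As a small illustration of Definition 2.6, the efficiency indices of the methods discussed later can be computed directly from (2.8). The order/evaluation pairings below are a sketch under the usual assumption that evaluating f and evaluating f′ have the same cost:

```python
def efficiency_index(p, r):
    # (2.8): IE = p**(1/r), where p is the order of convergence
    # and r the number of function evaluations per iteration.
    return p ** (1.0 / r)

print(efficiency_index(2, 2))  # Newton: order 2, one f and one f' -> ~1.414
print(efficiency_index(3, 3))  # typical third-order method, 3 evaluations -> ~1.442
print(efficiency_index(3, 2))  # proposed method: order 3, 2 evaluations -> ~1.732
```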

Definition 2.7. Suppose that ${x}_{n-2}$, ${x}_{n-1}$ and ${x}_{n}$ are three successive iterates close to the root ${x}^{*}$. Then the computational order of convergence may be approximated by:

$COC\approx \frac{\mathrm{ln}\left({\delta }_{n}÷{\delta }_{n-1}\right)}{\mathrm{ln}\left({\delta }_{n-1}÷{\delta }_{n-2}\right)}$ (2.9)

where ${\delta }_{n}=f\left({x}_{n}\right)÷{f}^{\prime }\left({x}_{n}\right)$.
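Definition 2.7 translates directly into code. The sketch below estimates COC ≈ ln(δₙ/δₙ₋₁)/ln(δₙ₋₁/δₙ₋₂) from the last three Newton corrections; the test function and starting point are our illustrative choices, not from the paper:

```python
import math

def coc(d0, d1, d2):
    # Computational order of convergence (Definition 2.7),
    # with d_n = f(x_n) / f'(x_n).
    return math.log(abs(d2 / d1)) / math.log(abs(d1 / d0))

# Illustration with Newton's method on f(x) = x**2 - 2 (an assumed example):
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
x, deltas = 1.5, []
for _ in range(4):
    d = f(x) / df(x)
    deltas.append(d)
    x -= d
print(coc(*deltas[-3:]))  # close to 2, Newton's theoretical order
```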

3. Construction of the Methods and Convergence Analysis

In this section, we recall the modified Newton method proposed by Gentian Zavalani [4]. To determine the order of convergence of the sequence ${\left({x}_{n}\right)}_{n}$, let us consider the Taylor expansion of $g\left({x}_{n}\right)$, where the iterative method is ${x}_{n+1}=g\left({x}_{n}\right)$ and the function g satisfies:

1) There exists an interval $\left[a,b\right]$ such that $g\left(x\right)\in \left[a,b\right]$ for all $x\in \left[a,b\right]$;

2) There exists a constant L such that $|{g}^{\prime }\left(x\right)|\le L<1$ for all $x\in \left[a,b\right]$.

$\begin{array}{c}g\left({x}_{n}\right)=g\left(x\right)+{g}^{\prime }\left(x\right)\left({x}_{n}-x\right)+\frac{{g}^{″}\left(x\right)}{2!}{\left({x}_{n}-x\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{g}^{\left(3\right)}\left(x\right)}{3!}{\left({x}_{n}-x\right)}^{3}+\cdots +\frac{{g}^{\left(l\right)}\left(x\right)}{l!}{\left({x}_{n}-x\right)}^{l}+\cdots \end{array}$ (3.1)

Definition 3.1. The mapping $F:{ℝ}^{n}\to ℝ$ is (totally or Fréchet) differentiable at x if the Jacobian matrix ${\left(JF\left(x\right)\right)}_{ij}={\left({\partial }_{i}{F}_{j}\right)}_{ij}$ exists at x and

$\underset{h\to 0}{\mathrm{lim}}\frac{‖F\left(x+h\right)-F\left(x\right)-JF\left(x\right)h‖}{‖h‖}=0$ (3.2)

If $n=1$, this definition reduces to the usual definition of differentiability.

Definition 3.2. For a mapping $F:\Omega \subset {ℝ}^{n}\to {ℝ}^{n}$, a solution ${x}^{*}\in \Omega$ of $F\left(x\right)=0$ is simple if F is differentiable at ${x}^{*}$ and $JF\left({x}^{*}\right)$ is non-singular.

In this work, we assume that f admits a unique and simple solution.

Iterative Methods

For any $x,{x}_{n}\in D$ we may write the Taylor’s expansion for f as follows:

$\begin{array}{c}f\left(x\right)=f\left({x}_{n}\right)+{f}^{\prime }\left({x}_{n}\right)\left(x-{x}_{n}\right)+\frac{{f}^{″}\left({x}_{n}\right)}{2!}{\left(x-{x}_{n}\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{\left(x-{x}_{n}\right)}^{3}+\cdots +\frac{{f}^{\left(r-1\right)}\left({x}_{n}\right)}{\left(r-1\right)!}{\left(x-{x}_{n}\right)}^{r-1}+\cdots \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{0}^{1}\frac{{\left(1-t\right)}^{r-1}}{\left(r-1\right)!}{f}^{\left(r\right)}\left({x}_{n}+t\left(x-{x}_{n}\right)\right){\left(x-{x}_{n}\right)}^{r}\text{d}t\end{array}$ (3.3)

For $r=1$, we have:

$f\left(x\right)=f\left({x}_{n}\right)+{\int }_{0}^{1}\text{ }\text{ }{f}^{\prime }\left({x}_{n}+t\left(x-{x}_{n}\right)\right)\left(x-{x}_{n}\right)\text{d}t$ (3.4)

Approximating the integral in (3.4), we have:

${\int }_{0}^{1}\text{ }\text{ }{f}^{\prime }\left({x}_{n}+t\left(x-{x}_{n}\right)\right)\left(x-{x}_{n}\right)\text{d}t\approx {f}^{\prime }\left({x}_{n}\right)\left(x-{x}_{n}\right)$ (3.5)

By using f(x) = 0, we have

$f\left({x}_{n}\right)+{f}^{\prime }\left({x}_{n}\right)\left(x-{x}_{n}\right)=0$ (3.6)

Then

${x}_{n+1}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)}$ (3.7)
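Iteration (3.7) can be sketched in a few lines; the test function, tolerance and iteration cap below are our illustrative choices, not taken from the paper:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method (3.7): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
print(root)  # the real cube root of 2, ~1.2599
```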

Iteration (3.7) is known as Newton’s method for the non-linear equation $f\left(x\right)=0$; it converges quadratically when the initial guess ${x}_{0}$ is sufficiently close to ${x}^{*}$. If instead we approximate the integral in (3.4) by the closed-open quadrature formula [5]:

${\int }_{{x}_{n}}^{x}\text{ }\text{ }{f}^{\prime }\left(t\right)\text{d}t\approx {Q}_{m}\left(t\right)=\left(x-{x}_{n}\right)\underset{j=1}{\overset{m}{\sum }}\text{ }\text{ }{\omega }_{j}{f}^{\prime }\left({x}_{n}+{\tau }_{j}\left(x-{x}_{n}\right)\right)$ (3.8)

· ${\tau }_{j}\in \left[0,1\right]$.

· ${\omega }_{j}$ are weights satisfying ${\sum }_{j=1}^{m}\text{ }\text{ }{\omega }_{j}=1$ and ${\sum }_{j=1}^{m}\text{ }\text{ }{\omega }_{j}{\tau }_{j}=\frac{1}{2}$

Taking $m=2$ with nodes ${\tau }_{1}=0$, ${\tau }_{2}=\frac{2}{3}$ and weights ${\omega }_{1}=\frac{1}{4}$, ${\omega }_{2}=\frac{3}{4}$, we obtain

${\int }_{0}^{1}\text{ }\text{ }{f}^{\prime }\left({x}_{n}+t\left(x-{x}_{n}\right)\right)\left(x-{x}_{n}\right)\text{d}t\approx \frac{1}{4}\left[{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2x}{3}\right)\right]\left(x-{x}_{n}\right)$ (3.9)

Thus, by using f(x) = 0 and replacing the unknown x on the right-hand side by the Newton predictor ${\rho }_{n}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)}$, we obtain:

${x}_{n+1}={x}_{n}-\frac{4f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}$ (3.10)

or equivalently

${x}_{n+1}={x}_{n}-4{\left[{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]}^{-1}f\left({x}_{n}\right)$ (3.11)

Algorithm. For a given ${x}_{0}$, compute the approximate solution ${x}_{n+1}$ by:

· Predictor step:

${\rho }_{n}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)}$ (3.12)

· Correction step:

${x}_{n+1}={x}_{n}-4{\left[{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]}^{-1}f\left({x}_{n}\right)$ (3.13)

This is another iterative method for solving the non-linear Equation (1.1). These modifications of Newton’s method are important and interesting because each iteration requires one evaluation of the function and two evaluations of the derivative; they do not require the second derivative ${f}^{″}$, yet they converge cubically.
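The predictor-corrector algorithm above can be sketched as follows; the function name, stopping rule and test equation are our illustrative choices:

```python
def modified_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Cubically convergent predictor-corrector scheme (3.12)-(3.13):
    rho = x - f(x)/f'(x), then
    x_new = x - 4 f(x) / (f'(x) + 3 f'((x + 2 rho)/3))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        rho = x - fx / df(x)                       # predictor: Newton step
        denom = df(x) + 3.0 * df((x + 2.0 * rho) / 3.0)
        x_new = x - 4.0 * fx / denom               # corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = modified_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
print(root)  # sqrt(2) ~ 1.41421356
```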

Theorem 3.3. Let $f:D\subset ℝ\to ℝ$ be an r-times Fréchet differentiable function on an open interval $D\subset ℝ$ and let ${x}^{*}$ be a real zero of the non-linear equation $f\left(x\right)=0$. Consider the iterative method defined, for a given ${x}_{0}$, by:

$\left\{\begin{array}{l}{\rho }_{n}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)}\\ {x}_{n+1}={x}_{n}-\frac{4f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}\end{array}$ (3.14)

has cubic convergence and satisfies the error equation:

$\left[{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]{e}_{n+1}=\left[{f}^{″}\left({x}_{n}\right){\left({f}^{\prime }\left({x}_{n}\right)\right)}^{-1}{f}^{″}\left({x}_{n}\right)\right]{e}_{n}^{3}+O\left({e}_{n}^{4}\right)$ (3.15)

4. New Modified Newton Method

We consider the predictor-corrector method (3.14):

$\left\{\begin{array}{l}{\rho }_{n}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)}\\ {x}_{n+1}={x}_{n}-\frac{4f\left({x}_{n}\right)}{{f}^{\prime }\left({x}_{n}\right)+3{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}\end{array}$ (4.1)

We replace ${f}^{\prime }\left({x}_{n}\right)$ by finite difference approximation

$\frac{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{{x}_{n}-{x}_{n-1}}$ (4.2)

which is a suitable approximation that does not require any new information. The predictor step becomes the secant scheme, and Scheme (3.14) becomes:

$\left\{\begin{array}{l}{\rho }_{n}={x}_{n}-\frac{\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\\ {x}_{n+1}={x}_{n}-\frac{4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}\end{array}$ (4.3)

Our choice is motivated by the fact that two derivative evaluations per iteration may slow down the method. The proposed method requires only one evaluation of the function and one of its derivative per iteration, which improves its efficiency index compared with the methods proposed in many other works. It has convergence of order $p=3$. More precisely, the algorithm of our iterative method is the following:

1) For a given ${x}_{0}$,

2) compute

${x}_{1}={x}_{0}-\frac{f\left({x}_{0}\right)}{{f}^{\prime }\left({x}_{0}\right)}$ (4.4)

3) For $n\ge 1$,

${\rho }_{n}={x}_{n}-\frac{\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}$ (4.5)

and

${x}_{n+1}={x}_{n}-\frac{4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}$ (4.6)
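Steps 1)-3) above can be sketched as follows; the function name, stopping rule and test equation are our illustrative choices, not the authors' code:

```python
def secant_newton(f, df, x0, tol=1e-10, max_iter=50):
    """Proposed method (4.4)-(4.6): one Newton step gives x1, then each
    iteration uses a secant predictor rho_n and the corrector (4.6).
    One f and one f' evaluation per iteration."""
    x_prev = x0
    x = x0 - f(x0) / df(x0)                    # (4.4): initial Newton step
    f_prev, fx = f(x_prev), f(x)
    for _ in range(max_iter):
        dx, dfx = x - x_prev, fx - f_prev
        rho = x - dx * fx / dfx                # (4.5): secant predictor
        denom = dfx + 3.0 * dx * df((x + 2.0 * rho) / 3.0)
        x_new = x - 4.0 * dx * fx / denom      # (4.6): corrector
        if abs(x_new - x) < tol:
            return x_new
        x_prev, f_prev = x, fx
        x, fx = x_new, f(x_new)
    return x

root = secant_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
print(root)  # sqrt(2) ~ 1.41421356
```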

Theorem 4.1. Let $f:D\subset ℝ\to ℝ$ be an r-times Fréchet differentiable function on an open interval $D\subset ℝ$ and let ${x}^{*}$ be a real zero of the non-linear equation $f\left(x\right)=0$. The iterative method defined by (4.3) has order of convergence p = 3 and the error equation is:

${e}_{n+1}=\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]}^{-1}{\left[{f}^{″}\left({x}_{n}\right)\right]}^{2}{e}_{n}^{3}$ (4.7)

Proof. Let ${x}^{*}$ be the unique simple root of Equation (1.1).

Let ${x}_{n}$ be an approximation of ${x}^{*}$ obtained by Scheme (4.6).

Let ${e}_{n}={x}^{*}-{x}_{n}$ be the approximation error.

${e}_{n+1}={x}^{*}-{x}_{n+1}$ (4.8)

${e}_{n+1}={x}^{*}-{x}_{n}+\frac{4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}$ (4.9)

${e}_{n+1}={e}_{n}+\frac{4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)}$ (4.10)

$\begin{array}{l}\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]{e}_{n+1}\\ =\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]{e}_{n}+4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)\end{array}$ (4.11)

By expansion, we have:

$\begin{array}{l}0=f\left({x}^{*}\right)=f\left({x}_{n}\right)+{f}^{\prime }\left({x}_{n}\right)\left({x}^{*}-{x}_{n}\right)+\frac{{f}^{″}\left({x}_{n}\right)}{2!}{\left({x}^{*}-{x}_{n}\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{\left({x}^{*}-{x}_{n}\right)}^{3}+\frac{{f}^{\left(4\right)}\left({x}_{n}\right)}{4!}{\left({x}^{*}-{x}_{n}\right)}^{4}+\cdots \end{array}$ (4.12)

Then

$0=f\left({x}_{n}\right)+{f}^{\prime }\left({x}_{n}\right){e}_{n}+\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}+\frac{{f}^{\left(4\right)}\left({x}_{n}\right)}{4!}{e}_{n}^{4}+\cdots$ (4.13)

By replacing ${f}^{\prime }\left({x}_{n}\right)$ by (4.2), we obtain

$-f\left({x}_{n}\right)=\frac{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{{x}_{n}-{x}_{n-1}}{e}_{n}+\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}+\frac{{f}^{\left(4\right)}\left({x}_{n}\right)}{4!}{e}_{n}^{4}+\cdots$ (4.14)

$\begin{array}{l}-\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}f\left({x}_{n}\right)\\ ={e}_{n}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(4\right)}\left({x}_{n}\right)}{4!}{e}_{n}^{4}+\cdots \end{array}$ (4.15)

By using the predictor step, we have:

${\rho }_{n}-{x}_{n}=-\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}f\left({x}_{n}\right)$ (4.16)

$\begin{array}{l}{\rho }_{n}-{x}_{n}\\ ={e}_{n}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(4\right)}\left({x}_{n}\right)}{4!}{e}_{n}^{4}+\cdots \end{array}$

$\frac{{x}_{n}+2{\rho }_{n}}{3}={x}_{n}+\frac{2}{3}\left({\rho }_{n}-{x}_{n}\right)$ (4.17)

Taylor expansion of ${f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)$ at ${x}_{n}$ gives:

$\begin{array}{c}{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)={f}^{\prime }\left({x}_{n}\right)+\frac{2}{3}\left({\rho }_{n}-{x}_{n}\right){f}^{″}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\left(\frac{2}{3}\right)}^{2}{\left({\rho }_{n}-{x}_{n}\right)}^{2}{f}^{\left(3\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.18)

$\begin{array}{l}{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\\ =\frac{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{{x}_{n}-{x}_{n-1}}+\frac{2}{3}\left[{e}_{n}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}+\cdots \right]{f}^{″}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\left(\frac{2}{3}\right)}^{2}{\left[{e}_{n}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}+\cdots \right]}^{2}{f}^{\left(3\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{6}{\left(\frac{2}{3}\right)}^{3}{\left[{e}_{n}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{″}\left({x}_{n}\right)}{2!}{e}_{n}^{2}+\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}\frac{{f}^{\left(3\right)}\left({x}_{n}\right)}{3!}{e}_{n}^{3}+\cdots \right]}^{3}{f}^{\left(4\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.19)

$\begin{array}{l}{f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\\ =\frac{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{{x}_{n}-{x}_{n-1}}+\frac{2{e}_{n}}{3}{f}^{\left(2\right)}\left({x}_{n}\right)+\frac{{e}_{n}^{2}}{3}\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{e}_{n}^{3}}{9}\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{e}_{n}^{4}}{36}\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(4\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)+\frac{2{e}_{n}^{2}}{9}{f}^{\left(3\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{2{e}_{n}^{3}}{9}\frac{{x}_{n}-{x}_{n-1}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)+\frac{4{e}_{n}^{3}}{81}{f}^{\left(3\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.20)

$\begin{array}{l}3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\\ =3\left(f\left({x}_{n}\right)-f\left({x}_{n-1}\right)\right)+2{e}_{n}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{e}_{n}^{2}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}+\frac{{e}_{n}^{3}}{3}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{e}_{n}^{4}}{12}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(4\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)+\frac{2}{3}{e}_{n}^{2}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{2}{3}{e}_{n}^{3}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)+\frac{4}{27}{e}_{n}^{3}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.21)

$\begin{array}{l}f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\\ =4\left(f\left({x}_{n}\right)-f\left({x}_{n-1}\right)\right)+2{e}_{n}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{e}_{n}^{2}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}+{e}_{n}^{3}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{4}{27}{e}_{n}^{3}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)+\frac{2}{3}{e}_{n}^{2}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{e}_{n}^{4}}{12}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(4\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.22)

By using (4.14) we have,

$\begin{array}{l}4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)\\ =-4\left(f\left({x}_{n}\right)-f\left({x}_{n-1}\right)\right){e}_{n}-2{e}_{n}^{2}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }-\frac{2}{3}{e}_{n}^{3}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)-\frac{1}{6}{e}_{n}^{4}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(4\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.23)

By using (4.22) and (4.23),

$\begin{array}{l}\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]{e}_{n}+4\left({x}_{n}-{x}_{n-1}\right)f\left({x}_{n}\right)\\ ={e}_{n}^{3}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}+{e}_{n}^{4}\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{f}^{\left(3\right)}\left({x}_{n}\right){f}^{\left(2\right)}\left({x}_{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{4}{27}{e}_{n}^{4}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(3\right)}\left({x}_{n}\right)-\frac{1}{6}{e}_{n}^{4}\left({x}_{n}-{x}_{n-1}\right){f}^{\left(4\right)}\left({x}_{n}\right)+\cdots \end{array}$ (4.24)

Then

$\begin{array}{l}\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]{e}_{n+1}\\ =\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}{e}_{n}^{3}+O\left(‖{e}_{n}^{4}‖\right)\end{array}$ (4.25)

Thus

${e}_{n+1}=\frac{{\left({x}_{n}-{x}_{n-1}\right)}^{2}}{f\left({x}_{n}\right)-f\left({x}_{n-1}\right)}{\left[{f}^{\left(2\right)}\left({x}_{n}\right)\right]}^{2}{\left[f\left({x}_{n}\right)-f\left({x}_{n-1}\right)+3\left({x}_{n}-{x}_{n-1}\right){f}^{\prime }\left(\frac{{x}_{n}+2{\rho }_{n}}{3}\right)\right]}^{-1}{e}_{n}^{3}+O\left(‖{e}_{n}^{4}‖\right)$ (4.26)

5. Numerical Examples and Comparison

We compare the performance of our method with some of the existing methods used in [1]. Therefore, we give numerical results for several functions and initial values.

From Table 1, we notice that the efficiency index of the proposed method is higher than that of the secant method, which was already better than Newton’s method and the other methods developed in [1]. This is explained by the fact that our method, although of third order, requires only two function evaluations per iteration, whereas most third-order methods in the literature use three (assuming that an evaluation of the function and an evaluation of its derivative have the same numerical cost). Our method defined by (4.3) is therefore preferable.

Table 1. Efficiency index of different numerical methods.

Table 2. $f\left(x\right)={\mathrm{sin}}^{2}\left(x\right)-{x}^{2}+1$ and ${x}_{0}=1$.

Table 3. $f\left(x\right)={x}^{2}-\mathrm{exp}\left(x\right)-3x+2$ and ${x}_{0}=3$.

Table 4. $f\left(x\right)=\mathrm{exp}\left({x}^{2}+7x-30\right)-1$ and ${x}_{0}=3.5$.

Table 5. $f\left(x\right)=11{x}^{11}-1$ and ${x}_{0}=0.7$.
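Comparisons of the kind reported in Tables 2-5 can be reproduced with a short script; the sketch below uses the function of Table 2, f(x) = sin²(x) − x² + 1 with x₀ = 1. The tolerance, counters and loop structure are our illustrative choices, not the authors' code:

```python
import math

f = lambda x: math.sin(x) ** 2 - x ** 2 + 1.0
df = lambda x: 2.0 * math.sin(x) * math.cos(x) - 2.0 * x

# Newton's method (3.7) from x0 = 1
x, newton_iters = 1.0, 0
while abs(f(x)) > 1e-14 and newton_iters < 100:
    x -= f(x) / df(x)
    newton_iters += 1

# Proposed method (4.3) from x0 = 1; first iterate by a Newton step (4.4)
x_prev, prop_iters = 1.0, 1
y = x_prev - f(x_prev) / df(x_prev)
while abs(f(y)) > 1e-14 and prop_iters < 100:
    dx, dfy = y - x_prev, f(y) - f(x_prev)
    rho = y - dx * f(y) / dfy                     # secant predictor (4.5)
    y_new = y - 4.0 * dx * f(y) / (dfy + 3.0 * dx * df((y + 2.0 * rho) / 3.0))  # (4.6)
    x_prev, y = y, y_new
    prop_iters += 1

print(newton_iters, prop_iters, y)  # both converge to the root near 1.40449
```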

6. Conclusion

We have presented in this paper a modified Newton method with third-order convergence, more efficient than most of the methods known in the literature. The efficiency analysis shows that this method is preferable for solving non-linear equations. A comparison of the number of function evaluations and the number of iterations to convergence for several methods is given in Tables 2-5. The results in these tables confirm the theory.

Cite this paper: Goudjo, A. and Kouye, L. (2021) A New Modification of Newton Method with Cubic Convergence. Advances in Pure Mathematics, 11, 1-11. doi: 10.4236/apm.2021.111001.
References

[1]   McDougall, T.J. and Wotherspoon, S.J. (2014) A Simple Modification of Newton’s Method to Achieve Convergence of Order 1+√2. Applied Mathematics Letters, 29, 20-25.

[2]   Homeier, H.H.H. (2005) On Newton-Type Methods with Cubic Convergence. Journal of Computational and Applied Mathematics, 176, 425-432.
https://doi.org/10.1016/j.cam.2004.07.027

[3]   Traub, J.F. (1964) Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, New Jersey.

[4]   Zavalani, G. (2014) A Modification of Newton Method with Third-Order Convergence. American Journal of Numerical Analysis, 2, 98-101.

[5]   Kincaid, D. and Cheney, W. (1991) Numerical Analysis Mathematics of Scientific Computing. Wadsworth, Inc., Belmont, California.
