Determining the zeros of a scalar function f is among the most important problems in both theory and practice, not only in mathematics but also in many other fields such as the engineering sciences, physics, computer science, and finance. These problems call on a rich mixture of mathematics, numerical analysis, and computing science. Finding the roots of a non-linear equation is one of the central challenges in science and engineering. The main concept behind all root-finding methods is the recurrence of successive approximations. In practice, an analytical solution of the non-linear Equation (1.1) is difficult or nearly impossible to obtain.
In recent years, researchers have been interested in modifying the Newton method, which is the foundation of all algorithms for solving these problems. Current design methods tend to focus on the use of function evaluations and to avoid the use of derivatives. In this study, we consider a non-linear Equation (1.1) and present a new modification of the Newton method. The convergence analysis shows that the new method is cubically convergent. Per iteration, our method requires one evaluation of the function and one of its derivative.
The idea of an iterative method is that we make some estimate of the solution and repeatedly improve that estimate using well-defined operations, until we obtain an approximate answer that is sufficiently close to the true solution.
Let f be an r-times Fréchet differentiable function on an open interval I, and let α be a real zero of the non-linear equation
As is well known, the roots of Equation (2.1) can be found analytically only in some special cases. We most commonly solve (2.1) approximately; that is, we find an approximation to the zero by applying some iterative method of the form

x_{n+1} = φ(x_n), n = 0, 1, 2, …,

where x_n is an approximation to the zero α. The function φ is called the iteration function.
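As an illustration of such an iteration, the sketch below runs a generic fixed-point loop and applies it to Newton's iteration function φ(x) = x − f(x)/f'(x) (recalled later in the paper). The helper name `iterate`, the tolerance, and the stopping rule are our own illustrative choices, not part of the paper.

```python
# Generic one-point iteration x_{n+1} = phi(x_n), stopped when two
# successive iterates agree to within a tolerance (illustrative sketch).

def iterate(phi, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: Newton's iteration function for f(x) = x**2 - 2,
# whose positive zero is sqrt(2).
f = lambda x: x**2 - 2
df = lambda x: 2 * x
root = iterate(lambda x: x - f(x) / df(x), 1.0)
```

Any one-point method discussed below fits this template by swapping in a different iteration function φ.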
Definition 2.1. Let f be a real-valued function with root α and let (x_n) be the sequence of real numbers generated by an iterative method (the sequence of iterates) converging toward α. If there exist a real number p and a nonzero constant C such that

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C,

then p is called the order of convergence and C is the factor of convergence, or the asymptotic error constant.
Definition 2.2. Let e_n = x_n − α be the error of the approximation at the nth iteration. The relation

e_{n+1} = C e_n^p + O(e_n^{p+1})

is the error equation. If the error equation exists, then p is the order of convergence of the iterative method.
Theorem 2.3 (Schröder-Traub 1964). Let φ be an iteration function such that φ^(p) is continuous in a neighborhood of α. Then φ is of order p if and only if

φ(α) = α,  φ'(α) = ⋯ = φ^(p−1)(α) = 0,  φ^(p)(α) ≠ 0.

The asymptotic error constant is given by

C = |φ^(p)(α)| / p!.
Theorem 2.4 (Traub 1964). Let α be a simple zero of a function f and let φ define an iterative method of order p. Then the composite iteration function obtained by appending a Newton-type step with frozen derivative,

ψ(x) = φ(x) − f(φ(x))/f'(x),

defines an iterative method of order p + 1.
Theorem 2.5 (Traub 1964, p. 28). Let φ_1 and φ_2 be iteration functions of orders p_1 and p_2, respectively. Then the composition

φ(x) = φ_1(φ_2(x))

defines an iterative method of order p_1 p_2.
Definition 2.6. Let r be the number of function evaluations per iteration of the method. The efficiency index of the method is defined by

E = p^{1/r},
where p is the order of convergence of the method.
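As a quick numerical check of this definition, the sketch below (the helper name `efficiency_index` is ours) computes the indices of the Newton method (order 2, two evaluations per iteration), the secant method (order (1 + √5)/2, one new evaluation per iteration), and a generic third-order method that uses two evaluations per iteration, the cost profile claimed for the method of this paper.

```python
# Efficiency index E = p**(1/r): p = order of convergence,
# r = number of function/derivative evaluations per iteration.
def efficiency_index(p, r):
    return p ** (1.0 / r)

newton = efficiency_index(2, 2)                  # ~1.414
secant = efficiency_index((1 + 5**0.5) / 2, 1)   # ~1.618
cubic_two_eval = efficiency_index(3, 2)          # ~1.732
```

These values reproduce the ordering discussed with Table 1: a third-order method with two evaluations per iteration beats both the secant and the Newton methods.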
Definition 2.7. Suppose that x_{n−1}, x_n, and x_{n+1} are three successive iterates close to the root α. Then the computational order of convergence ρ may be approximated by

ρ ≈ ln|(x_{n+1} − α)/(x_n − α)| / ln|(x_n − α)/(x_{n−1} − α)|.
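This approximation is straightforward to compute once three iterates are available; the sketch below (function name `coc` is our own) follows the formula of Definition 2.7 directly.

```python
import math

# Computational order of convergence from three successive iterates
# x_{n-1}, x_n, x_{n+1} and the (known) root alpha.
def coc(x_prev, x_curr, x_next, alpha):
    num = math.log(abs((x_next - alpha) / (x_curr - alpha)))
    den = math.log(abs((x_curr - alpha) / (x_prev - alpha)))
    return num / den
```

For instance, errors 1e-2, 1e-4, 1e-8 (each the square of the previous) yield ρ = 2, as expected for a quadratically convergent sequence.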
3. Construction of the Methods and Convergence Analysis
In this section, we recall the modified Newton method proposed by Gentian Zavalani. To determine the order of convergence of the sequence of iterates, let us consider the Taylor expansion of g, where the iterative method is x_{n+1} = g(x_n) and the function g satisfies:
1) There exist such that for all
2) There exist such that for all
Definition 3.1. The mapping is (totally or Fréchet) differentiable at x if the Jacobian matrix exists at x and
If , this definition reduces to the usual definition of differentiability.
Definition 3.2. For a mapping F, a solution α of F(x) = 0 is simple if F is differentiable at α and F'(α) is non-singular.
In this work, we assume that f admits a unique and simple solution.
For any x in I we may write the Taylor expansion of f in its integral form:

f(x) = f(x_n) + ∫_{x_n}^{x} f'(t) dt.   (3.4)

Approximating the integral in (3.4) by the one-point rule (x − x_n) f'(x_n) and then imposing f(x) = 0, we obtain

x = x_n − f(x_n)/f'(x_n),

which suggests the iteration x_{n+1} = x_n − f(x_n)/f'(x_n).
This is known as the Newton method for non-linear equations, and it has quadratic convergence when the initial guess is sufficiently close to the root. If we instead approximate the integral in (3.4) by using the closed-open quadrature formula:
· weights satisfying and
Thus, by using f(x) = 0, we have:
Algorithm. For a given x_0, compute the approximate solution x_{n+1} by the following steps:
· Predictor step:
· Correction step:
This yields another iterative method for solving the non-linear Equation (1.1). These modifications of the Newton method are important and interesting because, per iteration, they require one evaluation of the function and two evaluations of the derivative; they do not require the second derivative, yet they can converge cubically.
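The exact corrector formula of the scheme above was not recoverable here, so the sketch below implements a representative method with the same cost profile: a Newton predictor followed by a corrector that averages the derivative at x_n and at the predicted point y_n (the trapezoid-based variant of Weerakoon and Fernando). It uses one evaluation of f and two of f' per iteration and converges cubically; the function name and stopping rule are our own.

```python
# Two-step Newton-type method: Newton predictor, averaged-slope
# corrector (trapezoid / Weerakoon-Fernando variant).  One f and
# two f' evaluations per iteration; cubic convergence.
def trapezoid_newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                      # predictor: Newton step
        x_new = x - 2 * fx / (dfx + df(y))    # corrector: averaged slope
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, applied to f(x) = x² − 2 from x₀ = 1, the iterates converge rapidly to √2.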
Theorem 3.3. Let f be an r-times Fréchet differentiable function on an open interval I, and let α be a real zero of the non-linear equation f(x) = 0. The iterative method defined by: for a given x_0,
has cubic convergence and satisfies the error equation:
4. New Modified Newton Method
We consider the predictor-corrector method (3.14):
We replace f'(x_n) by the finite difference approximation

f'(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1}),

which is suitable because it requires no new information: the value f(x_{n−1}) is already available from the previous iteration. The predictor step thus becomes the secant scheme, and the scheme (3.14) becomes:
Our choice is motivated by the fact that evaluating two derivatives per iteration may slow the method down. The proposed method requires only one evaluation of the function and one evaluation of its derivative per iteration, which improves the efficiency index of the method compared with those proposed in many other works. It has convergence of order 3. More precisely, the algorithm of our iterative method is the following:
1) For a given ,
3) For ,
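Since the exact formulas of scheme (4.3) were not recoverable from the source, the sketch below shows one possible realization consistent with the description above: the predictor is the secant step (the backward divided difference replaces f'(x_n), reusing f(x_{n−1})), and we assume the averaged-slope corrector of the previous section is kept with the same divided difference in place of f'(x_n). The name `secant_predictor_method` is ours; this is an illustration, not the paper's method verbatim. Cost per iteration: one evaluation of f and one of f'.

```python
# Hypothetical derivative-saving two-step scheme: secant predictor,
# averaged-slope corrector.  Per iteration: one f and one f' evaluation.
def secant_predictor_method(f, df, x0, x1, tol=1e-12, max_iter=50):
    x_prev, x = x0, x1
    f_prev, fx = f(x_prev), f(x)
    for _ in range(max_iter):
        s = (fx - f_prev) / (x - x_prev)   # divided difference ~ f'(x_n)
        y = x - fx / s                     # predictor: secant step
        x_new = x - 2 * fx / (s + df(y))   # corrector: averaged slope
        if abs(x_new - x) < tol:
            return x_new
        x_prev, f_prev = x, fx
        x, fx = x_new, f(x_new)
    return x
```

As with the secant method itself, two starting points x_0 and x_1 are needed to initialize the divided difference.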
Theorem 4.1. Let f be an r-times Fréchet differentiable function on an open interval I, and let α be a real zero of the non-linear equation f(x) = 0. The iterative method defined by (4.3) has order of convergence p = 3, and the error equation is:
Proof. Let α be the unique simple root of Equation (1.1). Let x_{n+1} be the approximation of α obtained by the scheme (4.6). Let e_n = x_n − α be the approximation error.
By expansion, we have:
By replacing with the approximation (4.2), we have:
By using the predictor step, we have:
Taylor expansion of at gives:
By using (4.14), we have:
By using (4.22) and (4.23), we obtain the error equation, which completes the proof.
5. Numerical Examples and Comparison
We now compare the performance of our method with some existing methods used in . We therefore give numerical results for several test functions and initial values.
From the analysis of Table 1, we notice that the efficiency index of the proposed method is higher than that of the secant method, which was already better than the Newton method and the other methods developed in . This is explained by the fact that our method, although of third order of convergence, requires only two evaluations of the function per iteration, whereas most third-order methods in the literature use three evaluations (assuming that evaluating the function and evaluating its derivative have the same numerical cost). Our method, defined by (4.3), is therefore preferable.
Table 1. Efficiency index of different numerical methods.
Table 2. and .
Table 3. and .
Table 4. and .
Table 5. and .
We have presented in this paper a modified Newton method with third-order convergence that is more efficient than most methods known in the literature. The efficiency analysis shows that this method is preferable for solving non-linear equations. A comparative study of the number of function evaluations and the number of iterations until convergence for several methods is reported in Tables 2-5. The results in these tables confirm the theory.