Newton’s Method and an Exact Opposite That Average into Halley’s Method

Author(s)
Isaac Fried

ABSTRACT

This note is mainly concerned with the creation of oppositely converging and
alternatingly converging iterative methods, which have the added advantage of
providing ever tighter bounds on the targeted root. By a slight parametric
perturbation of Newton’s method we create an oscillating super-linear method
that approaches the targeted root alternatingly from above and from below.
A further extension of Newton’s method creates an oppositely converging quadratic
counterpart to it. This new method requires a second derivative, but in return,
the average of the two opposite methods rises to become a cubic method.
This note also examines the creation of high-order iterative methods by a repeated
specification of undetermined coefficients.

KEYWORDS

Iterative Methods, Alternating Methods, Opposite Methods, Root Bounds, Undetermined Coefficients


1. Introduction

Iterative methods [1] [2] [3] for locating roots of nonlinear equations are of further appeal if converging oppositely [4] or alternatingly [5] [6] so as to establish bounds, or to bracket, the targeted root.

In this note we are mainly concerned with the creation of oppositely and alternatingly converging iterative methods for ever tighter bounds on the targeted root. By a slight parametric perturbation of Newton’s method we create an alternating super-linear method approaching the targeted root in turns from above and from below. A further extension of Newton’s method [7] creates an oppositely converging quadratic counterpart to it. This new method requires a second derivative, but in return, the average of the two opposite methods rises to become a cubic method.

This note examines also the creation of high order iterative methods by a repeated evaluation of undetermined coefficients [7] .

2. The Function

At the heart of this note lies the seeking of a simple root a of a function $f\left(x\right)$ , $f\left(a\right)=0$ , which we assume throughout the paper to have the expanded form

$\begin{array}{l}f\left(x\right)=A\left(x-a\right)+B{\left(x-a\right)}^{2}+C{\left(x-a\right)}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+D{\left(x-a\right)}^{4}+E{\left(x-a\right)}^{5}+\cdots \mathrm{,}\text{\hspace{0.17em}}A\ne 0\end{array}$ (1)

such that

$\begin{array}{l}f\left(a\right)=0,\text{\hspace{0.17em}}A={f}^{\prime}\left(a\right),\text{\hspace{0.17em}}B=\frac{1}{2!}{f}^{\u2033}\left(a\right),\text{\hspace{0.17em}}C=\frac{1}{3!}{f}^{\u2034}\left(a\right),\\ D=\frac{1}{4!}{f}^{\left(4\right)}\left(a\right),\text{\hspace{0.17em}}E=\frac{1}{5!}{f}^{\left(5\right)}\left(a\right)\end{array}$ (2)

and so on.

The condition $A={f}^{\prime}\left(a\right)\ne 0$ guarantees that root a of $f\left(x\right)$ is simple, or, otherwise said, of multiplicity one.

The one-step iterative method is of the general, and expanded, form

$\begin{array}{l}{x}_{1}=F\left({x}_{0}\right),\\ {x}_{1}=F\left(a\right)+{F}^{\prime}\left(a\right)\left({x}_{0}-a\right)+\frac{1}{2!}{F}^{\u2033}\left(a\right){\left({x}_{0}-a\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{3!}{F}^{\u2034}\left(a\right){\left({x}_{0}-a\right)}^{3}+\cdots \end{array}$ (3)

and, evidently, if $F\left(a\right)=a$ , namely, if a is a fixed-point of the iteration function $F\left(x\right)$ in Equation (3), and if, further, ${F}^{\prime}\left(a\right)=0$ , then iterative method (3) converges quadratically to a. Higher order derivatives of $F\left(x\right)$ being zero at point a move iterative method (3) to still higher orders of convergence to fixed-point $a=F\left(a\right)$ .

3. Newton’s Method

For completeness we actually derive this classical, mainstay iterative method of numerical analysis. We start by stating it generally as

$\begin{array}{l}{x}_{1}={x}_{0}+Pf\left({x}_{0}\right),\text{\hspace{0.17em}}P\ne 0,\text{\hspace{0.17em}}{x}_{1}=F\left({x}_{0}\right),\\ F\left(x\right)=x+Pf\left(x\right),\text{\hspace{0.17em}}f\left(a\right)=0,\text{\hspace{0.17em}}F\left(a\right)=a\end{array}$ (4)

for any value of the free parameter P. By the fact that $f\left(a\right)=0$ , point a is a fixed-point of the iteration function $F\left(x\right)$ of method (4), that is to say, $F\left(a\right)=a$ .

Power series expansion of ${x}_{1}$ yields

${x}_{1}-a=\left(1+AP\right)\left({x}_{0}-a\right)+BP{\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (5)

Equation (5) suggests that the choice $P=-1/A$ should result in a quadratic method. However, since $A={f}^{\prime}\left(a\right)$ , and since a is unknown, we replace a by the known approximation ${x}_{0}$ , taking $P=-1/{f}^{\prime}\left({x}_{0}\right)$ , to have

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}},\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\text{short}\text{\hspace{0.17em}}{x}_{1}={x}_{0}-{u}_{0},\text{\hspace{0.17em}}u=\frac{f\left(x\right)}{{f}^{\prime}\left(x\right)}$ (6)

which remains quadratic; convergence of Newton’s method to a simple root, namely one of multiplicity one ( ${f}^{\prime}\left(a\right)\ne 0$ ), is verified to be of second order

${x}_{1}-a=\frac{B}{A}{\left({x}_{0}-a\right)}^{2}+2\frac{AC-{B}^{2}}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right)$ (7)

where $A\mathrm{,}B\mathrm{,}C$ are as in Equations ((1) and (2)).

For example, for $f=x+10{x}^{2}$ , we generate by Equation (6), starting with ${x}_{0}=1$ , the converging ${x}_{1}$ sequence

$\begin{array}{l}\{1,4.8\times {10}^{-1},2.2\times {10}^{-1},8.7\times {10}^{-2},2.8\times {10}^{-2},5.0\times {10}^{-3},\\ 2.2\times {10}^{-4},5.0\times {10}^{-7},2.5\times {10}^{-12},6.4\times {10}^{-23}\}\end{array}$ (8)
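The iteration of Equation (6) may be sketched in Python; the helper names below are ours, while the function and starting point are those of the example.

```python
# Newton's method, Eq. (6): x <- x - f(x)/f'(x).
def newton(f, fp, x, steps):
    """Return the sequence of Newton iterates starting from x."""
    xs = [x]
    for _ in range(steps):
        x = x - f(x) / fp(x)
        xs.append(x)
    return xs

f  = lambda x: x + 10 * x**2    # the example function, with root a = 0
fp = lambda x: 1 + 20 * x       # its derivative

seq = newton(f, fp, 1.0, 9)     # reproduces the magnitudes of Eq. (8)
```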

4. Extrapolation to the Limit

Let ${x}_{0},{x}_{1}={x}_{0}-{u}_{0},{x}_{2}={x}_{1}-{u}_{1},u=f/{f}^{\prime}$ be already near root a. Then, by Equation (7)

${x}_{1}-a=\frac{B}{A}{\left({x}_{0}-a\right)}^{2}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{x}_{2}-a=\frac{B}{A}{\left({x}_{1}-a\right)}^{2}$ (9)

nearly. Eliminating $B/A$ from the two equations we are left with

$\left(-2{x}_{0}+3{x}_{1}-{x}_{2}\right){a}^{2}+\left({x}_{0}^{2}-3{x}_{1}^{2}+2{x}_{0}{x}_{2}\right)a+\left({x}_{1}^{3}-{x}_{0}^{2}{x}_{2}\right)=0$ (10)

which we solve for an approximate a, as

${x}_{3}=a={x}_{0}-\frac{3+\sqrt{1+4\rho}}{2\left(2-\rho \right)}{u}_{0},\text{\hspace{0.17em}}{u}_{0}=\frac{f\left({x}_{0}\right)}{{f}^{\prime}\left({x}_{0}\right)}$ (11)

where

$\rho ={u}_{1}/{u}_{0}=\frac{B}{A}\left({x}_{0}-a\right)+O\left({\left({x}_{0}-a\right)}^{2}\right).$ (12)

The square root in Equation (11) may be approximated as

$\sqrt{1+4\rho}=1+2\rho -2{\rho}^{2}+4{\rho}^{3}-10{\rho}^{4}+28{\rho}^{5}-84{\rho}^{6}\pm \cdots $ (13)

and

${x}_{3}-a=\frac{2{B}^{2}\left({B}^{2}-AC\right)}{{A}^{4}}{\left({x}_{0}-a\right)}^{5}+O\left({\left({x}_{0}-a\right)}^{6}\right).$ (14)

For example, for $f\left(x\right)=x+{x}^{2}+{x}^{3}$ , and starting with ${x}_{0}=0.2$ , we compute ${x}_{1}=0.0368$ , ${x}_{2}=0.00135$ ; and then from Equation (11), ${x}_{3}=0.000112$ . Another such cycle starting with ${x}_{0}={x}_{3}$ produces a next ${x}_{3}=-1.36\times {10}^{-20}$ .
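One extrapolation cycle of Equation (11) may be sketched as follows: two Newton steps supply ${u}_{0}$ and ${u}_{1}$ , whence $\rho$ and the extrapolated ${x}_{3}$ . The helper name is ours.

```python
import math

def extrapolate(f, fp, x0):
    """One cycle of Eq. (11): two Newton steps, then extrapolation to the root."""
    u0 = f(x0) / fp(x0)
    x1 = x0 - u0
    u1 = f(x1) / fp(x1)
    rho = u1 / u0                         # Eq. (12)
    return x0 - (3 + math.sqrt(1 + 4 * rho)) / (2 * (2 - rho)) * u0

f  = lambda x: x + x**2 + x**3            # the example function, root a = 0
fp = lambda x: 1 + 2 * x + 3 * x**2

x3 = extrapolate(f, fp, 0.2)              # about 1.1e-4, as in the text
```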

5. Estimating the Leading Term B/A

From Equation (9) we have that, approximately

$\frac{B}{A}=\frac{{x}_{2}-{x}_{3}}{{\left({x}_{1}-{x}_{3}\right)}^{2}}$ (15)

if ${x}_{3}$ is already close to a.

We pick from the list in Equation (8) the values

${x}_{1}=5.0\times {10}^{-7},\text{\hspace{0.17em}}{x}_{2}=2.5\times {10}^{-12},\text{\hspace{0.17em}}{x}_{3}=6.4\times {10}^{-23}$ (16)

and have from Equation (15) that $B/A=10$ .
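The estimate of Equation (15) is a one-line computation; with the three tail values above it recovers the ratio $B/A=10$ of the example function.

```python
# Leading-term estimate B/A ~ (x2 - x3)/(x1 - x3)^2, Eq. (15),
# from three consecutive Newton iterates near the root.
x1, x2, x3 = 5.0e-7, 2.5e-12, 6.4e-23
BA = (x2 - x3) / (x1 - x3)**2   # close to 10 for f(x) = x + 10x^2
```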

6. Hopping to the Other Side of the Root

If instead of ${x}_{1}={x}_{0}-{f}_{0}/{{f}^{\prime}}_{0}$ of Equation (6) we vault over by the double step

${x}_{1}={x}_{0}-2\frac{{f}_{0}}{{{f}^{\prime}}_{0}}$ (17)

then we land at

${x}_{1}=2a-{x}_{0}+\frac{2B}{A}{\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right)$ (18)

implying that asymptotically, as ${x}_{0}\to a$

${x}_{1}=2a-{x}_{0},\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}{x}_{1}=a-\u03f5\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{x}_{0}=a+\u03f5$ (19)

so that ${x}_{1}$ lands, asymptotically, on the opposite side of root a.

For example, seeking a root of $f\left(x\right)=x+10{x}^{2}$ we start with ${x}_{0}=0.2$ , which is above the root $a=0$ , and using ${x}_{1}={x}_{0}-2{f}_{0}/{{f}^{\prime}}_{0}$ we obtain ${x}_{1}=-0.04$ which is, indeed, under the root $a=0$ .
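The double step of Equation (17) is verified directly in this minimal sketch:

```python
# The double Newton step of Eq. (17): x1 = x0 - 2 f0/f0',
# which asymptotically reflects x0 to the other side of the root.
f  = lambda x: x + 10 * x**2    # the example function, root a = 0
fp = lambda x: 1 + 20 * x

x0 = 0.2                        # above the root
x1 = x0 - 2 * f(x0) / fp(x0)    # lands at -0.04, below the root
```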

7. A Chord Method

Each step of Newton’s method provides us with values of $f$ and ${f}^{\prime}$ , which can be used to polynomially extrapolate $f\left(x\right)$ to zero. A linear extrapolation through the pair of points $\left({x}_{0}\mathrm{,}{f}_{0}\right)\mathrm{,}\left({x}_{1}\mathrm{,}{f}_{1}\right)$ results in the line

$f\left(x\right)=\frac{{f}_{0}-{f}_{1}}{{x}_{0}-{x}_{1}}x+\frac{{f}_{1}{x}_{0}-{f}_{0}{x}_{1}}{{x}_{0}-{x}_{1}}$ (20)

We set $f\left(x\right)=0$ and obtain the extrapolated value

${x}_{2}=\frac{{f}_{0}{x}_{1}-{f}_{1}{x}_{0}}{{f}_{0}-{f}_{1}}$ (21)

which may now be repeated to form a chord iterative method.

For example, from the two values ${x}_{0}=5\times {10}^{-7}$ , ${x}_{1}=2.5\times {10}^{-12}$ taken from the list in Equation (8) for $f\left(x\right)=x+10{x}^{2}$ , we compute from Equation (21) ${x}_{2}=1.25\times {10}^{-17}$ .

Starting with ${x}_{0}=5\times {10}^{-3}$ , ${x}_{1}=2.2\times {10}^{-4}$ we repeatedly compute

${x}_{2}=\left\{5\times {10}^{-3},2.2\times {10}^{-4},1.0\times {10}^{-5},2.3\times {10}^{-8},2.4\times {10}^{-12},5.5\times {10}^{-19},\text{\hspace{0.17em}}1.3\times {10}^{-29}\right\}$ (22)

with no need for any further derivative function evaluation.
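The chord iteration of Equation (21) may be sketched thus (the helper name is ours); note that only values of $f$ , and no derivatives, are consumed.

```python
# Chord method, Eq. (21): x2 = (f0*x1 - f1*x0)/(f0 - f1), repeated.
def chord(f, x0, x1, steps):
    for _ in range(steps):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, (f0 * x1 - f1 * x0) / (f0 - f1)
    return x1

f = lambda x: x + 10 * x**2          # the example function, root a = 0
x = chord(f, 5.0e-3, 2.2e-4, 5)      # descends through the list of Eq. (22)
```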

Theoretically, by power series expansion

${x}_{2}-a=\frac{{B}^{2}}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (23)

According to Equation (23) ${x}_{0}-a$ and ${x}_{2}-a$ are ultimately of the same sign.

8. A Rational Higher Order Method

To have this we start with

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\frac{1}{1+Q{f}_{0}}$ (24)

for open parameter Q. Power series expansion yields

${x}_{1}-a=\frac{1}{A}\left(B+{A}^{2}Q\right){\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (25)

To have a cubic method we take

$Q=-\frac{B}{{A}^{2}}$ (26)

with A and B as in Equation (2), but evaluated at ${x}_{0}$ rather than at a, to have Halley’s method

${x}_{1}={x}_{0}-\frac{2{{f}^{\prime}}_{0}}{2{{f}^{\prime}}_{0}^{2}-{f}_{0}{{f}^{\u2033}}_{0}}{f}_{0}$ (27)

which is cubic

${x}_{1}-a=\frac{{B}^{2}-AC}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (28)
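Halley’s method of Equation (27) may be sketched as follows (the names are ours), again on the example function $f\left(x\right)=x+10{x}^{2}$ , for which ${f}^{\u2033}=20$ :

```python
# Halley's method, Eq. (27): x <- x - 2 f f' / (2 f'^2 - f f'').
def halley(f, fp, fpp, x, steps):
    for _ in range(steps):
        f0, f1, f2 = f(x), fp(x), fpp(x)
        x = x - 2 * f0 * f1 / (2 * f1**2 - f0 * f2)
    return x

f   = lambda x: x + 10 * x**2    # the example function, root a = 0
fp  = lambda x: 1 + 20 * x
fpp = lambda x: 20.0

root = halley(f, fp, fpp, 1.0, 8)   # cubic convergence to a = 0
```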

9. A Polynomial Higher Order Method

Here we start with the quadratic in $f$

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\left(1+Q{f}_{0}\right)$ (29)

for undetermined parameter Q. Power series expansion of ${x}_{1}$ yields

${x}_{1}-a=\frac{1}{A}\left(B-{A}^{2}Q\right){\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (30)

To have a cubic method we take

$Q=\frac{B}{{A}^{2}}$ (31)

with A and B evaluated at ${x}_{0}$ rather than at a as in Equation (2), and

${x}_{1}={x}_{0}-\frac{1}{{{f}^{\prime}}_{0}}{f}_{0}-\frac{{{f}^{\u2033}}_{0}}{2{{f}^{\prime}}_{0}^{3}}{f}_{0}^{2},$ (32)

which is verified to be still cubic

${x}_{1}-a=\frac{2{B}^{2}-AC}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (33)

10. An Alternating Super-Linear Method

We take Newton’s method of Equation (6) and parametrically perturb it into

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\left(1+\u03f5\right)$ (34)

for some parameter $\u03f5$ , to have

${x}_{1}-a=-\u03f5\left({x}_{0}-a\right)+\left(1+\u03f5\right)\frac{B}{A}{\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right)\mathrm{.}$ (35)

which is super-linear if $\left|\u03f5\right|\ll 1$ . Moreover, for a positive $\u03f5$ convergence here is ultimately alternating: the errors ${x}_{0}-a$ and ${x}_{1}-a$ are of opposite signs.

For example, for $f\left(x\right)=x+10{x}^{2}$ , ${x}_{0}=0.2$ , $\u03f5=1/25$ , we compute from Equation (34)

${x}_{1}=\left\{2\times {10}^{-1},7.5\times {10}^{-2},2.0\times {10}^{-2},2.2\times {10}^{-3},-4.0\times {10}^{-5},1.6\times {10}^{-6},-6.4\times {10}^{-8}\right\}$ (36)
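The perturbed step of Equation (34) may be sketched as follows (the helper name is ours); the tail of the computed sequence alternates in sign about the root, as in Equation (36).

```python
# Perturbed Newton step of Eq. (34): x <- x - (f/f')(1 + eps).
def alternating(f, fp, x, eps, steps):
    xs = [x]
    for _ in range(steps):
        x = x - f(x) / fp(x) * (1 + eps)
        xs.append(x)
    return xs

f  = lambda x: x + 10 * x**2     # the example function, root a = 0
fp = lambda x: 1 + 20 * x

seq = alternating(f, fp, 0.2, 1 / 25, 6)
# the tail of seq alternates in sign about a = 0, as in Eq. (36)
```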

11. An Opposite Quadratic Method

To create a method oppositely converging to Newton’s method of Equations ((6) and (7)) we start with the perturbed Newton’s method

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\left(1+Q{f}_{0}\right)$ (37)

or in power series form

${x}_{1}-a=\frac{1}{A}\left(B-{A}^{2}Q\right){\left({x}_{0}-a\right)}^{2}+\frac{2}{{A}^{2}}\left(AC-{B}^{2}\right){\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (38)

To have a quadratic method opposite to Newton’s we set, in view of Equation (7)

$\frac{1}{A}\left(B-{A}^{2}Q\right)=-\frac{B}{A},\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}Q=\frac{2B}{{A}^{2}}$ (39)

resulting in

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\left(1+\frac{{{f}^{\u2033}}_{0}}{{{f}^{\prime}}_{0}^{2}}{f}_{0}\right)$ (40)

which is, indeed, opposite to Newton’s

${x}_{1}-a=-\frac{B}{A}{\left({x}_{0}-a\right)}^{2}+\frac{2}{{A}^{2}}\left(3{B}^{2}-2AC\right){\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (41)

Compare Equation (41) with Equation (7).
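The bracketing behavior is seen at once numerically: from the same ${x}_{0}$ , the Newton step of Equation (6) and the opposite step of Equation (40) land on opposite sides of the root. A minimal sketch, on a sample point of our choosing:

```python
f   = lambda x: x + 10 * x**2    # sample function, root a = 0
fp  = lambda x: 1 + 20 * x
fpp = lambda x: 20.0

x0 = 0.01
u  = f(x0) / fp(x0)
newton_x1   = x0 - u                                      # Eq. (6)
opposite_x1 = x0 - u * (1 + fpp(x0) * f(x0) / fp(x0)**2)  # Eq. (40)
# newton_x1 and opposite_x1 straddle the root a = 0
```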

12. The Average of the Opposites Is a Cubic Method

Method (40) requires both ${f}^{\prime}$ and ${f}^{\u2033}$ , beyond the mere ${f}^{\prime}$ of Newton’s method, but in return, the average of the two opposite methods rises to become the cubic method

${x}_{1}=\frac{1}{2}\left({x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\left(1+\frac{{{f}^{\u2033}}_{0}}{{{f}^{\prime}}_{0}^{2}}{f}_{0}\right)\right)+\frac{1}{2}\left({x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}\right)={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}-\frac{{{f}^{\u2033}}_{0}{f}_{0}^{2}}{2{{f}^{\prime}}_{0}^{3}}$ (42)

of Equation (32).
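That the average of the two opposite steps reproduces Equation (32) may be checked numerically at a sample point of our choosing:

```python
f   = lambda x: x + 10 * x**2    # sample function, root a = 0
fp  = lambda x: 1 + 20 * x
fpp = lambda x: 20.0

x0 = 0.01
u  = f(x0) / fp(x0)
newton_x1   = x0 - u                                         # Eq. (6)
opposite_x1 = x0 - u * (1 + fpp(x0) * f(x0) / fp(x0)**2)     # Eq. (40)
average     = (newton_x1 + opposite_x1) / 2                  # Eq. (42)
direct      = x0 - u - fpp(x0) * f(x0)**2 / (2 * fp(x0)**3)  # Eq. (32)
# average and direct agree to rounding error
```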

13. A Quartic Method

Hereby we advance another undetermined coefficients strategy for constructing high order iterative methods [8] [9] [10] [11] [12] to locate root a of $f\left(x\right)$ , $f\left(a\right)=0$ .

We start with the polynomial iteration function

${x}_{1}={x}_{0}+P{f}_{0}+Q{f}_{0}^{2}+R{f}_{0}^{3}$ (43)

of undetermined coefficients $P\mathrm{,}Q\mathrm{,}R$ , and expand ${x}_{1}$ as

$\begin{array}{c}{x}_{1}-a=\left(1+AP\right)\left({x}_{0}-a\right)+\left(BP+{A}^{2}Q\right){\left({x}_{0}-a\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(CP+2ABQ+{A}^{3}R\right){\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right)\end{array}$ (44)

The coefficients of $\left({x}_{0}-a\right)$ up to ${\left({x}_{0}-a\right)}^{3}$ are made zero with

$P=-\frac{1}{A},\text{\hspace{0.17em}}Q=\frac{B}{{A}^{3}},\text{\hspace{0.17em}}R=\frac{AC-2{B}^{2}}{{A}^{5}}$ (45)

to have a quartic method.

However, as root a is unavailable we replace it in $P\mathrm{,}Q\mathrm{,}R$ by ${x}_{0}$ to have the variable coefficients

$P=-\frac{1}{{{f}^{\prime}}_{0}},\text{\hspace{0.17em}}Q=\frac{{{f}^{\u2033}}_{0}}{2{{f}^{\prime}}_{0}^{3}},\text{\hspace{0.17em}}R=\frac{{{f}^{\prime}}_{0}{{f}^{\u2034}}_{0}-3{{f}^{\u2033}}_{0}^{2}}{6{{f}^{\prime}}_{0}^{5}}.$ (46)

But, with this replacement of a by ${x}_{0}$ , method (43) falls back to a mere quadratic

${x}_{1}-a=2\frac{B}{A}{\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (47)

To repair this retreat in the order of convergence we propose to further correct ${x}_{1}$ of Equations ((43) and (46)) into

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}+{Z}_{1}Q{f}_{0}^{2}+{Z}_{2}R{f}_{0}^{3}$ (48)

for new parameters ${Z}_{1}\mathrm{,}{Z}_{2}$ . Power series expansion reveals that method (48) is restored to fourth order with ${Z}_{1}=-1,{Z}_{2}=1$ , to have

${x}_{1}={x}_{0}-\frac{{f}_{0}}{{{f}^{\prime}}_{0}}-Q{f}_{0}^{2}+R{f}_{0}^{3},\text{\hspace{0.17em}}Q=\frac{{{f}^{\u2033}}_{0}}{2{{f}^{\prime}}_{0}^{3}},\text{\hspace{0.17em}}R=\frac{{{f}^{\prime}}_{0}{{f}^{\u2034}}_{0}-3{{f}^{\u2033}}_{0}^{2}}{6{{f}^{\prime}}_{0}^{5}}$ (49)

for which

${x}_{1}-a=\frac{5{B}^{3}-5ABC+{A}^{2}D}{{A}^{3}}{\left({x}_{0}-a\right)}^{4}+O\left({\left({x}_{0}-a\right)}^{5}\right).$ (50)
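The quartic method of Equation (49) may be sketched as follows. We try it on $f\left(x\right)=x+{x}^{2}$ , a sample function of our choosing with a nonzero leading error coefficient in Equation (50):

```python
# Quartic method, Eq. (49), with Q = f''/(2 f'^3) and
# R = (f' f''' - 3 f''^2)/(6 f'^5), all evaluated at the current point.
def quartic_step(f, fp, fpp, fppp, x):
    f0, f1, f2, f3 = f(x), fp(x), fpp(x), fppp(x)
    Q = f2 / (2 * f1**3)
    R = (f1 * f3 - 3 * f2**2) / (6 * f1**5)
    return x - f0 / f1 - Q * f0**2 + R * f0**3

f    = lambda x: x + x**2        # sample function, root a = 0
fp   = lambda x: 1 + 2 * x
fpp  = lambda x: 2.0
fppp = lambda x: 0.0

x = 0.1
for _ in range(3):
    x = quartic_step(f, fp, fpp, fppp, x)   # fourth-order descent to a = 0
```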

14. The Rational Method Revisited

We start here with

${x}_{1}={x}_{0}+\frac{P{f}_{0}}{1+Q{f}_{0}}$ (51)

and expand it into

${x}_{1}-a=\left(1+AP\right)\left({x}_{0}-a\right)+P\left(B-{A}^{2}Q\right){\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (52)

To have a cubic method we set

$P=-\frac{1}{A},\text{\hspace{0.17em}}Q=\frac{B}{{A}^{2}}$ (53)

or

$P=-\frac{1}{{{f}^{\prime}}_{0}},\text{\hspace{0.17em}}Q=\frac{{{f}^{\u2033}}_{0}}{2{{f}^{\prime}}_{0}^{2}}.$ (54)

Inserting these last P and Q values into Equation (51) we have the disappointing

${x}_{1}-a=\frac{2B}{A}{\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (55)

We retry for a cubic method by writing, still for P and Q of Equation (54)

${x}_{1}={x}_{0}+\frac{{Z}_{1}P{f}_{0}}{1+{Z}_{2}Q{f}_{0}}$ (56)

for which we get by power series expansion

${x}_{1}-a=\left(1-{Z}_{1}\right)\left({x}_{0}-a\right)+\frac{B}{A}{Z}_{1}\left(1+{Z}_{2}\right){\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right).$ (57)

We take ${Z}_{1}=1,{Z}_{2}=-1$ , and have the method

${x}_{1}={x}_{0}+\frac{P{f}_{0}}{1+Q{f}_{0}},\text{\hspace{0.17em}}P=-\frac{1}{{{f}^{\prime}}_{0}},\text{\hspace{0.17em}}Q=-\frac{{{f}^{\u2033}}_{0}}{2{{f}^{\prime}}_{0}^{2}}$ (58)

which is now restored to third order of convergence

${x}_{1}-a=\frac{{B}^{2}-AC}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (59)
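Clearing the denominator in Equation (58) recovers Halley’s method of Equation (27), and the agreement may be checked numerically at a sample point of our choosing:

```python
f   = lambda x: x + 10 * x**2    # sample function
fp  = lambda x: 1 + 20 * x
fpp = lambda x: 20.0

x0 = 0.3
P = -1 / fp(x0)
Q = -fpp(x0) / (2 * fp(x0)**2)
rational = x0 + P * f(x0) / (1 + Q * f(x0))                           # Eq. (58)
halley = x0 - 2 * f(x0) * fp(x0) / (2 * fp(x0)**2 - f(x0) * fpp(x0))  # Eq. (27)
# rational and halley agree to rounding error
```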

15. Undetermined Variable Factors

Here we start with

${x}_{1}={x}_{0}+P\left({x}_{0}\right)f\left({x}_{0}\right),\text{\hspace{0.17em}}f\left(a\right)=0$ (60)

and expand ${x}_{1}$ to have

$\begin{array}{c}{x}_{1}-a=Pf+\left(1+P{f}^{\prime}+{P}^{\prime}f\right)\left({x}_{0}-a\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(\frac{1}{2}P{f}^{\u2033}+{P}^{\prime}{f}^{\prime}+\frac{1}{2}{P}^{\u2033}f\right){\left({x}_{0}-a\right)}^{2}+O\left({\left({x}_{0}-a\right)}^{3}\right)\end{array}$ (61)

in which $f=f\left(a\right),P=P\left(a\right),{P}^{\prime}={P}^{\prime}\left(a\right),{P}^{\u2033}={P}^{\u2033}\left(a\right)$ . To have a cubic method, and considering that $f\left(a\right)=0$ , we impose the two conditions:

$\left[\begin{array}{cc}{f}^{\prime}& f\\ \frac{1}{2}{f}^{\u2033}& {f}^{\prime}\end{array}\right]\left[\begin{array}{c}P\\ {P}^{\prime}\end{array}\right]=\left[\begin{array}{c}-1\\ 0\end{array}\right]$ (62)

and solve system (62) for P as

$P=\frac{\mathrm{det}\left[\begin{array}{cc}-1& f\\ 0& {f}^{\prime}\end{array}\right]}{\mathrm{det}\left[\begin{array}{cc}{f}^{\prime}& f\\ \frac{1}{2}{f}^{\u2033}& {f}^{\prime}\end{array}\right]}=\frac{-2{f}^{\prime}}{2{{f}^{\prime}}^{2}-f{f}^{\u2033}}$ (63)

Since root a is unknown we evaluate P instead at ${x}_{0}$ and have iterative method (60) in the form

${x}_{1}={x}_{0}-\frac{2{{f}^{\prime}}_{0}}{2{{f}^{\prime}}_{0}^{2}-{f}_{0}{{f}^{\u2033}}_{0}}{f}_{0}$ (64)

which we recognize as being the cubic method of Halley

${x}_{1}-a=\frac{{B}^{2}-AC}{{A}^{2}}{\left({x}_{0}-a\right)}^{3}+O\left({\left({x}_{0}-a\right)}^{4}\right).$ (65)

16. Conclusions

In Section 6, we have demonstrated that the double-step Newton’s method places the next point on the other side of the root. Passing a line through two such points we create a chord method as that of Section 7.

A slight perturbation of Newton’s method, as in Equation (34), creates a super-linear method of alternating convergence, as in Section 10. By a further modification of Newton’s method we have created a quadratic method opposite to Newton’s. The average of these two opposite methods is a cubic method, as shown in Section 12.

In Section 13, a quartic method is successfully created by repeated undetermined coefficients.

Cite this paper

Fried, I. (2017) Newton’s Method and an Exact Opposite That Average into Halley’s Method. *Applied Mathematics*, **8**, 1427-1436. doi: 10.4236/am.2017.810103.


References

[1] Ostrowski, A. (1960) Solution of Equations and Systems of Equations. Academic Press, New York.

[2] Householder, A.S. (1970) The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill, New York.

[3] Traub, J.F. (1977) Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New York.

[4] Fried, I. (2009) Oppositely Converging Newton-Raphson Method for Nonlinear Equilibrium Problems. International Journal for Numerical Methods in Engineering, 79, 375-378. https://doi.org/10.1002/nme.2574

[5] Fried, I. (2013) High-Order Iterative Bracketing Methods. International Journal for Numerical Methods in Engineering, 94, 708-714. https://doi.org/10.1002/nme.4467

[6] Fried, I. (2014) Effective High-Order Iterative Methods via the Asymptotic Form of the Taylor-Lagrange Remainder. Journal of Applied Mathematics, 2014, Article ID: 108976. https://doi.org/10.1155/2014/108976

[7] Chun, C. and Neta, B. (2008) Some Modification of Newton’s Method by the Method of Undetermined Coefficients. Computers and Mathematics with Applications, 56, 2528-2538. https://doi.org/10.1016/j.camwa.2008.05.005

[8] Steffensen, J.F. (1933) Remarks on Iteration. Scandinavian Actuarial Journal, 1933, 64-72. https://doi.org/10.1080/03461238.1933.10419209

[9] King, R.F. (1973) A Family of Fourth-Order Methods for Nonlinear Equations. SIAM Journal on Numerical Analysis, 10, 876-879. https://doi.org/10.1137/0710072

[10] Hansen, E. and Patrick, M. (1977) A Family of Root Finding Methods. Numerische Mathematik, 27, 257-269. https://doi.org/10.1007/BF01396176

[11] Neta, B. (1979) A Sixth-Order Family of Methods for Nonlinear Equations. International Journal of Computer Mathematics, 7, 157-161. https://doi.org/10.1080/00207167908803166

[12] Popovski, D.B. (1981) A Note on Neta’s Family of Sixth-Order Methods for Solving Equations. International Journal of Computer Mathematics, 10, 91-93. https://doi.org/10.1080/00207168108803269
