The conjugate gradient (CG) method is one of the most important methods for finding the minimizer of a function in unconstrained optimization. The CG method is widely used because of its small memory requirements. The unconstrained optimization problem can be expressed as follows:
\[ \min_{x \in \mathbb{R}^n} f(x), \qquad (1) \]
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. The CG method generates a sequence of iterates of the form
\[ x_{k+1} = x_k + \alpha_k d_k, \qquad (2) \]
where $x_k$ is the current iterate and $\alpha_k > 0$ is the positive step size obtained by the exact line search (ELS):
\[ \alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k), \qquad (3) \]
and $d_k$ is the search direction, given by
\[ d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \geq 1, \end{cases} \qquad (4) \]
where $k$ is an integer, $g_k = \nabla f(x_k)$ is the gradient of the function $f(x)$ at the point $x_k$, and $\beta_k$ is the conjugate gradient coefficient.
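As a worked example (our illustration, not from the source): when $f$ is the strictly convex quadratic $f(x) = \tfrac{1}{2}x^T G x - b^T x$, the exact line search (3) has a closed form. Since $\varphi(\alpha) = f(x_k + \alpha d_k)$ satisfies $\varphi'(\alpha) = g_k^T d_k + \alpha\, d_k^T G d_k$, setting $\varphi'(\alpha_k) = 0$ gives
\[ \alpha_k = -\frac{g_k^T d_k}{d_k^T G d_k}. \]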
Some of the well-known conjugate gradient methods are:
\[ \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PR} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \]
\[ \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{LS} = -\frac{g_k^T y_{k-1}}{d_{k-1}^T g_{k-1}}, \qquad \beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \]
where $y_{k-1} = g_k - g_{k-1}$, and $g_{k-1}$ and $g_k$ denote the gradients of the function $f(x)$ at the points $x_{k-1}$ and $x_k$, respectively. The choice of this coefficient is what distinguishes the different CG methods. The above methods are due to Fletcher and Reeves (FR), Polak and Ribière (PR), Hestenes and Stiefel (HS), Dai and Yuan (DY), Liu and Storey (LS), and the Conjugate Descent (CD) method of Fletcher.
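As a hedged illustration (our own code, not from the source), these six classical coefficients can be written in Python as follows, with g and g_prev the gradients $g_k$ and $g_{k-1}$ and d_prev the previous direction $d_{k-1}$:

    import numpy as np

    # In each function, y = g - g_prev is the gradient difference y_{k-1}.
    def beta_fr(g, g_prev, d_prev):   # Fletcher-Reeves
        return (g @ g) / (g_prev @ g_prev)

    def beta_pr(g, g_prev, d_prev):   # Polak-Ribiere
        return (g @ (g - g_prev)) / (g_prev @ g_prev)

    def beta_hs(g, g_prev, d_prev):   # Hestenes-Stiefel
        y = g - g_prev
        return (g @ y) / (d_prev @ y)

    def beta_dy(g, g_prev, d_prev):   # Dai-Yuan
        y = g - g_prev
        return (g @ g) / (d_prev @ y)

    def beta_ls(g, g_prev, d_prev):   # Liu-Storey
        return -(g @ (g - g_prev)) / (d_prev @ g_prev)

    def beta_cd(g, g_prev, d_prev):   # Conjugate Descent (Fletcher)
        return -(g @ g) / (d_prev @ g_prev)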
These methods behave identically on strictly convex quadratic functions but quite differently on general non-quadratic functions. In any case, most studies of these methods examine their global convergence properties. In recent years, many efforts have been directed towards building new formulas for CG methods that achieve both good numerical performance and global convergence.
2. The New Conjugate Gradient Method and Its Algorithm
It is well known that numerical optimization methods are iterative and that no single method is suitable for all types of problems. Each method has its own advantages and features, as well as some drawbacks: a method may be efficient for some classes of problems and inefficient for others.
The new conjugate gradient coefficient, denoted $\beta_k^{ME}$, is given by formula (5).
The new method algorithm:
Step (1): Set k = 0 and choose an initial point $x_0$.
Step (2): Compute $\beta_k$ from (5).
Step (3): Compute $g_k = \nabla f(x_k)$. If $\|g_k\| \leq \varepsilon$, stop.
Step (4): Compute the search direction $d_k$ from (4).
Step (5): Compute the new point with the iterative formula (2): $x_{k+1} = x_k + \alpha_k d_k$, with $\alpha_k$ from (3).
Step (6): Test the stopping criterion $\|g_{k+1}\| \leq \varepsilon$; if it holds, stop.
Otherwise, go to Step (2) with k = k + 1.
A Python sketch of this iteration is given below.
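The following is a minimal Python sketch of the above loop, under our own stated assumptions: since formula (5) for $\beta_k^{ME}$ is not reproduced here, the coefficient is passed in as the parameter beta_fn (any of the classical coefficients sketched in Section 1 can be plugged in), and the exact line search (3) is approximated numerically with scipy.optimize.minimize_scalar. All identifiers are illustrative.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def cg_minimize(f, grad, x0, beta_fn, eps=1e-6, max_iter=1000):
        """Generic nonlinear CG loop; beta_fn supplies the coefficient beta_k."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                    # initial direction d_0 = -g_0
        for k in range(max_iter):
            if np.linalg.norm(g) <= eps:          # Steps (3)/(6): stopping test
                return x, k
            # Approximate the exact line search alpha_k = argmin f(x_k + a d_k)
            alpha = minimize_scalar(lambda a: f(x + a * d),
                                    bounds=(0.0, 1e3), method="bounded").x
            x_new = x + alpha * d                 # Step (5): x_{k+1} = x_k + alpha_k d_k
            g_new = grad(x_new)
            beta = beta_fn(g_new, g, d)           # Step (2): coefficient, e.g. formula (5)
            d = -g_new + beta * d                 # Step (4): direction update (4)
            x, g = x_new, g_new
        return x, max_iter

For example, cg_minimize(f, grad, x0, beta_fr) runs the FR variant; the paper's own method would be obtained by supplying formula (5) as beta_fn.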
The coefficient $\beta_k$ is chosen in such a way that, in the conjugate direction algorithm, $d_k$ is G-conjugate to $d_{k-1}$.
Proposition: In the conjugate gradient algorithm, the directions $d_0, d_1, \ldots, d_k$ are G-conjugate.
Proof: By induction.
We first show that $d_1^T G d_0 = 0$. Taking k = 1 in (4), $d_1 = -g_1 + \beta_1 d_0$, so
\[ d_1^T G d_0 = -g_1^T G d_0 + \beta_1\, d_0^T G d_0. \]
By Lemma (1) and the ELS we get $d_1^T G d_0 = 0$.
Now we assume that $d_i^T G d_j = 0$ for all $i \neq j$ with $i, j \leq k$, and we prove that $d_{k+1}^T G d_k = 0$. By Lemma (1) and the ELS we get $d_{k+1}^T G d_k = 0$.
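As a quick numerical sanity check (our own illustration, not part of the paper), one can verify on a small quadratic that CG directions generated with exact line searches are G-conjugate; the FR coefficient is used here, which coincides with the linear CG choice on quadratics:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    G = A @ A.T + 5.0 * np.eye(5)      # symmetric positive definite "Hessian"
    b = rng.standard_normal(5)

    x = np.zeros(5)
    g = G @ x - b                      # gradient of f(x) = 0.5 x^T G x - b^T x
    d = -g
    dirs = []
    for _ in range(5):
        if np.linalg.norm(g) < 1e-12:
            break
        alpha = -(g @ d) / (d @ (G @ d))   # exact line search for a quadratic
        x = x + alpha * d
        g_new = G @ x - b
        beta = (g_new @ g_new) / (g @ g)   # FR coefficient
        dirs.append(d)
        d = -g_new + beta * d
        g = g_new

    # Every off-diagonal product d_i^T G d_j should be (numerically) zero.
    for i in range(len(dirs)):
        for j in range(i):
            print(i, j, dirs[i] @ (G @ dirs[j]))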
The fulfillment of the descent condition $g_k^T d_k < 0$: for the new method, from (4) we have
\[ g_k^T d_k = g_k^T(-g_k + \beta_k d_{k-1}) = -\|g_k\|^2 + \beta_k\, g_k^T d_{k-1}. \]
By the ELS we get $g_k^T d_{k-1} = 0$, so
\[ g_k^T d_k = -\|g_k\|^2 < 0. \]
Thus the descent condition holds.
3. Global Convergence
The analysis of global convergence under the exact line search (ELS) proceeds under the following hypotheses:
1) The level set $S = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$ is bounded, where $x_0$ is the initial point, and the function $f(x)$ is continuously differentiable in a neighborhood N of S.
2) The gradient satisfies the Lipschitz condition: there is a constant L > 0 such that
\[ \|g(x) - g(y)\| \leq L\|x - y\| \quad \text{for all } x, y \in N. \]
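For instance (our illustration), for the quadratic $f(x) = \tfrac{1}{2}x^T G x - b^T x$ one has $g(x) = Gx - b$, so
\[ \|g(x) - g(y)\| = \|G(x - y)\| \leq \|G\|\,\|x - y\|, \]
and assumption 2) holds with $L = \|G\|$.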
Under these assumptions, we have the following result, due to Zoutendijk.
Lemma 2: Suppose that assumption 1) holds. Consider any conjugate gradient method of the form (2) and (4), where $d_k$ is a descent search direction and $\alpha_k$ is obtained by the exact line search minimization rule. Then the following condition, known as the Zoutendijk condition, holds:
\[ \sum_{k \geq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \]
From Lemma (2), we can obtain a convergence theorem for the conjugate gradient (CG) method. Note that under the ELS $g_k^T d_k = -\|g_k\|^2$ (Section 2), so the Zoutendijk condition takes the form $\sum_{k \geq 0} \|g_k\|^4 / \|d_k\|^2 < \infty$.
Theorem 1: Suppose that assumption 1) is satisfied. Consider any CG method of the form (4), where $\alpha_k$ is obtained by the exact minimization rule. Then either $g_k = 0$ for some k, or
\[ \liminf_{k \to \infty} \|g_k\| = 0. \]
Proof. By contradiction: if Theorem 1 is not true, there exists a constant c > 0 such that
\[ \|g_k\| \geq c \quad \text{for all } k. \qquad (10) \]
From (4) we have $d_k + g_k = \beta_k d_{k-1}$. Squaring both sides gives
\[ \|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2 - 2 g_k^T d_k - \|g_k\|^2. \qquad (11) \]
Dividing both sides of (11) by $(g_k^T d_k)^2$ gives
\[ \frac{\|d_k\|^2}{(g_k^T d_k)^2} = \beta_k^2 \frac{\|d_{k-1}\|^2}{(g_k^T d_k)^2} - \frac{2}{g_k^T d_k} - \frac{\|g_k\|^2}{(g_k^T d_k)^2}. \qquad (12) \]
But note that $g_k^T d_k = -\|g_k\|^2$ under the ELS; then from (12) we get
\[ \frac{\|d_k\|^2}{\|g_k\|^4} = \beta_k^2 \frac{\|d_{k-1}\|^2}{\|g_k\|^4} + \frac{1}{\|g_k\|^2}. \qquad (13) \]
From (10) and (13) we get
\[ \sum_{k \geq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = \sum_{k \geq 0} \frac{\|g_k\|^4}{\|d_k\|^2} = +\infty. \]
This contradicts the Zoutendijk condition in Lemma (2), which completes the proof. □
4. Numerical Results
In this section we present the numerical results of this research. The conjugate gradient methods of ME, Dai and Yuan (DY), and Fletcher and Reeves (FR) were tested on some of the test problems given by Andrei. The methods are compared by the number of iterations and the number of function evaluations (Table 1 and Table 2); a sketch of such a counting harness is given after the tables.
Table 1. Comparison of the algorithms for n = 100.
Table 2. Comparison of the algorithms for n = 1000.
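As an illustration only (our own sketch, not the paper's experimental code), comparisons of the kind reported in Tables 1 and 2 can be reproduced in spirit by counting iterations and function evaluations. The test function below is a stand-in, not one of Andrei's problems; the FR and DY coefficients are as defined in Section 1.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def f(x):
        # Illustrative smooth convex test problem (not from Andrei's set).
        return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 4)

    def grad(x):
        return 2.0 * (x - 1.0) + 0.4 * x ** 3

    def run(beta_fn, n=100, eps=1e-6, max_iter=5000):
        nfev = 0
        def fc(x):                      # wrap f to count function evaluations
            nonlocal nfev
            nfev += 1
            return f(x)
        x = np.zeros(n)
        g = grad(x)
        d = -g
        for k in range(max_iter):
            if np.linalg.norm(g) <= eps:
                break
            alpha = minimize_scalar(lambda a: fc(x + a * d),
                                    bounds=(0.0, 1e3), method="bounded",
                                    options={"xatol": 1e-10}).x
            x = x + alpha * d
            g_new = grad(x)
            d = -g_new + beta_fn(g_new, g, d) * d
            g = g_new
        return k, nfev

    beta_fr = lambda g, gp, dp: (g @ g) / (gp @ gp)        # Fletcher-Reeves
    beta_dy = lambda g, gp, dp: (g @ g) / (dp @ (g - gp))  # Dai-Yuan
    print("FR (iterations, function evaluations):", run(beta_fr))
    print("DY (iterations, function evaluations):", run(beta_dy))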
A new parameter for the conjugate gradient method for large-scale unconstrained optimization problems has been proposed. The numerical results show that the new method is superior in practice to the competing DY and FR methods.
A List of Test Functions
F1 Extended Trigonometric Function.
F2 Diagonal 2 function.
F3 Extended Tridiagonal 1 function.
F4 Extended Three Exponential Terms.
F5 Generalized PSC1 function.
F6 Extended PSC1 Function.
F7 Extended Block Diagonal BD1 function.
F8 Extended Quadratic Penalty QP1 function.
F9 Extended Tridiagonal 2 function.
F10 NONDQUAR (CUTE).
F11 DIXMAANC (CUTE).
F12 DIXMAANE (CUTE).
F13 EDENSCH function (CUTE).
F14 Staircase S1 / VARDIM function (CUTE).
F15 ENGVAL1 (CUTE).
F16 DENSCHNA (CUTE).
F17 DENSCHNB (CUTE).
F18 BIGGSB1 (CUTE).
F19 Diagonal 7.
F21 HIMMELBG (CUTE).
 Polak, E. and Ribiere, G. (1969) Note sur la convergence de méthodes de directions conjuguées. ESAIM: Mathematical Modelling and Numerical Analysis-Modélisation Mathématique et Analyse Numérique, 3, 35-43. https://doi.org/10.1051/m2an/196903R100351
 Dai, Y.-H. and Yuan, Y. (1999) A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization, 10, 177-182.