Received 24 November 2015; accepted 23 February 2016; published 26 February 2016
In the early 1980s, two papers studied ill-conditioned load flow problems (  and  ). Different numerical methods were used to analyze the ill-conditioned equations, and some useful results were obtained. Because of the importance of this problem in power systems, it is worth studying it again with new numerical methods.
Since the 1960s, direct methods have been the major tool for solving load flow equations; among them, Newton with Gaussian elimination (LU) is the typical representative  . Although some authors proposed nonlinear programming methods  , these were not widely used.
Recently, attention has turned to iterative methods, e.g., the inexact Newton-GMRES method (  and  ), but none of these works addresses ill-conditioned problems.
The severity of ill-conditioning is measured by the condition number of the system. From  , the condition numbers of the 11-bus and 43-bus systems are 0.1086 × 10^4 and 0.2560 × 10^5, respectively. For double-precision calculation this is not severe; the existence of the solutions of the two systems can be established by the normal Newton method without any trouble.
Regarding conclusion (3) of   and the statement "Supposing that the correction vector Δx is obtained by some way…", the authors of this paper hold a different opinion. The details of the discussion can be found in the Appendix.
The Brown method was used by  . Ref.   studied this method in depth and pointed out: "The method did not have a clear algorithmic description and hence was hard to implement". Perhaps even more important, each iteration requires many arithmetic operations.   improved the Brown method, gave a clearer algorithmic description, and reduced the arithmetic operations from  to  .   did not take advantage of this improvement and still used the original Brown method.
Considering that the two papers were published in the 1980s, we should not make excessive demands of the authors of  , but their conclusion that the Newton-Raphson method is "divergent" for the 11-bus and 43-bus systems (Table 2 on page 3653 of Ref.  ) is incorrect and conflicts with  .
2. Description of the New Algorithm
Before describing the new method, let us first review the derivation of two classical methods:
1) Newton method:
Or, using the explicit Euler numerical integration method:
Taking  , we have:
That is to say, the Newton method can be derived from both the static-state and the dynamic-state point of view. Although the Newton method can be recognized as an explicit Euler ODE method, the right-hand-side function includes the inverse of the Jacobian matrix of  , so it still belongs to the implicit methods.
If the Newton method fails to converge, two approaches can be taken to improve the convergence:
a) Using a damping factor: replacing   by  . This is equivalent to reducing the step size of the Euler method so that   falls inside the stability region of the Euler method. Here ρ is the spectral radius of the Jacobian matrix of  .
b) Altering the initial values: if the initial value is close to the solution x*, the Jacobian matrix of   will be close to the identity matrix, so that even with the large step size h = 1, the stability of the numerical integration can be kept.
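The damping idea in a) amounts to scaling the Newton step, and can be sketched in a few lines. The function below is a generic damped Newton iteration; the test system F and its Jacobian J are hypothetical illustrations, not systems from this paper:

```python
import numpy as np

def damped_newton(F, J, x0, alpha=1.0, tol=1e-10, max_iter=100):
    """Damped Newton iteration x_{k+1} = x_k - alpha * J(x_k)^{-1} F(x_k).

    alpha = 1 is the classical Newton step; alpha < 1 is the damped
    variant, equivalent to shrinking the step size of the underlying
    explicit Euler integration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx, np.inf) < tol:
            break
        x = x - alpha * np.linalg.solve(J(x), Fx)
    return x

# Illustrative system: F(x, y) = (x^2 + y^2 - 4, x - y),
# which has a root at (sqrt(2), sqrt(2)).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = damped_newton(F, J, [1.0, 0.5])
```

With alpha = 1 this is the plain Newton iteration; choosing alpha < 1 trades speed for a larger basin of convergence, exactly the step-size reduction described in a).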
2) About fixed-point iteration
For  , put  , where h is a parameter. The fixed-point iteration:
will have the same solution as   and  . We can still interpret it as an explicit Euler ODE method. In fact, applying the Euler method to the differential equation  :
Taking h = 1, we recover the classical fixed-point iteration:
It is well known that the convergence condition of fixed-point iteration is:
If h = 1, the convergence condition of the classical fixed-point iteration method is not satisfied. We can reduce h: as long as all eigenvalues of the Jacobian matrix of   are located in the left half-plane, the inequality must hold when h is small enough.
We know that explicit ODE methods have only very small stability regions. If the Jacobian matrix of   has a large condition number, or the system is stiff, the conventional explicit ODE methods will be very inefficient.
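The role of h can be seen in a tiny sketch. The scalar map g below is a hypothetical example chosen so that the classical iteration (h = 1) diverges while the relaxed iteration with a small h converges:

```python
import numpy as np

def relaxed_fixed_point(g, x0, h=1.0, tol=1e-12, max_iter=10000):
    """x_{k+1} = x_k + h * (g(x_k) - x_k): one explicit Euler step of
    x' = g(x) - x.  Taking h = 1 recovers the classical fixed-point
    iteration x_{k+1} = g(x_k)."""
    x = float(x0)
    for k in range(max_iter):
        step = g(x) - x
        if abs(step) < tol:
            return x, k
        x = x + h * step
    return x, max_iter

# g(x) = 4 - 3x has the fixed point x* = 1 with g'(x*) = -3:
# |g'(x*)| = 3 > 1, so classical iteration (h = 1) diverges, but
# g'(x*) - 1 = -4 lies in the left half-plane, so a small h converges.
x, iters = relaxed_fixed_point(lambda x: 4.0 - 3.0 * x, 0.0, h=0.1)
```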
3) Does an explicit ODE method with a large stability region exist? The answer is positive. In   the authors of this paper proposed a new ODE method which has a very large stability region; choosing a proper parameter, the region can extend to the point at infinity along the real axis. This may seem strange, since the textbooks tell us that the stability region of an explicit ODE method must be bounded; however, our method includes a parameter ε, which plays a very important role for stability, and   gives it a reasonable explanation.
Our purpose is to reach the steady point of the dynamic system as quickly as possible. We do not require the trajectory produced by our method to approximate the solution curve of the differential equation  . In other words, our method is allowed to have large errors, as long as the trajectory it produces stays within the attracting domain of the steady point. So our method can take very large step sizes and reach the steady point quickly. Traditional explicit ODE methods cannot take large step sizes; usually their trajectory is very close to the solution of the system, and in a sense approximately reflects the real transient process of the system, whereas the trajectory produced by our method does not. So our method can be recognized as "pseudo-transient continuation".
The existing pseudo-transient continuation methods can be written as follows:
Various algorithms adopt different strategies to control h. They are known as the "linearly implicit Euler discretization",  .
Because each iteration step needs to solve a system of linear equations, all of them are implicit methods.
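One step of this linearly implicit Euler (Ψtc) discretization can be sketched as follows. The form (I/h + J)s = -F follows the standard pseudo-transient continuation literature, and the scalar test equation is only an illustration:

```python
import numpy as np

def ptc_step(F, J, x, h):
    """One linearly implicit Euler (Psi-tc) step for x' = -F(x):
    solve (I/h + J(x)) s = -F(x), then set x <- x + s.
    A linear system must be solved at every step, so the method is
    implicit; as h -> infinity it approaches a full Newton step."""
    n = len(x)
    s = np.linalg.solve(np.eye(n) / h + J(x), -F(x))
    return x + s

# Illustration on the scalar equation x^2 - 2 = 0:
F = lambda v: np.array([v[0] ** 2 - 2.0])
J = lambda v: np.array([[2.0 * v[0]]])
x = np.array([1.0])
for _ in range(60):
    x = ptc_step(F, J, x, h=10.0)
```

The linear solve inside every step is exactly the implicitness referred to above; our explicit method avoids it.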
Our method is, to our knowledge, the only explicit pseudo-transient continuation method. In order to express this difference, our method was named in   the "Explicit Pseudo-Transient Continuation" (E-Ψtc) method.
From  and  , we can write our scheme as follows:
Here,   is a parameter and h is the step size.
For, , from 1), 2) and 3) we can get, , , Then for ;
If the steady point of   exists, for   we can obtain a certain xn which satisfies  ; then xn is the approximation of  , the solution we expect.
3. The Advantage of the New Method
1) Our method is an explicit stiff ODE method, so it has a very strong capability to deal with tough problems. Brown's almost-linear system of equations is as follows:  
Obviously, is a solution.
For n = 10 and 30, the Newton method diverges drastically, but the Brown method converges well.
For n > 35, according to  , the Brown method no longer works; this was confirmed by   for n = 40.
What about our method? For n = 40, 100, 200 and 400, we obtain the solution easily. In this paper, we show only the result for n = 400:
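Since the equations themselves are not reproduced above, we note that Brown's almost-linear system is commonly stated in the following standard form (e.g. in the Moré-Garbow-Hillstrom test set); the sketch below encodes that form, as an assumption, and checks the known solution x = (1, …, 1):

```python
import numpy as np

def brown_almost_linear(x):
    """Brown's almost-linear system in its standard form:
        f_i(x) = x_i + sum_j x_j - (n + 1),  i = 1, ..., n-1
        f_n(x) = (prod_j x_j) - 1
    The vector of all ones is an exact solution."""
    n = len(x)
    f = x + np.sum(x) - (n + 1.0)   # the n-1 "almost linear" rows
    f[-1] = np.prod(x) - 1.0        # the single nonlinear row
    return f

residual = brown_almost_linear(np.ones(400))   # n = 400, as in the text
```

The single product equation is what makes the system hard for large n, while every component can still be evaluated independently.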
Virtually, the differential equation to deal with is:
Here the diagonal matrix has elements, ,. If, the division has been avoided, or we put.
As we did in  , we separated the calculation into three stages (in every stage we expect the 2-norm of   to descend by about five orders of magnitude) and took the parameters Tol1 = 1D−0, Tol2 = 1D−5, Tol3 = 1D−10; h1 = 0.015, h2 = 0.15, h3 = 0.8. After 826 iterations, we obtained the solution:  ,  ;  . The initial 2-norm of   is 0.4005D+04. The results are listed in Table 1, where NFE denotes the number of function evaluations. Double precision was used.
Incidentally, the efficiency of the Brown method depends strongly on the ordering of the equations: it is more efficient if the equations are ordered by increasing degree of nonlinearity. In our method, each component can be calculated independently, so there is no ordering problem.
An experiment confirms this: we placed the nonlinear equation at the top of the system, i.e. as the first equation. Both orderings give exactly the same results.
2) Our method is as simple as fixed-point iteration, so it can handle very large systems. Here we give an ill-conditioned, nonsymmetric linear system of dimension n = 1,000,000:
Table 1. (Brown).
We prescribe the solution  , so that we can determine the right-side vector b. The iterative initial value is  . The differential equation to deal with is:
If the E-Ψtc method is used, taking ε = 0.5, the numerical integration is stable for any step size.
The only comparable method is the fixed-point (Euler) iteration. Its maximum stable step size is h = 0.8; in order to keep a good stability margin, we take h = 0.75. The initial Euclidean norm is 0.7217D+03. After 150,000 fixed-point iterations, we have the following results:
If we want to make the Euclidean norm less than 1D−08, it seems impossible.
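The fixed-point (Euler) iteration used here is Richardson iteration for A x = b. The sketch below shows it on a small hypothetical nonsymmetric system, since the paper's n = 1,000,000 matrix is not reproduced in the source:

```python
import numpy as np

def richardson(A, b, x0, h, tol=1e-8, max_iter=200000):
    """Fixed-point (explicit Euler) iteration for A x = b, viewed as
    integrating x' = b - A x to its steady state:
        x_{k+1} = x_k + h (b - A x_k).
    Stable only while every eigenvalue lam of A satisfies
    |1 - h*lam| < 1, which forces a small h when A is ill-conditioned."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + h * r
    return x, max_iter

# Hypothetical small nonsymmetric test matrix with known solution (1, 1, 1):
A = np.array([[3.0, 1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0, -1.0, 3.0]])
b = A @ np.ones(3)
x, iters = richardson(A, b, np.zeros(3), h=0.25)
```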
How about the "Explicit Pseudo-Transient Continuation" (E-Ψtc) method?
The following data give the answer (variable step size):
3) Another outstanding feature is that our method is very well suited to parallel computation.
4. Load Flow Calculation
Now we turn our focus to the main topic of this paper: load flow calculation.
1) The Construction of the Differential Equations.
The load flow equations can be formulated as a system of nonlinear equations.
In rectangular coordinate system, for PQ buses, the power equations are:
For PV buses, the voltage magnitude values Vi are specified and satisfy the relation:
(3.1) and (3.3) constitute the power equations for PV buses.
For elimination methods, the optimal ordering of the buses is the first task to solve. In our method, this problem does not need to be considered, because the memory requirement depends only on the number of buses and is not related to the ordering. As for the calculation process, as we have pointed out in Section 3, in our method each component can be calculated independently.
But every bus corresponds to two real equations. If the nonlinear equations are ordered as follows:
The corresponding differential equation should be arranged as follows:
This is because at the ith bus, the active power Pi is mainly related to fi and the reactive power Qi is mainly related to ei (  p. 205). The diagonal element of the Jacobian of   will then be the element with the largest absolute value in each row, which is crucial for any iterative method.
In fact, for elimination methods, every bus corresponds to a 2-dimensional sub-block, which is arranged in just the same way.
In order to make the discussion above clearer, let us give a further statement through a simple example:
Consider a 2-dimensional linear equations:
The matrix-vector form is:
If we use a fixed-point iteration method to solve this linear system, it will not converge.
This is because the two eigenvalues of the matrix A,   and  , do not satisfy the left-half-plane condition.
If we alter the order of x and y, we obtain the equations:
The two eigenvalues of the matrix B,   and  , are both located in the left half-plane. Choosing a proper parameter, the fixed-point iteration will converge.
From the differential equations’ point of view:
For differential equations:
Choosing a proper step size h, the Euler method will move step by step towards the stable point (1, 2).
However, for the differential equations:
no matter what h is, the iteration cannot converge.
The matrix B is diagonally dominant, but the matrix A is not. This is why, for every bus, we arrange the sequence according to  . Only in this way can we guarantee that every sub-block of the matrix is diagonally dominant.
We would like to emphasize this point again: the algebraic equations and the differential equations are exactly the same; only the unknown variables are ordered differently (one order is   and the other is  ), yet this causes completely different iterative processes, one divergent and the other convergent.
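Since the coefficients of the 2-dimensional example are not reproduced in the source, the sketch below uses a hypothetical pair of equations with the stated stationary point (1, 2) to show the same effect: swapping the equation order turns a non-dominant matrix A into a diagonally dominant B, moving all eigenvalues of -B into the left half-plane:

```python
import numpy as np

# Hypothetical system with solution (1, 2):  x + 2y = 5,  2x + y = 4.
A = np.array([[1.0, 2.0], [2.0, 1.0]])   # original order: not dominant
b_A = np.array([5.0, 4.0])
B = np.array([[2.0, 1.0], [1.0, 2.0]])   # equations swapped: dominant
b_B = np.array([4.0, 5.0])

def euler_steps(M, c, h, n):
    """Explicit Euler on x' = c - M x, starting from the origin."""
    x = np.zeros(2)
    for _ in range(n):
        x = x + h * (c - M @ x)
    return x

# eig(A) = {3, -1}: -A has an eigenvalue in the right half-plane, so
# Euler diverges for every h > 0.  eig(B) = {3, 1}: -B is stable, and
# h = 0.5 drives the iteration to the solution (1, 2).
good = euler_steps(B, b_B, 0.5, 100)
bad = euler_steps(A, b_A, 0.5, 60)
```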
Now let us see what is in reference  :
 Page 1736, right side:
This expression is not correct: it makes the Jacobian matrix lose diagonal dominance. That is to say, in some rows of the Jacobian matrix, the element with the largest absolute value will not be the diagonal element. Obviously, for a PV bus, one of the diagonal elements will be "−2f". If a flat start is used, then at the first step this diagonal element will be zero, which is the worst situation for any algorithm solving the linear algebraic equations. For elimination methods, almost all codes use the column pivoting technique, which automatically adjusts the order of the equations, so people usually ignore the "ordering problem". But for any iterative method, the ordering problem is crucial.
The correct expression should be:
Or the matrix form:
For the details, please see  page 184.
We can also change the order of equations;
We will give two preconditioning strategies:
a) Point diagonal preconditioning: the preconditioner is the diagonal matrix of the Jacobian. If one does not want to calculate the diagonal elements, then, according to our observation of the calculated data, this strategy can be further simplified.
For PQ bus:
For PV bus:
The preconditioning matrix can then be simplified to a constant diagonal matrix whose elements are just Bii and −2.0; we name it D1. The differential equation that we deal with is:
b) 2-dimensional block diagonal preconditioning: every bus corresponds to a 2-dimensional diagonal sub-block of the Jacobian matrix. These sub-blocks constitute a block diagonal matrix D2:
The differential equation that we deal with is:
Preconditioning not only raises efficiency but also helps us determine the parameter ε = 0.5. The diagonal elements of the Jacobian matrices of   and   are very close to 1, so it is fully possible that the ∞-norm of the Jacobian matrix is less than 3.0 (because the element with the largest absolute value lies on the diagonal of each row and the matrix is sparse), where   is the spectral radius (see reference  ). In this way, only one parameter, h, needs to be determined.
For ordinary load flow problems, if point diagonal preconditioning is used, h in the range of 5 to 7 is suggested. If an iteration does not converge, we suggest reducing h. If the iteration still fails to converge even when h is small, say h < 1, the iteration should be stopped, and the whole process should be checked to see what mistake was made.
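The effect of point diagonal preconditioning on the usable step size can be sketched on a small hypothetical system: dividing each component of F by the corresponding Jacobian diagonal element scales the system toward unit diagonal, so a step size near 1 stays stable even when the raw system would force a tiny h:

```python
import numpy as np

def precond_euler(F, d, x0, h, tol=1e-10, max_iter=100000):
    """Explicit Euler on the point-diagonally preconditioned system
    x' = -D^{-1} F(x), with D = diag(d) taken from the Jacobian
    diagonal:  x_{k+1} = x_k - h * F(x_k) / d."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx, np.inf) < tol:
            return x, k
        x = x - h * Fx / d
    return x, max_iter

# Hypothetical stiff linear system: unpreconditioned Euler would need
# h < 2/100 for stability, but after diagonal scaling h = 0.9 is stable.
A = np.array([[1.0, 0.1], [0.1, 100.0]])
b = A @ np.array([1.0, 1.0])             # exact solution (1, 1)
F = lambda v: A @ v - b
d = np.diag(A)
x, iters = precond_euler(F, d, np.zeros(2), h=0.9)
```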
5. 43-Bus System
For this system, 8 calculation results are given. We take the largest absolute value of the mismatch (LAVM) less than 1D−05 as the stopping condition (double precision):
1) Normal Newton method: 7 Newton iterations, LAVM = 0.2123D−08.
2) Optimal multiplier Newton method: 6 Newton iterations, LAVM = 0.4134D−08. (The values of the optimal multiplier µ are taken from  .)
3) Euler method: step size h = 0.00065 (the maximum step length meeting the stability requirement), 157,006 function evaluations, LAVM = 1D−05.
4) E-Ψtc without preconditioning: ε = 0.0005, h = 0.00085, 120,076 function evaluations, LAVM = 1D−05.
5) Point diagonal preconditioning E-Ψtc: ε = 0.5, h = 0.35, 42,505 function evaluations, LAVM = 0.9998D−05.
6) 2-dimensional block diagonal preconditioning E-Ψtc: 2008 function evaluations, LAVM = 0.9995D−05.
7) Variable step size, 2-dimensional block diagonal preconditioning E-Ψtc: ε = 0.5, h1 = 6.5, h2 = 7.0, h3 = 7.5, h4 = 8.0, h5 = 9.0, h6 = 10.0; 1335 function evaluations, LAVM = 0.9968D−05.
What conclusions can we draw from results 1)-7)?
From 3) we can see that this system is surely ill-conditioned, but from 1), the ill-conditioning is not severe: using double precision, only 7 Newton iterations are needed to make the LAVM = 0.2123D−08. Comparing 1) and 2), the optimal multiplier Newton method saves one iteration if the stopping condition is 1D−08. This is the contribution of  .
Comparing 3) and 7): the ratio of the numbers of function evaluations is 157006/1335 ≈ 118. For 7), besides the function evaluations, forty-two 2-dimensional sub-matrices must be inverted in each iteration, but the workload of inverting a 2-dimensional matrix is very small. So we can say that for ill-conditioned load flow problems, 7) is a good choice, especially when the system is large.
For 7), if the workload of 1335 function evaluations is considered too large to be accepted, here is another choice, based on the following observation: for the Newton method, the quadratic-convergence advantage can be exploited only when the norm of   is small, say less than 0.1, while for function-iteration methods the norm descends faster at the beginning of the iteration. So a hybrid calculation strategy is recommended.
8) At the beginning we use 6) (ε = 0.5, h = 6.0); after 132 function evaluations, the LAVM = 0.9821D−01. Then we switch to the normal Newton iteration, and after 4 iterations the LAVM = 0.3366D−12. Using the normal Newton method directly, 8 iterations are needed if LAVM < 1D−12 is required. That is to say, at the cost of 132 function evaluations, we save 4 normal Newton iterations.
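The hybrid strategy in 8) can be sketched generically: run cheap explicit iterations until the residual is small, then switch to Newton. The scalar test equation x = cos x below is only an illustration, not one of the paper's systems:

```python
import numpy as np

def hybrid_solve(F, J, x0, h=1.0, switch_tol=0.1, tol=1e-12):
    """Phase 1: cheap explicit steps x <- x - h * F(x) until the
    residual norm falls below switch_tol.  Phase 2: plain Newton,
    which converges quadratically once the residual is small."""
    x = np.asarray(x0, dtype=float)
    while np.linalg.norm(F(x), np.inf) >= switch_tol:
        x = x - h * F(x)
    while np.linalg.norm(F(x), np.inf) >= tol:
        x = x - np.linalg.solve(J(x), F(x))
    return x

# Illustration: F(x) = x - cos(x), whose root is x ~ 0.739085...
F = lambda v: np.array([v[0] - np.cos(v[0])])
J = lambda v: np.array([[1.0 + np.sin(v[0])]])
x = hybrid_solve(F, J, [0.0])
```

The explicit phase needs no linear solves, so each of its steps is far cheaper than a Newton step; the switch threshold 0.1 mirrors the observation above.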
What we want to say here is the following:
For any existing Newton program, our method can be used as a tool to choose a good starting point. Textbooks tell us: "The homotopy method can be used to generate a good starting value." Our method can thus be considered a homotopy method: it can be used independently, and it can also be embedded into a Newton program as a tool to choose the starting value.
For ordinary load flow problems, e.g. IEEE-39 and IEEE-118, our method is also suitable. It can be used in two ways: one is to solve the nonlinear equations directly, as we did in this paper  ; the other is to solve the linear equations arising from the Newton method  .
For nonlinear equations the solution is not unique, so a research paper should give the final numerical results, or at least some description for large systems, but neither   nor   did so. In this paper, listing all 8 final results is impractical because of limited space. We only point out that whether the LAVM is 1D−05 or 1D−12, the final results are almost the same; for bus 43, all of them are exactly the same: the voltage magnitude is 1.0763 and the angle is −11.176.
6. 11-Bus System
At the beginning, the initial LAVM = 0.1232E+01. We give the following results:
1) As   did, we run the normal Newton method; after 18 iterations we have the following LAVM sequence (single precision):
0.3046E+00, 0.1330E+00, 0.8979E−01, 0.1883E−01, 0.2195E−02, 0.7789E−03, 0.2557E−03, 0.8268E−01, 0.2074E−01, 0.5251E−02, 0.1383E−02, 0.4276E−03, 0.2818E−03, 0.1217E−02, 0.3891E−03, 0.3055E−03, 0.5773E−03, 0.2670E−03.
2) Optimal multiplier method (single precision), 18 iterations. The LAVM sequence is:
0.2858E+00, 0.1326E+00, 0.3429E−01, 0.1691E−01, 0.5123E−02, 0.4473E−03, 0.2464E−03, 0.2164E−03, 0.2155E−03, 0.2155E−03, 0.2155E−03, 0.2154E−03, 0.2155E−03, 0.2155E−03, 0.2154E−03, 0.2155E−03, 0.2155E−03, 0.2154E−03.
Using double precision, we obtain almost the same results as in 1) and 2) above. Based on this observation, we can see:
a) Oscillation occurred after 7 normal Newton iterations, but with the optimal multiplier method oscillation did not appear.
b) The LAVM cannot go down to zero, and this is not caused by any defect of the numerical precision of the computer or of the programming code.
c) After 7 iterations, the LAVM of the optimal multiplier method is 0.2464E−03; after 7 iterations, the LAVM of the normal Newton method is 0.2557E−03.
3) How about the E-Ψtc method? We use the point diagonal preconditioning strategy. Taking ε = 0.5 and h = 1.5, after 178 function evaluations the LAVM is less than 10−4 (0.995493 × 10−4).
Apart from a very small oscillation, the LAVM descends essentially monotonically (single and double precision give the same results). The final LAVM is about half that of the optimal multiplier method.
4) Using inexact Newton-GMRES(15), the single- and double-precision results are almost the same; here we give the double-precision result. (For every GMRES loop, we calculate the initial residual R0 and the final residual R1 of the linear equations, and take R1/R0 < 0.1, or a maximum number of restarts of 100, as the stopping condition for each Newton iteration.)
That is to say, after 7 Newton iterations there are in total (1 + 3 + 4 + 100 + 72 + 100 + 100 =) 380 restarts, i.e. 380 × 15 = 5700 GMRES iterations; finally the LAVM reaches 0.2796D−02 and then no longer descends.
To summarize the results above:
1) Using the normal Newton method, the oscillation occurs;
2) After 9 optimal multiplier iterations, the LAVM stays at 0.2155 × 10−3;
3) With E-Ψtc method, after 178 function evaluations, the LAVM reaches 0.995493 × 10−4;
4) Using inexact Newton-GMRES(15), after 5700 GMRES iterations, LAVM = 0.2796 × 10−2.
All the results we obtained show that for the 11-bus system the LAVM cannot go down to zero; in other words, for this system the solution of   does not exist. What we should do is seek the minimum of the function
7. Solving the Gradient Equation to Seek the Minimum for the 11-Bus System
The first choice should be the Gauss-Newton method:
If the Jacobian is a nonsingular matrix, the Gauss-Newton method reduces to the Newton method. As we have pointed out, after 7 iterations the oscillation occurred.
We can also solve the gradient equations using the Newton method. In order to compare the two results, we re-solved the system with the LINPACK subroutine DGECO, which also yields the condition number of the matrix sequence. The results are shown in Table 2 (Rcond means the reciprocal of the condition number).
Comparing the two results, we can see that before oscillation occurs, except for a tiny difference at the 6th iteration, both results are exactly the same. This shows that even if the condition number of   increases to the square of J's, we can still obtain good results. This fact proves again that the 11-bus system is not ill-conditioned.
What about point diagonal preconditioning E-Ψtc for the gradient equation?
The differential equation should be:
Here DD is the diagonal matrix of  . We have the following results in Table 3.
One may think 8858 gradient function evaluations too expensive, but please note that the step size ranges from 20.0 to 600.0. With fixed-point iteration (the Euler method), the step size cannot exceed 0.76. We tried this method with
Table 2. Results of Newton Method for gradient equation by LINPACK DGECO.
Table 3. Results of Point Diagonal Preconditioning E-Ψtc for gradient equation.
step sizes from 0.73 to 0.76; it needs 12,902,751 evaluations. The ratio of the numbers of gradient function evaluations is 12902751/8858 ≈ 1456. In the later period of the iteration, both iterations descend monotonically. When   reaches 0.0002089, the LAVM reaches 0.0001027, and they then remain unchanged, so we can determine that the minimum of   is 0.0002089. At this point, can we say that an efficient minimization method, E-Ψtc, has been achieved? Some may reject this statement on the grounds that the conventional Gauss-Newton method needs only 7 iterations to make the LAVM reach 0.0002557. Our answer is that after 7 iterations the LAVM continues to oscillate, and if the system is large, say of dimension 10,000, then solving 7 linear systems of dimension 10,000 surely requires much more work than 8858 function evaluations.
Usually we evaluate the accuracy of a solution by the mismatch of the equation, but for ill-conditioned problems this can be misleading. Here we give an extreme example.
Consider a 2-dimensional system of linear algebraic equations:
If we substitute 0.9911 for x1 and −0.4870 for x2, the mismatches are −10−8 and 10−8. One might think the most accurate solution has been obtained. Unfortunately, that is wrong: the exact solution of these equations is 2 and −2. From mathematical theory, judging the accuracy of the solution of a linear system requires knowledge of the condition number  , but this is very difficult to carry out, especially for a real-world problem.
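Because the coefficients of this 2-dimensional example are not reproduced in the source, the sketch below uses a different, hypothetical ill-conditioned matrix to demonstrate the same phenomenon: a candidate solution far from the exact one can still produce a tiny mismatch:

```python
import numpy as np

# Hypothetical nearly singular matrix (cond(A) is roughly 4e5):
A = np.array([[1.0, 1.0],
              [1.0, 1.00001]])
x_exact = np.array([2.0, -2.0])
b = A @ x_exact

x_wrong = np.array([1.0, -1.0])   # off by 1.0 in every component
residual = A @ x_wrong - b        # yet the mismatch is only ~1e-5
```

Here the error norm is about 1.4 while the residual norm is about 10^-5; for ill-conditioned systems, a small mismatch says little about the accuracy of the solution.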
We suggest the following easy ways to identify whether a system is ill-conditioned:
Do the calculation in both single and double precision; if the results basically coincide, we can consider the system well-conditioned.
Using the normal Newton method on the 43-bus system, single precision cannot make the LAVM less than 1E−5, but in double precision the LAVM easily goes below 1D−12, so the 43-bus system is surely ill-conditioned.
For the 11-bus system, the same process gives almost identical single- and double-precision results, yet the LAVM cannot go below 1E−4 or 1D−4. After 7 Newton iterations, the LAVMs are 0.255709E−03 and 0.255653D−03, and then oscillation occurs in both iterations. So we confirm that the 11-bus system is not ill-conditioned, but its LAVM cannot go down to zero.
In order to comprehend this phenomenon, consider the equation f(x) = x^2 + 0.0001 = 0.
The solution does not exist, but at x = 0.0, |f| reaches its minimum 0.0001. Let us see what happens if the normal Newton method is used. Starting from x0 = 0.5, after a sequence of monotone descents, at the 6th iteration x = 0.2995D−02 and f = 0.1090D−03; then oscillation occurs. At the 10th iteration x = −0.5698D−03 and f = 0.1003D−03, and oscillation occurs again. This oscillation continues while f hovers near its minimum 0.0001; at that stage the value of x is 0.8337D−04.
If instead f(x) = x^2 = 0, no oscillation occurs in the Newton iteration. After 536 iterations, x = 0.2223D−161 and f = 0.4941D−323. After 537 iterations, x and f stay at 0.1111D−161 and 0.0000 with no further variation. That is to say, we have an approximate solution, and its accuracy reaches 0.1111 × 10−161.
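The two scalar cases can be reproduced in a few lines; the iteration counts quoted above are from the text, while the sketch below only illustrates the qualitative behavior:

```python
def newton_scalar(f, df, x0, n_iter):
    """Plain Newton iteration on a scalar equation f(x) = 0."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / df(x)
    return x

# f(x) = x^2 + 1e-4 has no real root; f is bounded below by 1e-4,
# and the Newton iterates oscillate around x = 0 instead of settling.
f = lambda x: x * x + 1e-4
x_osc = newton_scalar(f, lambda x: 2.0 * x, 0.5, 20)

# f(x) = x^2 does have the root 0: Newton halves x at every step and
# the residual descends monotonically toward machine zero.
g = lambda x: x * x
x_root = newton_scalar(g, lambda x: 2.0 * x, 0.5, 200)
```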
In engineering design, making   or   extremely small, e.g.  , is almost impossible; if we can make   or   < 10−6, the design is usually considered acceptable.
Strictly speaking, for an engineering design problem the solution of  , in the mathematical sense, does not exist. What we can do is find the minimum, and the size of that minimum is determined by the requirements of the engineering project. Usually   is considered good enough, and   or   < 10−6 is acceptable.
BUT finding the minimum of   is an even harder task than solving  . In the process of solving  , if   oscillates around some small value, we can recognize that the minimum has been reached. For the 43-bus system, the minimum is about 1D−12, so this system is well designed. As for the 11-bus system, the E-Ψtc method gives the minimum 1D−04. We consider this to be determined by the design parameters, yet it was said to be "operational" (  pg. 1738).
To summarize our opinion: the 11-bus system is not ill-conditioned; the norm of   does not go down to zero; the LAVM can reach 10−4; and in engineering terms the system is "operational". Among all the methods we tested, the only one that can bring the LAVM down to 1D−04 is the E-Ψtc method.
Now let us discuss the reliability of algorithms. In much of the literature, too much emphasis has been placed on measuring the efficiency of the proposed algorithm and not enough on its reliability. For nonlinear equations, the non-uniqueness of the solution makes reliability an important subject. Reference   proposed a strategy to test reliability: give a standard starting point xs regarded as very close to the solution, then choose two further starting points, 10xs and 100xs, regarded as far away from the solution. This strategy simply does not work for power systems: almost everyone takes the flat start (1.0 + j*0) as the standard starting point xs, and choosing (10.0 + j*0) or (100.0 + j*0) as a starting point is unimaginable.
For the power system, the authors of this paper propose the following strategy to test the reliability:
1) Solve the equations in both single and double precision. If the LAVM of both goes below 10−5 or 10−6 and the results basically coincide, we can believe the results are reliable. If the results do not coincide, as with the 43-bus system, this warns us that the system could be ill-conditioned; in order to get reliable results, we can use double precision to drive the LAVM even lower.
2) If the iterations converge in neither single nor double precision, doubts may be raised about both the algorithm and the existence of the solution. We suggest first changing the algorithm to see what happens. If the iteration still does not converge, the existence problem should be considered. Observe all the iteration processes: if all of them diverge, we can affirm that the solution does not exist. If the iterate oscillates around some relatively small value, as with the 11-bus system, we can recognize that a solution exists; in engineering it is called "operational".
We do not suggest changing the initial values to improve convergence, because for power systems the flat start is good enough: these initial values are very close to the solution.
Although oscillation can also be regarded as a kind of divergence, in this paper we distinguish the two phenomena: by "divergence" we mean that drastic oscillation happens in the iteration process.
3) As we have mentioned before, the solution of a nonlinear system is not unique. A small LAVM does not guarantee that the solution found is the one we want. Reliability requires listing the results produced by different methods and comparing them, but neither   nor   gives the final results.   in particular should pay attention to this demand, because when testing a new method, reliability is more important than anything else. On the last page (pg. 3657) of  , there is a result for the 11-bus system cited from other literature. For comparison, we list three results (voltage magnitude and angle): 1) page 3657, 2) optimal multiplier (6 iterations), and 3) point diagonal preconditioning E-Ψtc.
The value of the angle of bus 2 is probably wrong.
From the three results in Table 4, we can see they basically coincide. The LAVM = 0.995493D−04 for the E-Ψtc method, and the LAVM = 0.447367D−03 after 6 optimal multiplier iterations. As we pointed out before, after 9 optimal multiplier iterations the LAVM stays at 0.2155D−03; in order to compare with the data on page 3657, we list the 6-iteration results instead of the 9-iteration results.
We find that the benefit of the optimal multiplier method occurs only in the initial stage of the Newton iteration. For the 43-bus system, with the stopping condition LAVM < 10−8, it saves one Newton iteration. For the 11-bus system, it eliminates the oscillation that occurs in the normal Newton method. In the later stage, one should beware of the loss of quadratic convergence. The optimal multiplier technique applies only to the Newton iteration; it is not available for other methods, such as the steepest descent method.
The 43-bus system is ill-conditioned; using double precision overcomes the "convergence trouble".
The 11-bus system is not ill-conditioned and has no convergence trouble. The problem is that the LAVM of   is not as small as we usually expect, say 10−5 or 10−6; it is likely 10−4. We use the word "likely" to express that we are not sure, because we resort to solving   to get the LAVM. From
Table 4. Comparison of three results.
Note: For the data on page 3657, in the front of angle should have a negative sign.
strictly mathematical theory, we should solve the gradient equation. We surely did it. The result tells us, for the 11-bus system, LAVM = 0.0001027, the minimum of = 0.0002089. We are very positive considering these results are correct.
 has tested Brown method, as we pointed out that “The method did not have a clear algorithmic description and hence was hard to implement”. The authors of  did a hard work, applied this method to Load Flow Problems. Their effort should be appreciated, but the inference of Newton method when it was used to solve the 43 and the 11 bus systems―“DIVERGENT” was wrong.
For the 11-bus system, applying the Newton method cannot make LAVM down to zero. This is determined by the parameters of the system, and is not caused by the defect of Newton method. “Divergent” and “cannot make LAVM down to zero” are two different concepts.  had the statement that Brown methods “converge” with only 5 iterations, but did not give any data to describe what the “converge” means.
In this paper, we introduce a new method, E-Ψtc, and demonstrate it on two examples: one is a tough nonlinear equation; the other is a super-large-scale ill-conditioned linear system. These results fully demonstrate the capability of the new method. For Load Flow Problems, a preconditioning technique is needed: if the system is ill-conditioned, use 2-dimensional block diagonal preconditioning; for well-conditioned problems, use point diagonal preconditioning. As for reliability, we compared our results with those of existing methods and found them basically coincident (we did not list the results of the 43-bus system).
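As an illustration of the preconditioning choice just described, the following sketch applies M⁻¹r, where M is either the point-diagonal or the 2 × 2 block-diagonal part of the coefficient matrix. The function name and the dense pure-Python layout are ours, not taken from the E-Ψtc implementation; a real solver would use sparse storage.

```python
def block_jacobi_apply(A, r, block=2):
    """Apply M^{-1} r, where M is the block-diagonal part of A:
    2x2 blocks (ill-conditioned systems) or 1x1 point diagonal
    (well-conditioned systems), per the text. A is a dense list of rows."""
    n = len(A)
    z = [0.0] * n
    for i in range(0, n, block):
        if block == 1 or i + 1 >= n:
            z[i] = r[i] / A[i][i]          # point-diagonal (Jacobi) step
        else:
            # Invert the 2x2 block [[a, b], [c, d]] analytically.
            a, b, c, d = A[i][i], A[i][i + 1], A[i + 1][i], A[i + 1][i + 1]
            det = a * d - b * c
            z[i]     = ( d * r[i] - b * r[i + 1]) / det
            z[i + 1] = (-c * r[i] + a * r[i + 1]) / det
    return z
```

In a load-flow setting the 2 × 2 blocks would pair each bus's two mismatch equations; that pairing is our assumption here, not a detail stated in the text.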
Appendix (About Reference  )
In conclusion (3) of  , the authors said: “In the case that the solution continues to oscillate with the normal N-R method…
Applying the optimal multiplier, if the value of µ* approaches zero as the iteration count increases, the solution does not exist from the initial estimate...”. The authors of this paper can give an example showing that this conclusion is incorrect.
For the 43-bus system, if the initial voltages of all buses are set to (0.7 + j*0), the normal Newton method (double precision) continues to oscillate. The optimal multipliers of the optimal multiplier method are µ = 0.0411, 0.0218, 0.0217, 0.0000, 0.0009, 0.0003, 0.0048, 0.0005, 0.0000, 0.0000, … According to conclusion (3), the solution does not exist. But in fact the solution does exist, and we can obtain it with the damped Newton method. If the damping factor is small enough, e.g., α = 0.007, the solution is obtained after 1957 iterations, with LAVM less than 10−5 (incidentally, with α = 0.008, after 2000 iterations LAVM = 0.1516D+03). That is to say, for the initial value (0.7 + j*0) the solution does exist, but the OPTIMAL MULTIPLIER METHOD cannot find it and gives incorrect information. This is why the authors of this paper think conclusion (3) is incorrect.
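The damped Newton iteration used above can be sketched as follows. A toy scalar equation stands in for the 43-bus equations, whose data are not reproduced here; the trade-off is the same one reported in the text: a small fixed α suppresses the oscillation of the full Newton step at the cost of many more iterations.

```python
import math

def damped_newton(f, df, x0, alpha, tol=1e-5, max_iter=5000):
    """Damped Newton iteration: x <- x + alpha * dx, with dx = -f(x)/f'(x).

    A small fixed damping factor alpha shrinks the Newton step enough to
    suppress oscillation/overshoot, at the cost of many more iterations.
    Returns (x, number of iterations used)."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # LAVM-style stopping test
            return x, k
        x += alpha * (-fx / df(x))
    return x, max_iter

# Toy illustration (not the 43-bus system): on atan(x) = 0 from x0 = 2,
# the full Newton step (alpha = 1) overshoots and diverges; heavy
# damping converges, slowly, to the root x = 0.
root, iters = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x),
                            x0=2.0, alpha=0.1)
```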
Now let us discuss the statement of reference  : “supposing that the correction vector Δx is obtained by some way”. The authors of this paper consider this statement also incorrect.
In fact,  only gives “APPLICATION OF THE OPTIMAL MULTIPLIER TO THE N-R METHOD” (page 1737, right side). If the statement held, the authors of  should have given a similar description, “APPLICATION OF THE OPTIMAL MULTIPLIER TO THE STEEPEST DESCENT METHOD”, but there is none. In fact, the derivation is available only for the N-R METHOD and cannot be generalized to the other methods. So the statement should read: “supposing that the correction vector Δx is obtained by the N-R METHOD”.
What is the benefit of the Optimal Multiplier Method for the 43-bus system? The following figures answer this question. For a flat start, the initial LAVM is 0.5607 × 101; the successive iteration values for the normal Newton method and the Optimal Multiplier Method are as follows:
0.3243D+01, 0.6209D+00, 0.1395D+00, 0.2084D−01, 0.1559D−02, 0.1657D−04,
0.2123D−08, 0.1656D−12, 0.4639D−12, 0.3622D−12 (Normal Newton Method)
0.1078D+01, 0.2838D+00, 0.3082D−01, 0.1252D−02, 0.3882D−05,
0.4134D−08, 0.5128D−10, 0.2423D−12, 0.3883D−12, 0.2913D−12
(Optimal Multiplier Method, Optimal Multiplier µ was taken from  )
From the figures above, we can see that with stopping condition LAVM < 1D−08, the Optimal Multiplier Method needs 6 iterations, saving one iteration compared with the normal Newton iteration.
But on the other hand, we see that at the Optimal Multiplier Method’s 5th and 6th iterations the LAVM is 0.3882D−05 and 0.4134D−08 respectively. That is to say, if the values of µ are taken from  , the Optimal Multiplier Method loses its quadratic-convergence property at the 6th iteration.
The authors of this paper repeated the work of  and found that our values of µ differ slightly from those of  ; we list both as follows:
Ref.  :
0.6522, 1.1043, 1.0740, 1.1333, 1.0006, 1.0011, 0.9876, 1.0011, 1.0001, 0.9876.
Ours:
0.6525, 1.1042, 1.0746, 1.1311, 1.0011, 1.0000, 1.0000, 1.0117, 1.0449, 1.01002
Our corresponding LAVMs are:
0.1073D+01, 0.2840D+00, 0.3088D−01, 0.1296D−02, 0.4415D−05,
0.4944D−10, 0.1385D−12, 0.1818D−12, 0.2634D−12, 0.3000D−12.
In order to check the values of µ, we used another method for the calculation.
A 3rd-degree polynomial equation must have a real root, which we can find with the normal Newton method. Deflating by this root leaves a 2nd-degree polynomial equation, and solving it yields the remaining two roots, so all three roots are obtained. Among them, the one closest to 1 should be the optimal multiplier µ.
The results are as follows:
0.6525, 1.1042, 1.0746, 1.1311, 1.0011, 1.0000, 1.0000, 1.0000, 1.0000
We believe these results are even more reasonable.
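The procedure just described (find the cubic’s real root by Newton’s method, deflate to a quadratic, pick the root closest to 1) can be sketched as follows. The actual cubic coefficients produced at each load-flow iteration are not reproduced in the text, so generic coefficients stand in.

```python
import cmath

def optimal_multiplier(c0, c1, c2, c3):
    """For the cubic c0 + c1*mu + c2*mu**2 + c3*mu**3 = 0, return the real
    root closest to 1, following the procedure in the text: a cubic always
    has a real root (found here by Newton's method); deflating by it leaves
    a quadratic giving the other two (possibly complex) roots."""
    f  = lambda m: c0 + m * (c1 + m * (c2 + m * c3))
    df = lambda m: c1 + m * (2.0 * c2 + m * 3.0 * c3)
    mu = 1.0                          # the multiplier is expected near 1
    for _ in range(200):
        step = f(mu) / df(mu)
        mu -= step
        if abs(step) < 1e-14:
            break
    # Deflate: c3*m^3 + c2*m^2 + c1*m + c0 = (m - mu) * (c3*m^2 + b1*m + b0)
    b1 = c2 + c3 * mu
    b0 = c1 + b1 * mu
    disc = cmath.sqrt(b1 * b1 - 4.0 * c3 * b0)
    roots = [complex(mu), (-b1 + disc) / (2.0 * c3), (-b1 - disc) / (2.0 * c3)]
    real = [z.real for z in roots if abs(z.imag) < 1e-8]
    return min(real, key=lambda z: abs(z - 1.0))
```

For example, for the cubic with roots 1, 2 and 3 (coefficients −6, 11, −6, 1) the function returns 1, the root closest to the unit multiplier.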
The corresponding LAVMs are:
0.1073D+01, 0.2840D+00, 0.3088D−01, 0.1296D−02, 0.4415D−05,
0.4944D−10, 0.2855D−12, 0.2423D−12, 0.2307D−12, 0.2394D−12.
The results above show that, in our calculation, the Optimal Multiplier Method does not lose its quadratic-convergence property.
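The quadratic-convergence claim can be checked numerically with the ratios e_{k+1}/e_k², which stay O(1) under quadratic convergence. A sketch using the LAVM values quoted above:

```python
def quadratic_ratios(errors):
    """Ratios e_{k+1} / e_k**2; roughly O(1) under quadratic convergence."""
    return [errors[k + 1] / errors[k] ** 2 for k in range(len(errors) - 1)]

# Our LAVM sequence (iterations 3-6) from the list above.
ratios = quadratic_ratios([0.3088e-1, 0.1296e-2, 0.4415e-5, 0.4944e-10])
# All three ratios are between 1 and 3: consistent with quadratic convergence.

# Contrast: the pair 0.3882e-5 -> 0.4134e-8 quoted from the reference's
# iteration sequence gives a ratio of about 274, far from O(1), i.e.
# quadratic convergence is lost at that step.
lost = 0.4134e-8 / 0.3882e-5 ** 2
```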
 Iwamoto, S. and Tamura, Y. (1981) A Load Flow Calculation Method for Ill-Conditioned Power Systems. IEEE Transactions on Power Apparatus and Systems, PAS-100, 1736-1743.
 Tripathy, S.C., Durga Prasad, G., Malik, O.P. and Hope, G.S. (1982) Load-Flow Solutions for Ill-Conditioned Power Systems by a Newton-Like Method. IEEE Transactions on Power Apparatus and Systems, PAS-101, 3648-3657.
 Tinney, W.F. and Hart, C.E. (1967) Power Flow Solution by Newton’s Method. IEEE Transactions on Power Apparatus and Systems, PAS-86, 1449-1456.
 Wallach, Y. (1968) Gradient Methods for Load Flow Problems. IEEE Transactions on Power Apparatus and Systems, PAS-87, 1314-1318.
 Flueck, A.J. and Chiang, H.-D. (1998) Solving the Nonlinear Power Flow Equations with an Inexact Newton Method Using GMRES. IEEE Transactions on Power Systems, 13, 267-273.
 Chen, Y. and Shen, C. (2006) A Jacobian-Free Newton-GMRES (m) Method with Adaptive Preconditioner and Its Application for Power Flow Calculations. IEEE Transactions on Power Systems, 21, 1096-1103.
 Moré, J.J. and Cosnard, M.Y. (1979) Numerical Solution of Nonlinear Equations. ACM Transactions on Mathematical Software, 5, 64-85.
 Han, T. and Han, Y. (2010) Solving Large Scale Nonlinear Equations by a New ODE Numerical Integration Method. Applied Mathematics, 1, 222-229.
 Han, T., Luo, X. and Han, Y. (2011) Solving Large Scale Unconstrained Minimization Problems by a New ODE Numerical Integration Method. Applied Mathematics, 2, 527-532.
 Han, T. and Han, Y. (2013) Numerical Solution for Super Large Scale Systems. IEEE Access, 1, 537-544.
 Moré, J.J., Garbow, B.S. and Hillstrom, K.E. (1981) Testing Unconstrained Optimization Software. ACM Transactions on Mathematical Software, 7, 17-41.
 Watson, L.T., Sosonkina, M., Melville, R.C., Morgan, A.P. and Walker, H.F. (1997) Algorithm 777: HOMPACK90: A Suite of Fortran 90 Codes for Globally Convergent Homotopy Algorithms. ACM Transactions on Mathematical Software, 23, 514-549.
 Watson, L.T. (1990) Globally Convergent Homotopy Algorithms for Nonlinear Systems of Equations. Nonlinear Dynamics, 1, 143-191.
 Xu, D.C. and Han, T.M. (2012) Applied Research of a New ODE Method in the Power Flow Computation of Power System. 2012 IEEE Power Engineering and Automation Conference (PEAM), Wuhan, 18-20 September 2012, 1-4.
 Tian, S.M. and Han, T.M. (2012) A New Method Was Used for Solving Linear Algebraic Equations Arising from Inexact Newton Method Applying to Power Flow Problems. 2012 International Conference on Electronic Information and Electrical Engineering, Changsha, 15-17 June 2012.