Received 8 November 2015; accepted 23 May 2016; published 26 May 2016
Let A ∈ R^{m×n}, B ∈ R^{p×q}, and C ∈ R^{m×q} be given matrices. Then the matrix equation of the following form, in the unknown X ∈ R^{n×p}, is called a "linear matrix equation":

A X B = C.  (1)
Matrix equations arise in control theory, signal processing, model reduction, image restoration, ordinary and partial differential equations, and several other applications in science and engineering. Various approaches, both direct and iterative, are available to compute the solution of these equations.
The HPM, first proposed by He, was further developed by scientists and engineers. This general strategy, which combines the classical perturbation method with the notion of homotopy from topology, continuously deforms a difficult problem into a simple one that can be solved easily. Moreover, HPM does not require a small parameter in the equation, and it has the significant advantage of providing an analytical approximate solution to a wide range of linear and nonlinear problems in the applied sciences. In most cases, HPM yields very rapid convergence of the solution series, and usually only a few iterations are required to obtain very accurate solutions, particularly when an improved version is applied.
In terms of linear algebra, Keramati first applied HPM to solve linear systems of equations. The splitting matrix of that method is only the identity matrix; however, the method does not converge for systems whose iteration matrix has spectral radius greater than one. To widen its applicability, an auxiliary parameter and an auxiliary matrix were added to the homotopy method by Liu, who adjusted the Richardson, Jacobi, and Gauss-Seidel methods to choose the splitting matrix. Edalatpanah and Rashidi focused on a modification of HPM for solving systems of linear equations by choosing an auxiliary matrix that increases the rate of convergence. Furthermore, Saeidian et al. proposed an iterative method for solving linear systems based on the concept of homotopy, and showed that their modified method converges in more cases. More recently, Khani et al. combined HPM with different splittings for solving systems of linear equations, and reported that this modification performs better than the standard Homotopy Perturbation Method (HPM) for linear systems.
To the best of our knowledge, however, HPM has not yet been modified to solve a matrix equation. In this paper, the main contribution is to suggest an improvement of HPM for finding an approximate solution of (1). Moreover, the necessary and sufficient conditions for convergence of the modified method are investigated. Finally, some numerical experiments and applications are presented.
2. Solution of the Linear Matrix Equation
In this section, the conditions under which Equation (1) has a solution are first stated. Then, the working relations of HPM are derived. Finally, convergence of the HPM series is analyzed in detail.
2.1. Existence and Uniqueness
The following theorems characterize the existence and uniqueness of the solution of Equation (1).
Theorem 2.1. The linear matrix Equation (1) has a solution if and only if A A^+ C B^+ B = C. Equivalently, a solution exists if and only if the column space of C is contained in that of A and the row space of C is contained in that of B, where A^+ denotes the Moore-Penrose pseudo-inverse of the matrix A.
Theorem 2.2. Let A, B, C be as in Equation (1), and suppose that A A^+ C B^+ B = C. Then any matrix of the form

X = A^+ C B^+ + Y − A^+ A Y B B^+

is a solution of (1), where Y is an arbitrary matrix of appropriate size. Furthermore, all solutions of Equation (1) are of this form.
Theorem 2.3. A solution of the linear matrix Equation (1) is unique if and only if A has full column rank and B has full row rank. Alternatively, when A and B are square, (1) has a unique solution if and only if A and B are nonsingular.
Remark 2.4. It should be emphasized that when A (respectively B) is a square and nonsingular matrix, A^+ = A^{-1} (respectively B^+ = B^{-1}), and so X = A^{-1} C B^{-1}. Thus, there is no arbitrary component, leaving only the unique solution. Moreover, in numerical linear algebra, avoiding the explicit computation of matrix inverses is recommended because of the increase in computational complexity and the loss of accuracy.
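The solvability test of Theorem 2.1 and the particular solution of Theorem 2.2 can be checked numerically with the pseudo-inverse; the sketch below uses a hypothetical rank-deficient A and a right-hand side built to be consistent (the matrices A, B, X0 are illustrative, not taken from this paper):

```python
import numpy as np

# Hypothetical 3x3 example; A is rank-deficient so the pseudo-inverse case matters.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [3.0, 4.0, 1.0]])   # third row = row1 + row2, so rank(A) = 2
B = np.eye(3)
Ap = np.linalg.pinv(A)            # Moore-Penrose pseudo-inverse A^+
Bp = np.linalg.pinv(B)

# Build a consistent right-hand side C = A X0 B, so a solution surely exists.
X0 = np.ones((3, 3))
C = A @ X0 @ B

# Solvability test of Theorem 2.1: A A^+ C B^+ B == C
consistent = bool(np.allclose(A @ Ap @ C @ Bp @ B, C))

# Particular solution of Theorem 2.2 (arbitrary part Y = 0): X = A^+ C B^+
X = Ap @ C @ Bp
residual = np.linalg.norm(A @ X @ B - C)
```

Here `consistent` is True and `residual` is at roundoff level, matching the theorems.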
2.2. Homotopy Perturbation Method
Now, we are ready to apply the convex homotopy function in order to obtain the solution of the linear matrix equation. A general homotopy method for solving (1) can be described by setting

L(X) = A X B − C,  F(X) = X − X_0,

where X_0 is an initial guess. A convex homotopy is then given by

H(X, p) = (1 − p) F(X) + p L(X) = 0,  p ∈ [0, 1],

so that for p = 0 and p = 1 the homotopy is defined by H(X, 0) = F(X) and H(X, 1) = L(X), respectively.
Notice that F is an operator with a known solution. In this case, HPM utilizes the homotopy parameter p as an expanding parameter to obtain

X = V_0 + p V_1 + p^2 V_2 + p^3 V_3 + ...,

and it gives an approximation to the solution of (1) as

X = lim_{p→1} (V_0 + p V_1 + p^2 V_2 + ...) = Σ_{k=0}^∞ V_k.
By substituting the expansion of X into the convex homotopy, equating the terms with identical powers of p, and simplifying, we obtain

p^0: V_0 = X_0,
p^1: V_1 = C − A V_0 B,
p^{k+1}: V_{k+1} = V_k − A V_k B,  k ≥ 1.

If we take X_0 = 0, this implies that

V_0 = 0,  V_1 = C,  V_{k+1} = V_k − A V_k B,  k ≥ 1.

Hence, the solution can be expressed in the following form:

X = Σ_{k=1}^∞ V_k.
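As a sketch of this scheme, assuming Equation (1) reads A X B = C and using the recurrence V_1 = C, V_{k+1} = V_k − A V_k B with X_0 = 0 (the small test matrices below are illustrative, chosen so that the series converges):

```python
import numpy as np

def hpm_solve(A, B, C, terms=200):
    """Sum the homotopy series X = V1 + V2 + ... with
    V1 = C and V_{k+1} = V_k - A V_k B (initial guess X0 = 0)."""
    V = C.copy()
    X = np.zeros_like(C)
    for _ in range(terms):
        X += V
        V = V - A @ V @ B
    return X

# Illustrative matrices with rho(I - kron(B.T, A)) < 1, so the series converges.
A = np.array([[0.9, 0.1],
              [0.0, 1.1]])
B = np.array([[1.0, 0.05],
              [0.05, 0.9]])
X_exact = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
C = A @ X_exact @ B          # consistent right-hand side

X = hpm_solve(A, B, C)
err = np.linalg.norm(X - X_exact)
```

The error `err` is at roundoff level here because the iteration contracts geometrically.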
Remark 2.5. It should be pointed out that we have focused on the solution of the matrix equation in the case where all matrices are square. When the matrices are not square, the homotopy function cannot be constructed because of the disagreement between the dimensions of the matrices: F(X) = X − X_0 has the size of X (n × p), while L(X) = A X B − C has size m × q, so the convex combination (1 − p) F(X) + p L(X) is undefined unless m = n and p = q. Thus, all matrices in Equation (1) are considered to be square.
2.3. Convergence Analysis
To verify whether the homotopy series converges, we give the following analysis. Notice that throughout the following theorems, ρ(·) denotes the spectral radius, defined by ρ(M) = max{|λ| : λ ∈ σ(M)}, where σ(M) is the spectrum of the matrix M.
Theorem 2.6. The sequence of partial sums X_n = Σ_{k=1}^n V_k is a Cauchy sequence if ||I − B^T ⊗ A|| < 1 for some consistent matrix norm.
Proof: Using the identity vec(A V B) = (B^T ⊗ A) vec(V), it is clear that vec(V_{k+1}) = (I − B^T ⊗ A) vec(V_k). Then, by taking norms,

||vec(V_{k+1})|| ≤ ||I − B^T ⊗ A|| ||vec(V_k)|| ≤ ||I − B^T ⊗ A||^k ||vec(C)||.

Hence, if ||I − B^T ⊗ A|| < 1, we obviously have ||V_k|| → 0 geometrically, and then, for m > n,

||X_m − X_n|| ≤ Σ_{k=n+1}^m ||V_k|| ≤ ||C|| Σ_{k=n}^{m−1} ||I − B^T ⊗ A||^k → 0 as n → ∞.

Thus {X_n} is a Cauchy sequence. This completes the theorem. □
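Since vec(A V B) = (B^T ⊗ A) vec(V), the convergence condition can be checked numerically by forming the iteration matrix I − B^T ⊗ A explicitly. A minimal sketch with illustrative diagonal matrices (not from the paper):

```python
import numpy as np

def hpm_iteration_radius(A, B):
    """Spectral radius of the HPM iteration matrix I - kron(B.T, A);
    the homotopy series converges when this value is < 1."""
    n = A.shape[0]
    T = np.eye(n * n) - np.kron(B.T, A)
    return max(abs(np.linalg.eigvals(T)))

# Illustrative pair: both close to the identity, so the radius is small.
A = np.diag([0.9, 1.0, 1.1])
B = np.diag([1.0, 0.95, 1.05])
rho = hpm_iteration_radius(A, B)
```

For diagonal A and B the eigenvalues of B^T ⊗ A are the pairwise products of the diagonal entries, so `rho` here equals max |1 − a_ii b_jj| ≈ 0.155, well below 1.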
Definition 2.7. A matrix M = (m_ij) is called strictly row diagonally dominant (SRDD) if, for every row i, we have |m_ii| > Σ_{j≠i} |m_ij|.
Theorem 2.8. Consider the matrix G = I − D^{-1} M, where D = diag(m_11, ..., m_nn) is formed from the diagonal elements of the matrix M. If the matrix M is SRDD, then ρ(G) < 1.

Proof: Suppose M is SRDD and let G = I − D^{-1} M; then it can easily be shown that g_ii = 0 and g_ij = −m_ij / m_ii for j ≠ i. Since M is an SRDD matrix, it is clear that Σ_{j≠i} |m_ij| / |m_ii| < 1 for every i. Hence,

ρ(G) ≤ ||G||_∞ = max_i Σ_{j≠i} |m_ij| / |m_ii| < 1,

which completes the proof. □
In view of Theorem 2.8, an important question is: "Is a matrix built from SRDD factors still SRDD whenever A and B are SRDD matrices?" The answer to this question is negative. Firstly, the product of two SRDD matrices is not necessarily SRDD; as a counterexample, the SRDD matrices [3 2; 1 2] and [2 1; 1 3] have the product [8 9; 4 7], whose first row is not diagonally dominant. Secondly, the inverse of an SRDD matrix is not necessarily SRDD; as a counterexample, consider the SRDD matrix [3 2; 0 1], which has the inverse [1/3 −2/3; 0 1].
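Both negative answers are easy to verify mechanically; the sketch below uses small SRDD matrices chosen for illustration:

```python
import numpy as np

def is_srdd(M):
    """Strictly row diagonally dominant: |m_ii| > sum_{j != i} |m_ij| for all i."""
    d = np.abs(np.diag(M))
    off = np.abs(M).sum(axis=1) - d
    return bool(np.all(d > off))

# The product of two SRDD matrices need not be SRDD.
P = np.array([[3.0, 2.0], [1.0, 2.0]])
Q = np.array([[2.0, 1.0], [1.0, 3.0]])
prod_srdd = is_srdd(P @ Q)             # P @ Q = [[8, 9], [4, 7]]: row 1 fails

# The inverse of an SRDD matrix need not be SRDD.
R = np.array([[3.0, 2.0], [0.0, 1.0]])
inv_srdd = is_srdd(np.linalg.inv(R))   # inv(R) = [[1/3, -2/3], [0, 1]]: row 1 fails
```

Both flags come out False even though P, Q, and R are each SRDD.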
Now, if A is nonsingular, by pre-multiplying both sides of Equation (1) by the matrix A^{-1}, the following equation can be obtained:

X B = A^{-1} C.

To be more precise, by using the convex homotopy function for this reduced equation (again with X_0 = 0), we can easily verify that

V_1 = A^{-1} C,  V_{k+1} = V_k (I − B),  k ≥ 1.
In this part, we would like to show that the resulting series Σ_k V_k converges. Thus, we first need the following definition and theorem.
Definition 2.9. Let A, M, N be three matrices satisfying A = M − N. The pair of matrices (M, N) is a regular splitting of A if M is nonsingular and M^{-1} and N are nonnegative.
Theorem 2.10. Let B be a nonsingular matrix such that B^{-1} and I − B are nonnegative. Then the sequence

V_1 = A^{-1} C,  V_{k+1} = V_k (I − B),  k ≥ 1,

converges; that is, the series Σ_{k=1}^∞ V_k is convergent.
Proof: Suppose that B is a nonsingular matrix such that B^{-1} and I − B are nonnegative. Then (I, I − B) is a regular splitting of B, and by the classical theorem on regular splittings it can be obtained that ρ(I − B) < 1. This implies that

Σ_{k=1}^∞ V_k = A^{-1} C Σ_{k=0}^∞ (I − B)^k = A^{-1} C B^{-1}

is a convergent series. □
3. Numerical Experiments
In this section, some numerical illustrations are provided. All computations have been carried out in MATLAB R2012a, in double-precision arithmetic (unit roundoff about 2.2 × 10^{-16}). Moreover, the error of the approximations has been measured by the residual stopping criterion

Err = ||C − A X̃ B||,

where X̃ is the approximate solution obtained by HPM.
Example 3.1. The first example approximates the solution of the equation A X B = C by using the modified HPM. For this purpose, the two matrices A and B are considered:
After evaluating the inverse of A and multiplying it by C, it is observed that the matrix governing the iteration is SRDD, and thus the matrix A^{-1} C can be obtained.
Hence, considering seven terms of the homotopy series,

X ≈ V_1 + V_2 + ... + V_7,

the approximate solution is obtained as follows:
For comparison, the exact solution of the equation is
In conclusion, it can be seen that the approximation is in good agreement with the exact solution, as the small residual error confirms.
Example 3.2. In this example, two strictly row diagonally dominant matrices are assumed as follows:
The solution of the matrix equation is approximated using HPM. The residual errors have been measured for different dimensions of the matrices and for different numbers of terms in the homotopy series; the results are reported in Table 1. In this example, as the dimension of the matrices increases, the error of the approximation grows gradually. In addition, considering more terms of the homotopy series makes the approximation more accurate.
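The matrices of this example are not reproduced here; the trend that more terms of the series improve the residual can be illustrated with assumed tridiagonal SRDD test matrices (a sketch, not the paper's data):

```python
import numpy as np

def srdd_tridiag(n, diag=1.0, off=0.1):
    """Assumed test matrix: tridiagonal and strictly row diagonally dominant."""
    return (np.diag(np.full(n, diag))
            + np.diag(np.full(n - 1, off), 1)
            + np.diag(np.full(n - 1, off), -1))

def hpm_residual(A, B, C, terms):
    """Frobenius residual ||A X B - C|| after summing `terms` series terms."""
    V, X = C.copy(), np.zeros_like(C)
    for _ in range(terms):
        X += V
        V = V - A @ V @ B
    return np.linalg.norm(A @ X @ B - C)

n = 20
A, B = srdd_tridiag(n), srdd_tridiag(n)
C = A @ np.ones((n, n)) @ B          # consistent right-hand side
res5 = hpm_residual(A, B, C, 5)
res15 = hpm_residual(A, B, C, 15)    # more terms -> smaller residual
```

With these matrices the iteration contracts geometrically, so `res15` is several orders of magnitude below `res5`.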
Example 3.3 (Application to matrix inversion). If we substitute B = C = I in the matrix Equation (1), then we have A X = I, whose solution is X = A^{-1}. Thus, by applying the HPM to this equation, the inverse of the matrix A can easily be evaluated. For this purpose, assume the following matrix:
This matrix is diagonally dominant and well conditioned. We have used the MATLAB command inv(A), which has very small error (its output is considered the exact solution), and compared it with the HPM calculation. The results are shown in Figure 1. It is clearly seen that the approximate solutions for the different dimensions (N = 10, 20, 30, 40) are very close to the exact solutions.
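A sketch of this inversion procedure, assuming the substitution B = C = I and the series V_1 = I, V_{k+1} = V_k − A V_k; the 3 × 3 matrix below is an assumed diagonally dominant example with unit diagonal, not the matrix of this example:

```python
import numpy as np

def hpm_inverse(A, terms=100):
    """Invert A by applying the homotopy series to A X = I
    (Equation (1) with B = C = I): V1 = I, V_{k+1} = V_k - A V_k."""
    n = A.shape[0]
    V = np.eye(n)
    X = np.zeros((n, n))
    for _ in range(terms):
        X += V
        V = V - A @ V
    return X

# Assumed diagonally dominant matrix with unit diagonal, so rho(I - A) < 1.
A = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.2, 1.0]])
X = hpm_inverse(A)
err = np.linalg.norm(X @ A - np.eye(3))
```

The partial sums Σ (I − A)^k converge to A^{-1} here, so `err` is at roundoff level.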
4. Conclusion
In this work, the linear matrix equation A X B = C has been solved by improving the well-known homotopy perturbation method. Numerical experiments demonstrated that considering more terms of the approximation decreases the error dramatically. Furthermore, the more strongly row diagonally dominant the matrices are, the faster the homotopy series converges. It is also interesting that, in the special case B = C = I, the proposed method can compute the inverse of a matrix efficiently. Moreover, the author found that this method could be generalized to obtain the solutions of other matrix equations.
Acknowledgements
Special thanks go to the anonymous referee for some valuable suggestions, which have resulted in the improvement of this work. This work is supported by Islamic Azad University, Robat Karim University, Tehran, Iran.
Table 1. Comparison error for different dimensions in Example 3.2.
Figure 1. Comparison error between MATLAB command and HPM for matrix inversion.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this article.