Preconditioned Iterative Method for Regular Splitting

Author(s)
Toshiyuki Kohno

ABSTRACT

Several preconditioners have been proposed to improve the convergence rate of iterative methods derived from splittings. In this paper, a comparison theorem for the preconditioned iterative method based on a regular splitting is proved, and convergence and comparison results for an arbitrary preconditioner are given. This comparison theorem indicates the possibility of finding new preconditioners and splittings. The purpose of this paper is to show that the preconditioned iterative method yields a new splitting satisfying the regular or weak regular splitting condition. New combination preconditioners are also proposed. In order to demonstrate the validity of the comparison theorem, some numerical examples are shown.


1. Introduction

There are many iterative methods for solving a linear system of equations,

$Ax=b.$ (1)

Here, $A$ is an $n\times n$ nonsingular M-matrix, and $x$ and $b$ are $n$-dimensional vectors. The matrix $A$ arising from various problems is usually large and sparse, so a large amount of computation time and memory is needed to solve such problems efficiently. Therefore, various preconditioners and iterative methods have been proposed. In this paper, the Gauss-Seidel iterative method is treated as the classical iterative method. Basically, a classical iterative method is defined by a splitting of the coefficient matrix. It is assumed that the splitting for the original linear system is a regular splitting. Applying the Gauss-Seidel method to the preconditioned linear system uses the Gauss-Seidel splitting of the preconditioned matrix; for the original coefficient matrix $A$, however, this amounts to defining a new splitting. The new splitting also fulfills the condition of a regular or weak regular splitting. We propose new preconditioners by combining preconditioners that satisfy the regular splitting condition.

The outline of the paper is as follows. In Section 2, we review the preconditioned iterative method and some known results, and present the iterative algorithm based on a splitting. Section 3 consists of a comparison theorem and some numerical examples. Finally, in Section 4, we make some concluding remarks.

2. Preconditioned Iterative Method and Some Results

We review some known results [1] [2] . We write $A\le B$ if ${a}_{ij}\le {b}_{ij}$ holds for all elements of $A=\left({a}_{ij}\right)$ and $B=\left({b}_{ij}\right)\in {R}^{n\times n}$ , call $A$ nonnegative if $A\ge O$ , and call the vector $x\in {R}^{n}$ positive (writing $x>0$ ) if all its elements are positive. Let ${Z}^{n\times n}$ denote the set of all real $n\times n$ matrices which have non-positive off-diagonal elements. A nonsingular matrix $A\in {Z}^{n\times n}$ is called an M-matrix if ${A}^{-1}\ge O$ .

Definition 1. Let $A$ be a real matrix. The representation $A=M-N$ is called a splitting of $A$ if $M$ is a nonsingular matrix. In addition, the splitting is

(i) Convergent if $\rho \left({M}^{-1}N\right)<1.$

(ii) Regular if ${M}^{-1}\ge O$ and $N\ge O.$

(iii) Weak regular if ${M}^{-1}\ge O$ and ${M}^{-1}N\ge O.$

We can express the splitting-based iterative method as follows,

${x}^{\left(k+1\right)}={M}^{-1}N{x}^{\left(k\right)}+{M}^{-1}b.$ (2)

${M}^{-1}N$ is called the iterative matrix. If the spectral radius of the iterative matrix is less than one, the sequence $\left\{{x}^{\left(k\right)}\right\}$ will converge to the solution of the linear system (1). We can express the matrix $A$ as the matrix sum

$A=D-E-F$ (3)

where $D=\text{diag}\left\{{a}_{11},{a}_{22},\cdots ,{a}_{nn}\right\}$ , and $E$ and $F$ are strictly lower and strictly upper triangular $n\times n$ matrices, respectively. Using the diagonal preconditioner ${D}^{-1}=\text{diag}\left\{1/{a}_{11},1/{a}_{22},\cdots ,1/{a}_{nn}\right\}$ , we can rewrite

${A}^{\prime}={D}^{-1}A={D}^{-1}D-{D}^{-1}E-{D}^{-1}F=I-L-U.$ (4)

In this article, we suppose that the coefficient matrix has been scaled so that its diagonal elements are equal to one. Thus, we consider the following matrix sum of the coefficient matrix,

$A=I-L-U.$ (5)

When setting $M=I$ , we have the point Jacobi iterative method, and if $M=I-L$ , then we have the Gauss-Seidel iterative method.
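As a small illustration, the two choices of $M$ can be compared on a hypothetical $3\times 3$ system; the matrix and right-hand side below are our own example, not taken from the paper:

```python
# Hypothetical 3x3 system A = I - L - U with unit diagonal (our example).
A = [[1.0, -0.2, -0.3],
     [-0.4, 1.0, -0.1],
     [-0.2, -0.3, 1.0]]
b = [0.5, 0.5, 0.5]  # chosen so that the exact solution is x = (1, 1, 1)
n = len(A)

def jacobi_step(x):
    # M = I: x_new = (L + U) x + b, i.e. x_new[i] = b[i] - sum_{j != i} a_ij x_j.
    return [b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            for i in range(n)]

def gauss_seidel_step(x):
    # M = I - L: a forward sweep that reuses already-updated components.
    x = list(x)
    for i in range(n):
        x[i] = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
    return x

xj = [0.0] * n
xg = [0.0] * n
for _ in range(100):
    xj = jacobi_step(xj)
    xg = gauss_seidel_step(xg)

err_jacobi = max(abs(v - 1.0) for v in xj)
err_gs = max(abs(v - 1.0) for v in xg)
```

Both sweeps converge here because the example matrix is strictly diagonally dominant; the Gauss-Seidel sweep is exactly the choice $M=I-L$ , solved by forward substitution.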

Definition 2. We call the splitting $A=M-N$ with $M=I-L$ the Gauss-Seidel regular splitting of $A$ if ${M}^{-1}\ge O$ and $N\ge O$ .

For some preconditioner $P$ , we call the following equation the preconditioned iterative system,

$PAx=Pb.$ (6)

Many preconditioners $P$ have been proposed. The preconditioner using the first column of $A$ was proposed in [3] as follows,

${P}_{c}=I+C=\left[\begin{array}{cccc}1& 0& \cdots & 0\\ -{a}_{21}& 1& & \vdots \\ \vdots & 0& \ddots & 0\\ -{a}_{n1}& 0& & 1\end{array}\right].$ (7)

${P}_{c}$ works to eliminate the first column of $A$ . Then ${A}_{c}=\left(I+C\right)A$ can be written,

${A}_{c}=I-L-U+C-CU={M}_{c}-{N}_{c},$ (8)

where

${M}_{c}=I-{D}_{c}-L+C-{E}_{c},\text{}{N}_{c}=U+{F}_{c}$ (9)

and ${D}_{c}$ , ${E}_{c}$ and ${F}_{c}$ are the diagonal, strictly lower and strictly upper triangular parts of $CU$ , respectively. If ${M}_{c}$ is nonsingular, then the iterative matrix of the Gauss-Seidel method is defined by

${T}_{c}={M}_{c}^{-1}{N}_{c}={\left(I-{D}_{c}-L+C-{E}_{c}\right)}^{-1}\left(U+{F}_{c}\right).$ (10)
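The decomposition in Equations (8) and (9) can be checked numerically. The sketch below (pure Python, with a small hypothetical matrix of our own) forms $CU$ , splits it into ${D}_{c}$ , ${E}_{c}$ and ${F}_{c}$ , and verifies that ${M}_{c}-{N}_{c}=\left(I+C\right)A$ and that the first column of ${A}_{c}$ is eliminated:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 3x3 matrix with unit diagonal (our example, not from the paper).
A = [[1.0, -0.2, -0.3],
     [-0.4, 1.0, -0.1],
     [-0.2, -0.3, 1.0]]
n = len(A)
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
L = [[-A[i][j] if j < i else 0.0 for j in range(n)] for i in range(n)]
U = [[-A[i][j] if j > i else 0.0 for j in range(n)] for i in range(n)]
# C: negated first column of A below the diagonal, as in Equation (7).
C = [[-A[i][0] if (j == 0 and i > 0) else 0.0 for j in range(n)] for i in range(n)]

CU = matmul(C, U)
Dc = [[CU[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
Ec = [[CU[i][j] if j < i else 0.0 for j in range(n)] for i in range(n)]
Fc = [[CU[i][j] if j > i else 0.0 for j in range(n)] for i in range(n)]

# M_c = I - D_c - L + C - E_c,  N_c = U + F_c  (Equation (9)).
Mc = [[I[i][j] - Dc[i][j] - L[i][j] + C[i][j] - Ec[i][j] for j in range(n)]
      for i in range(n)]
Nc = [[U[i][j] + Fc[i][j] for j in range(n)] for i in range(n)]

P = [[I[i][j] + C[i][j] for j in range(n)] for i in range(n)]  # P_c = I + C
Ac = matmul(P, A)
gap = max(abs(Ac[i][j] - (Mc[i][j] - Nc[i][j]))
          for i in range(n) for j in range(n))
```

Since $CL=O$ (the first row of $L$ is zero), the reconstruction is exact up to rounding, and the entries of the first column of ${A}_{c}$ below the diagonal vanish.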

In 1991, Gunawardena et al. [4] proposed the preconditioner ${P}_{S}=I+S$ , which eliminates the elements of the first upper co-diagonal of $A$ ,

${P}_{S}=I+S=\left[\begin{array}{cccc}1& -{a}_{12}& 0& 0\\ 0& 1& \ddots & \vdots \\ \vdots & \ddots & \ddots & -{a}_{n-1,n}\\ 0& \cdots & 0& 1\end{array}\right].$ (11)

In 1997, Kohno et al. proposed the preconditioner ${P}_{S\left(\alpha \right)}=I+S\left(\alpha \right)$ with parameter $\alpha $ to accelerate its convergence for the preconditioned iterative method [5] . Moreover, Kotakemori et al. proposed the preconditioner by using the upper triangular matrix [6] ,

${P}_{U}=I+\beta U=\left[\begin{array}{cccc}1& -{\beta}_{1}{a}_{12}& \cdots & -{\beta}_{1}{a}_{1n}\\ 0& 1& \ddots & \vdots \\ \vdots & \ddots & \ddots & -{\beta}_{n-1}{a}_{n-1,n}\\ 0& \cdots & 0& 1\end{array}\right].$ (12)

The parameters ${\beta}_{i}$ of this preconditioner are changed for each row.

The preconditioner ${P}_{\mathrm{max}}=I+{S}_{\mathrm{max}}$ , which uses the element of maximum absolute value in the upper triangular part of each row, was proposed in [7] ,

${\left({S}_{\mathrm{max}}\right)}_{ij}=\{\begin{array}{ll}-{a}_{i{k}_{i}}\hfill & j={k}_{i},\text{ }1\le i<n\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}$ (13)

where ${k}_{i}=\mathrm{min}{I}_{i}$ , ${I}_{i}=\left\{j:\left|{a}_{ij}\right|\text{ is maximal for }i+1\le j\le n\right\}$ for $1\le i<n$ .
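The selection rule in Equation (13) amounts to taking, in each row, the leftmost entry of maximum absolute value to the right of the diagonal. A short routine (our own sketch; the test matrix below is a hypothetical example):

```python
def build_smax(A):
    # (S_max)_{i,k_i} = -a_{i,k_i}, where k_i = min I_i and
    # I_i = { j : |a_ij| is maximal for i+1 <= j <= n }, 1 <= i < n.
    # Indices are 0-based in code, 1-based in the text.
    n = len(A)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        tail = [abs(A[i][j]) for j in range(i + 1, n)]
        k = i + 1 + tail.index(max(tail))  # index() returns the first (minimal) argmax
        S[i][k] = -A[i][k]
    return S

# Hypothetical example: the maximal off-diagonal entry of row 1 is a_12,
# and of row 2 is a_23.
A = [[1.0, -0.3, -0.1],
     [-0.1, 1.0, -0.3],
     [-0.1, -0.1, 1.0]]
S = build_smax(A)
```

The last row of ${S}_{\mathrm{max}}$ is always zero, since there is no element to the right of the diagonal.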

3. Comparison Theorem

We now consider the comparison theorem for the two regular splittings of the original linear system (1) and the preconditioned linear system (6). By using some preconditioner $P$ , we have the preconditioned splitting $PA={M}_{P}-{N}_{P}$ , provided ${M}_{P}$ and $P$ are nonsingular. The two splittings are related as follows,

$A=M-N={P}^{-1}{M}_{P}-{P}^{-1}{N}_{P}$ (14)

because the iterative matrix of $PA$ can be transformed as follows,

${M}_{P}^{-1}{N}_{P}={M}_{P}^{-1}P{P}^{-1}{N}_{P}={\left({P}^{-1}{M}_{P}\right)}^{-1}{P}^{-1}{N}_{P}.$ (15)

A related lemma and theorems [8] [9] [10] are shown below.

Lemma 3. Let $A=M-N$ be a regular splitting of $A$ . If ${A}^{-1}\ge O$ , then

$\rho \left({M}^{-1}N\right)<1.$ (16)

Conversely, if $\rho \left({M}^{-1}N\right)<1$ , then ${A}^{-1}\ge O$ .

Theorem 4. Let $A\in {Z}^{n\times n}$ be irreducible. Then each of the following conditions is equivalent to the statement: $A$ is a nonsingular M-matrix.

(i) ${A}^{-1}\ge O.$

(ii) $Ax\ge 0$ for some $x>0$ .

Corollary 5. If $A\in {Z}^{n\times n}$ is a nonnegative diagonally dominant matrix with ${a}_{ii}>0$ for all $i$ , then ${A}^{-1}\ge O$ .

Theorem 6. Let $T$ be a nonnegative matrix. If $Tx\ge \alpha x$ for some positive vector $x$ , then $\rho \left(T\right)\ge \alpha $ . Similarly, if $Tx\le \beta x$ for some positive vector $x$ , then $\rho \left(T\right)\le \beta $ .

We now prove the comparison theorem for an arbitrary preconditioner $P$ .

Theorem 7. Let $A=M-N={P}^{-1}{M}_{P}-{P}^{-1}{N}_{P}$ be two regular splittings of $A$ . If ${A}^{-1}\ge O$ and ${\left({P}^{-1}{M}_{P}\right)}^{-1}\ge {M}^{-1}\ge O$ , then

$\rho \left({M}_{P}^{-1}{N}_{P}\right)\le \rho \left({M}^{-1}N\right)<1.$ (17)

Proof. Since ${A}^{-1}\ge O$ , we have $\rho \left({M}^{-1}N\right)<1$ from Lemma 3. From the assumption ${\left({P}^{-1}{M}_{P}\right)}^{-1}\ge {M}^{-1}\ge O$ and Theorem 4, there is a positive vector $x$ with $Ax\ge 0$ , so we have the following relation

$\left\{{\left({P}^{-1}{M}_{P}\right)}^{-1}-{M}^{-1}\right\}Ax\ge 0.$ (18)

It follows that

$\begin{array}{c}\left\{{\left({P}^{-1}{M}_{P}\right)}^{-1}-{M}^{-1}\right\}Ax=\left({M}_{P}^{-1}P-{M}^{-1}\right)Ax\\ ={M}_{P}^{-1}PAx-{M}^{-1}Ax\\ ={M}_{P}^{-1}P\left({P}^{-1}\left({M}_{P}-{N}_{P}\right)\right)x-{M}^{-1}\left(M-N\right)x\\ =\left(I-{M}_{P}^{-1}{N}_{P}\right)x-\left(I-{M}^{-1}N\right)x\\ ={M}^{-1}Nx-{M}_{P}^{-1}{N}_{P}x\ge 0.\end{array}$ (19)

Because the iterative matrix ${M}^{-1}N$ is nonnegative, there exists a positive vector $x$ satisfying the following inequality,

${M}_{P}^{-1}{N}_{P}x\le \rho \left({M}^{-1}N\right)x.$ (20)

From Theorem 6, we have

$\rho \left({M}_{P}^{-1}{N}_{P}\right)\le \rho \left({M}^{-1}N\right)<1.$ (21)

Example 1. We test the following matrix,

${A}_{1}=\left[\begin{array}{cccc}1& -1& 0& 0\\ 0& 1& -1& 0\\ 0& 0& 1& -1\\ -0.5& 0& 0& 1\end{array}\right].$ (22)

This matrix was given in [10] as a counterexample to the condition on the parameter of the preconditioner ${P}_{S\left(\alpha \right)}=I+S\left(\alpha \right)$ . We check whether or not the condition of Theorem 7 is satisfied. This matrix has the two regular splittings

$\begin{array}{c}{A}_{1}=M-N=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ -0.5& 0& 0& 1\end{array}\right]-\left[\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ 0& 0& 0& 0\end{array}\right]\\ ={P}_{S}^{-1}\left({M}_{S}-{N}_{S}\right)={\left[\begin{array}{cccc}1& 1& 0& 0\\ 0& 1& 1& 0\\ 0& 0& 1& 1\\ 0& 0& 0& 1\end{array}\right]}^{-1}\left(\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ -0.5& 0& 1& 0\\ -0.5& 0& 0& 1\end{array}\right]-\left[\begin{array}{cccc}0& 0& 1& 0\\ 0& 0& 0& 1\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right]\right).\end{array}$ (23)

$M-N$ and ${M}_{S}-{N}_{S}$ are both Gauss-Seidel regular splittings. The assumption of Theorem 7 is satisfied, as the following inequality shows,

${\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}-{M}^{-1}=\left[\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 1& 0\\ 0.5& 0.5& 0& 1\\ 0& 0.5& 0& 0\end{array}\right]\ge O.$ (24)

Using the preconditioner ${P}_{S}=I+S$ is equivalent to using the following splitting,

${A}_{1}=\left[\begin{array}{cccc}1& -1& 1& -1\\ 0& 1& -1& 1\\ 0& 0& 1& -1\\ -0.5& 0& 0& 1\end{array}\right]-\left[\begin{array}{cccc}0& 0& 1& -1\\ 0& 0& 0& 1\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right].$ (25)

This splitting satisfies the regular splitting condition. Moreover, since ${P}_{\mathrm{max}}={P}_{S}$ for this matrix, the following inequality is also satisfied,

${\left({P}_{\mathrm{max}}^{-1}{M}_{\mathrm{max}}\right)}^{-1}={\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}\ge {M}^{-1}\ge O.$ (26)

Therefore, we have the spectral radius of each iterative matrix,

$\rho \left({M}_{S}^{-1}{N}_{S}\right)=0.500\le \rho \left({M}_{C}^{-1}{N}_{C}\right)=0.707\le \rho \left({M}^{-1}N\right)=0.794<1.$ (27)
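The spectral radii in Equation (27) can be reproduced numerically. The following pure-Python sketch is our own verification code: it builds the two splittings printed in Equation (23), derives the ${P}_{C}$ splitting in the same way from Equation (7), and estimates each spectral radius with Gelfand's formula $\rho \left(T\right)={\mathrm{lim}}_{k\to \infty }{\Vert {T}^{k}\Vert}^{1/k}$ via repeated squaring:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lower_solve(M, B):
    # Forward substitution: returns M^{-1} B for lower-triangular M.
    n = len(M)
    X = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            X[i][j] = (B[i][j] - sum(M[i][k] * X[k][j] for k in range(i))) / M[i][i]
    return X

def spectral_radius(T, squarings=40):
    # Gelfand's formula rho(T) = lim ||T^k||^(1/k), evaluated at k = 2^40
    # by repeated squaring with renormalization to avoid under/overflow.
    logscale = 0.0
    for _ in range(squarings):
        T = matmul(T, T)
        norm = max(sum(abs(v) for v in row) for row in T)
        if norm == 0.0:
            return 0.0
        T = [[v / norm for v in row] for row in T]
        logscale = 2.0 * logscale + math.log(norm)
    return math.exp(logscale / 2.0 ** squarings)

# Gauss-Seidel splitting of A_1 (Equation (23), left-hand side).
M  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [-0.5, 0, 0, 1]]
N  = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
# Gauss-Seidel splitting of P_S A_1 (Equation (23), right-hand side).
MS = [[1, 0, 0, 0], [0, 1, 0, 0], [-0.5, 0, 1, 0], [-0.5, 0, 0, 1]]
NS = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
# Gauss-Seidel splitting of P_C A_1, derived here from Equation (7).
MC = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, -0.5, 0, 1]]
NC = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]

rho   = spectral_radius(lower_solve(M, N))    # 0.5^(1/3), approx. 0.794
rho_s = spectral_radius(lower_solve(MS, NS))  # 0.500
rho_c = spectral_radius(lower_solve(MC, NC))  # 2^(-1/2), approx. 0.707
```

Since all three iterative matrices are nonnegative, ${\Vert {T}^{k}\Vert}_{\infty}\ge \rho {\left(T\right)}^{k}$ , so the Gelfand estimate at $k={2}^{40}$ is accurate to far better than the displayed three decimals.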

For display, the eigenvalues are given as approximate values. When using ${P}_{S\left(\alpha \right)}=I+S\left(\alpha \right)$ with parameter $\alpha $ , the regular splitting condition is not satisfied for $\alpha >1$ . However, it is well known that the spectral radius may be smaller than that of ${P}_{S}=I+S$ in the range $\alpha >1$ . For example, by using $\alpha =1.1$ for each row, we have the assumption condition ${\left({P}_{S\left(\alpha \right)}^{-1}{M}_{S\left(\alpha \right)}\right)}^{-1}\ge {\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}$ and the spectral radius $\rho \left({M}_{S\left(\alpha \right)}^{-1}{N}_{S\left(\alpha \right)}\right)=0.328$ . But ${P}_{S\left(\alpha \right)}A={M}_{S\left(\alpha \right)}-{N}_{S\left(\alpha \right)}$ does not satisfy the regular splitting condition, since ${N}_{S\left(\alpha \right)}$ is not nonnegative. Moreover, Theorem 7 gives no comparison condition between ${P}_{S}$ and ${P}_{C}$ : because the elements used in each preconditioner are different, the required comparison of matrices does not hold. Therefore, we show the following corollary.

Corollary 8. Let $A=M-N={P}^{-1}{M}_{P}-{P}^{-1}{N}_{P}$ be two splittings of $A$ . If

$\left\{{\left({P}^{-1}{M}_{P}\right)}^{-1}-{M}^{-1}\right\}Ax\ge 0$ (28)

then

$\rho \left({M}_{P}^{-1}{N}_{P}\right)\le \rho \left({M}^{-1}N\right)<1.$ (29)

In Theorem 7 and Corollary 8, notice that the vector $x$ is a positive vector. When setting $x={\left(1,1,\cdots ,1\right)}^{\text{T}}$ , $Ax$ gives the row sums of $A$ and $Ax\ge 0$ ; therefore, $A\in {Z}^{n\times n}$ is a diagonally dominant matrix. For Example 1, if $x={\left(1,1,1,1\right)}^{\text{T}}$ is chosen, then $Ax={\left(0,0,0,0.5\right)}^{\text{T}}\ge 0$ . We set $x={\left(1.2,1.1,1.1,0.8\right)}^{\text{T}}$ as another choice of positive vector. As a result, we can confirm $Ax={\left(0.1,0.0,0.3,0.2\right)}^{\text{T}}\ge 0$ and the comparison condition $\left\{{\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}-{\left({P}_{C}^{-1}{M}_{C}\right)}^{-1}\right\}Ax\ge 0$ between ${P}_{S}$ and ${P}_{C}$ .

Example 2. Let

${A}_{2}=\left[\begin{array}{ccccc}1& -0.3& -0.1& -0.1& -0.1\\ -0.1& 1& -0.1& -0.3& -0.1\\ -0.1& -0.1& 1& -0.1& -0.3\\ -0.1& -0.1& -0.1& 1& -0.3\\ -0.1& -0.1& -0.1& -0.1& 1\end{array}\right].$ (30)

When using the Gauss-Seidel splitting for each preconditioned linear system, we have the following relations

${\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}\ge {M}^{-1}\ge O$ (31)

${\left({P}_{\mathrm{max}}^{-1}{M}_{\mathrm{max}}\right)}^{-1}\ge {M}^{-1}\ge O$ (32)

and

$\left\{{\left({P}_{\mathrm{max}}^{-1}{M}_{\mathrm{max}}\right)}^{-1}-{\left({P}_{S}^{-1}{M}_{S}\right)}^{-1}\right\}Ax=\left(0,0.104,0.191,0.040,0.033\right)\ge 0$ (33)

where $x={\left(1,1,1,1,1\right)}^{\text{T}}$ . The relation of each spectral radius is

$\rho \left({M}_{\mathrm{max}}^{-1}{N}_{\mathrm{max}}\right)=0.166<\rho \left({M}_{S}^{-1}{N}_{S}\right)=0.197<\rho \left({M}^{-1}N\right)=\mathrm{0.348.}$ (34)
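The ordering in Equation (34) can also be reproduced numerically. The following pure-Python sketch is our own verification code; it builds the Gauss-Seidel splitting of $P{A}_{2}$ for $P=I$ , ${P}_{S}$ and ${P}_{\mathrm{max}}$ , and estimates each spectral radius with Gelfand's formula $\rho \left(T\right)={\mathrm{lim}}_{k\to \infty }{\Vert {T}^{k}\Vert}^{1/k}$ via repeated squaring:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lower_solve(M, B):
    # Forward substitution: returns M^{-1} B for lower-triangular M.
    n = len(M)
    X = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            X[i][j] = (B[i][j] - sum(M[i][k] * X[k][j] for k in range(i))) / M[i][i]
    return X

def spectral_radius(T, squarings=40):
    # Gelfand's formula rho(T) = lim ||T^k||^(1/k), evaluated at k = 2^40
    # by repeated squaring with renormalization.
    logscale = 0.0
    for _ in range(squarings):
        T = matmul(T, T)
        norm = max(sum(abs(v) for v in row) for row in T)
        if norm == 0.0:
            return 0.0
        T = [[v / norm for v in row] for row in T]
        logscale = 2.0 * logscale + math.log(norm)
    return math.exp(logscale / 2.0 ** squarings)

A2 = [[1.0, -0.3, -0.1, -0.1, -0.1],
      [-0.1, 1.0, -0.1, -0.3, -0.1],
      [-0.1, -0.1, 1.0, -0.1, -0.3],
      [-0.1, -0.1, -0.1, 1.0, -0.3],
      [-0.1, -0.1, -0.1, -0.1, 1.0]]
n = len(A2)
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def gauss_seidel_rho(P):
    # Gauss-Seidel splitting of P A2: M = lower triangle incl. diagonal, N = M - P A2.
    PA = matmul(P, A2)
    M = [[PA[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
    N = [[-PA[i][j] if j > i else 0.0 for j in range(n)] for i in range(n)]
    return spectral_radius(lower_solve(M, N))

PS = [row[:] for row in I]    # P_S, Equation (11)
for i in range(n - 1):
    PS[i][i + 1] = -A2[i][i + 1]
Pmax = [row[:] for row in I]  # P_max, Equation (13)
for i in range(n - 1):
    tail = [abs(A2[i][j]) for j in range(i + 1, n)]
    k = i + 1 + tail.index(max(tail))
    Pmax[i][k] = -A2[i][k]

rho_gs  = gauss_seidel_rho(I)     # reported above as approx. 0.348
rho_s   = gauss_seidel_rho(PS)    # approx. 0.197
rho_max = gauss_seidel_rho(Pmax)  # approx. 0.166
```

The computed values agree with the three-decimal figures in Equation (34) and satisfy the predicted ordering.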

We also test the following preconditioner, which combines two preconditioners,

${P}_{S+C}=I+S+C=\left[\begin{array}{cccc}1& -{a}_{12}& 0& 0\\ -{a}_{21}& 1& \ddots & \vdots \\ \vdots & \ddots & \ddots & -{a}_{n-1,n}\\ -{a}_{n1}& \cdots & 0& 1\end{array}\right].$ (35)

In this case, the condition of Theorem 7 is satisfied, and the spectral radius of the preconditioned Gauss-Seidel iterative matrix is 0.156. Moreover, by setting the combination preconditioner ${P}_{U+C}=I+U+C$ , the weak regular splitting condition is satisfied and the spectral radius is 0.078.
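A numerical check of the combination preconditioner (again our own pure-Python sketch, with the spectral radius estimated by Gelfand's formula via repeated squaring) confirms that ${P}_{S+C}$ reduces the spectral radius below that of the plain Gauss-Seidel method:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lower_solve(M, B):
    # Forward substitution: returns M^{-1} B for lower-triangular M.
    n = len(M)
    X = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            X[i][j] = (B[i][j] - sum(M[i][k] * X[k][j] for k in range(i))) / M[i][i]
    return X

def spectral_radius(T, squarings=40):
    # Gelfand's formula rho(T) = lim ||T^k||^(1/k) at k = 2^40,
    # via repeated squaring with renormalization.
    logscale = 0.0
    for _ in range(squarings):
        T = matmul(T, T)
        norm = max(sum(abs(v) for v in row) for row in T)
        if norm == 0.0:
            return 0.0
        T = [[v / norm for v in row] for row in T]
        logscale = 2.0 * logscale + math.log(norm)
    return math.exp(logscale / 2.0 ** squarings)

A2 = [[1.0, -0.3, -0.1, -0.1, -0.1],
      [-0.1, 1.0, -0.1, -0.3, -0.1],
      [-0.1, -0.1, 1.0, -0.1, -0.3],
      [-0.1, -0.1, -0.1, 1.0, -0.3],
      [-0.1, -0.1, -0.1, -0.1, 1.0]]
n = len(A2)
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def gauss_seidel_rho(P):
    # Gauss-Seidel splitting of P A2: M = lower triangle incl. diagonal, N = M - P A2.
    PA = matmul(P, A2)
    M = [[PA[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
    N = [[-PA[i][j] if j > i else 0.0 for j in range(n)] for i in range(n)]
    return spectral_radius(lower_solve(M, N))

# P_{S+C} = I + S + C: first upper co-diagonal plus first column (Equation (35)).
PSC = [row[:] for row in I]
for i in range(n - 1):
    PSC[i][i + 1] = -A2[i][i + 1]
for i in range(1, n):
    PSC[i][0] = -A2[i][0]

rho_sc = gauss_seidel_rho(PSC)  # reported above as 0.156
rho_gs = gauss_seidel_rho(I)    # plain Gauss-Seidel, for comparison
```
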

We show the spectral radii of some preconditioners in Table 1 for Examples 1 and 2.

Table 1. The spectral radius of each example.

*Denotes that Corollary 8 is not satisfied. In Example 2, $\alpha =\left(1.0,1.6,1.6,1.6\right)$ .

4. Conclusion

In order to find an effective preconditioner and splitting with a small amount of calculation, we proved a comparison theorem. The splitting in Equation (25), obtained by the preconditioned Gauss-Seidel iterative method with ${P}_{S}=I+S$ , is a regular splitting. This splitting has a strange shape, but the iterative method converges. This result means that there exist splittings that reduce the spectral radius of the iterative matrix. Using the preconditioner ${P}_{S\left(\alpha \right)}=I+S\left(\alpha \right)$ , smaller spectral radii are obtained for the two examples, but the corresponding splittings satisfy neither the regular nor the weak regular splitting condition. Furthermore, we were able to test the combination preconditioners and show smaller spectral radii. However, there are many preconditioners that reduce the spectral radius even if only the weak regular splitting condition is satisfied. Finding a new effective splitting and preconditioner is future work.

Acknowledgements

The author would like to thank the referees, who pointed out some improvements to the earlier manuscript. This study was supported by JSPS KAKENHI Grant Number JP26400181.

Cite this paper

Kohno, T. (2017) Preconditioned Iterative Method for Regular Splitting. Advances in Pure Mathematics, 7, 180-187. https://doi.org/10.4236/apm.2017.72009


References

[1] Frommer, A. and Szyld, D.B. (1992) H-Splitting and Two-Stage Iterative Methods. Numerische Mathematik, 63, 345-356. https://doi.org/10.1007/BF01385865

[2] Niki, H., Harada, K., Morimoto, M. and Sakakihara, M. (2004) The Survey of Preconditioners Used for Accelerating the Rate of Convergence in the Gauss-Seidel Method. Journal of Computational and Applied Mathematics, 164-165, 587-600. https://doi.org/10.1016/j.cam.2003.11.012

[3] Milaszewicz, J.P. (1987) Improving Jacobi and Gauss-Seidel Iterations. Linear Algebra and Its Applications, 93, 161-170. https://doi.org/10.1016/S0024-3795(87)90321-1

[4] Gunawardena, A.D., Jain, S.K. and Snyder, L. (1991) Modified Iterative Methods for Consistent Linear System. Linear Algebra and Its Applications, 154-156, 123-143. https://doi.org/10.1016/0024-3795(91)90376-8

[5] Kohno, T., Kotakemori, H. and Niki, H. (1997) Improving the Modified Gauss-Seidel Method for Z-Matrices. Linear Algebra and Its Applications, 267, 113-123. https://doi.org/10.1016/S0024-3795(97)00063-3

[6] Kotakemori, H., Niki, H. and Okamoto, N. (1996) Accelerated Iterative Method for Z-Matrices. Journal of Computational and Applied Mathematics, 75, 87-97. https://doi.org/10.1016/S0377-0427(96)00061-1

[7] Kotakemori, H., Harada, K., Morimoto, M. and Niki, H. (2002) A Comparison Theorem for the Iterative Method with the Preconditioner (I + Smax). Journal of Computational and Applied Mathematics, 145, 373-378. https://doi.org/10.1016/S0377-0427(01)00588-X

[8] Varga, R.S. (2000) Matrix Iterative Analysis. Springer. https://doi.org/10.1007/978-3-642-05156-2

[9] Berman, A. and Plemmons, R.J. (1994) Nonnegative Matrices in the Mathematical Sciences. SIAM. https://doi.org/10.1137/1.9781611971262

[10] Li, W. (2005) A Note on the Preconditioned Gauss-Seidel (GS) Method for Linear Systems. Journal of Computational and Applied Mathematics, 182, 81-90. https://doi.org/10.1016/j.cam.2004.11.041
