A Block-Preconditioned Inexact Linear Solver for Computing the Complex Eigenpairs of a Large Sparse Matrix

Richard Olatokunbo Akinola^{1}^{*},
Stephen Yakubu Kutchin^{1},
Ayodeji Sunday Ayodele^{1},
Kingsley Obiajulu Muka^{2}


1. Introduction

Let $A$ be a large sparse, real n by n nonsymmetric matrix and $B\in {\mathbb{R}}^{n\times n}$ a symmetric positive definite matrix. In this paper, we consider the problem of computing the eigenpair $\left(z\mathrm{,}\lambda \right)$ from the following generalized complex eigenvalue problem

$Az=\lambda Bz\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}z\in {\u2102}^{n}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}z\ne 0\mathrm{,}$ (1)

where $\lambda \in \u2102$ is the eigenvalue of the pencil $\left(A\mathrm{,}B\right)$ and $z$ its corresponding complex eigenvector. We assume that the eigenpair of interest $\left(z\mathrm{,}\lambda \right)$ is algebraically simple, with $\psi $ the corresponding left eigenvector, so that [1]

${\psi}^{H}Bz\ne 0.$ (2)

By adding the normalization

${z}^{H}Bz=\mathrm{1,}$ (3)

to (1), the combined system of equations can be expressed in the form $F\left(z\right)=0$ as

$F\left(z\right)=\left[\begin{array}{c}\left(A-\lambda B\right)z\\ -\frac{1}{2}{z}^{H}Bz+\frac{1}{2}\end{array}\right]=0\mathrm{.}$ (4)

Note that ${z}^{H}Bz$ is real since $B$ is symmetric and positive definite. This results in a system of n complex and one real nonlinear equations for the $n+1$ complex unknowns $v={\left[z\mathrm{,}\lambda \right]}^{\text{T}}$ . We cannot apply Newton's method directly to (4) because $\stackrel{\xaf}{z}$ in the normalization ${z}^{H}Bz={\stackrel{\xaf}{z}}^{\text{T}}Bz$ is not a differentiable function of $z$ .
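To make the setup concrete, the following minimal sketch (our illustration, not from the paper) evaluates the residual of (4) at an exact, $B$-normalized eigenpair. It assumes numpy and builds a synthetic pencil $A=BT$ so that $\left(A\mathrm{,}B\right)$ has the known eigenpair $z={e}_{1}+i{e}_{2}$ , $\lambda =0.5+i$ :

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)                  # symmetric positive definite

# Build A = B @ T so the pencil (A, B) inherits the eigenpairs of T;
# T has a 2x2 block with eigenvalues 0.5 +/- i and eigenvector e1 + i*e2.
T = np.diag(np.arange(2.0, n + 2.0))
T[0, 0] = T[1, 1] = 0.5
T[0, 1], T[1, 0] = 1.0, -1.0
A = B @ T                                    # real, nonsymmetric

lmbda = 0.5 + 1.0j                           # known complex eigenvalue
z = np.zeros(n, dtype=complex)
z[0], z[1] = 1.0, 1.0j                       # eigenvector e1 + i*e2 of T
z = z / np.sqrt((z.conj() @ B @ z).real)     # enforce z^H B z = 1

res_top = np.linalg.norm((A - lmbda * B) @ z)        # the n complex equations
res_bot = abs(-0.5 * (z.conj() @ B @ z).real + 0.5)  # the one real equation
imag_part = abs((z.conj() @ B @ z).imag)             # z^H B z is real
print(res_top, res_bot, imag_part)
```

Both residual blocks of (4) vanish to rounding, and $z^{H}Bz$ is real as claimed.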

Recall that for a real eigenpair $\left(z\mathrm{,}\lambda \right)$ , (4) is a system of $\left(n+1\right)$ real equations in $\left(n+1\right)$ real unknowns, and Newton's method for solving (4) involves, at each iteration, the solution of the square $\left(n+1\right)$ by $\left(n+1\right)$ linear system

$\left[\begin{array}{cc}A-{\lambda}^{\left(k\right)}B& -B{z}^{\left(k\right)}\\ -{\left(B{z}^{\left(k\right)}\right)}^{\text{T}}& 0\end{array}\right]\left[\begin{array}{c}\Delta {z}^{\left(k\right)}\\ \Delta {\lambda}^{\left(k\right)}\end{array}\right]=-\left[\begin{array}{c}\left(A-{\lambda}^{\left(k\right)}B\right){z}^{\left(k\right)}\\ -\frac{1}{2}{z}^{\left(k\right)}{}^{{}^{\text{T}}}B{z}^{\left(k\right)}+\frac{1}{2}\end{array}\right]\mathrm{,}$ (5)

for the $\left(n+1\right)$ real unknowns $\Delta {v}^{\left(k\right)}={\left[\Delta {z}^{\left(k\right)\text{T}}\mathrm{,}\Delta {\lambda}^{\left(k\right)}\right]}^{\text{T}}$ , followed by the update ${v}^{\left(k+1\right)}={v}^{\left(k\right)}+\Delta {v}^{\left(k\right)}$ . Secondly, if instead of the normalization (3), we add ${c}^{H}z=1$ , where $c$ is a fixed complex vector (see, for example, [2] ), then (1) and ${c}^{H}z=1$ provide $\left(n+1\right)$ complex equations for $\left(n+1\right)$ complex unknowns, and the Jacobian of this new system is

$\left[\begin{array}{cc}\left(A-\lambda B\right)& -Bz\\ {c}^{H}& 0\end{array}\right]\mathrm{.}$

The above Jacobian is square and, by the ABCD Lemma [3] , is easily shown to be nonsingular if the eigenvalue of interest is algebraically simple and ${c}^{H}z\ne 0$ . Thirdly, if $\left(z\mathrm{,}\lambda \right)$ is complex, then, as stated earlier, we have n complex and one real equation. Also, if $z$ solves (4), then so does $z{\text{e}}^{i\theta}$ for any $\theta \in \left[0,2\pi \right)$ .
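For a real eigenpair, iteration (5) can be run directly. The sketch below is our hedged illustration, on a synthetic pencil $A=BT$ whose construction (not from the paper) guarantees the known real eigenpair $\left({e}_{3}\mathrm{,}4\right)$ ; Newton's method converges from a nearby starting guess:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)                 # symmetric positive definite
T = np.diag(np.arange(2.0, n + 2.0))
T[0, 0] = T[1, 1] = 0.5
T[0, 1], T[1, 0] = 1.0, -1.0
A = B @ T                                   # (A, B) has the real eigenpair (e3, 4)

z = np.zeros(n)
z[2] = 1.0
z = z / np.sqrt(z @ B @ z)                  # z^T B z = 1
z_k = z + 1e-2 * rng.standard_normal(n)     # perturbed starting guess
l_k = 4.0 + 1e-2

for _ in range(8):                          # Newton's method (5)
    Bz = B @ z_k
    Jac = np.block([[A - l_k * B, -Bz[:, None]],
                    [-Bz[None, :], np.zeros((1, 1))]])
    F = np.concatenate([(A - l_k * B) @ z_k, [-0.5 * z_k @ Bz + 0.5]])
    dv = np.linalg.solve(Jac, -F)
    z_k, l_k = z_k + dv[:n], l_k + dv[n]

print(l_k, np.linalg.norm((A - l_k * B) @ z_k))
```

The bordered Jacobian stays nonsingular near the simple eigenvalue, and the iterates settle on the normalized eigenpair.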

Our approach for analyzing the solution of (4) for $v$ begins by splitting the eigenpair $\left(z\mathrm{,}\lambda \right)$ into real and imaginary parts: $z={z}_{1}+i{z}_{2}$ , $\lambda =\alpha +i\beta $ , where ${z}_{1}\mathrm{,}{z}_{2}\in {\mathbb{R}}^{n}$ and $\alpha \mathrm{,}\beta \in \mathbb{R}$ . After expanding (4), we obtain an under-determined system of $2n+1$ real nonlinear equations in the $2n+2$ real unknowns $v={\left[{z}_{1}\mathrm{,}{z}_{2}\mathrm{,}\alpha \mathrm{,}\beta \right]}^{\text{T}}$ , and it is natural to use the Gauss-Newton method (see, for example, Deuflhard ( [4] , pp. 222-223)) to obtain a solution (see also, [5] [6] [7] [8] ). By linearizing the under-determined system of nonlinear equations, we obtain an under-determined system of linear equations involving the Jacobian. This paper is structured as follows: in Section 2, we show that the Jacobian has a unique nullvector at the root. In Section 3, we present two orthogonality results and state Algorithm 1. In Section 4, we present an inexact inverse iteration algorithm with preconditioning for solving the large linear systems encountered.

Algorithm 1. Computing the complex eigenvalues of the pencil $\left(A\mathrm{,}B\right)$ .

The main mathematical tools used in this paper are the LU factorization, inexact inverse iteration and the preconditioned Generalized Minimal Residual method (GMRES) [9] . The main reason for using inexact inverse iteration is that, as mentioned in an earlier paper, we do not solve $Mu={B}_{1}w$ exactly in practice, although the exact solve is used in the analysis. Theorem 2.1 shows that the Jacobian has a single nullvector at the root, while Theorem 3.1 gives an important orthogonality result. Algorithms 1-3 are presented. We remark that, in the limit, the approximate nullvector converges to the exact one. A numerical example is given which supports the validity of the algorithms presented, although, as usual, convergence relies on good initial guesses to the desired eigenpair. The classical inverse iteration for the matrix pencil converges slowly for some eigenvalue problems, whereas the algorithms presented here converge quadratically. Throughout this paper, unless otherwise stated, all norms are 2-norms.

The following result underpins the validity of the results in this paper.

Lemma 1.1: [10] Let ${F}_{w}\left(w\right)$ be of full rank. If ${F}_{w}\left(w\right)\Delta w=-F\left(w\right)$ is an under-determined linear system of equations, then its minimum-norm least-squares solution

$\Delta w=-{F}_{w}{\left(w\right)}^{\text{T}}{\left[{F}_{w}\left(w\right){F}_{w}{\left(w\right)}^{\text{T}}\right]}^{-1}F\left(w\right)$ , is orthogonal to the nullspace of ${F}_{w}\left(w\right)$ .
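Lemma 1.1 can be checked numerically. The sketch below (our generic full-rank example, assuming numpy; not the paper's matrices) forms the minimum-norm solution and verifies its orthogonality to the nullspace:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 7
Fw = rng.standard_normal((m, n))    # generic, hence full row rank
F = rng.standard_normal(m)

# minimum-norm solution of the under-determined system Fw @ dw = -F
dw = -Fw.T @ np.linalg.solve(Fw @ Fw.T, F)

# nullspace basis of Fw from the SVD
_, _, Vt = np.linalg.svd(Fw)
N = Vt[m:].T                        # columns span null(Fw)

res = np.linalg.norm(Fw @ dw + F)   # dw solves the system
ortho = np.abs(N.T @ dw).max()      # dw is orthogonal to the nullspace
print(res, ortho)
```

Since $\Delta w$ lies in the row space of ${F}_{w}\left(w\right)$ , both quantities vanish to rounding.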

Algorithm 2. Inexact inverse iteration algorithm.

Algorithm 3. Complex eigenvalues of the pencil $\left(A\mathrm{,}B\right)$ using Inexact Inverse Iteration with preconditioning.

In the next section, we will express both $z$ and $\lambda $ as $\lambda =\alpha +i\beta $ and $z={z}_{1}+i{z}_{2}$ , convert the nonlinear system (4) to a real under-determined system of nonlinear equations and prove some important results.

2. Computation of Complex Eigenpairs by Solving an Under-Determined System of Nonlinear Equations

In this section, we will expand the system of n complex and one real nonlinear equations in $n+1$ complex unknowns (4) by writing $z={z}_{1}+i{z}_{2}$ and $\lambda =\alpha +i\beta $ , respectively. The reason we obtain an under-determined rather than a square system of equations is that expanding ${z}^{H}Bz=1$ gives only one real equation, since $B$ is symmetric positive definite, while $\left(A-\lambda B\right)z=0$ results in 2n real equations. This results in an under-determined system of $2n+1$ real nonlinear equations in $2n+2$ real unknowns. We then present the under-determined system of nonlinear equations and an explicit expression for its Jacobian. Furthermore, we show in the main result of this section, Theorem 2.1, that if the eigenvalue of interest of $\left(A\mathrm{,}B\right)$ is algebraically simple, then the Jacobian has linearly independent rows. We will find the right nullvector of the Jacobian at the root and prove that it is unique.

If we let $z={z}_{1}+i{z}_{2}$ and $\lambda =\alpha +i\beta $ , then the square nonlinear system of Equations (4) can be written as

$\begin{array}{c}\left(A-\lambda B\right)z=\left[A-\left(\alpha +i\beta \right)B\right]\left({z}_{1}+i{z}_{2}\right)\\ =\left(A-\alpha B\right){z}_{1}+\beta B{z}_{2}+i\left[\left(A-\alpha B\right){z}_{2}-\beta B{z}_{1}\right]\mathrm{,}\end{array}$ (6)

and

${z}^{H}Bz={z}_{1}^{\text{T}}B{z}_{1}+{z}_{2}^{\text{T}}B{z}_{2}\mathrm{.}$ (7)

Hence,

$-\frac{1}{2}{z}^{H}Bz+\frac{1}{2}=-\frac{1}{2}\left({z}_{1}^{\text{T}}B{z}_{1}+{z}_{2}^{\text{T}}B{z}_{2}\right)+\frac{1}{2}=0\mathrm{.}$

Since $\left(A-\lambda B\right)z=0$ , we equate the real and imaginary parts of (6) to zero and obtain the 2n real equations $\left(A-\alpha B\right){z}_{1}+\beta B{z}_{2}=0$ and

$\left(A-\alpha B\right){z}_{2}-\beta B{z}_{1}=0$ . This means that $F\left(v\right)$ consists of the 2n real equations

arising from (6) and one real equation $-\frac{1}{2}\left({z}_{1}^{\text{T}}B{z}_{1}+{z}_{2}^{\text{T}}B{z}_{2}\right)+\frac{1}{2}=0$ ;

$F\left(v\right)=\left[\begin{array}{c}\left(A-\alpha B\right){z}_{1}+\beta B{z}_{2}\\ -\beta B{z}_{1}+\left(A-\alpha B\right){z}_{2}\\ -\frac{1}{2}\left({z}_{1}^{\text{T}}B{z}_{1}+{z}_{2}^{\text{T}}B{z}_{2}\right)+\frac{1}{2}\end{array}\right]=0\mathrm{,}$ (8)

where $F\mathrm{:}{\mathbb{R}}^{2n+2}\to {\mathbb{R}}^{2n+1}$ . The Jacobian ${F}_{v}\left(v\right)$ of $F\left(v\right)$ has the following explicit expression

${F}_{v}\left(v\right)=\left[\begin{array}{cccc}\left(A-\alpha B\right)& \beta B& -B{z}_{1}& B{z}_{2}\\ -\beta B& \left(A-\alpha B\right)& -B{z}_{2}& -B{z}_{1}\\ -{\left(B{z}_{1}\right)}^{\text{T}}& -{\left(B{z}_{2}\right)}^{\text{T}}& 0& 0\end{array}\right]\mathrm{,}$ (9)

is a $2n+1$ by $2n+2$ real matrix. From the Jacobian (9) above, we define the real 2n by 2n matrix $M$ as

$M=\left[\begin{array}{cc}\left(A-\alpha B\right)& \beta B\\ -\beta B& \left(A-\alpha B\right)\end{array}\right]\mathrm{.}$ (10)

Also, we form the 2n by 2 real matrix

$N=\left[\begin{array}{cc}-B{z}_{1}& B{z}_{2}\\ -B{z}_{2}& -B{z}_{1}\end{array}\right]=\left[\begin{array}{cc}-{B}_{1}w& {B}_{1}{w}_{1}\end{array}\right]\mathrm{,}$ (11)

consisting of the product of ${B}_{1}=\left[\begin{array}{cc}B& O\\ O& B\end{array}\right]$ and the matrix of right nullvectors of $M$ at the root, where

$w=\left[\begin{array}{c}{z}_{1}\\ {z}_{2}\end{array}\right]\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{w}_{1}=\left[\begin{array}{c}{z}_{2}\\ -{z}_{1}\end{array}\right]\mathrm{,}$ (12)

and $O$ is the n by n zero matrix. The Jacobian (9) can be rewritten in the following partitioned form

${F}_{v}\left(v\right)=\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}{w}_{1}\\ -{\left({B}_{1}w\right)}^{\text{T}}& 0& 0\end{array}\right]=\left[\begin{array}{cc}M& N\\ -{\left({B}_{1}w\right)}^{\text{T}}& {0}^{\text{T}}\end{array}\right]\mathrm{,}$ (13)

where $M$ and $N$ are as defined in (10) and (11), respectively. Note that, at the root,

$\left[\begin{array}{cc}\left(A-\alpha B\right)& \beta B\\ -\beta B& \left(A-\alpha B\right)\end{array}\right]\left[\begin{array}{c}{z}_{1}\\ {z}_{2}\end{array}\right]=\left[\begin{array}{c}\left(A-\alpha B\right){z}_{1}+\beta B{z}_{2}\\ \left(A-\alpha B\right){z}_{2}-\beta B{z}_{1}\end{array}\right]=0\mathrm{,}$

this implies that $\left[\begin{array}{c}{z}_{1}\\ {z}_{2}\end{array}\right]$ or its nonzero scalar multiple is a right nullvector of $M$ . In the same vein, we find

$\left[\begin{array}{cc}\left(A-\alpha B\right)& \beta B\\ -\beta B& \left(A-\alpha B\right)\end{array}\right]\left[\begin{array}{c}{z}_{2}\\ -{z}_{1}\end{array}\right]=\left[\begin{array}{c}\left(A-\alpha B\right){z}_{2}-\beta B{z}_{1}\\ -\left\{\left(A-\alpha B\right){z}_{1}+\beta B{z}_{2}\right\}\end{array}\right]=0\mathrm{,}$

and $\left[\begin{array}{c}{z}_{2}\\ -{z}_{1}\end{array}\right]$ or its nonzero scalar multiple is also a right nullvector of $M$ at the

root. Since the eigenvalue $\lambda $ of $\left(A\mathrm{,}B\right)$ is algebraically simple by assumption, (2) holds, and we now give explicit expressions for the left nullvector of $\left(A-\lambda B\right)$ in order to prove that the Jacobian has full row rank at the root. Observe that if we write $\psi ={\psi}_{1}+i{\psi}_{2}$ , where ${\psi}_{1}\mathrm{,}{\psi}_{2}\in {\mathbb{R}}^{n}$ , for any $\psi \in \mathcal{N}{\left(A-\lambda B\right)}^{H}\backslash \left\{0\right\}$ , then this implies

$\begin{array}{c}{\psi}^{H}\left(A-\lambda B\right)=\left({\psi}_{1}^{\text{T}}-i{\psi}_{2}^{\text{T}}\right)\left[\left(A-\alpha B\right)-i\beta B\right]\\ ={\psi}_{1}^{\text{T}}\left(A-\alpha B\right)-\beta {\psi}_{2}^{\text{T}}B-i\left[\beta {\psi}_{1}^{\text{T}}B+{\psi}_{2}^{\text{T}}\left(A-\alpha B\right)\right]={0}^{\text{T}}\mathrm{.}\end{array}$

Hence, ${\psi}_{1}^{\text{T}}\left(A-\alpha B\right)-\beta {\psi}_{2}^{\text{T}}B={0}^{\text{T}}$ and $\beta {\psi}_{1}^{\text{T}}B+{\psi}_{2}^{\text{T}}\left(A-\alpha B\right)={0}^{\text{T}}$ . The implication of this is that

$\begin{array}{c}\left[{\psi}_{1}^{\text{T}}\text{\hspace{1em}}{\psi}_{2}^{\text{T}}\right]M=\left[{\psi}_{1}^{\text{T}}\text{\hspace{1em}}{\psi}_{2}^{\text{T}}\right]\left[\begin{array}{cc}\left(A-\alpha B\right)& \beta B\\ -\beta B& \left(A-\alpha B\right)\end{array}\right]\\ =\left[{\psi}_{1}^{T}\left(A-\alpha B\right)-\beta {\psi}_{2}^{\text{T}}B\text{\hspace{1em}}\beta {\psi}_{1}^{\text{T}}B+{\psi}_{2}^{\text{T}}\left(A-\alpha B\right)\right]={0}^{\text{T}}\mathrm{.}\end{array}$

which means, $\left[{\psi}_{1}^{\text{T}},{\psi}_{2}^{\text{T}}\right]$ or its nonzero scalar multiple is a left nullvector of $M$ . Similarly,

$\begin{array}{c}\left[{\psi}_{2}^{\text{T}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\psi}_{1}^{\text{T}}\right]M=\left[{\psi}_{2}^{\text{T}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\psi}_{1}^{\text{T}}\right]\left[\begin{array}{cc}\left(A-\alpha B\right)& \beta B\\ -\beta B& \left(A-\alpha B\right)\end{array}\right]\\ =\left[\beta {\psi}_{1}^{\text{T}}B+{\psi}_{2}^{\text{T}}\left(A-\alpha B\right)\text{\hspace{1em}}-\left\{{\psi}_{1}^{\text{T}}\left(A-\alpha B\right)-\beta {\psi}_{2}^{\text{T}}B\right\}\right]={0}^{\text{T}}\mathrm{,}\end{array}$

and it shows that $\left[{\psi}_{2}^{\text{T}},-{\psi}_{1}^{\text{T}}\right]$ is also a left nullvector of $M$ .
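The two right nullvectors and two left nullvectors of $M$ at the root can be verified numerically. The sketch below is our illustration on a synthetic pencil with the known eigenpair ${e}_{1}+i{e}_{2}$ , $\lambda =0.5+i$ (a construction of ours, not from the paper); the left nullvector $\psi $ of $\left(A-\lambda B\right)$ is recovered from the SVD rather than assumed known:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)
T = np.diag(np.arange(2.0, n + 2.0))
T[0, 0] = T[1, 1] = 0.5
T[0, 1], T[1, 0] = 1.0, -1.0
A = B @ T                                   # pencil with eigenpair (e1 + i*e2, 0.5 + i)

alpha, beta = 0.5, 1.0
z1 = np.zeros(n); z1[0] = 1.0               # real part of z
z2 = np.zeros(n); z2[1] = 1.0               # imaginary part of z

M = np.block([[A - alpha * B, beta * B],
              [-beta * B, A - alpha * B]])  # the matrix (10) at the root
w  = np.concatenate([z1, z2])               # right nullvector [z1; z2]
w1 = np.concatenate([z2, -z1])              # right nullvector [z2; -z1]

# left nullvector psi of (A - lambda*B) from the smallest singular triple
U = np.linalg.svd(A - (alpha + 1j * beta) * B)[0]
psi = U[:, -1]
c1 = np.concatenate([psi.real, psi.imag])   # left nullvector [psi1; psi2] of M
c2 = np.concatenate([psi.imag, -psi.real])  # left nullvector [psi2; -psi1] of M

print(np.linalg.norm(M @ w), np.linalg.norm(M @ w1),
      np.linalg.norm(c1 @ M), np.linalg.norm(c2 @ M))
```

All four products vanish to rounding, confirming the nullvector structure derived above.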

We form the matrix $C$ whose columns are the two left nullvectors of $M$ at the root (in practice $C$ is not computed), as

$C=\left[\begin{array}{cc}{\psi}_{1}& {\psi}_{2}\\ {\psi}_{2}& -{\psi}_{1}\end{array}\right]\mathrm{.}$ (14)

Now, observe that the condition (2), implies

${\psi}^{H}Bz=\left[{\psi}_{1}^{\text{T}}B{z}_{1}+{\psi}_{2}^{\text{T}}B{z}_{2}\right]+i\left[{\psi}_{1}^{\text{T}}B{z}_{2}-{\psi}_{2}^{\text{T}}B{z}_{1}\right]\ne 0.$

Therefore, at the root, at least one of ${\psi}_{1}^{\text{T}}B{z}_{1}+{\psi}_{2}^{\text{T}}B{z}_{2}$ and ${\psi}_{1}^{\text{T}}B{z}_{2}-{\psi}_{2}^{\text{T}}B{z}_{1}$ must be nonzero; condition (2) excludes the possibility that both are zero.

Before we continue with the rest of the analysis, we pause a little to present the main result of this section, which shows that the Jacobian (9) has a one-dimensional nullspace at the root.

Theorem 2.1 Assume that the eigenpair $\left(z\mathrm{,}\lambda \right)$ of the pencil $\left(A\mathrm{,}B\right)$ is algebraically simple. If ${z}_{1}$ and ${z}_{2}$ are nonzero vectors, then $\varphi =\tau \left[{z}_{2}^{\text{T}}\mathrm{,}-{z}_{1}^{\text{T}}\mathrm{,0,0}\right]$ , $\tau \in \mathbb{R}\backslash \left\{0\right\}$ , is the unique (up to scaling) nonzero nullvector of ${F}_{v}\left(v\right)$ at the root.

Proof: See [5] .

After linearizing $F\left(v\right)=0$ , we have the following under-determined linear system of equations

${F}_{v}\left({v}^{\left(k\right)}\right)\Delta {v}^{\left(k\right)}=-F\left({v}^{\left(k\right)}\right)\mathrm{.}$ (15)

Let ${n}^{\left(k\right)}$ be the exact nullvector of the Jacobian ${F}_{v}\left({v}^{\left(k\right)}\right)$ . By adding the extra condition ${n}^{\left(k\right)\text{T}}\Delta {v}^{\left(k\right)}=0$ , which stems from Lemma 1.1, to the under-determined linear system of Equations (15), we obtain the following square linear system of equations

$\left[\begin{array}{c}{F}_{v}\left({v}^{\left(k\right)}\right)\\ {n}^{\left(k\right)}{}^{{}^{\text{T}}}\end{array}\right]\Delta {v}^{\left(k\right)}=-\left[\begin{array}{c}F\left({v}^{\left(k\right)}\right)\\ 0\end{array}\right]\mathrm{.}$ (16)

3. Square System of Equations for the Numerical Computation of the Complex Eigenvalues of a Matrix

In the preceding section, we saw that by adding an extra equation to the under-determined linear system of Equations (15), we obtained the square system of Equations (16). In practice, however, we would never compute ${n}^{\left(k\right)}$ ; instead, Theorem 2.1 guarantees the existence of a unique nullvector at the root. This is the motivation for the discussion in this section, in which we use ${\varphi}^{\left(k\right)}=\left[{z}_{2}^{\left(k\right)\text{T}}\mathrm{,}-{z}_{1}^{\left(k\right)\text{T}}\mathrm{,0,0}\right]$ as an approximation to ${n}^{\left(k\right)}$ in (16) and show that the solution obtained by solving (15) is equivalent to the solution obtained by solving

$\left[\begin{array}{c}{F}_{v}\left({v}^{\left(k\right)}\right)\\ {\varphi}^{\left(k\right)}{}^{{}^{\text{T}}}\end{array}\right]\Delta {v}^{\left(k\right)}=-\left[\begin{array}{c}F\left({v}^{\left(k\right)}\right)\\ 0\end{array}\right]\mathrm{,}$ (17)

in the absence of round-off errors. To do this, we will show that ${\varphi}^{\left(k\right)\text{T}}\Delta {v}^{\left(k\right)}=0$ for each k, and this is presented in the main result of this section: Theorem 3.1. Algorithm 1 is given for computing an algebraically simple eigenpair of the pencil $\left(A\mathrm{,}B\right)$ . Note that $M$ was shown to be singular at the root in Section 2; throughout this section we assume that $M$ is nonsingular whenever $v$ is not at the root.

First, we define the 2n by 2n matrix $J$ as (see, also [11] )

$J=\left[\begin{array}{cc}0& I\\ -I& 0\end{array}\right]\mathrm{,}$ (18)

and

$Jw=\left[\begin{array}{cc}0& I\\ -I& 0\end{array}\right]\left[\begin{array}{c}{z}_{1}\\ {z}_{2}\end{array}\right]=\left[\begin{array}{c}{z}_{2}\\ -{z}_{1}\end{array}\right]\mathrm{.}$ (19)

The matrix $J$ satisfies the following properties:

1) ${J}^{\text{T}}=-J$ .

2) ${J}^{\text{T}}J={I}_{2n}$ , where ${I}_{2n}$ is the 2n by 2n identity matrix.

3) ${J}^{2}=-{I}_{2n}\mathrm{.}$

4) The matrix $J$ commutes with $M$ , i.e., $JM=MJ$ .

5) For $w\in {\mathbb{R}}^{2n}$ , ${w}^{\text{T}}{B}_{1}Jw={w}^{\text{T}}J{B}_{1}w=0$ .

6) Let $u$ be an unknown vector that solves $Mu={B}_{1}w$ . By premultiplying both sides by $J$ we obtain $JMu=J{B}_{1}w$ and hence $MJu=J{B}_{1}w$ because of the commutativity of $M$ and $J$ . Therefore, if $Mu={B}_{1}w$ , then $Ju$ solves $M\left(Ju\right)=J{B}_{1}w$ .
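Properties 1)-5) can be verified numerically; a minimal sketch of ours (generic $M$ and random $w$ , assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, I], [-I, O]])             # the matrix (18)

alpha, beta = 0.3, 1.7
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)
M  = np.block([[A - alpha * B, beta * B], [-beta * B, A - alpha * B]])
B1 = np.block([[B, O], [O, B]])
w  = rng.standard_normal(2 * n)

checks = [
    np.allclose(J.T, -J),                   # 1) skew-symmetry
    np.allclose(J.T @ J, np.eye(2 * n)),    # 2) J^T J = I
    np.allclose(J @ J, -np.eye(2 * n)),     # 3) J^2 = -I
    np.allclose(J @ M, M @ J),              # 4) J commutes with M
    abs(w @ B1 @ (J @ w)) < 1e-10,          # 5) w^T B1 J w = 0 (B symmetric)
]
print(checks)
```

Property 5) relies on the symmetry of $B$ , which makes ${w}^{\text{T}}{B}_{1}Jw$ a difference of two equal bilinear forms.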

We begin by writing the linear system of Equations (15) explicitly. For ease of notation, we shall drop the superscripts and define ${w}^{+}=w+\Delta w$ where ${w}^{+}={w}^{\left(k+1\right)}$ , and replace ${w}^{\left(k\right)}$ and $\left[\Delta {z}_{1}^{\left(k\right)}{}^{{}^{\text{T}}}\mathrm{,}\Delta {z}_{2}^{\left(k\right)}{}^{{}^{\text{T}}}\right]$ with $w$ and $\Delta w$ respectively. This means that (15) can now be rewritten as:

$\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}Jw\\ -{\left({B}_{1}w\right)}^{\text{T}}& 0& 0\end{array}\right]\left[\begin{array}{c}\Delta w\\ \Delta \alpha \\ \Delta \beta \end{array}\right]=-\left[\begin{array}{c}Mw\\ -\frac{1}{2}{w}^{\text{T}}{B}_{1}w+\frac{1}{2}\end{array}\right]\mathrm{,}$ (20)

which is equivalent to the following system of equations

$M\Delta w-\Delta \alpha {B}_{1}w+\Delta \beta {B}_{1}Jw=-Mw$

$-{w}^{\text{T}}{B}_{1}\Delta w=\frac{1}{2}{w}^{\text{T}}{B}_{1}w-\frac{1}{2}\mathrm{.}$

After rearrangement, the first 2n equations reduce to

$M{w}^{+}-\Delta \alpha {B}_{1}w+\Delta \beta {B}_{1}Jw=0\mathrm{.}$ (21)

By multiplying both sides of the last equation by $-2$ and rearranging, we obtain:

$2{w}^{\text{T}}{B}_{1}\Delta w+{w}^{\text{T}}{B}_{1}w=1.$

This in turn reduces to

${w}^{\text{T}}{B}_{1}\left(w+2\Delta w\right)=1.$ (22)

Observe that since ${w}^{+}=w+\Delta w$ , $2\Delta w=2{w}^{+}-2w$ and $w+2\Delta w=2{w}^{+}-w$ . Now, ${w}^{\text{T}}{B}_{1}\left(w+2\Delta w\right)={w}^{\text{T}}{B}_{1}\left(2{w}^{+}-w\right)=2{w}^{\text{T}}{B}_{1}{w}^{+}-{w}^{\text{T}}{B}_{1}w$ . Consequently,

${w}^{\text{T}}{B}_{1}{w}^{+}=\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\mathrm{.}$ (23)

The combined set of Equations (21) and (23), which is the simplified form of (20), can be expressed as:

$\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}Jw\\ {\left({B}_{1}w\right)}^{\text{T}}& 0& 0\end{array}\right]\left[\begin{array}{c}{w}^{+}\\ \Delta \alpha \\ \Delta \beta \end{array}\right]=\left[\begin{array}{c}0\\ \frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\end{array}\right]\mathrm{.}$ (24)

We assume that the 2n by 2n matrix $M$ is nonsingular except at the root; this assumption forms the basis for the following discussion, in which we show that, when not at the root, ${\varphi}^{\left(k\right)\text{T}}\Delta {v}^{\left(k\right)}=0$ .

First of all, let the exact nullvector $n$ of

${F}_{v}\left(v\right)=\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}Jw\\ -{\left({B}_{1}w\right)}^{\text{T}}& 0& 0\end{array}\right]\mathrm{,}$

be defined as $n=\left[{n}_{w}^{\text{T}}\mathrm{,}{n}_{\alpha}\mathrm{,}{n}_{\beta}\right]$ , where ${n}_{w}\in {\mathbb{R}}^{2n}$ , ${n}_{\alpha}\mathrm{,}{n}_{\beta}$ are real scalars, $Jw$ and $M$ are defined respectively by (19) and (10). Hence,

$\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}Jw\\ -{\left({B}_{1}w\right)}^{\text{T}}& 0& 0\end{array}\right]\left[\begin{array}{c}{n}_{w}\\ {n}_{\alpha}\\ {n}_{\beta}\end{array}\right]=0\mathrm{,}$

then after expanding the matrix-vector multiplication, we obtain

$M{n}_{w}-{n}_{\alpha}{B}_{1}w+{n}_{\beta}\left({B}_{1}Jw\right)=0$ (25)

${w}^{\text{T}}{B}_{1}{n}_{w}=0.$ (26)

We make clear at this juncture that the nullvector $n=\left[{n}_{w}^{\text{T}}\mathrm{,}{n}_{\alpha}\mathrm{,}{n}_{\beta}\right]$ is not exactly the same as $\varphi =\left[{\left(Jw\right)}^{\text{T}}\mathrm{,0,0}\right]$ : the latter has the form of the exact nullvector at the root but is evaluated at the kth iterate, while the former is a nullvector even when not at the root.

The first block row of (24) can be rewritten as

$M{w}^{+}=\Delta \alpha {B}_{1}w-\Delta \beta {B}_{1}Jw\mathrm{.}$ (27)

This means that we could solve (24) by solving

$Mu={B}_{1}w\mathrm{,}$ (28)

for $u$ , after which the solution of (27) is given by

${w}^{+}=\Delta \alpha u-\Delta \beta Ju\mathrm{.}$ (29)

With this expression for ${w}^{+}$ , it can be observed that

$\begin{array}{c}M{w}^{+}=\Delta \alpha Mu-\Delta \beta MJu=\Delta \alpha {B}_{1}w-\Delta \beta JMu\\ =\Delta \alpha {B}_{1}w-\Delta \beta J{B}_{1}w=\Delta \alpha {B}_{1}w-\Delta \beta {B}_{1}Jw\mathrm{.}\end{array}$

This shows that ${w}^{+}$ is well defined. Furthermore, from (25)

$M{n}_{w}={n}_{\alpha}{B}_{1}w-{n}_{\beta}\left({B}_{1}Jw\right)\mathrm{,}$

using the fact that $J$ commutes with ${B}_{1}$ , together with (28) and the nonsingularity of $M$ away from the root, gives

${n}_{w}={n}_{\alpha}u-{n}_{\beta}Ju\mathrm{.}$ (30)

Since $w$ is ${B}_{1}$ -orthogonal to ${n}_{w}$ by virtue of Equation (26), taking the ${B}_{1}$ -inner product of both sides of the above with $w$ yields

${w}^{\text{T}}{B}_{1}{n}_{w}={n}_{\alpha}{w}^{\text{T}}{B}_{1}u-{n}_{\beta}{w}^{\text{T}}{B}_{1}Ju=0.$

from which, fixing the arbitrary scaling of the nullvector $n$ , we may take

${n}_{\alpha}={w}^{\text{T}}{B}_{1}Ju\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{n}_{\beta}={w}^{\text{T}}{B}_{1}u\mathrm{.}$ (31)
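Away from the root, the quantities $u$ , ${n}_{\alpha}$ , ${n}_{\beta}$ and ${n}_{w}$ from (28), (30) and (31) assemble an exact nullvector of the Jacobian (13). This can be checked numerically; the sketch below uses generic data of ours (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, I], [-I, O]])
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)
B1 = np.block([[B, O], [O, B]])
alpha, beta = 0.2, 0.9                      # generic point: M nonsingular
M = np.block([[A - alpha * B, beta * B], [-beta * B, A - alpha * B]])
w = rng.standard_normal(2 * n)

u = np.linalg.solve(M, B1 @ w)              # solve (28)
n_a = w @ B1 @ (J @ u)                      # n_alpha from (31)
n_b = w @ B1 @ u                            # n_beta from (31)
n_w = n_a * u - n_b * (J @ u)               # n_w from (30)

# n = [n_w; n_alpha; n_beta] should be a nullvector of the Jacobian (13)
Fv = np.block([[M, -(B1 @ w)[:, None], (B1 @ (J @ w))[:, None]],
               [-(B1 @ w)[None, :], np.zeros((1, 2))]])
nv = np.concatenate([n_w, [n_a, n_b]])
resid = np.linalg.norm(Fv @ nv)
print(resid)
```

The first block row vanishes by (25) and the commutativity of $J$ and $M$ , and the last row vanishes by the ${B}_{1}$ -orthogonality (26).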

Consider the problem of solving the under-determined linear system of Equations (20) for the $2n+2$ real unknowns $\Delta v=\left[\Delta {w}^{\text{T}}\mathrm{,}\Delta \alpha \mathrm{,}\Delta \beta \right]$ . It was stated in Lemma 1.1 that the minimum norm solution to an under-determined linear system of equations is orthogonal to the nullspace. It is an application of this result that yields the following important relationship:

$0={n}^{\text{T}}\Delta v={n}_{w}^{\text{T}}\Delta w+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \mathrm{.}$ (32)

If we add the nullvector $n$ to the last row of (24), then

$\left[\begin{array}{ccc}M& -{B}_{1}w& {B}_{1}Jw\\ {\left({B}_{1}w\right)}^{\text{T}}& 0& 0\\ {n}_{w}^{T}& {n}_{\alpha}& {n}_{\beta}\end{array}\right]\left[\begin{array}{c}{w}^{+}\\ \Delta \alpha \\ \Delta \beta \end{array}\right]=\left[\begin{array}{c}0\\ \frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\\ {n}_{w}^{\text{T}}w\end{array}\right]\mathrm{.}$ (33)

By expanding the second to the last row, ${w}^{\text{T}}{B}_{1}{w}^{+}=\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)$ . But from

(29), ${w}^{+}=\Delta \alpha u-\Delta \beta Ju$ . Taking the ${B}_{1}$ -inner product of both sides with $w$ yields

${w}^{\text{T}}{B}_{1}{w}^{+}=\Delta \alpha \left({w}^{\text{T}}{B}_{1}u\right)-\Delta \beta \left({w}^{\text{T}}{B}_{1}Ju\right)=\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\mathrm{.}$

Using the definition (31) for ${n}_{\alpha}$ and ${n}_{\beta}$ , we obtain

${n}_{\beta}\Delta \alpha -{n}_{\alpha}\Delta \beta =\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\mathrm{,}$ (34)

where the unknowns $\Delta \alpha $ and $\Delta \beta $ are to be determined; since (34) is one equation in two unknowns, we need an extra equation. The last row of the matrix-vector multiplication (cf. (33)) above comes from (32) since

$\begin{array}{l}{n}_{w}^{\text{T}}{w}^{+}+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \\ ={n}_{w}^{\text{T}}\left(w+\Delta w\right)+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \\ ={n}_{w}^{\text{T}}w+\underbrace{\left({n}_{w}^{\text{T}}\Delta w+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \right)}_{=0}={n}_{w}^{\text{T}}w\mathrm{.}\end{array}$

If we substitute the expression (30) for ${n}_{w}$ and (29) for ${w}^{+}$ into the left hand side, then one obtains

$\left[{n}_{\alpha}{u}^{\text{T}}-{n}_{\beta}{\left(Ju\right)}^{\text{T}}\right]\left[\Delta \alpha u-\Delta \beta Ju\right]+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta ={n}_{w}^{\text{T}}w\mathrm{.}$

Furthermore, by expanding the first term on the left-hand side, the cross terms vanish because ${u}^{\text{T}}Ju=0$ ( $J$ is skew-symmetric), so that

$\begin{array}{l}\left[{n}_{\alpha}{u}^{\text{T}}-{n}_{\beta}{\left(Ju\right)}^{\text{T}}\right]\left(\Delta \alpha u-\Delta \beta Ju\right)\\ ={n}_{\alpha}\Delta \alpha {u}^{\text{T}}u+{n}_{\beta}\Delta \beta {u}^{\text{T}}{J}^{\text{T}}Ju\\ ={n}_{\alpha}\Delta \alpha {\Vert u\Vert}^{2}+{n}_{\beta}\Delta \beta {\Vert u\Vert}^{2}\mathrm{.}\end{array}$

Consequently,

$\begin{array}{l}{n}_{\alpha}\Delta \alpha {\Vert u\Vert}^{2}+{n}_{\beta}\Delta \beta {\Vert u\Vert}^{2}+{n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \\ =\left(1+{\Vert u\Vert}^{2}\right)\left({n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta \right)={n}_{w}^{\text{T}}w\mathrm{.}\end{array}$

Observe that because $u$ is real, $\left(1+{\Vert u\Vert}^{2}\right)$ is nonzero. Accordingly, after dividing both sides by $\left(1+{\Vert u\Vert}^{2}\right)$

${n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta =\frac{{n}_{w}^{\text{T}}w}{\left(1+{\Vert u\Vert}^{2}\right)}\mathrm{.}$ (35)

We combine the two equations (34) and (35) below

$\left[\begin{array}{cc}{n}_{\beta}& -{n}_{\alpha}\\ {n}_{\alpha}& {n}_{\beta}\end{array}\right]\left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]=\left[\begin{array}{c}\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)\\ \frac{{n}_{w}^{\text{T}}w}{\left(1+{\Vert u\Vert}^{2}\right)}\end{array}\right]\mathrm{,}$ (36)

to compute $\Delta \alpha $ and $\Delta \beta $ simultaneously. The matrix on the left-hand side is nonsingular except at the root (where all of its entries are zero), because its determinant is ${n}_{\alpha}^{2}+{n}_{\beta}^{2}$ . Equation (35) can now be applied to simplify

$\begin{array}{c}{w}^{\text{T}}{B}_{1}J{w}^{+}={w}^{\text{T}}{B}_{1}J\left(\Delta \alpha u-\Delta \beta Ju\right)\\ ={w}^{\text{T}}{B}_{1}\left(\Delta \alpha Ju+\Delta \beta u\right)\\ =\Delta \alpha \left({w}^{\text{T}}{B}_{1}Ju\right)+\Delta \beta \left({w}^{\text{T}}{B}_{1}u\right)\\ ={n}_{\alpha}\Delta \alpha +{n}_{\beta}\Delta \beta =0.\end{array}$ (37)

Notice that we have used the property ${J}^{2}=-{I}_{2n}$ to arrive at the second step above and the definition (29) for ${w}^{+}$ .

Next, we establish the orthogonality of $\varphi $ and $\Delta v$ in the next key result. Before we do so, notice from Theorem 2.1 that, at the root, $\varphi $ is a scalar multiple of $\left[{z}_{2}^{\text{T}}\mathrm{,}-{z}_{1}^{\text{T}}\mathrm{,0,0}\right]$ and, by the definition of $J$ in (18), we can also write $\varphi =\left[{\left(Jw\right)}^{\text{T}}\mathrm{,0,0}\right]$ , with $w=\left[{z}_{1}^{\text{T}}\mathrm{,}{z}_{2}^{\text{T}}\right]$ . The result holds whether or not $v={v}^{\left(k\right)}$ is at the root, but the analysis used to establish the orthogonality assumes that $M$ is nonsingular when not at the root. Therefore, after presenting Algorithm 1 (to follow shortly), we show that the same result holds when $M$ is singular at the root.

Theorem 3.1 Let $\varphi $ be an approximation to the exact nullvector $n$ of ${F}_{v}\left(v\right)$ . Then, $\varphi $ is orthogonal to $\Delta v$ .

Proof: Recall that $v=\left[{w}^{\text{T}}\mathrm{,}\alpha \mathrm{,}\beta \right]$ , ${v}^{+}=\left[{w}^{+\text{T}}\mathrm{,}{\alpha}^{+}\mathrm{,}{\beta}^{+}\right]$ and $\varphi =\left[{\left(Jw\right)}^{\text{T}}\mathrm{,0,0}\right]$ . This implies

$\begin{array}{c}{\varphi}^{\text{T}}\Delta v={\varphi}^{\text{T}}\left({v}^{+}-v\right)={\left(Jw\right)}^{\text{T}}\left({w}^{+}-w\right)={w}^{\text{T}}{J}^{\text{T}}{w}^{+}-{w}^{\text{T}}{J}^{\text{T}}w\\ ={w}^{\text{T}}Jw-{w}^{\text{T}}J{w}^{+}=-{w}^{\text{T}}J{w}^{+}=\mathrm{0,}\end{array}$ (38)

showing that ${\varphi}^{\text{T}}\Delta v=0$ . In arriving at the last step above, we have used the properties of $J$ and a special case of (37) where ${B}_{1}={I}_{2n}$ .

We present Algorithm 1, which involves the solution of two linear systems: the 2n by 2n linear system (28) and the 2 by 2 linear system (36).

Stop Algorithm 1 as soon as

$\Vert \Delta {v}^{\left(k\right)}\Vert \le tol\mathrm{,}$

where $\Delta v=\left[{w}^{+}-w\mathrm{,}\omega \right]$ . The above analysis shows that ${\varphi}^{\text{T}}\Delta v=0$ when ${v}^{\left(k\right)}$ is not at the root. Next, we want to show that the same result holds at the root.
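The steps derived above, (28), (31), (36), (29) and the stopping test, can be assembled into a sketch of Algorithm 1. The following is our minimal reconstruction from the equations in the text (not the authors' published algorithm listing), run on a synthetic pencil of ours with known eigenvalue $0.5+i$ :

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)
T = np.diag(np.arange(2.0, n + 2.0))
T[0, 0] = T[1, 1] = 0.5
T[0, 1], T[1, 0] = 1.0, -1.0
A = B @ T                                  # pencil (A, B) has eigenvalue 0.5 + i
I, O = np.eye(n), np.zeros((n, n))
J  = np.block([[O, I], [-I, O]])
B1 = np.block([[B, O], [O, B]])

# starting guess near the eigenpair (e1 + i*e2, 0.5 + i)
w = np.zeros(2 * n)
w[0] = w[n + 1] = 1.0
w = w / np.sqrt(w @ B1 @ w) + 1e-3 * rng.standard_normal(2 * n)
alpha, beta = 0.5 + 1e-3, 1.0 - 1e-3

for _ in range(10):
    M = np.block([[A - alpha * B, beta * B], [-beta * B, A - alpha * B]])
    u = np.linalg.solve(M, B1 @ w)                      # solve (28)
    n_a, n_b = w @ B1 @ (J @ u), w @ B1 @ u             # (31)
    n_w = n_a * u - n_b * (J @ u)                       # (30)
    rhs = np.array([0.5 * (w @ B1 @ w + 1.0),
                    (n_w @ w) / (1.0 + u @ u)])
    da, db = np.linalg.solve(np.array([[n_b, -n_a],
                                       [n_a, n_b]]), rhs)  # solve (36)
    w_new = da * u - db * (J @ u)                       # (29)
    step = np.linalg.norm(np.concatenate([w_new - w, [da, db]]))
    w, alpha, beta = w_new, alpha + da, beta + db
    if step <= 1e-10:                                   # stopping test
        break

z = w[:n] + 1j * w[n:]
print(alpha, beta, np.linalg.norm((A - (alpha + 1j * beta) * B) @ z))
```

In our runs the iterates settle on the $B$ -normalized eigenpair in a handful of steps, consistent with the quadratic convergence claimed for the method.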

In a manner analogous to the proof of Lemma 2.1, we premultiply both sides of the first block row of (24) by ${C}^{\text{T}}$ , where $C$ is the 2n by 2 real matrix defined by (14), consisting of the left nullvectors of $M$ . If $M$ and $N$ are as defined respectively in (10) and (11), then this is the same as

${C}^{\text{T}}M{w}^{+}+{C}^{\text{T}}N\left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]=0\mathrm{.}$

But by the definition of $C$ , the first term on the left hand side of the equation above is zero, since ${C}^{\text{T}}M={0}^{\text{T}}$ . It can be recalled from the proof of Lemma 2.1 that the 2 by 2 real matrix $H={C}^{\text{T}}N$ is nonsingular at the root. This implies

${C}^{\text{T}}N\left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]=H\left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]=\left[\begin{array}{cc}-\left({\psi}_{1}^{\text{T}}B{z}_{1}+{\psi}_{2}^{\text{T}}B{z}_{2}\right)& {\psi}_{1}^{\text{T}}B{z}_{2}-{\psi}_{2}^{\text{T}}B{z}_{1}\\ {\psi}_{1}^{\text{T}}B{z}_{2}-{\psi}_{2}^{\text{T}}B{z}_{1}& {\psi}_{1}^{\text{T}}B{z}_{1}+{\psi}_{2}^{\text{T}}B{z}_{2}\end{array}\right]\left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]=0\mathrm{.}$

Accordingly, $\Delta \alpha =\Delta \beta =0$ because of the nonsingularity of $H$ . Therefore,

$M{w}^{+}=0\mathrm{.}$ (39)

At the root, $M$ has two nonzero nullvectors and hence is singular. Together with the above, this implies that ${w}^{+}$ lies in the nullspace of $M$ . But we have already established that the nullspace of $M$ is spanned by $w={\left[{z}_{1}^{\text{T}}\mathrm{,}{z}_{2}^{\text{T}}\right]}^{\text{T}}$ and ${w}_{1}=Jw={\left[{z}_{2}^{\text{T}}\mathrm{,}-{z}_{1}^{\text{T}}\right]}^{\text{T}}$ . Hence, we can write

${w}^{+}=\mu w+\tau {w}_{1}\mathrm{.}$

Now, from the last equation in (24),

$\begin{array}{c}\frac{1}{2}\left({w}^{\text{T}}{B}_{1}w+1\right)={w}^{\text{T}}{B}_{1}{w}^{+}=\mu {w}^{\text{T}}{B}_{1}w+\tau {w}^{\text{T}}{B}_{1}{w}_{1}\\ =\mu {w}^{\text{T}}{B}_{1}w+\tau \left({z}_{1}^{\text{T}}B{z}_{2}-{z}_{2}^{\text{T}}B{z}_{1}\right)=\mu {w}^{\text{T}}{B}_{1}w\mathrm{.}\end{array}$

Consequently,

$\mu =\frac{{w}^{\text{T}}{B}_{1}w+1}{2{w}^{\text{T}}{B}_{1}w}\mathrm{.}$

But at the root, ${w}^{\text{T}}{B}_{1}w={z}_{1}^{\text{T}}B{z}_{1}+{z}_{2}^{\text{T}}B{z}_{2}=1$ . Therefore, $\mu =1$ , ${w}^{+}=w$ , ${z}_{1}^{+}={z}_{1}$ and ${z}_{2}^{+}={z}_{2}$ . This will now be used to deduce the following corollary at the root.
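The two steps above, the vanishing cross term and $\mu =1$ at the root, can be checked numerically. The sketch below uses an arbitrary symmetric positive definite $B$ and purely illustrative data:

```python
import numpy as np

# Check that w^T B1 w1 = z1^T B z2 - z2^T B z1 = 0 for symmetric B,
# and that mu = (w^T B1 w + 1) / (2 w^T B1 w) = 1 once w^T B1 w = 1.
rng = np.random.default_rng(1)
n = 4
R = rng.standard_normal((n, n))
B = R @ R.T + n * np.eye(n)                    # symmetric positive definite
Z = np.zeros((n, n))
B1 = np.block([[B, Z], [Z, B]])

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
s = np.sqrt(z1 @ B @ z1 + z2 @ B @ z2)         # scale so w^T B1 w = 1
z1, z2 = z1 / s, z2 / s

w = np.concatenate([z1, z2])
w1 = np.concatenate([z2, -z1])                 # = Jw

assert abs(w @ B1 @ w1) < 1e-12                # cross term cancels
mu = (w @ B1 @ w + 1.0) / (2.0 * (w @ B1 @ w))
assert abs(mu - 1.0) < 1e-12
```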

Corollary 3.1 Let $\varphi =\left[{\left(Jw\right)}^{\text{T}}\mathrm{,0,0}\right]$ . Let ${v}^{+}={\left[{w}^{+}\mathrm{,}{\alpha}^{+}\mathrm{,}{\beta}^{+}\right]}^{\text{T}}$ . Then, $\varphi $ is orthogonal to $\Delta v$ at the root.

Proof: This follows from the second to the last line of the proof of Theorem 3.1 (cf., Equation (38)) where ${w}^{+}=w$ . Hence,

${\varphi}^{\text{T}}\Delta v=-{w}^{\text{T}}J{w}^{+}=-{w}^{\text{T}}Jw=0.$

4. Inexact Inverse Iteration with Preconditioning for Solving (28)

In Section 2, we found two nonzero nullvectors of $M$ at the root. Because of this property of $M$ , in this section we describe an inexact inverse iteration technique for solving the large sparse system of Equations (28) in step 2 of Algorithm 1, and we present Algorithm 2 and Algorithm 3. Results of a numerical experiment supporting the theory are given in Section 5.

We give the following version of inexact inverse iteration in Algorithm 2, using a fixed tolerance. Note that because of the special nature of $M$ at the root, the choice of preconditioner is crucial for convergence to the desired accuracy. The inexact linear solver that we use is preconditioned GMRES [9] , with the following block upper triangular preconditioner,

$\mathcal{P}\approx \left[\begin{array}{cc}A-\alpha B& \beta B\\ 0& A-\alpha B\end{array}\right]\mathrm{.}$ (40)
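As an illustration, a preconditioner of the form (40) can be applied inside GMRES through a single sparse LU factorization of $\mathcal{P}$ . The sketch below uses SciPy with a well-conditioned tridiagonal stand-in for $A-\alpha B$ and $B=I$ ; the matrix, shift and right-hand side are illustrative, not the bwm200 data:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative data: a tridiagonal stand-in for A - alpha*B, with B = I.
n = 200
rng = np.random.default_rng(2)
Ashift = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
B = sp.identity(n, format="csc")
beta = 2.5

# The 2n-by-2n coefficient matrix of (28) and the block upper
# triangular preconditioner P of (40).
M = sp.bmat([[Ashift, beta * B], [-beta * B, Ashift]], format="csc")
P = sp.bmat([[Ashift, beta * B], [None, Ashift]], format="csc")

# One sparse LU of P gives the preconditioner solve for every GMRES step.
Plu = spla.splu(P)
prec = spla.LinearOperator(M.shape, matvec=Plu.solve)

rhs = rng.standard_normal(2 * n)
u, info = spla.gmres(M, rhs, M=prec)
res = np.linalg.norm(M @ u - rhs) / np.linalg.norm(rhs)
assert info == 0 and res < 1e-4
```

With the exact LU of $\mathcal{P}$ , the preconditioned spectrum is tightly clustered and GMRES converges in a handful of iterations; in practice an incomplete factorization of $\mathcal{P}$ could be substituted.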

Next, we present Algorithm 3, the inexact inverse iteration counterpart of Algorithm 1. The stopping criterion for the outer iteration in Algorithm 3 depends on the norms of the eigenvalue residuals, that is

$\Vert {r}_{1}^{\left(k\right)}\Vert =\Vert \left(A-{\alpha}^{\left(k\right)}B\right){z}_{1}^{\left(k\right)}+{\beta}^{\left(k\right)}B{z}_{2}^{\left(k\right)}\Vert \le tol\mathrm{,}$

and

$\Vert {r}_{2}^{\left(k\right)}\Vert =\Vert \left(A-{\alpha}^{\left(k\right)}B\right){z}_{2}^{\left(k\right)}-{\beta}^{\left(k\right)}B{z}_{1}^{\left(k\right)}\Vert \le tol\mathrm{,}$

and

$\Vert \left[\begin{array}{c}\Delta \alpha \\ \Delta \beta \end{array}\right]\Vert \le tol\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Vert {w}^{+}-w\Vert \le tol\mathrm{.}$
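As a sanity check on the residual definitions, both norms vanish at a genuine complex eigenpair. The sketch below (illustrative random data, $B=I$ ) builds an eigenpair of a real matrix, splits it into real and imaginary parts, and confirms that ${r}_{1}$ and ${r}_{2}$ are zero to rounding:

```python
import numpy as np

# At an exact eigenpair Az = (alpha + i*beta) z with z = z1 + i*z2,
# the real part gives (A - alpha I) z1 + beta z2 = 0 and the imaginary
# part gives (A - alpha I) z2 - beta z1 = 0.
rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))

lam, V = np.linalg.eig(A)
k = int(np.argmax(np.abs(lam.imag)))   # pick the "most complex" eigenvalue
alpha, beta = lam[k].real, lam[k].imag
z1, z2 = V[:, k].real, V[:, k].imag

r1 = (A - alpha * np.eye(n)) @ z1 + beta * z2
r2 = (A - alpha * np.eye(n)) @ z2 - beta * z1
assert np.linalg.norm(r1) < 1e-10 and np.linalg.norm(r2) < 1e-10
```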

5. Numerical Experiments

As mentioned earlier, the sparse linear system of equations in step 2 of Algorithm 1 is solved with an LU-type factorization of $M$ , which is expensive; moreover, the L and U factors may have more nonzero elements than $M$ . In this section, our main goal is to use preconditioned GMRES with the block upper triangular preconditioner $\mathcal{P}$ in (40) to solve the system $Mu={B}_{1}w$ inexactly. We do this by considering a single numerical experiment with a fixed and a decreasing inner tolerance. Results are presented by means of tables and figures.

Example 5.1

Consider the 200 by 200 matrix $A$ bwm200.mtx from the Matrix Market library [12] . It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem with $B=I$ was also studied in [13] , and we are interested in finding the rightmost eigenvalue of $A$ , which is closest to the imaginary axis, and its corresponding eigenvector.

In this example, we take ${\alpha}^{\left(0\right)}=0.0\mathrm{,}{\beta}^{\left(0\right)}=2.5$ in line with [13] , and ${z}_{1}^{\left(0\right)}=\mathbf{1}/\left(2\Vert \mathbf{1}\Vert \right)$ and ${z}_{2}^{\left(0\right)}=\mathbf{1}/\Vert \mathbf{1}\Vert $ , where $\mathbf{1}$ is the vector of all ones.
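Written out explicitly (a sketch; the scaling of the starting vectors is our reading of the formulas above), these starting guesses are:

```python
import numpy as np

# Starting guesses for Example 5.1 as we read them: 1 denotes the
# all-ones vector, n = 200 for bwm200, norms are Euclidean.
n = 200
ones = np.ones(n)
alpha0, beta0 = 0.0, 2.5
z1_0 = ones / (2.0 * np.linalg.norm(ones))   # ||z1_0|| = 1/2
z2_0 = ones / np.linalg.norm(ones)           # ||z2_0|| = 1
```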

In Table 1 and Table 2, we present results for the computation of the eigenpair $\left(z\mathrm{,}\lambda \right)$ for the matrix in Example 5.1, stopping once the norms of ${\left[\Delta \alpha \mathrm{,}\Delta \beta \right]}^{\text{T}}$ and of the eigenvalue residuals ${r}_{1}^{\left(k\right)}$ and ${r}_{2}^{\left(k\right)}$ are smaller than the outer tolerance ${\tau}_{\text{outer}}=2.5\times {10}^{-14}$ . In Table 1, f denotes the number of inner iterations used by preconditioned GMRES to satisfy the fixed inner tolerance $\tau =0.6$ , while in Table 2, d denotes the number of inner iterations used to satisfy the decreasing inner tolerance ${\tau}_{\text{inner}}=\mathrm{min}\left\{\mathrm{0.6,0.6}\Vert {r}_{1}^{\left(k\right)}\Vert \right\}$ . Quadratic convergence to $\lambda =1.81999\text{e}-05+2.13950i$ is easily observed from the second up to the seventh iterates in columns five and six of Table 2. However, this quadratic convergence is lost in the last iterate, which could be because we are solving a nearly singular system as we approach the root. In columns five and six of Table 1 we observed superlinear convergence, and, as we approach the root, more inner iterations were needed to satisfy the decreasing inner tolerance than the fixed one.

Table 1. Convergence to the eigenvalue $\lambda =1.81999\text{e}-05+2.13950i$ with a fixed inner tolerance for Example 5.1. The last column shows the number of inner iterations needed to satisfy the fixed inner tolerance $\tau =0.6$ .

Table 2. Quadratic convergence to the eigenvalue $\lambda =1.81999\text{e}-05+2.13950i$ with a decreasing inner tolerance for Example 5.1. The last column shows the number of inner iterations needed to satisfy the decreasing inner tolerance $\tau =\mathrm{min}\left\{\mathrm{0.6,0.6}\Vert {r}_{1}^{\left(k\right)}\Vert \right\}$ .

Figure 1. Convergence history of the eigenvalue residuals on Example 5.1 with a fixed tolerance ${\tau}_{\text{inner}}=0.6$ .

Figure 1 and Figure 2 show plots of the norm of the eigenvalue residuals against the outer iterations with fixed and decreasing inner tolerances, respectively. While the norm of the eigenvalue residuals decayed almost superlinearly in Figure 1, we observed a quadratic decrease in Figure 2. It is quite surprising that Algorithm 3 works at all, because $M$ is singular at the root, which means we solved a singular system at the root.

Figure 2. Convergence history of the eigenvalue residuals on Example 5.1 with a decreasing tolerance ${\tau}_{\text{inner}}=\mathrm{min}\left\{\mathrm{0.6,0.6}\Vert {r}_{1}^{\left(k\right)}\Vert \right\}$ .

6. Conclusions

While Ruhe ( [2] , Section 3) used the normalisation ${c}^{H}z=1$ and solved the resulting $\left(n+1\right)$ by $\left(n+1\right)$ nonlinear system of equations to obtain ${\left[z\mathrm{,}\lambda \right]}^{\text{T}}$ , we have shown that, with the addition of the non-differentiable normalisation ${z}^{H}Bz=1$ , it is still possible to convert the resulting under-determined system of nonlinear equations into a square one.

In this work, we obtained Algorithm 1, which consists of a 2n-by-2n system of equations (the same as those in [13] ) combined with a 2-by-2 system. A numerical example shows that using an LU solver on the one hand, and preconditioned GMRES as an inexact solver on the other, to solve the large sparse 2n-by-2n system of Equations (28), followed in each case by the 2-by-2 system, gives similar results in the limit. By and large, the algorithms presented in this paper rely on good initial guesses to the desired eigenpair.

Acknowledgements

The first author acknowledges funds provided by the University of Bath, United Kingdom during his PhD as well as an anonymous referee for useful comments.

References

[1] Stewart, G.W. (2001) Matrix Algorithms. Vol. II, Eigensystems, SIAM.

[2] Ruhe, A. (1973) Algorithms for the Nonlinear Eigenvalue Problem. SIAM Journal on Numerical Analysis, 10, 674-689.

https://doi.org/10.1137/0710059

[3] Keller, H.B. (1977) Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems. In: Rabinowitz, P., Ed., Applications of Bifurcation Theory, Academic Press, New York, 359-384.

[4] Deuflhard, P. (2004) Newton Methods for Nonlinear Problems. Springer, 174-175.

[5] Akinola, R.O. (2014) Theoretical Expression for the Nullvector of the Jacobian: Inverse Iteration with a Complex Shift. International Journal of Innovation in Science and Mathematics, 2, 367-371.

[6] Akinola, R.O. and Spence, A. (2014) Two-Norm Normalization for the Matrix Pencil: Inverse Iteration with a Complex Shift. International Journal of Innovation in Science and Mathematics, 2, 435-439.

[7] Akinola, R.O. and Spence, A. (2015) Numerical Computation of the Complex Eigenvalues of a Matrix by solving a Square System of Equations. Journal of Natural Sciences Research, 5, 144-157.

[8] Akinola, R.O. (2015) Computing the Complex Eigenpair of a Large Sparse Matrix in Complex Arithmetic. International Journal of Pure & Engineering Mathematics (IJPEM), 3, 137-158.

[9] Saad, Y. and Schultz, M.H. (1986) GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 7, 856-869.

https://doi.org/10.1137/0907058

[10] Boyd, S. (2008/2009) Lecture 8: Least Norm Solutions of Under-Determined Equations, EE263 Autumn.

[11] Freitag, M.A. and Spence, A. (2009) The Calculation of the Distance to Instability by the Computation of a Jordan Block, Linear and Nonlinear Eigen Problems for PDEs, 274-275.

[12] Boisvert, R., Pozo, R., Remington, K., Miller, B. and Lipman, R. Matrix Market.

http://math.nist.gov/MatrixMarket/

[13] Parlett, B.N. and Saad, Y. (1987) Complex Shift and Invert Strategies for Real Matrices. Linear Algebra and Its Applications, 88-89, 575-595.