Block Decompositions and Applications of Generalized Reflexive Matrices

Author(s)
Hsin-Chu Chen

ABSTRACT

Generalized reflexive matrices are a special class of matrices $A\in {\mathcal{C}}^{n\times m}$ that have the relation $A=PAQ$, where P and Q are some generalized reflection matrices. The nontrivial cases ($P\ne \pm I$ or $Q\ne \pm I$) of this class of matrices occur very often in many scientific and engineering applications. They are also a generalization of centrosymmetric matrices and reflexive matrices. The main purpose of this paper is to present block decomposition schemes for generalized reflexive matrices of various types and to obtain their decomposed explicit block-diagonal structures. The decompositions make use of unitary equivalence transformations and, therefore, preserve the singular values of the matrices. They lead to more efficient sequential computations and at the same time induce large-grain parallelism as a by-product, making them computationally attractive for large-scale applications. A numerical example is employed to show the usefulness of the developed explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller and independent subproblems.


KEYWORDS

Generalized Reflexive Matrices, Reflexive Matrices, Centrosymmetric Matrices, Generalized Simultaneous Diagonalization, Simultaneous Diagonalization, Linear Least-Square Problems


1. Introduction

In [1] we introduced two special classes of rectangular matrices A and B that have the relations

$A=PAQ\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}B=-PBQ,\mathrm{}A,B\in {\mathcal{C}}^{n\times m},$

where P and Q are two generalized reflection matrices of dimensions n and m, respectively. A matrix X is said to be a generalized reflection matrix if $X={X}^{*}={X}^{-1}$ , i.e., if X is unitary and Hermitian. The matrices A (respectively B) are referred to as generalized reflexive (respectively antireflexive) matrices. They are a generalization of centrosymmetric (anti-centrosymmetric) matrices, whose special properties have been studied extensively [2] - [11] , and a generalization of reflexive (antireflexive) matrices U (V), exploited in [1] [12] [13] , that have the relations

$U=PUP\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}V=-PVP,\mathrm{}U,\mathrm{}V\in {\mathcal{C}}^{n\times n},$

where P is some reflection (symmetric signed permutation) matrix.

Like U, the generalized reflexive matrices A arise naturally and frequently in physical problems with some sort of reflexive symmetry. Although the generalized antireflexive matrices B also possess many interesting properties, in this paper we shall focus only on generalized reflexive matrices. Our main objective is, thus, to present a generalized simultaneous diagonalization theorem and various decomposition schemes for the matrices A so that linear least-squares problems (or linear systems) whose coefficient matrices are of this class can be solved more efficiently. The decomposition schemes can be applied to a great number of scientific and engineering problems.

The organization of this paper is as follows. In §2, we present a generalization of the classical simultaneous diagonalization of two diagonalizable commuting square matrices. Our generalization, referred to as the generalized simultaneous diagonalization, simultaneously diagonalizes a rectangular matrix H and two square matrices F and G that have the relation $FH=HG$ , assuming F and G are diagonalizable. Based on this simultaneous diagonalization, we develop explicit and semi-explicit decomposed forms in §3 for some important types of generalized reflexive matrices. An application of the decompositions to linear least-squares problems of this class is also given to show the usefulness of the decompositions. More numerical examples are provided in §4 to demonstrate the frequent occurrence of generalized reflexive matrices in many scientific and engineering disciplines.

Throughout this paper, we use the superscripts T, *, and −1 to denote the transpose, conjugate transpose, and inverse of matrices (vectors), respectively. The symbol $\oplus $ stands for the direct sum of matrices as usual. Unless otherwise noted, we use ${I}_{k}$ to denote the identity matrix of dimension k. All matrix-matrix multiplications and additions are assumed to be conformable if their dimensions are not mentioned.

2. Generalized Simultaneous Diagonalization

Before developing the (semi-)explicit block-diagonal structures for some important types of generalized reflexive matrices, we present first the following theoretically simple yet computationally useful observation regarding a simultaneous diagonalization process. Although diagonalization usually refers to square matrices, in this paper, we use the same term for rectangular matrices. In other words, a rectangular matrix $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n\times m}$ is also said to be diagonal if ${a}_{ij}=0$ for $i\ne j$ . Block-diagonal rectangular matrices are defined in an analogous way.

Theorem 2.1. (Generalized Simultaneous Diagonalization) Let $F\in {\mathcal{C}}^{n\times n}$ and $G\in {\mathcal{C}}^{m\times m}$ be diagonalizable, $A\in {\mathcal{C}}^{n\times m}$ . If $FA=AG$ , then there exist nonsingular matrices ${S}_{f}$ and ${S}_{g}$ such that

${S}_{f}^{-1}F{S}_{f},\mathrm{}{S}_{f}^{-1}A{S}_{g}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{S}_{g}^{-1}G{S}_{g}$

are all diagonal matrices.

Proof. The proof given below basically employs the same technique used in [14] [15] for the simultaneous diagonalization of two square matrices that commute. Let ${X}_{f}$ and ${X}_{g}$ be the matrices that diagonalize F and G, respectively:

${X}_{f}^{-1}F{X}_{f}={\Lambda}_{f}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{X}_{g}^{-1}G{X}_{g}={\Lambda}_{g}$ (1)

where the diagonal elements of ${\Lambda}_{f}$ (respectively ${\Lambda}_{g}$ ) are the eigenvalues of F (respectively G). Suppose that the matrix F has k distinct eigenvalues ${\lambda}_{1},\cdots ,{\lambda}_{k}$ with multiplicities ${p}_{1},\cdots ,{p}_{k}$ , respectively, where ${p}_{1}+\cdots +{p}_{k}=n$ ; and the matrix G has l distinct eigenvalues ${\mu}_{1},\cdots ,{\mu}_{l}$ with multiplicities ${q}_{1},\cdots ,{q}_{l}$ , respectively, where ${q}_{1}+\cdots +{q}_{l}=m$ . Assume further that among the k distinct eigenvalues of F, s of them are also eigenvalues of G, $0\le s\le \mathrm{min}\left\{k,\mathrm{}l\right\}$ . If $s=0$ , then all ${\lambda}_{i}$ and ${\mu}_{j}$ are distinct, implying that A is a null matrix, as can be seen later. Therefore, we exclude this trivial case and assume $s\ge 1$ . Without loss of generality, we can assume that

${\Lambda}_{f}=bdiag\left({\lambda}_{1}{I}_{{p}_{1}}\mathrm{,}\cdots \mathrm{,}{\lambda}_{s}{I}_{{p}_{s}}\mathrm{,}\cdots \mathrm{,}{\lambda}_{k}{I}_{{p}_{k}}\right)\mathrm{,}$

${\Lambda}_{g}=bdiag\left({\mu}_{1}{I}_{{q}_{1}},\mathrm{}\cdots ,\mathrm{}{\mu}_{s}{I}_{{q}_{s}},\mathrm{}\cdots ,\mathrm{}{\mu}_{l}{I}_{{q}_{l}}\right)$ (2)

where $bdiag(\cdots )$ denotes a block-diagonal matrix and ${\lambda}_{1}={\mu}_{1},\mathrm{}\cdots ,\mathrm{}{\lambda}_{s}={\mu}_{s}$ . Note that ${\lambda}_{s+1},\cdots ,{\lambda}_{k}$ and ${\mu}_{s+1},\cdots ,{\mu}_{l}$ are all distinct. Now, partition the matrix ${X}_{f}^{-1}A{X}_{g}$ , denoted by B, according to the block forms of ${\Lambda}_{f}$ and ${\Lambda}_{g}$ as $B=\left({B}_{ij}\right)$ so that ${B}_{ij}$ are ${p}_{i}\times {q}_{j}$ submatrices, $i=1,\mathrm{}\cdots ,\mathrm{}k$ and $j=1,\mathrm{}\cdots ,\mathrm{}l$ . If $FA=AG$ , we have ${\Lambda}_{f}B=B{\Lambda}_{g}$ , which implies that

${\lambda}_{i}{B}_{ij}={B}_{ij}{\mu}_{j}\mathrm{}\text{or}\mathrm{}\left({\lambda}_{i}-{\mu}_{j}\right){B}_{ij}=0.$ (3)

Since ${\lambda}_{i}={\mu}_{j}$ only when $i=j$ and $i\le s$ , we know that B is a block-diagonal matrix; more precisely, ${B}_{ij}=0$ if $i\ne j$ or if $i=j>s$ . (This can be considered a block-equivalence decomposition for rectangular matrices.) It is well known that for any matrix B in ${\mathcal{C}}^{n\times m}$ there exist unitary matrices $U\in {\mathcal{C}}^{n\times n}$ and $V\in {\mathcal{C}}^{m\times m}$ such that the singular value decomposition ${U}^{\mathrm{*}}BV$ is diagonal with nonnegative elements [16] . Now, let ${U}_{i}$ and ${V}_{i}$ be the matrices that diagonalize ${B}_{ii},i=1,\mathrm{}\cdots ,\mathrm{}s$ and take

$U={U}_{1}\oplus \cdots \oplus {U}_{s}\oplus {I}_{{p}_{s+1}}\oplus \cdots \oplus {I}_{{p}_{k}},$

$V={V}_{1}\oplus \cdots \oplus {V}_{s}\oplus {I}_{{q}_{s+1}}\oplus \cdots \oplus {I}_{{q}_{l}}.$ (4)

Let ${\Sigma}_{a}={U}^{-1}{X}_{f}^{-1}A{X}_{g}V$ . We see that ${\Sigma}_{a}={U}^{-1}BV$ is diagonal. Taking

${S}_{f}={X}_{f}U\mathrm{}\text{and}\mathrm{}{S}_{g}={X}_{g}V,$ it is clear that

${S}_{f}^{-1}F{S}_{f}={\Lambda}_{f},\mathrm{}{S}_{f}^{-1}A{S}_{g}={\Sigma}_{a}\mathrm{}\text{and}\mathrm{}{S}_{g}^{-1}G{S}_{g}={\Lambda}_{g}.$ (5)

Therefore, they are all diagonal matrices.

Remark 1: Note that the converse of this theorem is not true in general. It is simple to construct such examples from diagonal matrices.

Remark 2: If the diagonalizable matrix F is the same as G, and A is diagonalizable (A is a square matrix in this case), by taking ${U}_{i}$ to be the matrices such that ${U}_{i}^{-1}{B}_{ii}{U}_{i}$ are diagonal and replacing ${S}_{g}$ with ${S}_{f}$ , this theorem along with its converse part (it now exists) then reduces to the classical simultaneous diagonalization theorem for commuting square matrices as given in ( [15] , p. 50).

Note also that this theorem is different from the simultaneous diagonalization theorems presented in [14] [16] where the simultaneous diagonalization applies to rectangular matrices of the same size.

Corollary 2.2. Let $F\in {\mathcal{C}}^{n\times n}$ and $G\in {\mathcal{C}}^{m\times m}$ be Hermitian, $A\in {\mathcal{C}}^{n\times m}$ . If $FA=AG$ , then there exist unitary matrices ${S}_{f}$ and ${S}_{g}$ such that

${S}_{f}^{\mathrm{*}}F{S}_{f}\mathrm{,}{S}_{f}^{\mathrm{*}}A{S}_{g}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{S}_{g}^{\mathrm{*}}G{S}_{g}$

are all diagonal matrices.

Proof. Since Hermitian matrices are diagonalizable by unitary matrices, the proof is trivial.

The usefulness of Theorem 2.1 or Corollary 2.2 lies in the fact that if we know the eigenpairs of the matrices F and G, then the matrix A can be block-diagonalized into independent submatrices by the eigenvectors (with some proper ordering) of F and G, so that a single large problem can be handled via smaller and independent subproblems, yielding computational efficiency and large-grain parallelism at the same time. The question then boils down to whether those eigenpairs can easily be obtained or not. This of course depends on F and G. Fortunately, for generalized reflexive matrices that come from physical problems, the eigenpairs of P and Q are explicitly known in most cases, as can be seen from the example presented in Section 4. In the next section, we present several generalized reflexive decompositions that lead to either explicit or semi-explicit block-decomposed forms, which are computationally attractive.
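The three steps of the proof of Theorem 2.1 can be checked numerically. The following is a minimal NumPy sketch (the matrices P, Q, and M below are hypothetical example data; P and Q are exchange matrices, the simplest generalized reflection matrices, playing the roles of F and G):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: F = P and G = Q are exchange (backward identity)
# matrices, which are symmetric and involutory.
P = np.fliplr(np.eye(4))          # eigenvalues +1, +1, -1, -1
Q = np.fliplr(np.eye(6))          # eigenvalues +1, +1, +1, -1, -1, -1

# Symmetrizing any M gives a generalized reflexive A: P((M + PMQ)/2)Q = (PMQ + M)/2.
M = rng.standard_normal((4, 6))
A = (M + P @ M @ Q) / 2
assert np.allclose(P @ A, A @ Q)  # the hypothesis FA = AG of Theorem 2.1

# Step 1: eigendecompositions of P and Q, columns reordered so that equal
# eigenvalues are grouped together (+1 block first, then -1), as in Equation (2).
lf, Xf = np.linalg.eigh(P)
lg, Xg = np.linalg.eigh(Q)
Xf, lf = Xf[:, np.argsort(-lf)], -np.sort(-lf)
Xg, lg = Xg[:, np.argsort(-lg)], -np.sort(-lg)

# Step 2: B = Xf^{-1} A Xg vanishes wherever the eigenvalues differ, Equation (3).
B = Xf.T @ A @ Xg                 # Xf is real orthogonal here, so Xf^{-1} = Xf^T
mask = np.abs(lf[:, None] - lg[None, :]) > 1e-10
assert np.allclose(B[mask], 0)

# Step 3: an SVD of each diagonal block B_11 (2x3) and B_22 (2x3) finishes the
# diagonalization, Equation (4); each transformed block is (rectangular) diagonal.
def is_rect_diag(T, tol=1e-10):
    i, j = np.indices(T.shape)
    return np.all(np.abs(T[i != j]) < tol)

U1, s1, V1h = np.linalg.svd(B[:2, :3])
U2, s2, V2h = np.linalg.svd(B[2:, 3:])
assert is_rect_diag(U1.T @ B[:2, :3] @ V1h.T)
assert is_rect_diag(U2.T @ B[2:, 3:] @ V2h.T)
```

Because P and Q here have repeated eigenvalues, any orthonormal basis of each eigenspace works; only the grouping of equal eigenvalues matters for the zero pattern of B.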

3. Decompositions for Generalized Reflexive Matrices

We now turn to generalized reflexive matrices A, which are not necessarily square. The decomposition schemes presented below for A are special applications of the general results developed in the previous section. Our main purpose is to obtain explicit forms of the block structure for some frequently encountered cases of A. Let $A=PAQ$ be generalized reflexive. Recall that P and Q are two generalized reflection matrices, i.e., unitary Hermitian matrices. Therefore, they have at most two distinct eigenvalues, 1 and −1. Furthermore, the relation $A=PAQ$ can be expressed as $PA=AQ$ since $P={P}^{*}={P}^{-1}$ . From Corollary 2.2, we know that A can be block-diagonalized into two independent submatrices. This information alone, however, is not enough from the computational point of view: we still need to know the eigenpairs of P and Q in order to obtain the explicit decomposed form of A. In the following, we derive several explicit or semi-explicit decomposed forms for some important types of generalized reflexive matrices, starting with the simplest one.

Theorem 3.1. Let $P\in {\mathcal{C}}^{n\times n}$ and $Q\in {\mathcal{C}}^{m\times m}$ , n and m even, be two matrices that take the following forms.

$P=\left[\begin{array}{cc}0& {P}_{1}^{*}\\ {P}_{1}& 0\end{array}\right]\mathrm{}\text{and}\mathrm{}Q=\left[\begin{array}{cc}0& {Q}_{1}^{*}\\ {Q}_{1}& 0\end{array}\right]$ (6)

where ${P}_{1}$ and ${Q}_{1}$ are unitary. Let $A\in {\mathcal{C}}^{n\times m}$ be partitioned as $\left({A}_{ij}\right)$ , $i,\mathrm{}j=1,2$ , with each ${A}_{ij}\in {\mathcal{C}}^{p\times q}$ , $p=\frac{n}{2}$ and $q=\frac{m}{2}$ . If $A=PAQ$ , then there exist two unitary matrices X and Y such that

${X}^{*}AY=\left({A}_{11}+{A}_{12}{Q}_{1}\right)\oplus \left({A}_{22}-{A}_{21}{Q}_{1}^{*}\right)=\left({A}_{11}+{P}_{1}^{*}{A}_{21}\right)\oplus \left({A}_{22}-{P}_{1}{A}_{12}\right).$ (7)

Proof. Clearly, both P and Q are generalized reflection matrices. Therefore, A is a generalized reflexive matrix. Take X and Y to be the unitary matrices

$X=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I& -{P}_{1}^{*}\\ {P}_{1}& I\end{array}\right]\text{and}\mathrm{}Y=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I& -{Q}_{1}^{*}\\ {Q}_{1}& I\end{array}\right].$ (8)

Then

$\begin{array}{c}{X}^{*}AY=\frac{1}{2}\left[\begin{array}{cc}I& {P}_{1}^{*}\\ -{P}_{1}& I\end{array}\right]\left[\begin{array}{cc}{A}_{11}& {A}_{12}\\ {A}_{21}& {A}_{22}\end{array}\right]\left[\begin{array}{cc}I& -{Q}_{1}^{*}\\ {Q}_{1}& I\end{array}\right]\\ =\frac{1}{2}\left[\begin{array}{cc}\left({A}_{11}+{A}_{12}{Q}_{1}\right)+\left({P}_{1}^{*}{A}_{21}+{P}_{1}^{*}{A}_{22}{Q}_{1}\right)& \left({A}_{12}-{P}_{1}^{*}{A}_{21}{Q}_{1}^{*}\right)+\left({P}_{1}^{*}{A}_{22}-{A}_{11}{Q}_{1}^{*}\right)\\ \left({A}_{21}-{P}_{1}{A}_{12}{Q}_{1}\right)+\left({A}_{22}{Q}_{1}-{P}_{1}{A}_{11}\right)& \left({A}_{22}-{A}_{21}{Q}_{1}^{*}\right)+\left({P}_{1}{A}_{11}{Q}_{1}^{*}-{P}_{1}{A}_{12}\right)\end{array}\right]\\ =\left[\begin{array}{cc}{A}_{11}+{A}_{12}{Q}_{1}& 0\\ 0& {A}_{22}-{A}_{21}{Q}_{1}^{*}\end{array}\right]=\left[\begin{array}{cc}{A}_{11}+{P}_{1}^{*}{A}_{21}& 0\\ 0& {A}_{22}-{P}_{1}{A}_{12}\end{array}\right]\end{array}$ (9)

where we have used the unitarity of ${P}_{1}$ and ${Q}_{1}$ and the relations ${A}_{11}={P}_{1}^{*}{A}_{22}{Q}_{1}$ and ${A}_{21}={P}_{1}{A}_{12}{Q}_{1}$ , which result from the assumption $A=PAQ$ . Note that ${X}^{*}PX={I}_{p}\oplus -{I}_{p}$ and ${Y}^{*}QY={I}_{q}\oplus -{I}_{q}$ , which also explains, via Corollary 2.2, why this decomposition is possible.
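The decomposition (7) is easy to verify numerically. In the NumPy sketch below, the unitary blocks ${P}_{1}$ , ${Q}_{1}$ and the free blocks ${A}_{11}$ , ${A}_{12}$ are hypothetical random example data; the remaining blocks of A are then forced by $A=PAQ$ :

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(k):
    # The Q factor of a QR factorization of a random complex matrix is unitary.
    Z = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return np.linalg.qr(Z)[0]

p, q = 3, 4                                   # hypothetical block sizes
P1, Q1 = random_unitary(p), random_unitary(q)

# A11 and A12 are free; A = PAQ forces A21 = P1 A12 Q1 and A22 = P1 A11 Q1^*.
A11 = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
A12 = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
A = np.block([[A11, A12],
              [P1 @ A12 @ Q1, P1 @ A11 @ Q1.conj().T]])

Zp, Zq = np.zeros((p, p)), np.zeros((q, q))
P = np.block([[Zp, P1.conj().T], [P1, Zp]])   # the forms of Equation (6)
Q = np.block([[Zq, Q1.conj().T], [Q1, Zq]])
assert np.allclose(A, P @ A @ Q)              # A is generalized reflexive

# X and Y as in Equation (8)
X = np.block([[np.eye(p), -P1.conj().T], [P1, np.eye(p)]]) / np.sqrt(2)
Y = np.block([[np.eye(q), -Q1.conj().T], [Q1, np.eye(q)]]) / np.sqrt(2)

T = X.conj().T @ A @ Y
D1 = A11 + A12 @ Q1                           # diagonal blocks of Equation (7)
D2 = A[p:, q:] - A[p:, :q] @ Q1.conj().T
assert np.allclose(T, np.block([[D1, np.zeros((p, q))],
                                [np.zeros((p, q)), D2]]))

# The unitary equivalence preserves singular values: sv(A) = sv(D1) U sv(D2).
sv_blocks = np.concatenate([np.linalg.svd(D1, compute_uv=False),
                            np.linalg.svd(D2, compute_uv=False)])
assert np.allclose(np.linalg.svd(A, compute_uv=False), -np.sort(-sv_blocks))
```

The last assertion checks the point made in Section 4: since X and Y are unitary, the singular values of A are exactly the union of those of the two decoupled blocks.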

Theorem 3.2. Let $P\in {\mathcal{C}}^{n\times n}$ and $Q\in {\mathcal{C}}^{m\times m}$ , $n=2p+r$ and $m=2q+s$ , be the following two generalized reflection matrices:

$P=\left[\begin{array}{ccc}0& 0& {P}_{1}^{*}\\ 0& \alpha {I}_{r}& 0\\ {P}_{1}& 0& 0\end{array}\right]\mathrm{}\text{and}\mathrm{}Q=\left[\begin{array}{ccc}0& 0& {Q}_{1}^{*}\\ 0& \beta {I}_{s}& 0\\ {Q}_{1}& 0& 0\end{array}\right]$ (10)

where ${P}_{1}$ and ${Q}_{1}$ are unitary matrices of dimensions p and q, respectively; $\alpha =\pm 1$ , $\beta =\pm 1$ . Let $A\in {\mathcal{C}}^{n\times m}$ be partitioned as $\left({A}_{ij}\right)$ , $i,\mathrm{}j=1,2,3$ , with ${A}_{11}\in {\mathcal{C}}^{p\times q}$ , ${A}_{22}\in {\mathcal{C}}^{r\times s}$ , and ${A}_{33}\in {\mathcal{C}}^{p\times q}$ , in accordance with the block forms of P and Q. If $A=PAQ$ , then there exist two unitary matrices X and Y such that

${X}^{*}AY=\left[\begin{array}{cc}{A}_{11}+{A}_{13}{Q}_{1}& \sqrt{2}{A}_{12}\\ \sqrt{2}{A}_{21}& {A}_{22}\end{array}\right]\oplus \left({A}_{33}-{A}_{31}{Q}_{1}^{*}\right)\mathrm{}\text{if}\mathrm{}\alpha =\beta =1,$

${X}^{*}AY=\left({A}_{11}+{A}_{13}{Q}_{1}\right)\oplus \left[\begin{array}{cc}{A}_{22}& \sqrt{2}{A}_{23}\\ \sqrt{2}{A}_{32}& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{if}\text{\hspace{0.17em}}\alpha =\beta =-1,$

${X}^{*}AY=\left[\begin{array}{c}{A}_{11}+{A}_{13}{Q}_{1}\\ \sqrt{2}{A}_{21}\end{array}\right]\oplus \left[\begin{array}{cc}\sqrt{2}{A}_{32}& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{if}\text{\hspace{0.17em}}\alpha =-\beta =1,$

and

${X}^{*}AY=\left[\begin{array}{cc}{A}_{11}+{A}_{13}{Q}_{1}& \sqrt{2}{A}_{12}\end{array}\right]\oplus \left[\begin{array}{c}\sqrt{2}{A}_{23}\\ {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{if}\text{\hspace{0.17em}}\alpha =-\beta =-1.$

Proof. Take X and Y to be the following two unitary matrices:

$X=\frac{1}{\sqrt{2}}\left[\begin{array}{ccc}I& 0& -{P}_{1}^{*}\\ 0& \sqrt{2}{I}_{r}& 0\\ {P}_{1}& 0& I\end{array}\right]\mathrm{}\text{and}\mathrm{}Y=\frac{1}{\sqrt{2}}\left[\begin{array}{ccc}I& 0& -{Q}_{1}^{*}\\ 0& \sqrt{2}{I}_{s}& 0\\ {Q}_{1}& 0& I\end{array}\right].$ (11)

Then the unitary transformation ${X}^{\mathrm{*}}AY$ yields

$\begin{array}{l}{X}^{*}AY=\frac{1}{2}\left[\begin{array}{ccc}I& 0& {P}_{1}^{*}\\ 0& \sqrt{2}{I}_{r}& 0\\ -{P}_{1}& 0& I\end{array}\right]\left[\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{21}& {A}_{22}& {A}_{23}\\ {A}_{31}& {A}_{32}& {A}_{33}\end{array}\right]\left[\begin{array}{ccc}I& 0& -{Q}_{1}^{*}\\ 0& \sqrt{2}{I}_{s}& 0\\ {Q}_{1}& 0& I\end{array}\right]\\ =\frac{1}{2}\left[\begin{array}{ccc}\left({A}_{11}+{A}_{13}{Q}_{1}\right)+\left({P}_{1}^{*}{A}_{31}+{P}_{1}^{*}{A}_{33}{Q}_{1}\right)& \sqrt{2}\left({A}_{12}+{P}_{1}^{*}{A}_{32}\right)& \left({A}_{13}-{P}_{1}^{*}{A}_{31}{Q}_{1}^{*}\right)+\left({P}_{1}^{*}{A}_{33}-{A}_{11}{Q}_{1}^{*}\right)\\ \sqrt{2}\left({A}_{21}+{A}_{23}{Q}_{1}\right)& 2{A}_{22}& \sqrt{2}\left({A}_{23}-{A}_{21}{Q}_{1}^{*}\right)\\ \left({A}_{31}-{P}_{1}{A}_{13}{Q}_{1}\right)+\left({A}_{33}{Q}_{1}-{P}_{1}{A}_{11}\right)& \sqrt{2}\left({A}_{32}-{P}_{1}{A}_{12}\right)& \left({A}_{33}-{A}_{31}{Q}_{1}^{*}\right)+\left({P}_{1}{A}_{11}{Q}_{1}^{*}-{P}_{1}{A}_{13}\right)\end{array}\right]\end{array}$ (12)

If $A=PAQ$ , we immediately have the following relations among the submatrices ${A}_{ij}$ .

${A}_{11}={P}_{1}^{*}{A}_{33}{Q}_{1},\mathrm{}{A}_{13}={P}_{1}^{*}{A}_{31}{Q}_{1}^{*},$

${A}_{12}=\beta {P}_{1}^{*}{A}_{32},\mathrm{}{A}_{21}=\alpha {A}_{23}{Q}_{1}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{A}_{22}=\alpha \beta {A}_{22}.$

Employing these relations and the unitarity of ${P}_{1}$ and ${Q}_{1}$ for (12), we obtain a much simplified form of the transformation ${X}^{\mathrm{*}}AY$ . Namely,

${X}^{*}AY=\frac{1}{2}\left[\begin{array}{ccc}2\left({A}_{11}+{A}_{13}{Q}_{1}\right)& \sqrt{2}\left(1+\beta \right){A}_{12}& 0\\ \sqrt{2}\left(1+\alpha \right){A}_{21}& \left(1+\alpha \beta \right){A}_{22}& \sqrt{2}\left(1-\alpha \right){A}_{23}\\ 0& \sqrt{2}\left(1-\beta \right){A}_{32}& 2\left({A}_{33}-{A}_{31}{Q}_{1}^{*}\right)\end{array}\right].$ (13)

Accordingly, we have the results we want:

${X}^{*}AY=\left[\begin{array}{ccc}{A}_{11}+{A}_{13}{Q}_{1}& \sqrt{2}{A}_{12}& 0\\ \sqrt{2}{A}_{21}& {A}_{22}& 0\\ 0& 0& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{for}\text{\hspace{0.17em}}\alpha =\beta =1,$

${X}^{*}AY=\left[\begin{array}{ccc}{A}_{11}+{A}_{13}{Q}_{1}& 0& 0\\ 0& {A}_{22}& \sqrt{2}{A}_{23}\\ 0& \sqrt{2}{A}_{32}& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{for}\text{\hspace{0.17em}}\alpha =\beta =-1,$

${X}^{*}AY=\left[\begin{array}{ccc}{A}_{11}+{A}_{13}{Q}_{1}& 0& 0\\ \sqrt{2}{A}_{21}& 0& 0\\ 0& \sqrt{2}{A}_{32}& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{for}\text{\hspace{0.17em}}\alpha =-\beta =1,$

and

${X}^{*}AY=\left[\begin{array}{ccc}{A}_{11}+{A}_{13}{Q}_{1}& \sqrt{2}{A}_{12}& 0\\ 0& 0& \sqrt{2}{A}_{23}\\ 0& 0& {A}_{33}-{A}_{31}{Q}_{1}^{*}\end{array}\right]\mathrm{}\text{for}\text{\hspace{0.17em}}\alpha =-\beta =-1.$

Note that in (13), ${A}_{13}{Q}_{1}$ can be replaced by ${P}_{1}^{\mathrm{*}}{A}_{31}$ and ${A}_{31}{Q}_{1}^{\mathrm{*}}$ replaced by ${P}_{1}{A}_{13}$ since ${A}_{31}={P}_{1}{A}_{13}{Q}_{1}$ . Computationally, one should use the expressions that are easier to compute. Note also that X and Y do not depend on $\alpha $ and $\beta $ , and

${X}^{*}PX={I}_{p}\oplus \alpha {I}_{r}\oplus -{I}_{p}\mathrm{}\text{and}\mathrm{}{Y}^{*}QY={I}_{q}\oplus \beta {I}_{s}\oplus -{I}_{q}.$
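The zero pattern of (13) for all four sign combinations can be confirmed with a short NumPy check. In the sketch below, the unitary blocks and block sizes are hypothetical example data, and A is obtained by symmetrizing a random matrix:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def random_unitary(k):
    Z = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return np.linalg.qr(Z)[0]

p, q, r, s = 2, 3, 2, 2                      # hypothetical block sizes
P1, Q1 = random_unitary(p), random_unitary(q)
n, m = 2 * p + r, 2 * q + s

# X and Y of Equation (11); they do not depend on alpha and beta.
X = np.zeros((n, n), dtype=complex)
X[:p, :p] = np.eye(p);  X[:p, p + r:] = -P1.conj().T
X[p:p + r, p:p + r] = np.sqrt(2) * np.eye(r)
X[p + r:, :p] = P1;     X[p + r:, p + r:] = np.eye(p)
X /= np.sqrt(2)
Y = np.zeros((m, m), dtype=complex)
Y[:q, :q] = np.eye(q);  Y[:q, q + s:] = -Q1.conj().T
Y[q:q + s, q:q + s] = np.sqrt(2) * np.eye(s)
Y[q + s:, :q] = Q1;     Y[q + s:, q + s:] = np.eye(q)
Y /= np.sqrt(2)

for alpha, beta in product([1, -1], repeat=2):
    # P and Q as in Theorem 3.2 (middle blocks alpha*I_r and beta*I_s)
    P = np.zeros((n, n), dtype=complex)
    P[:p, p + r:] = P1.conj().T; P[p + r:, :p] = P1
    P[p:p + r, p:p + r] = alpha * np.eye(r)
    Q = np.zeros((m, m), dtype=complex)
    Q[:q, q + s:] = Q1.conj().T; Q[q + s:, :q] = Q1
    Q[q:q + s, q:q + s] = beta * np.eye(s)

    # Symmetrize a random M to get a generalized reflexive A.
    M = rng.standard_normal((n, m))
    A = (M + P @ M @ Q) / 2
    assert np.allclose(A, P @ A @ Q)

    # Check the corner block and the zero pattern of Equation (13).
    T = X.conj().T @ A @ Y
    assert np.allclose(T[:p, :q], A[:p, :q] + A[:p, q + s:] @ Q1)
    assert np.allclose(T[:p, q + s:], 0) and np.allclose(T[p + r:, :q], 0)
    if beta == -1:  assert np.allclose(T[:p, q:q + s], 0)
    if beta == 1:   assert np.allclose(T[p + r:, q:q + s], 0)
    if alpha == -1: assert np.allclose(T[p:p + r, :q], 0)
    if alpha == 1:  assert np.allclose(T[p:p + r, q + s:], 0)
    if alpha * beta == -1: assert np.allclose(T[p:p + r, q:q + s], 0)
```

Each sign pair reproduces one of the four block-diagonal splittings listed in the theorem.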

Remark 3: In Theorems 3.1 and 3.2, if the unitarity requirement on P, Q, X, and Y is lifted, a slightly more general case can be obtained simply by substituting the inverse (existence of ${P}_{1}^{-1}$ and ${Q}_{1}^{-1}$ assumed) in place of ${P}_{1}^{\mathrm{*}}$ , ${Q}_{1}^{\mathrm{*}}$ , ${X}^{\mathrm{*}}$ , and ${Y}^{\mathrm{*}}$ . With this replacement, all the results in the proofs remain intact. The matrices A in this case, however, are not necessarily generalized reflexive since P and Q may not be generalized reflection matrices.

Remark 4: Obviously, Theorem 3.2 reduces to Theorem 3.1 if the blocks ${I}_{r}$ and ${I}_{s}$ in (10) are absent, i.e., $r=s=0$ . If ${I}_{r}$ is present but ${I}_{s}$ is absent, then by partitioning A as $\left({A}_{ij}\right)$ , $i=1,2,3$ and $j=1,2$ , according to the block forms of P and Q, we have

${X}^{*}AY=\left[\begin{array}{cc}{A}_{11}+{A}_{12}{Q}_{1}& 0\\ \frac{1}{\sqrt{2}}\left(1+\alpha \right){A}_{21}& \frac{1}{\sqrt{2}}\left(1-\alpha \right){A}_{22}\\ 0& {A}_{32}-{A}_{31}{Q}_{1}^{*}\end{array}\right]$ (14)

which decouples into two independent sub-blocks for either value of $\alpha =\pm 1$ . Analogous to (13), ${A}_{12}{Q}_{1}$ and ${A}_{31}{Q}_{1}^{\mathrm{*}}$ can be expressed as ${P}_{1}^{\mathrm{*}}{A}_{31}$ and ${P}_{1}{A}_{12}$ , respectively, since in this case ${A}_{31}={P}_{1}{A}_{12}{Q}_{1}$ . If, instead, ${I}_{r}$ is absent and ${I}_{s}$ remains, and the matrix A is partitioned in accordance with P and Q as

$A=\left[\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{21}& {A}_{22}& {A}_{23}\end{array}\right],$

then we have

${X}^{*}AY=\left[\begin{array}{ccc}{A}_{11}+{A}_{13}{Q}_{1}& \frac{1}{\sqrt{2}}\left(1+\beta \right){A}_{12}& 0\\ 0& \frac{1}{\sqrt{2}}\left(1-\beta \right){A}_{22}& {A}_{23}-{A}_{21}{Q}_{1}^{*}\end{array}\right]$ (15)

where ${A}_{13}{Q}_{1}={P}_{1}^{*}{A}_{21}$ and ${A}_{21}{Q}_{1}^{*}={P}_{1}{A}_{13}$ because ${A}_{21}={P}_{1}{A}_{13}{Q}_{1}$ . This transformation again decouples the matrix A into two independent sub-blocks for either value of $\beta =\pm 1$ .

4. Applications

As seen from the transformations presented in the previous section, the decomposed forms of A of this class are very simple to compute. This is especially true when P and Q are reflection (symmetric signed permutation) matrices, which arise frequently in a very wide range of real-world applications, because any reflection matrix can be symmetrically permuted to yield one of the forms in (6) and (10), with ${P}_{1}$ and ${Q}_{1}$ being some signed permutation matrices. Furthermore, the decompositions preserve all singular values because they make use of unitary equivalence transformations, which apply to square and rectangular matrices alike. Therefore, they are useful not only for linear systems but for linear least-squares problems and singular value problems as well. The only requirement is the generalized reflexivity of the matrix A. When P is the same as Q, the decompositions lead to similarity transformations and, accordingly, preserve all eigenvalues. It is exactly this simplicity and preservation of singular values or eigenvalues that makes these decompositions computationally attractive. To demonstrate their usefulness, we present in this section an application of the decompositions to one of the numerical examples described in [1] , where the same problem is solved using only basic generalized reflexive properties, without resorting to matrix decompositions.

Numerical example. Consider the following overdetermined linear system:

$\left[\begin{array}{rrrr}\hfill 1& \hfill -1& \hfill 0& \hfill 0\\ \hfill 0& \hfill -1& \hfill 0& \hfill 0\\ \hfill 1& \hfill 0& \hfill 0& \hfill -1\\ \hfill 0& \hfill 1& \hfill 0& \hfill -1\\ \hfill 0& \hfill 0& \hfill 1& \hfill -1\\ \hfill 0& \hfill 0& \hfill 0& \hfill -1\\ \hfill 0& \hfill -1& \hfill 1& \hfill 0\end{array}\right]\left[\begin{array}{c}{x}_{1}\\ {x}_{2}\\ {x}_{3}\\ {x}_{4}\end{array}\right]=\left[\begin{array}{c}50\\ -152\\ 78\\ 33\\ 30\\ -123\\ 2\end{array}\right].$ (16)

Let A be the coefficient matrix of the overdetermined system. It is simple to observe that A is a generalized reflexive matrix: $A=PAQ$ where

$P=\left[\begin{array}{ccc}0& 0& {I}_{3}\\ 0& -1& 0\\ {I}_{3}& 0& 0\end{array}\right]\mathrm{}\text{and}\mathrm{}Q=\left[\begin{array}{cc}0& {I}_{2}\\ {I}_{2}& 0\end{array}\right]$ (17)

are two reflection matrices. It is worth mentioning that the coefficient matrix A is the edge-node incidence matrix of a level network with reflexive symmetry.

Whether this overdetermined linear system is to be solved via its normal equation or using a QR decomposition instead, we can decompose the original problem into two independent subproblems first, using the decomposition techniques presented in the previous section. Let

$X=\frac{1}{\sqrt{2}}\left[\begin{array}{ccc}{I}_{3}& 0& -{I}_{3}\\ 0& \sqrt{2}& 0\\ {I}_{3}& 0& {I}_{3}\end{array}\right]\mathrm{}\text{and}\mathrm{}Y=\frac{1}{\sqrt{2}}\left[\begin{array}{rr}\hfill {I}_{2}& \hfill -{I}_{2}\\ \hfill {I}_{2}& \hfill {I}_{2}\end{array}\right].$ (18)

The overdetermined system $Ax=b$ is then transformed to $\stackrel{\u02dc}{A}\stackrel{\u02dc}{x}=\stackrel{\u02dc}{b}$ with

$\stackrel{\u02dc}{A}={X}^{\text{T}}AY,\mathrm{}\stackrel{\u02dc}{x}=\left(\sqrt{2}{Y}^{\text{T}}\right)x\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\stackrel{\u02dc}{b}=\left(\sqrt{2}{X}^{\text{T}}\right)b$

where $\sqrt{2}$ is intentionally inserted to avoid unnecessary multiplications of $\frac{1}{\sqrt{2}}$ in forming $\stackrel{\u02dc}{b}$ from b. Now, let $Ax=b$ be partitioned, according to the block forms of X and Y, as

$\left[\begin{array}{cc}{A}_{11}& {A}_{12}\\ {A}_{21}& {A}_{22}\\ {A}_{31}& {A}_{32}\end{array}\right]\left[\begin{array}{c}{x}_{1}\\ {x}_{2}\end{array}\right]=\left[\begin{array}{c}{b}_{1}\\ {b}_{2}\\ {b}_{3}\end{array}\right].$ (19)

The transformation ${X}^{\text{T}}AY$ can easily be obtained without actually performing expensive matrix-matrix multiplications. We simply use the explicit form of (14) by substituting ${I}_{2}$ for ${Q}_{1}$ and −1 for $\alpha $ , yielding

$\stackrel{\u02dc}{A}={X}^{\text{T}}AY={\stackrel{\u02dc}{A}}_{1}\oplus {\stackrel{\u02dc}{A}}_{2}$

where

${\stackrel{\u02dc}{A}}_{1}={A}_{11}+{A}_{12}=\left[\begin{array}{cc}1& -1\\ 0& -1\\ 1& -1\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{\stackrel{\u02dc}{A}}_{2}=\left[\begin{array}{c}\sqrt{2}{A}_{22}\\ {A}_{32}-{A}_{31}\end{array}\right]=\left[\begin{array}{rr}\hfill 0& \hfill -\sqrt{2}\\ \hfill 1& \hfill -1\\ \hfill 0& \hfill -1\\ \hfill 1& \hfill 1\end{array}\right].$ (20)

It is simple to obtain $\stackrel{\u02dc}{b}$ without resorting to a dense matrix-vector multiplication.

$\stackrel{\u02dc}{b}=\left[\begin{array}{c}{b}_{1}+{b}_{3}\\ \sqrt{2}{b}_{2}\\ {b}_{3}-{b}_{1}\end{array}\right]={\left[\begin{array}{ccccccccc}80& -275& 80& |& \sqrt{2}\left(33\right)& |& -20& 29& -76\end{array}\right]}^{\text{T}}.$

This transformation then decouples the original system $Ax=b$ into

${\stackrel{\u02dc}{A}}_{1}{\stackrel{\u02dc}{x}}_{1}={\stackrel{\u02dc}{b}}_{1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}{\stackrel{\u02dc}{b}}_{1}={\left[\begin{array}{ccc}80& -275& 80\end{array}\right]}^{\text{T}}$ (21)

and

${\stackrel{\u02dc}{A}}_{2}{\stackrel{\u02dc}{x}}_{2}={\stackrel{\u02dc}{b}}_{2}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}{\stackrel{\u02dc}{b}}_{2}={\left[\begin{array}{cccc}\sqrt{2}\left(33\right)& -20& 29& -76\end{array}\right]}^{\text{T}}.$ (22)

The normal equations of (21) and (22) are simply

$\left[\begin{array}{rr}\hfill 2& \hfill -2\\ \hfill -2& \hfill 3\end{array}\right]{\stackrel{\u02dc}{x}}_{1}=\left[\begin{array}{c}160\\ 115\end{array}\right]\mathrm{}\text{and}\mathrm{}\left[\begin{array}{cc}2& 0\\ 0& 5\end{array}\right]{\stackrel{\u02dc}{x}}_{2}=\left[\begin{array}{c}-96\\ -151\end{array}\right],$

respectively, whose solutions are ${\stackrel{\u02dc}{x}}_{1}={\left[\begin{array}{cc}355& 275\end{array}\right]}^{\text{T}}$ and ${\stackrel{\u02dc}{x}}_{2}={\left[\begin{array}{cc}-48& -30.2\end{array}\right]}^{\text{T}}$ . The final solution x can now be retrieved from ${\stackrel{\u02dc}{x}}_{1}$ and ${\stackrel{\u02dc}{x}}_{2}$ with ease.

$x=\frac{1}{2}\left[\begin{array}{c}{\stackrel{\u02dc}{x}}_{1}-{\stackrel{\u02dc}{x}}_{2}\\ {\stackrel{\u02dc}{x}}_{1}+{\stackrel{\u02dc}{x}}_{2}\end{array}\right]={\left[\begin{array}{rrrr}\hfill 201.5& \hfill 152.6& \hfill 153.5& \hfill 122.4\end{array}\right]}^{\text{T}},$

whose correctness can be verified from the normal equation of the original system.
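The entire computation above can be reproduced in a few lines of NumPy. The sketch below forms the two decoupled subproblems (20)-(22) directly from the blocks of A and b, solves each with `numpy.linalg.lstsq` , and checks the recovered x against both the stated solution and an undecomposed least-squares solve:

```python
import numpy as np

# Coefficient matrix and right-hand side of the overdetermined system (16)
A = np.array([[1, -1, 0,  0],
              [0, -1, 0,  0],
              [1,  0, 0, -1],
              [0,  1, 0, -1],
              [0,  0, 1, -1],
              [0,  0, 0, -1],
              [0, -1, 1,  0]], dtype=float)
b = np.array([50, -152, 78, 33, 30, -123, 2], dtype=float)

# Decoupled blocks of Equation (20): A1t = A11 + A12 and A2t = [sqrt(2) A22; A32 - A31],
# with the block partition of Equation (19) (rows 1-3 | row 4 | rows 5-7, cols 1-2 | 3-4).
A1t = A[:3, :2] + A[:3, 2:]
A2t = np.vstack([np.sqrt(2) * A[3:4, 2:], A[4:, 2:] - A[4:, :2]])
b1t = b[:3] + b[4:]                               # b1 + b3, Equation (21)
b2t = np.concatenate([[np.sqrt(2) * b[3]], b[4:] - b[:3]])   # Equation (22)

# Solve the two independent least-squares subproblems.
x1t = np.linalg.lstsq(A1t, b1t, rcond=None)[0]
x2t = np.linalg.lstsq(A2t, b2t, rcond=None)[0]

# Recover x = (1/2) [x1t - x2t; x1t + x2t] and compare with the undecomposed solve.
x = 0.5 * np.concatenate([x1t - x2t, x1t + x2t])
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x, x_ref)
assert np.allclose(x, [201.5, 152.6, 153.5, 122.4])
```

Since X is orthogonal, the transformed residual norm equals the original one, so the two subproblem solutions combine to the unique least-squares solution of the full system.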

At this point, it is clear that the main reason why transformations of this type are so cheap to obtain is not only that explicit forms are available but also that no arithmetic multiplications or divisions are involved in forming the decoupled subsystems, except for the central block row of A and b and the central block column of A, if any, such as ${A}_{22}$ and ${b}_{2}$ in this example. The dimensions of these blocks are usually very small for large-scale problems with reflexive symmetry because they involve only the nodes/edges on the line or plane of symmetry. Therefore, this extra work can easily be offset by the tremendous savings resulting from solving two smaller subproblems whose sizes are only about half of the original problem. It is worth mentioning that solving sequentially two independent decomposed subproblems, each about half the size of a single problem, is about four times faster than solving the undecomposed one. This is exactly where the computational efficiency comes from. The large-grain parallelism induced by these decompositions is an additional advantage when the subproblems are solved on a multiprocessor or on multiple networked computers.
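The factor-of-four estimate can be made concrete with a rough flop count (leading order only, and assuming a Householder QR least-squares solve, whose cost is about $2m{n}^{2}$ flops for an $m\times n$ coefficient matrix): the undecomposed problem costs about $2m{n}^{2}$ , while the two half-size subproblems together cost about

$2\cdot 2\left(\frac{m}{2}\right){\left(\frac{n}{2}\right)}^{2}=\frac{m{n}^{2}}{2},$

i.e., one quarter as much. The same factor arises for normal-equation solves, whose leading cost $m{n}^{2}$ scales identically in m and ${n}^{2}$ .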

We close this section by emphasizing the fact that a great number of scientific and engineering applications require solutions to linear least-squares problems, singular value problems, linear systems, or eigenvalue problems whose coefficient matrices are either generalized reflexive with respect to nontrivial reflection matrices P and Q, or reflexive with respect to a nontrivial P (or Q). Instead of giving more numerical examples, we simply mention that the node-edge (or edge-node) incidence matrix of any finite network or graph that possesses reflexive symmetry, or that can be redrawn as one that displays reflexive symmetry, is generalized reflexive. Refer to [1] for more numerical examples.

5. Conclusions

Generalized reflexive matrices, a newly exploited special class of matrices $A\in {\mathcal{C}}^{n\times m}$ that satisfy the relation $A=PAQ$ with P and Q being some generalized reflection matrices, are a generalization of centrosymmetric matrices and reflexive matrices. Although it is not trivial to recognize membership in this class purely from the entries of a given matrix, this new class of matrices indeed arises very often in physical problems from many areas of scientific and engineering applications, especially those with reflexive symmetry. Three such nontrivial numerical examples, each from a distinct real-world application area, can be found in [1].

A major part of this paper has been devoted to the exploration of computationally attractive decompositions that take advantage of the special relation possessed by this class of matrices. The decompositions are based on a generalized simultaneous diagonalization theorem presented in this paper and are derived using the eigenvectors of P and Q via unitary equivalence transformations. When the eigenpairs of P and Q are explicitly known, which is usually the case for generalized reflexive matrices arising from physical problems with reflexive symmetry, the decompositions yield simple and explicit forms of the decomposed submatrices of A. One of the generalized reflexive matrices presented in this paper has also served as an example showing the usefulness of the derived explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller, independent subproblems. These decompositions, though theoretically simple, can lead to much more efficient computation for large-scale applications, and they induce large-grain parallelism as a by-product. Furthermore, they preserve either the singular values or the eigenvalues of the matrices and are, therefore, immediately applicable not only to linear least-squares problems and linear systems but also to singular value and eigenvalue problems.

Cite this paper

Chen, H. (2018) Block Decompositions and Applications of Generalized Reflexive Matrices. *Advances in Linear Algebra & Matrix Theory*, **8**, 122-133. doi: 10.4236/alamt.2018.83011.


References

[1] Chen, H.-C. (1998) Generalized Reflexive Matrices: Special Properties and Applications. SIAM Journal on Matrix Analysis and Applications, 19, 140-153. https://doi.org/10.1137/S0895479895288759

[2] Zehfuss, G. (1862) Zwei Sätze über Determinanten. Zeitschrift für Angewandte Mathematik und Physik, VII, 436-439.

[3] Aitken, A.C. (1949) Determinants and Matrices. 6th Edition, Wiley-Interscience, New York.

[4] Good, I.J. (1970) The Inverse of a Centrosymmetric Matrix. Technometrics, 12, 925-928. https://doi.org/10.1080/00401706.1970.10488743

[5] Andrew, A.L. (1973) Solution of Equations Involving Centrosymmetric Matrices. Technometrics, 15, 405-407. https://doi.org/10.1080/00401706.1973.10489052

[6] Andrew, A.L. (1973) Eigenvectors of Certain Matrices. Linear Algebra and Its Applications, 7, 151-162. https://doi.org/10.1016/0024-3795(73)90049-9

[7] Pye, W.C., Boullion, T.L. and Atchison, T.A. (1973) The Pseudoinverse of a Centrosymmetric Matrix. Linear Algebra and Its Applications, 6, 201-204. https://doi.org/10.1016/0024-3795(73)90020-7

[8] Cantoni, A. and Butler, P. (1976) Eigenvalues and Eigenvectors of Symmetric Centrosymmetric Matrices. Linear Algebra and Its Applications, 13, 275-288. https://doi.org/10.1016/0024-3795(76)90101-4

[9] Weaver, J.R. (1985) Centrosymmetric (Cross-Symmetric) Matrices, Their Basic Properties, Eigenvalues, and Eigenvectors. The American Mathematical Monthly, 92, 711-717. https://doi.org/10.1080/00029890.1985.11971719

[10] Weaver, J.R. (1988) Real Eigenvalues of Nonnegative Matrices Which Commute with a Symmetric Matrix Involution. Linear Algebra and Its Applications, 110, 243-253. https://doi.org/10.1016/0024-3795(83)90138-6

[11] Tao, D. and Yasuda, M. (2002) A Spectral Characterization of Generalized Real Symmetric Centrosymmetric and Generalized Real Symmetric Skew-Centrosymmetric Matrices. SIAM Journal on Matrix Analysis and Applications, 23, 885-895. https://doi.org/10.1137/S0895479801386730

[12] Chen, H.-C. and Sameh, A. (1989) A Matrix Decomposition Method for Orthotropic Elasticity Problems. SIAM Journal on Matrix Analysis and Applications, 10, 39-64. https://doi.org/10.1137/0610004

[13] Chen, H.-C. and Sameh, A. (1989) A Domain Decomposition Method for 3D Elasticity Problems. In: Brebbia, C.A. and Peters, A., Eds., Applications of Supercomputers in Engineering: Fluid Flow and Stress Analysis Applications, Computational Mechanics Publications, Southampton University, Southampton, England, 171-188.

[14] Gibson, P.M. (1974) Simultaneous Diagonalization of Rectangular Complex Matrices. Linear Algebra and Its Applications, 9, 45-53. https://doi.org/10.1016/0024-3795(74)90025-1

[15] Horn, R.A. and Johnson, C.R. (1985) Matrix Analysis. Cambridge University Press, New York. https://doi.org/10.1017/CBO9780511810817

[16] Eckart, C. and Young, G. (1939) A Principal Axis Transformation for Non-Hermitian Matrices. Bulletin of the American Mathematical Society, 45, 118-121. https://doi.org/10.1090/S0002-9904-1939-06910-3