ALAMT Vol. 11 No. 2, June 2021
Decompositions of Some Special Block Tridiagonal Matrices
Abstract: In this paper, we present a unified approach to decomposing a special class of block tridiagonal matrices $K(\alpha,\beta)$ into block diagonal matrices using similarity transformations. The matrices $K(\alpha,\beta) \in \mathbb{R}^{pq\times pq}$ are of the form $K(\alpha,\beta) = \text{block-tridiag}[\beta B, A, \alpha B]$ for three special pairs of $(\alpha,\beta)$: $K(1,1)$, $K(1,2)$ and $K(2,2)$, where $A, B \in \mathbb{R}^{p\times p}$ are general square matrices. The decomposed block diagonal matrices $\tilde K(\alpha,\beta)$ for the three cases are all of the form $\tilde K(\alpha,\beta) = D_1(\alpha,\beta) \oplus D_2(\alpha,\beta) \oplus \cdots \oplus D_q(\alpha,\beta)$, where $D_k(\alpha,\beta) = A + 2\cos(\theta_k(\alpha,\beta))B$, in which the angles $\theta_k(\alpha,\beta)$, $k = 1, 2, \ldots, q$, depend on the values of $\alpha$ and $\beta$. Our decomposition method is closely related to the classical fast Poisson solver using Fourier analysis. Unlike the fast Poisson solver, our approach decomposes $K(\alpha,\beta)$ into $q$ diagonal blocks instead of $p$ blocks. Furthermore, our approach does not require the matrices $A$ and $B$ to be symmetric or to commute, and it employs only the eigenvectors of the tridiagonal matrix $T(\alpha,\beta) = \text{tridiag}[\beta b, a, \alpha b]$ in a block form, where $a$ and $b$ are scalars. The transformation matrices, their inverses, and the explicit form of the decomposed block diagonal matrices are derived in this paper. Numerical examples and experiments are also presented to demonstrate the validity and usefulness of the approach. Due to the decoupled nature of the decomposed matrices, this approach lends itself to parallel and distributed computation for solving both linear systems and eigenvalue problems on multiprocessors.

1. Introduction

In this paper, we present explicit similarity transformations to decompose block tridiagonal matrices $K(\alpha,\beta) \in \mathbb{R}^{pq\times pq}$ of the following form:

$$K(\alpha,\beta) = \begin{bmatrix} A & \alpha B & & & \\ B & A & B & & \\ & \ddots & \ddots & \ddots & \\ & & B & A & B \\ & & & \beta B & A \end{bmatrix} \qquad (1)$$

for some special pairs of $(\alpha,\beta)$, where $A, B \in \mathbb{R}^{p\times p}$, into block diagonal matrices. The $(\alpha,\beta)$ pairs considered in this paper are (1,1), (1,2), and (2,2). We shall show that the transformations of $K(\alpha,\beta)$ for these three $(\alpha,\beta)$ pairs all lead to block diagonal matrices $\tilde K(\alpha,\beta)$ of the following single unified form:

$$\tilde K(\alpha,\beta) = D_1(\alpha,\beta) \oplus D_2(\alpha,\beta) \oplus \cdots \oplus D_q(\alpha,\beta) \qquad (2)$$

where the operation symbol $\oplus$ denotes the matrix direct sum and the diagonal submatrices are $D_k(\alpha,\beta) = A + 2\cos(\theta_k(\alpha,\beta))B$, in which the angles $\theta_k(\alpha,\beta)$, $k = 1, 2, \ldots, q$, are explicitly known, although they depend on the values of $\alpha$ and $\beta$. Our decomposition method is closely related to the classical fast Poisson solver [1] [2] using Fourier analysis.

The block decomposition scheme to be addressed was presented by the author in [3], where a formal proof was given for $K(1,1)$. The decompositions for the other two cases were merely stated without derivations. Unfortunately, that paper contains two errors. First, the eigenvectors used to form the transformation matrix for decomposing $K(2,2)$ and the resulting decomposed submatrices are incorrect; this is addressed in Theorem 2 of Section 2 in this paper. Second, the transformation matrix $Q$ for $K(1,2)$ is not orthogonal, although the eigenvectors and the decomposed submatrices presented are correct, meaning that the expression $Q^T K Q$ should read $Q^{-1} K Q$. The main purposes of this paper include the following tasks.

1) We show that the transformation matrix for decomposing K ( 1 , 1 ) is orthogonal in Theorem 1.

2) We take this opportunity to correct the errors made in our previous paper by providing a formal proof, with the transformation matrix formed by the correct eigenvectors, for decomposing $K(2,2)$ in Theorem 2.

3) We also present a formal derivation for the decomposition of K ( 1 , 2 ) into a block diagonal matrix in Theorem 3.

4) Numerical examples and experiments for all three cases will be given in Section 3 to demonstrate the validity and the advantages of the decompositions.

The block decompositions are all based on similarity transformations with known eigenvectors of certain tridiagonal matrices and they all yield a single unified form of block diagonal matrices. Since similarity transformations preserve all eigenvalues, the eigenvalues of the original matrix K ( α , β ) , which is of size pq by pq, can be obtained from the q diagonal blocks D k ( α , β ) , each of size only p by p. This block decomposition scheme provides a much more efficient means for solving eigenvalue problems with this type of coefficient matrices. It can also be employed for solving linear systems with efficiency because the transformation matrices are explicitly known. In addition, the decoupled structure of the transformed matrix lends itself to parallel computation with coarse-grain parallelism.
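As an illustration of the scheme, the decomposition for $K(1,1)$ can be checked numerically. The sketch below is ours in NumPy (the paper's own experiments use Octave): it assembles $K(1,1) = I_q \otimes A + T \otimes B$ for random nonsymmetric, noncommuting $A$ and $B$, and confirms that the spectrum of the $pq \times pq$ matrix is exactly the union of the spectra of the $q$ blocks $D_k = A + 2\cos(k\pi/(q+1))B$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 5
A = rng.standard_normal((p, p))   # general square matrices:
B = rng.standard_normal((p, p))   # not symmetric, and A B != B A

# K(1,1) = I_q (x) A + T (x) B, where T = tridiag[1, 0, 1] of order q
T = np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
K = np.kron(np.eye(q), A) + np.kron(T, B)

# eigenvalues of the q decoupled p-by-p blocks D_k = A + 2 cos(k pi/(q+1)) B
theta = np.arange(1, q + 1) * np.pi / (q + 1)
block_eigs = np.concatenate([np.linalg.eigvals(A + 2 * np.cos(t) * B)
                             for t in theta])

# similarity preserves the spectrum: the two eigenvalue multisets agree
match = np.allclose(np.sort_complex(np.linalg.eigvals(K)),
                    np.sort_complex(block_eigs))
```

Note that only $q$ small eigenvalue problems of size $p$ are solved on the right-hand side, which is the source of the efficiency gain discussed above.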

2. Decompositions

In the following, we present the key observations that lead to the proposed block decomposition method for this class of matrices $K(\alpha,\beta)$, using transformation matrices whose entries are inherent in the special block tridiagonal form of $K(\alpha,\beta)$. Whenever there is no confusion, we shall simply write $K$ for $K(\alpha,\beta)$. Throughout the paper, the operation symbols $\oplus$ and $\otimes$ denote the matrix direct sum and the Kronecker product, respectively.

Theorem 1. When $\alpha = \beta = 1$, the block tridiagonal matrix $K(\alpha,\beta)$ is orthogonally similar to the block diagonal matrix $D = D_1 \oplus D_2 \oplus \cdots \oplus D_q$,

$$D_k = A + 2\cos(\theta_k)B, \quad \text{where } \theta_k = \frac{k\pi}{q+1}, \quad k = 1, 2, \ldots, q.$$

Proof. Let $v_k^T = \sqrt{\frac{2}{q+1}}\,[\sin(\theta_k), \sin(2\theta_k), \ldots, \sin(q\theta_k)]$ and $V_k = v_k \otimes I_p$,

$k = 1, 2, \ldots, q$, where $I_p$ is the identity matrix of dimension $p$ and the symbol $\otimes$ denotes the Kronecker product. It has been shown in [3] that

$$V_i^T K V_j = (v_i^T v_j)\left[A + 2\cos(\theta_j)B\right] = \begin{cases} A + 2\cos(\theta_i)B & \text{for } i = j \\ 0 & \text{otherwise} \end{cases}$$

by stating that the vectors $v_i$ are orthonormal, i.e., $v_i^T v_j = \delta_{ij}$. This paper will skip that proof and just provide the details showing that the $v_i$ are indeed orthonormal. To this end, we need the following formula [4]:

$$\sum_{k=1}^{q} \cos(kx) = \frac{1}{2}\left[S(x) - 1\right], \qquad S(x) = \frac{\sin\left((2q+1)(x/2)\right)}{\sin(x/2)}.$$

Let $t_{ij}$ and $u_{ij}$ denote $(\theta_i - \theta_j)$ and $(\theta_i + \theta_j)$, respectively. We have

$$v_i^T v_j = \frac{2}{q+1}\sum_{k=1}^{q} \sin(k\theta_i)\sin(k\theta_j) = \frac{1}{q+1}\sum_{k=1}^{q}\left[\cos k(\theta_i - \theta_j) - \cos k(\theta_i + \theta_j)\right] = \frac{1}{q+1}\sum_{k=1}^{q}\left(\cos k t_{ij} - \cos k u_{ij}\right) = \frac{1}{2(q+1)}\left[S(t_{ij}) - S(u_{ij})\right].$$

Note that, by L’Hospital’s rule,

$$S(x) = (2q+1)\,\frac{\cos\left((2q+1)(x/2)\right)}{\cos(x/2)}$$

if $\sin(x/2) = 0$. This will be the case when $x = 2k\pi$ for any integer $k$. Now, since $-(q-1) \le i-j \le q-1$, we have $-\pi < t_{ij} = \theta_i - \theta_j = \frac{i-j}{q+1}\pi < \pi$. The

denominator of $S(t_{ij})$ will be equal to zero only if $i = j$. When $i = j$, we have

$$S(t_{ij}) = S(0) = 2q + 1.$$

When $i \ne j$, $S(t_{ij})$ can be simplified as

$$S(t_{ij}) = \frac{\sin\left((2q+1)(t_{ij}/2)\right)}{\sin(t_{ij}/2)} = \frac{\sin\left(2(q+1)(t_{ij}/2)\right)\cos(t_{ij}/2) - \cos\left(2(q+1)(t_{ij}/2)\right)\sin(t_{ij}/2)}{\sin(t_{ij}/2)} = \frac{\sin\left((i-j)\pi\right)\cos(t_{ij}/2) - \cos\left((i-j)\pi\right)\sin(t_{ij}/2)}{\sin(t_{ij}/2)} = -\cos\left((i-j)\pi\right) = (-1)^{i-j+1}$$

since $\sin((i-j)\pi) = 0$, where we have used the fact that $\sin(x - y) = \sin x \cos y - \cos x \sin y$. Therefore,

$$S(t_{ij}) = \begin{cases} 2q+1 & \text{for } i = j \\ (-1)^{i-j+1} & \text{otherwise} \end{cases} \qquad (3)$$

Likewise, since $0 < i+j \le 2q$, we have

$$0 < u_{ij} = \theta_i + \theta_j = \frac{i+j}{q+1}\pi < 2\pi.$$

The denominator of $S(u_{ij})$ will never be equal to zero. Accordingly,

$$S(u_{ij}) = -\cos\left((i+j)\pi\right) = (-1)^{i+j+1}. \qquad (4)$$

Finally from (3) and (4), we obtain

$$S(t_{ij}) - S(u_{ij}) = \begin{cases} 2(q+1) & \text{for } i = j \\ 0 & \text{otherwise} \end{cases}$$

and, therefore,

$$v_i^T v_j = \frac{1}{2(q+1)}\left[S(t_{ij}) - S(u_{ij})\right] = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{otherwise} \end{cases}$$

indicating that the vectors $v_i$ are orthonormal. Accordingly, we have

$$V_i^T K V_j = \begin{cases} A + 2\cos(\theta_j)B & \text{for } i = j \\ 0 & \text{otherwise} \end{cases}$$

This completes the proof.
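The orthonormality just proved is easy to spot-check numerically. In the sketch below (our own NumPy fragment; the paper's experiments use Octave), the matrix $V$ whose $k$-th column is $v_k$ satisfies $V^T V = I_q$:

```python
import numpy as np

q = 7
k = np.arange(1, q + 1)
theta = k * np.pi / (q + 1)                   # theta_k = k pi/(q+1)
# column k of V is v_k = sqrt(2/(q+1)) [sin(theta_k), ..., sin(q theta_k)]^T
V = np.sqrt(2.0 / (q + 1)) * np.sin(np.outer(k, theta))
orthonormal = np.allclose(V.T @ V, np.eye(q))  # v_i^T v_j = delta_ij
```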

Theorem 2. When $\alpha = \beta = 2$, the block tridiagonal matrix $K(\alpha,\beta)$ is similar to the block diagonal matrix $D = D_1 \oplus D_2 \oplus \cdots \oplus D_q$,

$$D_k = A + 2\cos(\theta_k)B, \quad \text{where } \theta_k = \frac{(k-1)\pi}{q-1}, \quad k = 1, 2, \ldots, q.$$

Proof. This block diagonalization was mentioned previously in Corollary 2 of [3] without a proof. Unfortunately, the eigenvectors $v_k^T$ used there to form the transformation matrix $Q$ and the decomposed submatrices $D_k$ contain errors, in which 1) the vector $\sqrt{\frac{2}{q}}\,[\cos(\theta_k), \cos(2\theta_k), \ldots, \cos(q\theta_k)]$ as stated in that paper should be replaced by $\sqrt{\frac{2}{q-1}}\,[1, \cos(\theta_k), \cos(2\theta_k), \ldots, \cos((q-1)\theta_k)]$ with $\theta_k = \frac{(k-1)\pi}{q-1}$; and 2) the expression $\sin(\theta_k)$ in $D_k$ should read $\cos(\theta_k)$.

In this paper, we give a formal proof with the correct eigenvectors and provide the explicit form of the inverse of the transformation matrix Q for K ( 2,2 ) .

Let $v_k^T = \sqrt{\frac{2}{q-1}}\,[1, \cos(\theta_k), \cos(2\theta_k), \ldots, \cos((q-1)\theta_k)]$. Let also

$V_k = v_k \otimes I_p$, $k = 1, 2, \ldots, q$; and $Q = [V_1, V_2, \ldots, V_q]$. We now show that the similarity transformation $Q^{-1} K Q$ block-diagonalizes $K$ into $D$. It deserves mentioning that $Q$ in this case is not orthogonal. $Q^{-1}$, however, exists and is explicitly known. Therefore, it suffices to show that $K V_j = V_j D_j$ for all $j = 1, 2, \ldots, q$.

$$K V_j = \sqrt{\frac{2}{q-1}} \begin{bmatrix} A & 2B & & & \\ B & A & B & & \\ & \ddots & \ddots & \ddots & \\ & & B & A & B \\ & & & 2B & A \end{bmatrix} \begin{bmatrix} I \\ \cos(\theta_j) I \\ \vdots \\ \cos((q-2)\theta_j) I \\ \cos((q-1)\theta_j) I \end{bmatrix} = \sqrt{\frac{2}{q-1}} \begin{bmatrix} A + 2\cos(\theta_j)B \\ \cos(\theta_j)A + \left[1 + \cos(2\theta_j)\right]B \\ \cos(2\theta_j)A + \left[\cos(\theta_j) + \cos(3\theta_j)\right]B \\ \vdots \\ \cos((q-2)\theta_j)A + \left[\cos((q-3)\theta_j) + \cos((q-1)\theta_j)\right]B \\ \cos((q-1)\theta_j)A + 2\cos((q-2)\theta_j)B \end{bmatrix}$$

Applying the identities:

$$1 + \cos(2x) = 2\cos^2(x) \quad \text{and} \quad \cos(x+y) + \cos(x-y) = 2\cos(x)\cos(y)$$

and noting that:

$$\cos((q-2)\theta_j) = \cos((q-1)\theta_j)\cos(\theta_j) + \sin((q-1)\theta_j)\sin(\theta_j) = \cos((q-1)\theta_j)\cos(\theta_j)$$

where we have used the fact that $\sin((q-1)\theta_j) = \sin((j-1)\pi) = 0$, we obtain

$$K V_j = \sqrt{\frac{2}{q-1}} \begin{bmatrix} I \\ \cos(\theta_j) I \\ \cos(2\theta_j) I \\ \vdots \\ \cos((q-2)\theta_j) I \\ \cos((q-1)\theta_j) I \end{bmatrix} \left(A + 2\cos(\theta_j)B\right) = V_j D_j. \qquad (5)$$

Equation (5) holds for all $j$, $1 \le j \le q$. Accordingly, by arranging all the $V_j$ together to form the matrix $Q$, we obtain $K Q = Q D$. In other words, the transformation $Q^{-1} K Q$ block-diagonalizes the matrix $K(2,2)$:

Q 1 K Q = D (6)

where $Q = [V_1, V_2, \ldots, V_q]$ and $D = D_1 \oplus D_2 \oplus \cdots \oplus D_q$.

This is a similarity transformation and, therefore, all eigenvalues of $K(2,2)$ are preserved in the decomposed matrix $D$. It is worth mentioning that obtaining all the eigenvalues from $D$ is far more efficient than from the original matrix $K$, since $D$ consists of only $q$ diagonal blocks $D_j$, $j = 1, \ldots, q$. When it comes to solving linear systems in the transformed space involving $Q$, however, one needs to employ the LU decomposition of $Q$ or to find $Q^{-1}$. Normally, finding the LU decomposition is more efficient and preferred. However, there is no need for an LU decomposition of $Q$ when the inverse of $Q$ is readily available. In the following, we show that $Q^{-1}$ can be obtained explicitly.
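Once the explicit inverse is in hand, solving $K(2,2)x = b$ reduces to $q$ small decoupled solves: $x = Q\,D^{-1}\,Q^{-1}b$. The following NumPy sketch (our own fragment, with hypothetical variable names; it takes the identity $Q^{-1} = (S^{-2}\otimes I_p)\,Q\,(S^{-2}\otimes I_p)$ from the derivation below) illustrates the workflow for random $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 4
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, p))

# assemble K(2,2) = I_q (x) A + T(2,2) (x) B
T = np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
T[0, 1] = 2.0                      # alpha = 2 in the first corner
T[-1, -2] = 2.0                    # beta = 2 in the last corner
K = np.kron(np.eye(q), A) + np.kron(T, B)

# Q = C (x) I_p, where column k of C is v_k (Theorem 2, zero-based rows)
i = np.arange(q)
theta = i * np.pi / (q - 1)
C = np.sqrt(2.0 / (q - 1)) * np.cos(np.outer(i, theta))
Q = np.kron(C, np.eye(p))
# explicit inverse: Q^{-1} = (S^{-2} (x) I_p) Q (S^{-2} (x) I_p)
S2inv = np.kron(np.diag(np.r_[0.5, np.ones(q - 2), 0.5]), np.eye(p))
Qinv = S2inv @ Q @ S2inv

# solve K x = b through the q decoupled blocks D_k = A + 2 cos(theta_k) B
b = rng.standard_normal(p * q)
y = Qinv @ b
x_hat = np.concatenate([np.linalg.solve(A + 2 * np.cos(theta[k]) * B,
                                        y[k * p:(k + 1) * p])
                        for k in range(q)])
x = Q @ x_hat
residual_ok = np.allclose(K @ x, b)
```

Since the blocks are independent, the $q$ calls to `solve` could be dispatched to separate processors, which is the coarse-grain parallelism mentioned above.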

Let $C$ be the matrix formed by the vectors $v_k$: $C = [v_1, v_2, \ldots, v_q]$, whose explicit form is

$$C = \sqrt{\frac{2}{q-1}} \begin{bmatrix} 1 & 1 & \cdots & 1 & 1 \\ 1 & u(1,1) & \cdots & u(1,q-2) & -1 \\ 1 & u(2,1) & \cdots & u(2,q-2) & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & u(q-2,1) & \cdots & u(q-2,q-2) & (-1)^{q-2} \\ 1 & -1 & \cdots & (-1)^{q-2} & (-1)^{q-1} \end{bmatrix}$$

where the symbol $u(i,j)$ denotes $\cos\left(\frac{ij}{q-1}\pi\right)$. It is well known that

$$C^{-1} = \sqrt{\frac{2}{q-1}} \begin{bmatrix} 1/4 & 1/2 & \cdots & 1/2 & 1/4 \\ 1/2 & u(1,1) & \cdots & u(1,q-2) & -1/2 \\ 1/2 & u(2,1) & \cdots & u(2,q-2) & 1/2 \\ \vdots & \vdots & & \vdots & \vdots \\ 1/2 & u(q-2,1) & \cdots & u(q-2,q-2) & (-1)^{q-2}/2 \\ 1/4 & -1/2 & \cdots & (-1)^{q-2}/2 & (-1)^{q-1}/4 \end{bmatrix}$$

Let $S = \operatorname{diag}(\sqrt{2}, 1, \ldots, 1, \sqrt{2})$, a diagonal matrix. It can easily be seen that

$$C^{-1} = S^{-2} C S^{-2}.$$

Recall that $V_k = v_k \otimes I_p$, $k = 1, 2, \ldots, q$; and $Q = [V_1, V_2, \ldots, V_q]$, which can be expressed as:

$$Q = [v_1 \otimes I_p, v_2 \otimes I_p, \ldots, v_q \otimes I_p] = [v_1, v_2, \ldots, v_q] \otimes I_p = C \otimes I_p.$$

Therefore,

$$Q^{-1} = (C \otimes I_p)^{-1} = C^{-1} \otimes I_p = (S^{-2} C S^{-2}) \otimes I_p = (S^{-2} \otimes I_p)(C \otimes I_p)(S^{-2} \otimes I_p) = (S^{-2} \otimes I_p)\, Q\, (S^{-2} \otimes I_p)$$

where we have used the following two properties of Kronecker products [5]:

$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1} \quad \text{and} \quad (AB) \otimes I = (A \otimes I)(B \otimes I).$$

Note that $S^{-2} \otimes I_p$ is a diagonal matrix, almost an identity matrix except for the first and last blocks. This shows that $Q^{-1}$ is almost the same as $Q$. Computationally, $Q^{-1}$ can be obtained directly from $C^{-1}$, which is explicitly known, since $Q^{-1}$ is simply a block structure of $C^{-1}$. Note that $C$ is symmetric, but not orthogonal.
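The identity $C^{-1} = S^{-2} C S^{-2}$ is an orthogonality property of this cosine (DCT-I-type) matrix and can be spot-checked numerically, e.g. with the following NumPy sketch (ours, not from the paper):

```python
import numpy as np

q = 6
i = np.arange(q)                    # zero-based: row i holds cos(i * theta_k)
theta = i * np.pi / (q - 1)         # theta_k = (k-1) pi/(q-1)
C = np.sqrt(2.0 / (q - 1)) * np.cos(np.outer(i, theta))   # column k is v_k
# S^{-2} for S = diag(sqrt(2), 1, ..., 1, sqrt(2))
S2inv = np.diag(np.r_[0.5, np.ones(q - 2), 0.5])
inverse_ok = np.allclose(np.linalg.inv(C), S2inv @ C @ S2inv)
```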

Theorem 3. When $\alpha = 1$ and $\beta = 2$, the block tridiagonal matrix $K(\alpha,\beta)$ is similar to the block diagonal matrix $D = D_1 \oplus D_2 \oplus \cdots \oplus D_q$,

$$D_k = A + 2\cos(\theta_k)B, \quad \text{where } \theta_k = \frac{(2k-1)\pi}{2q}, \quad k = 1, 2, \ldots, q.$$

Proof. Let $v_k^T = \sqrt{\frac{2}{q}}\,[\sin(\theta_k), \sin(2\theta_k), \ldots, \sin(q\theta_k)]$; $V_k = v_k \otimes I_p$,

$k = 1, 2, \ldots, q$; and $Q = [V_1, V_2, \ldots, V_q]$. We have

$$K V_j = \sqrt{\frac{2}{q}} \begin{bmatrix} A & B & & & \\ B & A & B & & \\ & \ddots & \ddots & \ddots & \\ & & B & A & B \\ & & & 2B & A \end{bmatrix} \begin{bmatrix} \sin(\theta_j) I \\ \sin(2\theta_j) I \\ \vdots \\ \sin((q-1)\theta_j) I \\ \sin(q\theta_j) I \end{bmatrix} = \sqrt{\frac{2}{q}} \begin{bmatrix} \sin(\theta_j)A + \sin(2\theta_j)B \\ \sin(2\theta_j)A + \left[\sin(\theta_j) + \sin(3\theta_j)\right]B \\ \vdots \\ \sin((q-1)\theta_j)A + \left[\sin((q-2)\theta_j) + \sin(q\theta_j)\right]B \\ \sin(q\theta_j)A + 2\sin((q-1)\theta_j)B \end{bmatrix}$$

Using the identities $\sin(2x) = 2\sin(x)\cos(x)$ and $\sin(x+y) + \sin(x-y) = 2\sin(x)\cos(y)$ yields

$$K V_j = \sqrt{\frac{2}{q}} \begin{bmatrix} \sin(\theta_j) I \\ \sin(2\theta_j) I \\ \vdots \\ \sin((q-1)\theta_j) I \\ \sin(q\theta_j) I \end{bmatrix} \left(A + 2\cos(\theta_j)B\right) = V_j D_j$$

where we have used the fact that

$$\sin((q-1)\theta_j) = \sin(q\theta_j)\cos(\theta_j) - \cos(q\theta_j)\sin(\theta_j) = \sin(q\theta_j)\cos(\theta_j),$$

since $\cos(q\theta_j) = \cos\left(\frac{2j-1}{2}\pi\right) = 0$. Note that the matrix $Q$ here is not orthogonal either. It can be shown that $Q^{-1}$ exists. The transformation $Q^{-1} K Q$, therefore, block-diagonalizes $K$ into $D$.

In the following, we show that $Q^{-1}$ is almost identical to $Q^T$ and, therefore, can be obtained explicitly from $Q^T$ without any difficulty. Again, let $C$ be the matrix formed by the vectors $v_k$: $C = [v_1, v_2, \ldots, v_q]$, $V_k = v_k \otimes I_p$, and $Q = [V_1, V_2, \ldots, V_q]$, as was done in the proof of Theorem 2. We have in this case:

$$C = \sqrt{\frac{2}{q}} \begin{bmatrix} s(1,1) & s(1,2) & \cdots & s(1,q) \\ s(2,1) & s(2,2) & \cdots & s(2,q) \\ \vdots & \vdots & & \vdots \\ s(q-1,1) & s(q-1,2) & \cdots & s(q-1,q) \\ 1 & -1 & \cdots & (-1)^{q-1} \end{bmatrix}$$

whose inverse is:

$$C^{-1} = \sqrt{\frac{2}{q}} \begin{bmatrix} s(1,1) & s(2,1) & \cdots & s(q-1,1) & 1/2 \\ s(1,2) & s(2,2) & \cdots & s(q-1,2) & -1/2 \\ \vdots & \vdots & & \vdots & \vdots \\ s(1,q) & s(2,q) & \cdots & s(q-1,q) & (-1)^{q-1}/2 \end{bmatrix}$$

where the symbol $s(i,j)$ denotes $\sin(i\theta_j)$ with $\theta_j = \frac{(2j-1)\pi}{2q}$. If $C$ is partitioned as $\begin{bmatrix} R \\ r \end{bmatrix}$, with $R$ consisting of the first $q-1$ rows and $r$ being the last row of $C$, we clearly see that $C^{-1} = \left[\, R^T \;\middle|\; \tfrac{1}{2} r^T \,\right]$. In other words, $C^{-1}$ is the

same as $C^T$ except for the last column. Now let $S = \operatorname{diag}(1, \ldots, 1, \sqrt{2})$, a diagonal matrix. We have $C^{-1} = C^T S^{-2}$. Following the same derivation as in Theorem 2, we conclude that:

$$Q^{-1} = (C \otimes I_p)^{-1} = C^{-1} \otimes I_p = (C^T S^{-2}) \otimes I_p = Q^T (S^{-2} \otimes I_p)$$

since $Q = C \otimes I_p$. Note that $C$ in this case is neither symmetric nor orthogonal.
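Again, this can be spot-checked numerically. The NumPy sketch below (ours) confirms both $C^{-1} = C^T S^{-2}$ and, consequently, $Q^{-1} = Q^T (S^{-2} \otimes I_p)$ for a small case:

```python
import numpy as np

p, q = 2, 6
m = np.arange(1, q + 1)
theta = (2 * m - 1) * np.pi / (2 * q)           # theta_j = (2j-1) pi/(2q)
# C[i-1, j-1] = sqrt(2/q) sin(i * theta_j)
C = np.sqrt(2.0 / q) * np.sin(np.outer(m, theta))
S2inv = np.diag(np.r_[np.ones(q - 1), 0.5])     # S^{-2}, S = diag(1,...,1,sqrt2)

c_ok = np.allclose(np.linalg.inv(C), C.T @ S2inv)
Q = np.kron(C, np.eye(p))
q_ok = np.allclose(np.linalg.inv(Q), Q.T @ np.kron(S2inv, np.eye(p)))
```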

3. Numerical Experiments

To demonstrate the validity and advantages of this block decomposition approach, we present numerical experiments for all three cases of the $(\alpha,\beta)$ pair discussed in this paper. Our main task in the experiments is to find all the eigenvalues of the following matrix $K(\alpha,\beta) \in \mathbb{R}^{pq \times pq}$, with $p = 4$ and $q = 5$:

$$K(\alpha,\beta) = \begin{bmatrix} A & \alpha B & & & \\ B & A & B & & \\ & \ddots & \ddots & \ddots & \\ & & B & A & B \\ & & & \beta B & A \end{bmatrix} \qquad (7)$$

where,

$$A = \begin{bmatrix} 4 & -2 & & \\ -2 & 8 & -2 & \\ & -2 & 8 & -2 \\ & & -2 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} -1 & & & \\ & -2 & & \\ & & -2 & \\ & & & -1 \end{bmatrix}$$

The unshown entries in A and B are all zeros. The matrix K ( 1,1 ) can be obtained from a finite element discretization of the heat or membrane equation over a rectangular domain, subject to Dirichlet boundary condition along two opposite sides of the boundary [6]. The matrices K ( 1,2 ) and K ( 2,2 ) can be obtained from discretization of the same problem subject to certain Neumann boundary conditions. The values of p and q depend on the number of grid lines in the discretization domain.

We intentionally keep the dimensions of $A$ and $B$ small so that one can easily see that $AB \ne BA$ by explicit multiplication. In other words, the matrices $A$ and $B$ do not commute and, therefore, the traditional fast Poisson solver fails to simultaneously decompose $A$ and $B$. Using the block decomposition approach presented in the previous section, it is not difficult to see that $K(\alpha,\beta)$ is similar to a block diagonal matrix $\tilde K(\alpha,\beta)$ of the form:

$$\tilde K(\alpha,\beta) = D_1(\alpha,\beta) \oplus D_2(\alpha,\beta) \oplus \cdots \oplus D_q(\alpha,\beta)$$

for all three cases where,

$$D_k(\alpha,\beta) = \begin{bmatrix} 4 - 2\cos(\theta_k) & -2 & & \\ -2 & 8 - 4\cos(\theta_k) & -2 & \\ & -2 & 8 - 4\cos(\theta_k) & -2 \\ & & -2 & 4 - 2\cos(\theta_k) \end{bmatrix} \qquad (8)$$

with $$\theta_k = \begin{cases} \dfrac{k\pi}{q+1} & \text{for } (\alpha,\beta) = (1,1) \\[6pt] \dfrac{(k-1)\pi}{q-1} & \text{for } (\alpha,\beta) = (2,2) \\[6pt] \dfrac{(2k-1)\pi}{2q} & \text{for } (\alpha,\beta) = (1,2) \end{cases} \qquad k = 1, 2, \ldots, q.$$
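The whole experiment is straightforward to reproduce. The NumPy sketch below (the paper's own runs used Octave's eig) assembles $K(\alpha,\beta)$ for all three $(\alpha,\beta)$ pairs with the test matrices $A$ and $B$ of Equation (7) as given above, and checks that the eigenvalues of each $20 \times 20$ matrix $K(\alpha,\beta)$ coincide with those collected from its five $4 \times 4$ blocks $D_k$. Note that the similarity holds for any $A$ and $B$, so the check is insensitive to the exact sign conventions of the test matrices.

```python
import numpy as np

p, q = 4, 5
A = np.diag([4.0, 8.0, 8.0, 4.0]) \
    - 2.0 * (np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1))
B = -np.diag([1.0, 2.0, 2.0, 1.0])

def K_blocktridiag(A, B, q, alpha, beta):
    """K(alpha,beta): block tridiagonal with alpha*B / beta*B in the corners."""
    T = np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    T[0, 1] = alpha
    T[-1, -2] = beta
    return np.kron(np.eye(q), A) + np.kron(T, B)

thetas = {(1, 1): np.arange(1, q + 1) * np.pi / (q + 1),
          (2, 2): np.arange(q) * np.pi / (q - 1),
          (1, 2): (2 * np.arange(1, q + 1) - 1) * np.pi / (2 * q)}

all_match = True
for (alpha, beta), th in thetas.items():
    K = K_blocktridiag(A, B, q, alpha, beta)
    block_eigs = np.concatenate([np.linalg.eigvals(A + 2 * np.cos(t) * B)
                                 for t in th])
    # the blocks D_k are symmetric here, so every eigenvalue is real
    all_match &= np.allclose(np.sort(np.linalg.eigvals(K).real),
                             np.sort(block_eigs.real))
```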

In the following, we present the results from our experiments, which were performed using the Octave software. Specifically, the Octave built-in function eig(X) is used to obtain the eigenvalues of a square matrix X. For comparison purposes, we first display the eigenvalues computed from the original matrices $K(\alpha,\beta)$ (Equation (7)) without applying the block decomposition for all three $(\alpha,\beta)$ cases. Note that each $K(\alpha,\beta)$ is a $20 \times 20$ matrix. The numerical results are listed in Table 1, where all eigenvalues appear in the order produced by Octave without reordering.

We then present in Table 2, Table 3, and Table 4 the eigenvalues obtained directly from the decomposed diagonal blocks $D_1$ through $D_5$ (Equation (8)) of $\tilde K(1,1)$, $\tilde K(1,2)$, and $\tilde K(2,2)$, respectively, where each $D_k$ is a $4 \times 4$ matrix. As can be seen from these tables, all eigenvalues are preserved after the block decomposition. For example, all the eigenvalues shown in Table 2 are identical to those of $K(1,1)$ in Table 1, except for the ordering. This is because all three block decompositions are similarity transformations. The advantage of using the decomposed matrices to compute the eigenvalues is apparent: the diagonal blocks of the decomposed matrix are explicitly known and require only $A$, $B$, and $\theta_k$, without the need to form the block tridiagonal matrix $K(\alpha,\beta)$.

To close this section, it deserves mentioning that the computational complexity of finding all eigenvalues of a square matrix of size $n \times n$ is $O(n^3)$ in general. For the matrices $K(\alpha,\beta)$ presented in this paper, $n = pq$. Without decomposition, the computational complexity is $O(p^3 q^3)$. With the proposed

Table 1. Eigenvalues computed from the original matrices $K(\alpha,\beta)$ without decomposition.

Table 2. Eigenvalues computed directly from $D_1$ through $D_5$ of $\tilde K(1,1)$.

Table 3. Eigenvalues computed directly from $D_1$ through $D_5$ of $\tilde K(1,2)$.

Table 4. Eigenvalues computed directly from $D_1$ through $D_5$ of $\tilde K(2,2)$.

decomposition, the computational complexity reduces to only $O(q p^3)$, a significant saving in computation. The advantage of the decomposition is obvious, not to mention the additional advantage that can be exploited from the coarse-grain parallelism offered by the block decomposition when the problem is to be solved using multiple processors.

4. Conclusions

In this paper, we have presented a unified block decomposition scheme for three special cases of block tridiagonal matrices of the form $K(\alpha,\beta)$, as shown in Equation (1). This class of block tridiagonal matrices arises frequently from the finite difference approximation to certain partial differential equations, such as Laplace's, Poisson's, or Helmholtz's equation, using five- or nine-point schemes over a rectangular or cubic domain [7]. It can also arise from some finite element discretizations of the same equations [8] and from surface fitting with B-spline functions [9]. The values of $\alpha$ and $\beta$ typically depend on the boundary conditions of the physical problem: $\alpha = \beta = 1$ for Dirichlet-Dirichlet conditions, $\alpha = 1$ and $\beta = 2$ for Dirichlet-Neumann conditions, and $\alpha = \beta = 2$ for Neumann-Neumann conditions.

The block decompositions presented are all based on similarity transformations with known eigenvectors of tridiagonal matrices $T(\alpha,\beta) = \text{tridiag}[\beta b, a, \alpha b]$ of the same form as $K(\alpha,\beta)$, with the submatrices $A$ and $B$ replaced by scalars $a$ and $b$. We have also derived the explicit decomposed block diagonal matrices $\tilde K(\alpha,\beta)$ for all three cases ($K(1,1)$, $K(1,2)$, and $K(2,2)$):

$$\tilde K(\alpha,\beta) = D_1(\alpha,\beta) \oplus D_2(\alpha,\beta) \oplus \cdots \oplus D_q(\alpha,\beta)$$

where $D_k(\alpha,\beta) = A + 2\cos(\theta_k(\alpha,\beta))B$, in which the angles $\theta_k(\alpha,\beta)$, $k = 1, 2, \ldots, q$, depend on the values of $\alpha$ and $\beta$, as shown in Section 2. The availability of the explicit decomposed matrices offers great computational and programming advantages. Numerical experiments have been conducted using the software Octave to demonstrate the validity and advantages of the approach. Although analogous to the classical fast Poisson solver, this approach does not require the matrices $A$ and $B$ to be symmetric or to commute. It also exploits large-grain parallelism and lends itself to parallel and distributed computation for solving both linear systems and eigenvalue problems.

Cite this paper: Chen, H. (2021) Decompositions of Some Special Block Tridiagonal Matrices. Advances in Linear Algebra & Matrix Theory, 11, 54-65. doi: 10.4236/alamt.2021.112005.
References

[1]   Buzbee, B.L., Golub, G.H. and Nielson, C.W. (1970) On Direct Methods for Solving Poisson’s Equations. SIAM Journal on Numerical Analysis, 7, 627-656.
https://doi.org/10.1137/0707049

[2]   Strang, G. (1986) Introduction to Applied Mathematics. Wellesley-Cambridge Press, Cambridge, MA.

[3]   Chen, H.C. (2002) A Block Fourier Decomposition Method. PARA 2002, 2367, 351-358.
https://doi.org/10.1007/3-540-48051-X_35

[4]   Tolstov, G.P. (1962) Fourier Series. Translated from the Russian by R.A. Silverman. Dover Publications, Inc., New York.

[5]   Taboga, M. (2017) Properties of the Kronecker Product.
https://www.statlect.com/matrix-algebra/Kronecker-product-properties

[6]   Meirovitch, L. (1980) Computational Methods in Structural Dynamics. Sijthoff & Noordhoff, Maryland.

[7]   Milne, W.E. (1970) Numerical Solution of Differential Equations. Dover Publications, Inc., New York.

[8]   Wait, R. and Mitchell, A.R. (1985) Finite Element Analysis and Applications. John Wiley & Sons, New York.

[9]   Chen, F. and Goshtasby, A. (1989) A Parallel B-Spline Surface Fitting Algorithm. ACM Transactions on Graphics, 8, 41-50.
https://doi.org/10.1145/49155.214377
