AM Vol. 11 No. 9, September 2020
Gershgorin and Rayleigh Bounds on the Eigenvalues of the Finite-Element Global Matrices via Optimal Similarity Transformations
Abstract: The large finite-element global stiffness matrix is an algebraic, discrete, even-order differential operator with zero row sums. Direct application of the practically convenient, readily applied Gershgorin eigenvalue bounding theorem to this matrix inherently fails to foresee its positive definiteness, predictably and routinely producing only a trivial lower bound on the least eigenvalue of this theoretically assured positive definite matrix. Considered here are practical methods for producing an optimal similarity transformation of the finite-element global stiffness matrix, following which nontrivial, realistic lower bounds on the least eigenvalue can be located, then further improved. The technique is restricted here to the common case of a global stiffness matrix having only non-positive off-diagonal entries. For such a matrix, application of the Gershgorin bounding method may be carried out by a mere matrix-vector multiplication.

1. Introduction

Knowledge, even approximate, of the extremal eigenvalues of the large, positive definite global stiffness matrix generated by the finite-element method (see [1]) is essential for assessing the correctness of the numerical solution of the global algebraic system of equations set up by this variational method. Other numerical procedures related to this large algebraic system, such as iterative solution methods, are also greatly affected by the condition number, the ratio of the largest to the smallest eigenvalue of the matrix (see [2]). In this note we examine, theoretically and numerically, various practical procedures for the a-posteriori, ever better, bounding of the eigenvalues of the large global stiffness matrices generated by the finite-element method, predominantly based on a preparatory, most favorable similarity transformation. Special attention is paid in this note to the common class of matrices having non-positive off-diagonal entries (see [3] [4] [5]). For such matrices, application of the Gershgorin bounding method may be most conveniently carried out by a mere matrix-vector multiplication, which may, in turn, be very effectively carried out on the element level (see [6]).

2. Gershgorin’s Eigenvalue Bounds

The utterly simple Gershgorin and Rayleigh eigenvalue bound theorems (see [7] [8]) are of such fundamental importance in computational linear algebra that we find it worthwhile to repeat them here in full.

Throughout this paper we denote a number by a lower case Greek letter, a vector by a lower case Roman letter, and a matrix by an upper case Roman letter.

The eigenvalue spectrum λ(A) of a real symmetric matrix A is real. For such a matrix Gershgorin's theorem assumes the simpler form below.

Theorem (Gershgorin). Let A = A ( n × n ) = A T be symmetric, and so of a real eigensystem. Then every eigenvalue of A lies in, or on the end points of, at least one of the intervals

|λ − A_ii| ≤ |A_i1| + |A_i2| + ⋯ + |A_in|, i = 1, 2, ⋯, n, (1)

the diagonal term |A_ii| being omitted from the sum.

Namely,

min_i [A_ii − (|A_i1| + |A_i2| + ⋯ + |A_in|)] ≤ λ(A) ≤ max_i [A_ii + (|A_i1| + |A_i2| + ⋯ + |A_in|)] (2)

for every eigenvalue of A.

Proof. Let λ be an eigenvalue of A and x = {x_1, x_2, ⋯, x_n} ≠ 0 the corresponding eigenvector, so that Ax = λx. Assume that for this λ the kth component of x, x_k, is largest in magnitude, and normalize x so that x_k = 1 and |x_i| ≤ 1, i ≠ k. The kth equation of Ax = λx is then

A_k1 x_1 + A_k2 x_2 + ⋯ + A_kk + ⋯ + A_kn x_n = λ (3)

and

|λ − A_kk| = |A_k1 x_1 + A_k2 x_2 + ⋯ + A_kn x_n| ≤ |A_k1||x_1| + |A_k2||x_2| + ⋯ + |A_kn||x_n| ≤ |A_k1| + |A_k2| + ⋯ + |A_kn| (4)

since |x_i| ≤ 1. We do not know k in advance, but we are sure that λ certainly lies in one of these intervals. □

Gershgorin’s theorem readily provides outside bounds on all eigenvalues of matrix A.
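As a minimal NumPy sketch (ours, not the paper's), the outer bounds of Equation (2) reduce to a few vector operations:

```python
import numpy as np

def gershgorin_bounds(A):
    # Outer bounds of Equation (2): every eigenvalue of the symmetric
    # matrix A lies between min_i [A_ii - r_i] and max_i [A_ii + r_i],
    # where r_i sums the absolute off-diagonal entries of row i.
    d = np.diag(A)
    r = np.abs(A).sum(axis=1) - np.abs(d)
    return (d - r).min(), (d + r).max()

# The 3x3 string matrix used later in the paper (Equation (31)):
A = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
lo, hi = gershgorin_bounds(A)
lam = np.linalg.eigvalsh(A)
assert lo <= lam.min() and lam.max() <= hi   # [0, 4] encloses the spectrum
```

For this matrix the routine returns the interval [0, 4], foreshadowing the trivial lower bound discussed in the next section.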

3. Positive-Definite Matrices with Non-Positive Off-Diagonal Entries

Positive definite and symmetric global stiffness matrices with a positive diagonal and non-positive off-diagonal entries are common in the finite-element, or finite-difference, method. For such matrices the application of Gershgorin's theorem simplifies into a mere matrix-vector multiplication, efficiently carried out as a summation of such multiplications on the element level of the global mesh (see [6]).

Say matrix A is symmetric, A = A^T, positive (semi) definite, and of non-positive off-diagonal entries. Then obviously

λ_1(A) ≥ min_i (Ae)_i, e = {1, 1, 1, ⋯, 1} (5)

where λ_1(A) is the smallest eigenvalue of matrix A, and λ_n(A) the largest. Indeed, since A_ij ≤ 0 for i ≠ j, the Gershgorin lower bound A_ii − Σ_{j≠i} |A_ij| = A_ii + Σ_{j≠i} A_ij is precisely the ith component of Ae. If matrix A is symmetric and positive definite, then λ_1(A) > 0. For example,

Ae = [  1 -1  0
       -1  2 -1
        0 -1  2 ] {1, 1, 1} = {0, 0, 1},  min_i (Ae)_i = 0,  0 ≤ λ_1(A). (6)

Gershgorin’s theorem correctly predicts here that matrix A is, at least, positive semi-definite, but it fails to ascertain that it is actually positive definite, with a positive lowest eigenvalue. To obtain a more realistic, nontrivial, lower bound on the lowest eigenvalue of matrix A we resort to similarity transformations designed to rescue the lower bound from triviality.
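The collapse of the Gershgorin lower bound into one matrix-vector product, and its triviality here, can be checked directly (a sketch of ours, assuming NumPy):

```python
import numpy as np

# For a symmetric matrix with positive diagonal and non-positive
# off-diagonal entries, |A_ij| = -A_ij for i != j, so the Gershgorin
# lower bound of Equation (2) collapses to min_i (Ae)_i: a single
# matrix-vector product.  For the matrix of Equation (6) it is trivial:
A = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
e = np.ones(3)
lower = (A @ e).min()               # = 0, Equation (6)
lam1 = np.linalg.eigvalsh(A)[0]     # actually positive
assert lower == 0.0 and lam1 > 0.0
```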

4. Rayleigh Quotient

Let matrix A = A^T be real and symmetric. If x is an eigenvector of A for corresponding eigenvalue λ, Ax = λx, then x^T Ax = λ x^T x, or λ = x^T Ax / x^T x. The rational function, of variable vector x,

R[x] = x^T Ax / x^T x, x ≠ 0 (7)

is the Rayleigh quotient of matrix A. It has some interesting properties of great practical and theoretical reach.

First property: The quotient R [ x ] produces very accurate eigenvalue approximations from even not so good eigenvector approximations. Indeed, let v be an eigenvector corresponding to some λ , and let

x = v + εw, v^T v = 1, w^T w = 1, w^T v = v^T w = 0 (8)

where ϵ w is the error vector.

Then

R[x] = x^T Ax / x^T x = (v + εw)^T A (v + εw) / (v + εw)^T (v + εw) = (v^T Av + 2ε w^T Av + ε² w^T Aw) / (v^T v + 2ε w^T v + ε² w^T w) (9)

or

R[x] = (λ + ε² w^T Aw) / (1 + ε²) = λ + ε² (w^T Aw − λ) / (1 + ε²) (10)

since w^T v = 0 implies also that w^T Av = λ w^T v = 0. Thus, here

R[x] = λ + O(ε²). (11)

Rayleigh quotient R[x] produces a very accurate, O(ε²), approximation to an eigenvalue from an approximation x to the corresponding eigenvector that is only O(ε) accurate. Conversely, a very good approximation to an eigenvalue need not come from a very good approximation to an eigenvector.
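This quadratic accuracy is easy to observe numerically; the following sketch (ours, on an assumed 2×2 test matrix) deliberately corrupts an exact eigenvector by O(ε) and watches the Rayleigh quotient err by only O(ε²):

```python
import numpy as np

# Illustration of Equation (11): an O(eps) error in the eigenvector
# yields only an O(eps^2) error in the Rayleigh eigenvalue estimate.
A = np.array([[2., -1.],
              [-1., 2.]])
lam, X = np.linalg.eigh(A)
v, w = X[:, 0], X[:, 1]               # exact orthonormal eigenvectors
for eps in (1e-1, 1e-2, 1e-3):
    x = v + eps * w                   # O(eps) eigenvector error
    R = x @ A @ x / (x @ x)
    # exact error = eps^2 (lam_2 - lam_1) / (1 + eps^2), per Equation (10)
    assert abs(R - lam[0]) < eps**2 * (lam[1] - lam[0])
```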

Second property: If A = A T , then for any vector x 0

λ_1(A) ≤ R[x] = x^T Ax / x^T x ≤ λ_n(A) or λ_1(A) ≤ x^T Ax ≤ λ_n(A), x^T x = 1. (12)

The Rayleigh quotient provides inner bounds on all eigenvalues of symmetric matrix A. Hence the Rayleigh bounds and the Gershgorin bounds complement each other.

Theorem (Rayleigh). Let the eigenvalues of A = A^T be arranged in the ascending order λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n, with orthogonal eigenvectors x_1, x_2, ⋯, x_n. Then

λ_{k+1} ≤ x^T Ax / x^T x ≤ λ_n if x^T x_1 = x^T x_2 = ⋯ = x^T x_k = 0, x ≠ 0 (13)

with the lower equality holding if and only if x = x_{k+1}, and the upper equality holding if and only if x = x_n. Also

λ_1 ≤ x^T Ax / x^T x ≤ λ_{n−k} if x^T x_n = x^T x_{n−1} = ⋯ = x^T x_{n−k+1} = 0, x ≠ 0 (14)

with the lower equality holding if and only if x = x_1, and the upper if and only if x = x_{n−k}. The two inequalities reduce to

λ_1 ≤ x^T Ax / x^T x ≤ λ_n (15)

for arbitrary, nonzero, x R n .

Proof. Vector x ∈ R^n, orthogonal to x_1, x_2, ⋯, x_k, has the unique expansion

x = α_{k+1} x_{k+1} + α_{k+2} x_{k+2} + ⋯ + α_n x_n (16)

with which

x^T Ax = λ_{k+1} α_{k+1}² + λ_{k+2} α_{k+2}² + ⋯ + λ_n α_n². (17)

We normalize x by

x^T x = α_{k+1}² + α_{k+2}² + ⋯ + α_n² = 1 (18)

and use this equation to eliminate α_{k+1}² from x^T Ax, to be left with

λ(x) = x^T Ax = λ_{k+1} + α_{k+2}²(λ_{k+2} − λ_{k+1}) + ⋯ + α_n²(λ_n − λ_{k+1}). (19)

By assumption λ_j − λ_{k+1} ≥ 0 if j > k + 1, and hence

λ(x) = λ_{k+1} + a non-negative quantity (20)

or λ(x) ≥ λ_{k+1}, with equality holding if and only if

α_{k+2}²(λ_{k+2} − λ_{k+1}) + ⋯ + α_n²(λ_n − λ_{k+1}) = 0. (21)

In case of distinct eigenvalues, λ_j − λ_{k+1} ≠ 0, j = k + 2, ⋯, n, equality holds if and only if α_{k+2} = α_{k+3} = ⋯ = α_n = 0, and λ(x_{k+1}) = λ_{k+1}. If eigenvalues repeat and λ_j − λ_{k+1} = 0, then α_j need not be zero, but equality still holds if and only if x is in the invariant subspace spanned by the eigenvectors of λ_{k+1}.

To prove the upper bound we use

α_n² = 1 − α_{k+1}² − α_{k+2}² − ⋯ − α_{n−1}² (22)

to eliminate it from λ ( x ) , so as to be left with

λ(x) = λ_n − α_{k+1}²(λ_n − λ_{k+1}) − ⋯ − α_{n−1}²(λ_n − λ_{n−1}) (23)

and λ(x) ≤ λ_n, with equality holding if and only if x = x_n. The proof of the second part of the theorem is the same. □

5. Perron’s Theorem on Positive Matrices

Positive matrices are common in the finite-element method. The following is a symmetric version of Perron’s theorem on positive matrices (see [9] ).

Theorem (Perron). If A is a symmetric positive matrix, A i j > 0 , then the eigenvector corresponding to the largest (positive) eigenvalue of A is positive and unique.

Proof. If x_n is a unit eigenvector corresponding to λ_n, and x ≠ x_n is any unit vector, x^T x = 1, then

x^T Ax < λ_n = λ(x_n) = x_n^T Ax_n (24)

and λ_n is certainly positive. Moreover, since A_ij > 0, the components of x_n cannot be of different signs, for this would contradict the assumption that x_n maximizes λ(x). Say then that (x_n)_i ≥ 0. But none of the components (x_n)_i can be zero, since Ax_n = λ_n x_n and, obviously, Ax_n > 0. Hence x_n > 0.

There can be no other positive vector orthogonal to x n , and so the eigenvector, and also the largest eigenvalue λ n , are unique. □
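Perron's conclusion is immediate to check numerically; the following small sketch (ours, on an assumed random symmetric positive test matrix) confirms the one-signed extremal eigenvector:

```python
import numpy as np

# Numerical check of Perron's theorem: for a symmetric matrix with
# strictly positive entries, the largest eigenvalue is positive and its
# eigenvector is of one sign (up to an overall sign chosen by numpy).
rng = np.random.default_rng(1)
B = rng.random((6, 6))
A = B + B.T + 0.1          # symmetric, with strictly positive entries
lam, X = np.linalg.eigh(A)
xn = X[:, -1]              # eigenvector of the largest eigenvalue
assert lam[-1] > 0.0
assert np.all(xn > 0) or np.all(xn < 0)
```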

The following is another important and useful statement on positive matrices.

Theorem. Suppose that A has a positive inverse, A^{-1} > 0. Let x be any vector satisfying Ax − e = r, e = {1, 1, ⋯, 1}, with ‖r‖ < 1 in the maximum norm. Then

‖x‖ / (1 + ‖r‖) ≤ ‖A^{-1}‖ ≤ ‖x‖ / (1 − ‖r‖). (25)

Proof. Obviously x = A^{-1}e + A^{-1}r, so that

‖x‖ ≤ ‖A^{-1}e‖ + ‖A^{-1}r‖ ≤ ‖A^{-1}‖ + ‖A^{-1}‖ ‖r‖ (26)

and

‖x‖ ≤ (1 + ‖r‖) ‖A^{-1}‖. (27)

To prove the other bound write x = A^{-1}e − (−A^{-1}r), observe that, A^{-1} being positive, ‖A^{-1}e‖ = ‖A^{-1}‖, and have that

‖x‖ ≥ ‖A^{-1}e‖ − ‖A^{-1}r‖ ≥ ‖A^{-1}‖ − ‖A^{-1}‖ ‖r‖. (28)

Hence, if ‖r‖ < 1, then

‖x‖ ≥ (1 − ‖r‖) ‖A^{-1}‖. (29)

This completes the proof. □
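A quick numerical check of this theorem, in the infinity norm, on the 3×3 matrix of Equation (31), whose inverse is entirely positive (a sketch of ours; the residual r is an arbitrary choice):

```python
import numpy as np

# Sandwich (25): ||x||/(1+||r||) <= ||A^{-1}|| <= ||x||/(1-||r||),
# all norms being maximum (infinity) norms.
A = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
e = np.ones(3)
r = np.array([0.1, -0.05, 0.02])      # any residual with ||r|| < 1
x = np.linalg.solve(A, e + r)         # so that A x - e = r
norm_r = np.abs(r).max()
norm_x = np.abs(x).max()
norm_Ainv = np.abs(np.linalg.inv(A)).sum(axis=1).max()   # = 6 here
assert norm_x / (1 + norm_r) <= norm_Ainv <= norm_x / (1 - norm_r)
```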

6. Similarity Transformation, Similar Matrices

We will make also great use of the following fact.

Theorem. If λ is an eigenvalue of matrix A, then λ is also an eigenvalue of the similar matrix P^{-1}AP. Similar matrices A and P^{-1}AP have the same eigenvalues.

Proof. We have

Ax = λx, P^{-1}Ax = λP^{-1}x, x = Py, P^{-1}APy = λP^{-1}Py, P^{-1}APy = λy, (30)

and the result follows. □

In this note we are greatly interested in accurate eigenvectors.

To fix ideas consider the typical finite-element, or finite-difference, matrix

A = [  1 -1  0        A^{-1} = [ 3 2 1
      -1  2 -1                   2 2 1
       0 -1  2 ],                1 1 1 ]. (31)

The matrix A is symmetric and positive definite, λ_1(A) > 0, with a corresponding eigenvector that is positive. The fact that A^{-1} is entirely positive is a manifestation of the physical fact that a point force applied to the system moves all free points of the system, and all in the same direction.

To save Gershgorin's theorem from the trivial and useless prediction λ_1(A) ≥ 0, we propose to similarly transform matrix A with a positive diagonal matrix D, D_ii > 0, such that

min_i (D^{-1}ADe)_i ≤ λ_1(A), e = {1, 1, 1}. (32)

The fact that diagonal matrix D is positive, D_ii > 0, guarantees that the similar matrix D^{-1}AD retains the entry sign pattern of the original matrix A. Namely, (D^{-1}AD)_ii > 0 and (D^{-1}AD)_ij ≤ 0.

For example, with

D = diag{6, 5, 3}, D^{-1}ADe = {1/6, 1/5, 1/3}, and λ_1(A) ≥ 1/6 = 0.167. (33)

Now Gershgorin’s theorem correctly recognizes that matrix A is positive definite.

For a better bound, matrix D needs to raise the lowest entry and to lower the highest entry of D^{-1}ADe. Thus, the optimal D is that for which all entries of D^{-1}ADe are equal, or

D^{-1}ADe = λe, A(De) = λ(De). (34)

Namely, the optimal D is such that De is the eigenvector corresponding to the lowest eigenvalue of matrix A. With such a better D, obtained here by inverse iteration on the, here positive, A^{-1}, A^{-1}_ij > 0, we have

D = diag{793, 636, 353}, D^{-1}ADe = {157/793, 21/106, 70/353}, and λ_1(A) ≥ 157/793 = 0.197982. (35)

Taking the diagonal entries of matrix D as vector x we obtain from Rayleigh’s quotient

R[x] = x^T Ax / x^T x = 0.198062 ≥ λ_1(A) (36)

or

0.197982 ≤ λ_1(A) ≤ 0.198062 (37)

which are good enclosing bounds.
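The whole enclosing procedure of Equations (33)-(37) fits in a few lines (a NumPy sketch of ours):

```python
import numpy as np

# A positive diagonal similarity D^{-1} A D raises the Gershgorin lower
# bound from the trivial 0 toward the exact lowest eigenvalue, while the
# Rayleigh quotient of d = De bounds it from above.
A = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])

def gersh_lower(A, d):
    # min_i (D^{-1} A D e)_i, computed as the matrix-vector product Ad / d
    return ((A @ d) / d).min()

G0 = gersh_lower(A, np.array([6., 5., 3.]))       # 1/6, Equation (33)
d = np.array([793., 636., 353.])                  # near-eigenvector, Eq. (35)
G = gersh_lower(A, d)                             # 157/793 = 0.197982
R = d @ A @ d / (d @ d)                           # 0.198062, Equation (36)
lam1 = np.linalg.eigvalsh(A)[0]
assert abs(G0 - 1/6) < 1e-12 and G0 <= lam1
assert G <= lam1 <= R
```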

As for the highest eigenvalue of matrix A, λ 3 ( A ) , we have from Gershgorin and Rayleigh that

R[x] = x^T Ax / x^T x = 3.24673 ≤ λ_3(A) ≤ 4, x = {19, −42, 33} (38)

with x obtained by the application of three steps of the power method to matrix A, starting with x_0 = {1, −1, 1}, to have a reasonable approximation to the eigenvector corresponding to the highest eigenvalue of A.

Next we turn our attention to more realistic finite element examples.

7. Taut String (Cable)

The basic issues of this paper are best illustrated on some concrete cases. First we consider the one-dimensional problem of the taut string discretized by n linear finite elements of which the element stiffness matrix is

k = (1/h) [  1 -1
            -1  1 ],  λ_1(k) = 0, λ_2(k) = 2/h (39)

in which h = 1/n is the size of the element, n being the number of mesh subdivisions. The element matrix k is positive semi-definite, with a zero eigenvalue for eigenvector x = {1, 1}. Element stiffness matrix k is, moreover, such that k_ii > 0, k_ij ≤ 0, and of this sign pattern is also any global matrix A assembled from it.

We assemble the element k matrices into the global matrix A, here over a mesh of five elements, impose the essential boundary condition of a fixed end point by deleting the first row and first column of the assembled global matrix, and are left with

A = [  2 -1  0  0  0
      -1  2 -1  0  0
       0 -1  2 -1  0
       0  0 -1  2 -1
       0  0  0 -1  1 ],  h = 1,

λ_1(A) = 0.081014053, and λ_5(A) = 3.6825. (40)

Global stiffness matrix A is now positive definite, λ(A) > 0. Matrix A is, moreover, tridiagonal, with non-positive off-diagonal entries, A_ij ≤ 0.

We know (see [10] [11]) that as the size of matrix A = A(n × n) increases, its lowest eigenvalue decreases, actually as O(n^{-2}).
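Assembling this fixed-free string matrix for general n and watching its lowest eigenvalue decay is a two-line exercise (a sketch of ours; for this particular tridiagonal matrix the lowest eigenvalue is known in closed form as 4 sin²(π/(2(2n+1))), which is O(n^{-2})):

```python
import numpy as np

# The taut-string global stiffness matrix of Equation (40), h = 1.
def string_matrix(n):
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[-1, -1] = 1.0        # free end; the fixed-end row/column is deleted
    return A

for n in (5, 10, 20, 40):
    lam1 = np.linalg.eigvalsh(string_matrix(n))[0]
    exact = 4.0 * np.sin(np.pi / (2 * (2 * n + 1)))**2
    assert abs(lam1 - exact) < 1e-12          # O(n^-2) decay confirmed
assert abs(np.linalg.eigvalsh(string_matrix(5))[0] - 0.081014053) < 1e-8
```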

8. Taut String, Similarity Transformation Followed by a Gershgorin Bound

Being keenly interested in an actual, numerical lower bound on λ 1 ( A ) we naively apply Gershgorin’s theorem directly to global stiffness matrix A of Equation (40) and obtain from it the disappointing

0 ≤ λ(A) ≤ 4. (41)

The bounding fails to confirm that matrix A is positive definite, only that it is positive semi-definite. We remark that the lower Gershgorin bound is obtained here as

min_i (Ae)_i, e = {1, 1, 1, ⋯, 1} (42)

inasmuch as A_ii > 0 and A_ij ≤ 0.

To avert a trivial Gershgorin bound we propose to similarly transform matrix A into P^{-1}AP, with a positive diagonal matrix P to facilitate the inversion and the subsequent matrix-matrix multiplications, while still maintaining (P^{-1}AP)_ij ≤ 0.

For example, we take P = diag{10, 19, 27, 33, 37} and have

(P^{-1}AP)e = {0.100, 0.053, 0.074, 0.061, 0.108},

min_i (P^{-1}APe)_i = 0.053, (43)

and

0.053 ≤ λ(A) ≤ 4 (44)

triumphantly ascertaining that global stiffness matrix A is indeed positive definite.
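This transformation, too, is one matrix-vector product away (a NumPy sketch of ours, using the P just quoted):

```python
import numpy as np

# Equations (43)-(44): with P = diag(10, 19, 27, 33, 37), the Gershgorin
# lower bound on the 5x5 string matrix of Equation (40) rises from the
# trivial 0 to 1/19 = 0.0526 (rounded to 0.053 above).
A = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
A[-1, -1] = 1.0
p = np.array([10., 19., 27., 33., 37.])
bound = ((A @ p) / p).min()        # min_i (P^{-1} A P e)_i
lam1 = np.linalg.eigvalsh(A)[0]
assert abs(bound - 1/19) < 1e-12
assert 0.0 < bound <= lam1
```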

We have already remarked that we desire matrix P to be such that all entries of ( P 1 A P ) e are nearly equal, or that

(P^{-1}AP)e = λe, A(Pe) = λ(Pe) (45)

or that Pe is the (positive) eigenvector corresponding to λ 1 ( A ) .

How to efficiently get good approximations to the first eigenvector of global stiffness matrix A is the subject of the rest of this note.

9. Taut String, Linearization of the Characteristic Polynomial

We know that as the size of matrix A = A(n × n) increases, its lowest eigenvalue decreases, tending to zero as O(n^{-2}) (see again [10] [11]). We start the linearization by writing

Ax = λx, x = {1, α, β, γ, δ} (46)

and have, by a successive linearization in λ of each equation, that, approximately,

1 − 15λ = 0, λ = 1/15 = 0.06667. (47)

Then, linearly

α = 2 − λ, β = 3 − 4λ, γ = 4 − 10λ, δ = 5 − 20λ, (48)

and it follows, upon substituting λ = 1/15 and scaling by 15, that the eigenvector corresponding to the lowest eigenvalue of matrix A is approximately

x = {15, 29, 41, 50, 55}, (49)

and, correspondingly

R = R[x] = x^T Ax / x^T x = 0.081117. (50)

We put P = diag[x] and have from

G = G(x) = min_i (P^{-1}APe)_i = 0.06667 (51)

the reasonable bounds

G ≤ λ_1(A) ≤ R or 0.06667 ≤ λ_1(A) ≤ 0.081117. (52)
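The pair (G, R) produced by the linearization's approximate eigenvector can be checked directly (a sketch of ours):

```python
import numpy as np

# Equations (49)-(52): the approximate eigenvector x = {15, 29, 41, 50, 55}
# encloses lambda_1(A) between the Gershgorin bound G and the Rayleigh
# quotient R.
A = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
A[-1, -1] = 1.0
x = np.array([15., 29., 41., 50., 55.])
G = ((A @ x) / x).min()            # = 1/15 = 0.06667, Equation (51)
R = x @ A @ x / (x @ x)            # = 671/8272 = 0.081117, Equation (50)
lam1 = np.linalg.eigvalsh(A)[0]
assert abs(G - 1/15) < 1e-12 and abs(R - 671/8272) < 1e-12
assert G <= lam1 <= R
```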

10. Taut String, Quadratization of the Characteristic Polynomial

Keeping terms up to quadratic in λ in the characteristic equation of matrix A = A(5 × 5), we have, for vector x of Equation (46), the five quadratically curtailed equations of Ax = λx:

α = 2 − λ, β = 3 − 4λ + λ², γ = 4 − 10λ + 6λ², δ = 5 − 20λ + 21λ², 35λ² − 15λ + 1 = 0, (53)

and

λ = 0.082578, x = {1, 1.917, 2.677, 3.215, 3.492} (54)

resulting in

R = R[x] = 0.081017 and G = min_i (P^{-1}APe)_i = 0.07919, (55)

and

G ≤ λ_1(A) ≤ R or 0.0792 ≤ λ_1(A) ≤ 0.081017. (56)

The success of the linearization, or quadratization, of the characteristic equation hinges on the theoretically assured fact (see [10] [11]) that λ_1(A) ≪ 1, and that as the size of the finite-element global stiffness matrix A increases, λ_1(A) further decreases.

11. Taut String, Linearization Following a Shift

To work with a matrix of a lesser minimal eigenvalue we turn to the shifted matrix

B = A − RI, R = 0.081117, λ_i(B) = λ_i(A) − R (57)

from which we get the approximate linear characteristic polynomial

9.85553λ + 0.00102 = 0, (58)

then

λ = −0.000102999, x = {1, 1.919, 2.683, 3.229, 3.513},

R = R[x] = 0.081014052771, (59)

and

G = min_i (P^{-1}APe)_i = 0.08101402206921 (60)

with

R − G = 3.1 × 10^{-8}. (61)

12. Taut String, Shifted Power Method

We continue with our quest for good approximations to the fundamental eigenvector of the finite-element global stiffness matrix A. We propose to first iterate for the highest eigenvalue and the corresponding eigenvector. We choose to use the power method since it requires but one matrix-vector multiplication per step. We start with

x_0 = {1, −1, 1, −1, 1}, to have x_6 = Ax_5, R[x_6] = 3.68242. (62)

Then we undertake the shift

B = R[x_6] I − A, (63)

and apply to it the power method starting now with

x_0 = {1, 4, 6, 8, 9}, x_1 = Bx_0 (64)

to successively have

R_5 = 0.0821133, G_5 = 0.00985
R_6 = 0.0817712, G_6 = 0.0108047
R_7 = 0.0815362, G_7 = 0.0255912
R_8 = 0.0813744, G_8 = 0.036619986 (65)

as upper and lower bounds on λ 1 ( A ) .
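The shifted power iteration of this section can be sketched as follows (ours, not the paper's code; the shift value is the one of Equation (62), and more iterations than printed above are taken to tighten the pair):

```python
import numpy as np

# Shifted power method of Sections 11-12: with sigma near the largest
# eigenvalue, B = sigma*I - A is entrywise non-negative, and its power
# iterates turn toward the eigenvector of the LOWEST eigenvalue of A,
# giving an ever tighter Rayleigh/Gershgorin pair (R, G) about lambda_1(A).
A = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
A[-1, -1] = 1.0
sigma = 3.6825                        # approximate lambda_5(A), Eq. (62)
B = sigma * np.eye(5) - A
x = np.array([1., 4., 6., 8., 9.])    # positive start, Equation (64)
for _ in range(100):
    x = B @ x
    x /= np.abs(x).max()              # keep the iterate scaled
R = x @ A @ x / (x @ x)               # upper bound on lambda_1(A)
G = ((A @ x) / x).min()               # lower bound, valid since x > 0
lam1 = np.linalg.eigvalsh(A)[0]
assert G <= lam1 <= R
assert R - G < 1e-6
```

Note that B retains non-negative entries, so a positive starting vector stays positive and the Gershgorin bound G remains legitimate at every step.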

13. Hanging String

The freely hanging string (see [12]) has zero tension at the free lower end, with the tension growing linearly towards the fixed upper suspension point, which carries the entire weight of the string.

The element stiffness matrix of the hanging string of size h is

k_i = ((2i − 1) / (2h)) [  1 -1
                          -1  1 ] (66)

assembled into the global stiffness matrix

A = [  1  -1   0   0   0
      -1   4  -3   0   0
       0  -3   8  -5   0
       0   0  -5  12  -7
       0   0   0  -7  16 ],

A^{-1} = (1/315) [ 563 248 143  80  35
                   248 248 143  80  35
                   143 143 143  80  35
                    80  80  80  80  35
                    35  35  35  35  35 ]. (67)

including the essential boundary condition of zero movement of the hanging point. Global stiffness matrix A factors as

A = [  1  0  0  0  0     [ 1 0 0 0 0     [ 1 -1  0  0  0
      -1  1  0  0  0       0 3 0 0 0       0  1 -1  0  0
       0 -1  1  0  0   ×   0 0 5 0 0   ×   0  0  1 -1  0
       0  0 -1  1  0       0 0 0 7 0       0  0  0  1 -1
       0  0  0 -1  1 ]     0 0 0 0 9 ]     0  0  0  0  1 ] (68)

showing matrix A to be symmetric and positive definite. In fact, λ 1 ( A ) = 0.398987923256 and λ 5 ( A ) = 22.007479 .
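Both the factorization and the quoted extremal eigenvalues are quickly verified (a NumPy sketch of ours, taking the matrix exactly as printed in Equation (67)):

```python
import numpy as np

# Hanging-string matrix of Equation (67): A = L D L^T with a unit
# bidiagonal L and D = diag(1, 3, 5, 7, 9), hence positive definite.
A = np.array([[ 1., -1.,  0.,  0.,  0.],
              [-1.,  4., -3.,  0.,  0.],
              [ 0., -3.,  8., -5.,  0.],
              [ 0.,  0., -5., 12., -7.],
              [ 0.,  0.,  0., -7., 16.]])
L = np.eye(5) - np.eye(5, k=-1)       # unit lower bidiagonal factor
D = np.diag([1., 3., 5., 7., 9.])
assert np.allclose(L @ D @ L.T, A)
lam = np.linalg.eigvalsh(A)
assert abs(lam[0] - 0.398987923256) < 1e-6
assert abs(lam[-1] - 22.007479) < 1e-4
```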

We are seeking good lower and upper bounds on λ 1 ( A ) > 0 , the lowest eigenvalue of global stiffness A.

14. Hanging String, Linearization of the Characteristic Polynomial

Here we have

x = {1, α, β, γ, δ}, α = 1 − λ, β = 1 − (5/3)λ, γ = 1 − (34/15)λ, δ = 1 − (298/105)λ (69)

with the linearized characteristic polynomial for λ

1069λ − 315 = 0 (70)

then

λ = 0.294668, x = {1, 0.705, 0.509, 0.332, 0.164}, R = R[x] = 0.42172494 (71)

from which we get

G = G(x) = min_i (P^{-1}APe)_i = 0.29467 (72)

and then the reasonable bounds

G ≤ λ_1(A) ≤ R, 0.2947 ≤ λ_1(A) ≤ 0.4217, λ_1(A) = 0.398987923256. (73)

15. Hanging String, Second Sweep for a Linearized Characteristic Polynomial

Now we undertake the shift

B = A − RI, R = 0.4217249428452726 (74)

intended to further reduce λ 1 ( B ) , and have for

x = { 1 , α , β , γ , δ } , B x = λ x (75)

a linearization resulting in

α = 0.578275 − λ
β = 0.356409 − 1.38552λ
γ = 0.193228 − 1.57125λ
δ = 0.0650291 − 1.63685λ (76)

and then the linearized characteristic polynomial

0.339557 + 14.5657λ = 0, λ = −0.023312131. (77)

From this we have

x = { 1 , 0.601587 , 0.388708 , 0.229857 , 0.103188 } (78)

by which we compute for original matrix A

R [ x ] = 0.398988578 , G [ x ] = 0.3984128 (79)

and G λ 1 ( A ) R , translating into the good bounds

0.39841 ≤ λ_1(A) ≤ 0.3989886, λ_1(A) = 0.398987923256. (80)

16. Hanging String, Quadratization of the Characteristic Polynomial

Here, for vector x of Equation (75), and the quadratization of A x = λ x we have

α = 1 − λ, β = 1 − (5/3)λ + (1/3)λ², γ = 1 − (34/15)λ + (13/15)λ², δ = 1 − (298/105)λ + (165/105)λ², (81)

and, from the last equation of A x = λ x , the quadratic approximate characteristic polynomial

767λ² − 1069λ + 315 = 0, λ = 0.4231227, (82)

and have from this λ that

x = { 1 , 0.577 , 0.354 , 0.196 , 0.080 } , R [ x ] = 0.4022336291. (83)

17. Hanging String, Power Method

The global stiffness matrix we take is still the modest 5 × 5

A = [  1  -1   0   0   0
      -1   4  -3   0   0
       0  -3   8  -5   0
       0   0  -5  12  -7
       0   0   0  -7  16 ],  λ_1(A) = 0.398987923256, λ_5(A) = 22.007479 (84)

with corresponding eigenvectors

x_1(A) = {9.71716, 5.84013, 3.77107, 2.22872, 1}

x_5(A) = {0.00252621, −0.0530693, 0.317706, −0.858211, 1}. (85)

We start the power method with

x_0 = {1, −1, 1, −1, 1}, R[x_0] = 14.6 (86)

to successively produce

R [ x 1 ] = 21.0301 , R [ x 2 ] = 21.7884 , R [ x 3 ] = 21.9497 , R [ x 4 ] = 21.9917 , R [ x 5 ] = 22.0031 , (87)

then

R[x_6] = 22.0063 for

x_6 = Ax_5 = {0.00296949, −0.0578104, 0.330741, −0.869775, 1}. (88)

It follows then, from Rayleigh and Gershgorin, that

22.0063 ≤ λ_5(A) ≤ 24, λ_5(A) = 22.0075. (89)

18. Hanging String, Shifted Power Method

Next we turn our attention to computing an ever better approximation to λ_1(A), the lowest eigenvalue of global stiffness matrix A of the hanging string. For this we take the shifted matrix

B = R[x_6] I − A, λ(B) = R[x_6] − λ(A). (90)

The power method applied to matrix B converges now to x 1 , the eigenvector corresponding to the lowest eigenvalue of original matrix A.

We start with

x_0 = {9, 5, 3, 2, 1}, R[x_0] = 0.408333, G[x_0] = −0.333333 (91)

and repeatedly compute

x 1 = B x 0 = { 1 , 0.556701 , 0.345357 , 0.216497 , 0.103095 } ,

R [ x 1 ] = 0.403054 , G [ x 1 ] = 0.02975 ,

x 2 = B x 1 = { 1 , 0.5593 , 0.351981 , 0.214014 , 0.0989981 } ,

R [ x 2 ] = 0.401485 , G [ x 2 ] = 0.192842 ,

x 3 = B x 2 = { 1 , 0.562325 , 0.356026 , 0.213042 , 0.097039 } ,

R [ x 3 ] = 0.400867 , G [ x 3 ] = 0.269716 ,

x 4 = B x 3 = { 1 , 0.565334 , 0.35880 , 0.212863 , 0.0961647 } ,

R [ x 4 ] = 0.400526 , G [ x 4 ] = 0.306781 ,

x 5 = B x 4 = { 1 , 0.568152 , 0.360926 , 0.213109 , 0.095850 } ,

R [ x 5 ] = 0.400289 , G [ x 5 ] = 0.325285 ,

x 6 = B x 5 = { 1 , 0.570725 , 0.362709 , 0.213587 , 0.0958295 } ,

R [ x 6 ] = 0.400105 , G [ x 6 ] = 0.33514

x 7 = B x 6 = { 1 , 0.573053 , 0.364291 , 0.214189 , 0.0959672 } ,

R [ x 7 ] = 0.399954 , G [ x 7 ] = 0.34100

x 8 = B x 7 = { 1 , 0.575153 , 0.365742 , 0.214857 , 0.0961906 } ,

R [ x 8 ] = 0.399827 , G [ x 8 ] = 0.34503

x 9 = B x 8 = { 1 , 0.577051 , 0.367094 , 0.215554 , 0.0964599 } ,

R [ x 9 ] = 0.399719 , G [ x 9 ] = 0.34823 , (92)

and have that

0.348229 ≤ λ_1(A) ≤ 0.399719, λ_1(A) = 0.398988, (93)

which we gladly accept.

19. The Four-Nodes Rectangular Membrane Element

Next we move on to two-dimensional finite-element membrane problems, namely the boundary value problem

∂²u/∂x² + ∂²u/∂y² + f(x, y) = 0 in D

u = 0 on S_1, and ∂u/∂n = 0 on S_2 (94)

where n is the outward normal to the boundary S = S_1 ∪ S_2 enclosing domain D.

The element stiffness and mass matrices for the rectangular four-nodes membrane finite element of sides a and b are

k = (b/(6a)) [  2 -2  1 -1
               -2  2 -1  1
                1 -1  2 -2
               -1  1 -2  2 ]  +  (a/(6b)) [  2  1 -2 -1
                                             1  2 -1 -2
                                            -2 -1  2  1
                                            -1 -2  1  2 ] (95)

and

m = (ab/36) [ 4 2 2 1
              2 4 1 2
              2 1 4 2
              1 2 2 4 ] (96)

respectively. If a = b = h , then

k = (1/6) [  4 -1 -1 -2
            -1  4 -2 -1
            -1 -2  4 -1
            -2 -1 -1  4 ]  and  m = (h²/36) [ 4 2 2 1
                                              2 4 1 2
                                              2 1 4 2
                                              1 2 2 4 ]. (97)

with eigenvalues

λ(k) = (1/6) {0, 4, 6, 6}, λ(m) = (h²/36) {1, 3, 3, 9}. (98)

Element stiffness matrix k of Equation (97) has a positive diagonal and non-positive off-diagonal entries, and hence so does any global matrix assembled from it.
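The element eigenvalues of Equation (98) are readily confirmed (a NumPy sketch of ours, with h = 1):

```python
import numpy as np

# Square (a = b = h) membrane element matrices of Equation (97) and a
# check of their eigenvalues, Equation (98).
k = (1/6) * np.array([[ 4., -1., -1., -2.],
                      [-1.,  4., -2., -1.],
                      [-1., -2.,  4., -1.],
                      [-2., -1., -1.,  4.]])
m = (1/36) * np.array([[4., 2., 2., 1.],
                       [2., 4., 1., 2.],
                       [2., 1., 4., 2.],
                       [1., 2., 2., 4.]])
# eigvalsh returns the eigenvalues in ascending order
assert np.allclose(np.linalg.eigvalsh(k), np.array([0., 4., 6., 6.]) / 6)
assert np.allclose(np.linalg.eigvalsh(m), np.array([1., 3., 3., 9.]) / 36)
```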

Global stiffness matrix A for the modest 3 × 3 mesh is

A = [ 16 -2  0 -2 -2  0  0  0  0
      -2 16 -2 -2 -2 -2  0  0  0
       0 -2  8  0 -2 -1  0  0  0
      -2 -2  0 16 -2  0 -2 -2  0
      -2 -2 -2 -2 16 -2 -2 -2 -2
       0 -2 -1  0 -2  8  0 -2 -1
       0  0  0 -2 -2  0  8 -1  0
       0  0  0 -2 -2 -2 -1  8 -1
       0  0  0  0 -2 -1  0 -1  4 ] (99)

with, as expected, A i i > 0 and A i j 0 . The eigenvalues and eigenvectors of global stiffness A are

λ_1 = 1.61549, x_1 = {0.14854, 0.2737, 0.3541, 0.2737, 0.5209, 0.6713, 0.3541, 0.6713, 1}
λ_2 = 5.07254, x_2 = {0.1258, 0.242, 0.4179, 0.242, 0.2034, 0.3329, 0.418, 0.3329, 1}
λ_3 = 6.94842, x_3 = {0, 0.538359, 1.97486, 0.538359, 0, 1, 1.97486, 1, 0}
λ_4 = 8.20581, x_4 = {0.385, 0.4198, 2.934, 0.4198, 0.6593, 2.762, 2.935, 2.762, 1}
λ_5 = 10.2995, x_5 = {0, 0.119, 0.5390, 0.1197, 0, 1, 0.5390, 1, 0}
λ_6 = 12.189, x_6 = {26.991, 19.691, 13.26, 19.692, 12.05, 7.953, 13.26, 7.953, 1}
λ_7 = 18.0621, x_7 = {9.224, 8.124, 0.246, 8.124, 6.739, 0.292, 0.246, 0.292, 1}
λ_8 = 18.752, x_8 = {0, 4.773, 0.7950, 4.773, 0, 1, 0.7950, 1, 0}
λ_9 = 18.855, x_9 = {6.105, 0.0677, 1.487, 0.0677, 8.579, 1.152, 1.487, 1.152, 1}. (100)

20. Rectangular Membrane, Power Method

We prefer the power method as it requires only a matrix-vector multiplication per step. We start the method with the initial vector x_0

x 0 = { 6 , 0 , 2 , 0 , 10 , 1 , 2 , 1 , 1 } , R [ x 0 ] = 18.8027 (101)

and keep multiplying, or raising to an ever higher power, to have

x 1 = A x 0 = { 10 , 0.1724 , 3.017 , 0.1724 , 16.03 , 1.983 , 3.017 , 1.983 , 1.897 }

R [ x 1 ] = 18.8418

x 2 = A x 1 = { 10 , 0.234 , 2.815 , 0.2342 , 15.730 , 2.023 , 2.815 , 2.023 , 1.865 }

R [ x 2 ] = 18.8491

x 3 = A x 2 = { 10 , 0.266 , 2.703 , 0.266 , 15.52 , 2.018 , 2.702 , 2.018 , 1.830 }

R [ x 3 ] = 18.8516

x 4 = A x 3 = { 10 , 0.280 , 2.638 , 0.280 , 15.37 , 2.005 , 2.638 , 2.005 , 1.807 }

R [ x 4 ] = 18.8526

x 5 = A x 4 = { 10 , 0.283 , 2.5985 , 0.283 , 15.25 , 1.991 , 2.599 , 1.99 , 1.791 }

R [ x 5 ] = 18.8531

x 6 = A x 5 = { 10 , 0.2791 , 2.573 , 0.2791 , 15.16 , 1.98 , 2.573 , 1.980 , 1.779 }

R [ x 6 ] = 18.8533

x 7 = A x 6 = { 10 , 0.271 , 2.556 , 0.271 , 15.09 , 1.971 , 2.556 , 1.971 , 1.769 }

R [ x 7 ] = 18.8535

x 8 = { 10 , 0.2595 , 2.544 , 0.260 , 15.025 , 1.964 , 2.544 , 1.964 , 1.76 }

R [ x 8 ] = 18.8537 (102)

such that, by Rayleigh and Gershgorin

18.8537 ≤ λ_9(A) ≤ 32. (103)

21. Rectangular Membrane, Shifted Power Method

Now we go after the more interesting lowest eigenvalue of the global stiffness matrix A, replacing A by the shifted matrix

B = R[x_4] I − A. (104)

We start with x 0 and continue to have

x 0 = { 1 , 3 , 4 , 3 , 5 , 7 , 4 , 7 , 10 }

R[x_0] = 1.64964, G[x_0] = −6

x 1 = B x 0 = { 1.441 , 2.815 , 3.85 , 2.815 , 5.348 , 6.954 , 3.85 , 6.95 , 10 }

R [ x 1 ] = 1.61878 , G [ x 1 ] = 0.759863

x 2 = B x 1 = { 1.51 , 2.821 , 3.76 , 2.82 , 5.35 , 6.91 , 3.757 , 6.905 , 10 }

R [ x 2 ] = 1.61685 , G [ x 2 ] = 1.39789

x 3 = B x 2 = { 1.519 , 2.816 , 3.700 , 2.816 , 5.329 , 6.868 , 3.700 , 6.868 , 10 }

R [ x 3 ] = 1.61624 , G [ x 3 ] = 1.56061

x 4 = B x 3 = { 1.518 , 2.804 , 3.661 , 2.804 , 5.307 , 6.839 , 3.661 , 6.839 , 10 }

R [ x 4 ] = 1.61594 , G [ x 4 ] = 1.57068

x 5 = B x 4 = { 1.514 , 2.793 , 3.634 , 2.793 , 5.289 , 6.815 , 3.634 , 6.816 , 10 }

R [ x 5 ] = 1.61577 , G [ x 5 ] = 1.57916

x 6 = B x 5 = { 1.509 , 2.782 , 3.613 , 2.782 , 5.273 , 6.796 , 3.613 , 6.796 , 10 }

R [ x 6 ] = 1.61567 , G [ x 6 ] = 1.58616

x 7 = B x 6 = { 1.505 , 2.773 , 3.598 , 2.773 , 5.260 , 6.780 , 3.598 , 6.78 , 10 }

R [ x 7 ] = 1.6156 , G [ x 7 ] = 1.59187

x 8 = B x 7 = { 1.501 , 2.766 , 3.59 , 2.766 , 5.25 , 6.767 , 3.59 , 6.767 , 10 }

R [ x 8 ] = 1.61556 , G [ x 8 ] = 1.59651 , (105)

and

1.59651 ≤ λ_1(A) ≤ 1.61556, λ_1(A) = 1.61549, (106)

which we appraise as quite good and tight numerical bounds.

22. Triangular Membrane, Triangular Finite Elements

The linear, first-order, membrane finite-element stiffness matrix k for a triangle of sides L_1, L_2, L_3 and area T is

k = (1/(4T)) (L_1² C_1 + L_2² C_2 + L_3² C_3) (107)

where C 1 , C 2 , C 3 are the constant matrices

C_1 = (1/2) [  2 -1 -1     C_2 = (1/2) [  0 -1  1     C_3 = (1/2) [  0  1 -1
              -1  0  1                   -1  2 -1                    1  0 -1
              -1  1  0 ],                 1 -1  0 ],                -1 -1  2 ]. (108)

Otherwise, the element stiffness matrix is written, by the aid of the law of cosines, as

k = (1/(4T)) [  L_1²              -L_1 L_2 cos θ_3   -L_1 L_3 cos θ_2
               -L_1 L_2 cos θ_3    L_2²              -L_2 L_3 cos θ_1
               -L_1 L_3 cos θ_2   -L_2 L_3 cos θ_1    L_3² ] (109)

where θ_i is the angle opposite L_i. If every θ_i < π/2, then k_ij < 0 for i ≠ j, and of this sign pattern is also any global stiffness matrix assembled from such elements.
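The two expressions (107) and (109) agree identically; here is a cross-check of ours on an assumed 3-4-5 right triangle (T = 6), with side L_i lying opposite vertex i:

```python
import numpy as np

# Equation (107): k as a combination of the constant C matrices.
L1, L2, L3 = 5.0, 4.0, 3.0
T = 6.0
C1 = 0.5 * np.array([[ 2., -1., -1.], [-1., 0., 1.], [-1., 1., 0.]])
C2 = 0.5 * np.array([[ 0., -1., 1.], [-1., 2., -1.], [1., -1., 0.]])
C3 = 0.5 * np.array([[ 0., 1., -1.], [1., 0., -1.], [-1., -1., 2.]])
k = (L1**2 * C1 + L2**2 * C2 + L3**2 * C3) / (4 * T)

# Equation (109): the law-of-cosines form of the same matrix.
def cos_opp(a, b, opp):
    # cosine of the angle opposite side `opp`, by the law of cosines
    return (a*a + b*b - opp*opp) / (2 * a * b)

c1, c2, c3 = cos_opp(L2, L3, L1), cos_opp(L1, L3, L2), cos_opp(L1, L2, L3)
k2 = np.array([[ L1*L1,      -L1*L2*c3,  -L1*L3*c2],
               [-L1*L2*c3,    L2*L2,     -L2*L3*c1],
               [-L1*L3*c2,   -L2*L3*c1,   L3*L3]]) / (4 * T)
assert np.allclose(k, k2)                   # the two forms agree
assert np.allclose(k.sum(axis=1), 0.0)      # zero row sums
```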

We consider now an equilateral triangular membrane discretized by four equilateral triangular finite elements per side. The membrane is held fixed on one side, to produce the 10 × 10 symmetric and positive definite global stiffness matrix

A = [  6 -2  0  0 -1  0  0  0  0  0
      -2 12 -2  0 -2 -2  0  0  0  0
       0 -2 12 -2  0 -2 -2  0  0  0
       0  0 -2  6  0  0 -1  0  0  0
      -1 -2  0  0  6 -2  0 -1  0  0
       0 -2 -2  0 -2 12 -2 -2 -2  0
       0  0 -2 -1  0 -2  6  0 -1  0
       0  0  0  0 -1 -2  0  6 -2 -1
       0  0  0  0  0 -2 -1 -2  6 -1
       0  0  0  0  0  0  0 -1 -1  2 ] (110)

with, as expected, A i i > 0 and A i j 0 . The ten computed eigenvalues of this matrix are:

λ = { 0.61 , 2.29 , 4.06 , 4.74 , 6.67 , 6.98 , 8.45 , 10.55 , 14.82 , 14.83 } . (111)

23. Triangular Membrane, Power Method

We start our iterative quest for the highest eigenvalue of matrix A with x 0 , and then continue with the power method to have

x 0 = { 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 }

x 1 = A x 0 = { 0.38 , 0.5 , 0.5 , 0.38 , 0.25 , 1 , 0.25 , 0.25 , 0.25 , 0.17 }

R [ x 1 ] = 13.9798

x 2 = A x 1 = { 0.22 , 0.45 , 0.45 , 0.22 , 0.16 , 1 , 0.16 , 0.18 , 0.18 , 0.05 }

R [ x 2 ] = 14.6874

x 3 = A x 2 = { 0.16 , 0.44 , 0.44 , 0.16 , 0.14 , 1 , 0.14 , 0.17 , 0.17 , 0.03 }

R [ x 3 ] = 14.8011

x 4 = A x 3 = { 0.13 , 0.43 , 0.43 , 0.13 , 0.13 , 1 , 0.13 , 0.17 , 0.17 , 0.03 }

R [ x 4 ] = 14.8234

x 5 = A x 4 = { 0.12 , 0.42 , 0.42 , 0.12 , 0.13 , 1 , 0.13 , 0.17 , 0.17 , 0.03 }

R [ x 5 ] = 14.8286

x 6 = A x 5 = { 0.11 , 0.42 , 0.42 , 0.11 , 0.13 , 1 , 0.13 , 0.17 , 0.17 , 0.03 }

R [ x 6 ] = 14.8301

x 7 = A x 6 = { 0.11 , 0.41 , 0.41 , 0.11 , 0.13 , 1 , 0.13 , 0.18 , 0.18 , 0.03 }

R [ x 7 ] = 14.8306

x 8 = A x 7 = { 0.11 , 0.41 , 0.41 , 0.11 , 0.13 , 1 , 0.13 , 0.18 , 0.18 , 0.03 }

R [ x 8 ] = 14.8308 , (112)

and thus, by Rayleigh and Gershgorin

14.8308 ≤ λ_10(A) ≤ 24, λ_10(A) = 14.831. (113)

24. Triangular Membrane, Shifted Power Method

Next we turn our attention to the lowest eigenvalue λ 1 ( A ) of global stiffness matrix A of the triangular membrane. For this we apply the power method again, but this time to the shifted matrix

B = R[x_8] I − A (114)

in which R[x_8] is the Rayleigh quotient for x_8 obtained by the power method in the previous section.

We start with an arbitrary x 0 and continue with the shifted power method to have

x 0 = { 1 , 2 , 2 , 1 , 4 , 5 , 4 , 7 , 7 , 10 }

R [ x 0 ] = 0.633962

x 1 = B x 0 = { 2.10 , 3.71 , 3.71 , 2.10 , 7.17 , 8.27 , 7.17 , 12.48 , 12.48 , 17.8 }

R [ x 1 ] = 0.619807

x 2 = B x 1 = { 3.12 , 4.99 , 4.99 , 3.12 , 9.58 , 11.0 , 9.58 , 16.63 , 16.63 , 23.83 }

R [ x 2 ] = 0.616615

x 3 = B x 2 = { 3.27 , 4.96 , 4.96 , 3.27 , 9.46 , 10.82 , 9.46 , 16.34 , 16.34 , 23.5 }

R [ x 3 ] = 0.615228 , G [ x 3 ] = 0.0722

x 4 = B x 3 = { 3.36 , 4.94 , 4.94 , 3.36 , 9.37 , 10.69 , 9.37 , 16.1 , 16.1 , 23.3 }

R [ x 4 ] = 0.614575 , G [ x 4 ] = 0.2632

x 5 = B x 4 = { 3.41 , 4.94 , 4.94 , 3.41 , 9.32 , 10.6 , 9.32 , 15.9 , 15.9 , 23.1 }

R [ x 5 ] = 0.614248 , G [ x 5 ] = 0.3782

x 6 = B x 5 = { 3.45 , 4.94 , 4.94 , 3.45 , 9.29 , 10.6 , 9.29 , 15.9 , 15.9 , 23.0 }

R [ x 6 ] = 0.614075 , G [ x 6 ] = 0.4505

x 7 = B x 6 = { 3.48 , 4.94 , 4.94 , 3.48 , 9.29 , 10.54 , 9.29 , 15.8 , 15.8 , 22.9 }

R [ x 7 ] = 0.613978 , G [ x 7 ] = 0.4976

x 8 = B x 7 = { 3.51 , 4.95 , 4.95 , 3.51 , 9.29 , 10.53 , 9.29 , 15.8 , 15.8 , 22.9 }

R [ x 8 ] = 0.61392 , G [ x 8 ] = 0.5291 (115)

and

0.5291 ≤ λ₁(A) ≤ 0.61392,   λ₁(A) = 0.613799. (116)
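The shift in (114) can be sketched likewise: for a shift μ at or above the largest eigenvalue of A, the matrix B = μI − A has largest eigenvalue μ − λ₁(A), so the power method applied to B recovers the least eigenvalue of A. A pure-Python sketch on the same assumed 2×2 example (eigenvalues 1 and 3, not the paper's matrix):

```python
# Shifted power method: form B = mu*I - A and run the power method on B;
# then lambda_1(A) = mu - lambda_max(B).

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rayleigh(A, x):
    Ax = matvec(A, x)
    return sum(u * v for u, v in zip(x, Ax)) / sum(v * v for v in x)

def smallest_eig_by_shift(A, mu, x, steps=100):
    n = len(A)
    B = [[(mu if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    for _ in range(steps):
        x = matvec(B, x)
        m = max(abs(v) for v in x)
        x = [v / m for v in x]
    return mu - rayleigh(B, x)   # lambda_1(A) = mu - lambda_max(B)

# Assumed example: eigenvalues of A are 1 and 3; shift mu = 3.1 >= lambda_max:
A = [[2.0, -1.0], [-1.0, 2.0]]
lam1 = smallest_eig_by_shift(A, 3.1, [1.0, 0.0])
print(round(lam1, 6))   # -> 1.0
```

In the paper the shift μ is taken as the Rayleigh quotient R[x₈] computed in the previous section, which is a lower bound on λ₁₀(A); the code above simply uses an assumed upper estimate for the illustrative matrix.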

25. Conclusions

In this note we have considered the common case of positive definite, symmetric global finite-element stiffness matrices having non-positive off-diagonal entries. The finite-element global stiffness matrix readily becomes very large, with an ever smaller least eigenvalue.

We have shown here how optimal similarity transformations of this matrix, requiring only matrix-vector multiplications, lead to eminently practical and reliable iterative numerical algorithms for tight, realistic Rayleigh and Gershgorin bounds on the least eigenvalue of this large finite-element global stiffness matrix.

Cite this paper: Fried, I. , Riganti, R. and Yu, C. (2020) Gershgorin and Rayleigh Bounds on the Eigenvalues of the Finite-Element Global Matrices via Optimal Similarity Transformations. Applied Mathematics, 11, 922-941. doi: 10.4236/am.2020.119060.
References

[1]   Strang, G. and Fix, G. (2008) An Analysis of the Finite Element Method. 2nd Edition, Wellesley Cambridge Press, Wellesley, Section 1.9.

[2]   Fried, I. (1973) Bounds on the Spectral and Maximum Norms of the Finite Element Stiffness, Flexibility and Mass Matrices. International Journal of Solids and Structures, 9, 1013-1034.
https://doi.org/10.1016/0020-7683(73)90013-9

[3]   Fried, I. (1975) Monotone Finite Element Matrices and Their Computed Condition Numbers. Computers and Structures, 5, 317-319.
https://doi.org/10.1016/0045-7949(75)90038-3

[4]   Mangasarian, O.L. (1968) Characterizations of Real Matrices of Monotone Kind. SIAM Review, 10, 439-441.
https://doi.org/10.1137/1010095

[5]   Fujimoto, T. and Ranade, R. (2004) Two Characterizations of Inverse-Positive Matrices: The Hawkins-Simon Condition and the Le Chatelier-Braun Principle. Electronic Journal of Linear Algebra, 11, 59-65.
https://doi.org/10.13001/1081-3810.1122


[6]   Fried, I. (1970) A Computational Procedure for the Solution of Large Problems Arising from the Finite Element Method. International Journal for Numerical Methods in Engineering, 2, 477-494.
https://doi.org/10.1002/nme.1620020403

[7]   Scott, D.S. (1985) On the Accuracy of the Gershgorin Circle Theorem for Bounding the Spread of a Real Symmetric Matrix. Linear Algebra and Applications, 65, 147-155.
https://doi.org/10.1016/0024-3795(85)90093-X

[8]   Varga, R.S. (2004) Gershgorin and His Circles. Springer, New York.
https://doi.org/10.1007/978-3-642-17798-9

[9]   MacCluer, C.R. (2000) The Many Proofs and Applications of Perron’s Theorem. SIAM Review, 42, 487-498.
https://doi.org/10.1137/S0036144599359449

[10]   Fried, I. (2016) The Condition of the Least-Squares Finite Element Matrices. International Journal for Numerical Methods in Engineering, 106, 760-770.
https://doi.org/10.1002/nme.5146

[11]   Fried, I. (1972) Condition of Finite Element Matrices Generated from Non-Uniform Meshes. AIAA Journal, 10, 219-221.
https://doi.org/10.2514/3.6561

[12]   Verbin, Y. (2014) Boundary Conditions and Modes of the Vertically Hanging Chain. European Journal of Physics, 36, Article ID: 015005.
https://doi.org/10.1088/0143-0807/36/1/015005

 
 