JAMP, Vol. 6 No. 5, May 2018
On Monotone Eigenvectors of a Max-T Fuzzy Matrix
Abstract: The eigenvectors of a fuzzy matrix correspond to steady states of a complex discrete-event system, characterized by the given transition matrix and fuzzy state vectors. Descriptions of the eigenspace for matrices in the max-Łukasiewicz algebra, max-min algebra, max-nilpotent-min algebra, max-product algebra and max-drast algebra have been presented in previous papers. In this paper, we investigate the monotone eigenvectors in a max-T algebra, list some particular properties of the monotone eigenvectors in the max-Łukasiewicz, max-min, max-nilpotent-min, max-product and max-drast algebras, respectively, and illustrate the relations among the eigenspaces in these algebras by some examples.

1. Introduction

The eigenproblem for a fuzzy matrix corresponds to finding a stable state (or all stable states) of the complex discrete-event system described by the given transition matrix and fuzzy state vectors. Therefore, the investigation of the eigenspace structure in fuzzy algebras is important for applications. This problem has been solved in several types of so-called extremal algebras.

A max-T fuzzy algebra is defined over the interval [0, 1] and uses, instead of the conventional operations of addition and multiplication, the operations of maximum and one of the triangular norms, the so-called t-norms. These operations are extended in a natural way to vectors and matrices over [0, 1]. The t-norms, together with the t-conorms, play a key role in fuzzy theory. These functions have applications in many areas, such as decision making, statistics, game theory, information and data fusion, probability theory, and risk management.

Although there exist various t-norms and families of t-norms (see, e.g., [1]), let us mention the main ones used below: the Łukasiewicz t-norm, the Gödel t-norm, the nilpotent minimum t-norm, the product t-norm, and the drastic t-norm.

The Łukasiewicz t-norm is computed as

x ⊗_L y = max{x + y − 1, 0}.

The Gödel t-norm is the simplest t-norm and the conjunction is defined as the minimum of the entries, i.e.,

x ⊗_G y = min{x, y}.

The nilpotent minimum t-norm is defined by

x ⊗_nM y = { 0, if x + y ≤ 1; min{x, y}, otherwise }.

The definition of the product t-norm is

x ⊗_P y = x · y.

The drastic t-norm is “the weakest” t-norm and the basic example of a non-divisible t-norm. It is defined as follows:

x ⊗_D y = { 0, if max{x, y} < 1; min{x, y}, if max{x, y} = 1 }.
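As a quick reference, the five t-norms above can be written as one-line Python functions (the code and names are ours, not from the paper). The pointwise ordering ⊗_D ≤ ⊗_L ≤ ⊗_P ≤ ⊗_G, with ⊗_L ≤ ⊗_nM ≤ ⊗_G, is easy to read off from these formulas.

```python
# The five t-norms of this section, as functions on the unit interval I = [0, 1].
def t_lukasiewicz(x, y):
    return max(x + y - 1, 0)                   # x (x)_L y

def t_godel(x, y):
    return min(x, y)                           # x (x)_G y

def t_nilpotent_min(x, y):
    return min(x, y) if x + y > 1 else 0       # x (x)_nM y

def t_product(x, y):
    return x * y                               # x (x)_P y

def t_drastic(x, y):
    return min(x, y) if max(x, y) == 1 else 0  # x (x)_D y
```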

Recently, Gavalec et al. [2] [3] investigated the steady states of max-Łukasiewicz fuzzy systems and the monotone interval eigenproblem in max-min algebra, Rashid et al. [4] discussed the eigenspace structure of a max-product fuzzy matrix, and Gavalec et al. [5] studied the eigenspace structure of a max-drast fuzzy matrix. In this paper, based on these works, we further study the eigenproblem. We investigate the eigenvectors in a max-T algebra, study monotone eigenvectors in max-nilpotent-min algebra, discuss the relation between the monotone eigenvectors in max-T algebra and max-drast algebra, and illustrate the relations among eigenspaces in these algebras by some examples.

2. Eigenvectors in a Max-T Algebra

Let T be one of the triangular norms used in fuzzy theory, and let us denote the real unit interval [0, 1] by I. By the max-T algebra we understand the triple (I, ⊕, ⊗) with the binary operations ⊕ = max and ⊗ = T on I. For a given natural number n, we write N = {1, 2, …, n}. The set of all permutations on N will be denoted by P_n. The notations I(n) and I(n, n) denote the set of all vectors and the set of all square matrices of a given dimension n over I, respectively. The operations ⊕ and ⊗ are extended to matrices and vectors in the standard way.

The eigenproblem for a given matrix A ∈ I(n, n) in max-T algebra consists in finding an eigenvector x ∈ I(n) for which A ⊗ x = x holds true. The eigenspace of A ∈ I(n, n) is denoted by

F(A) = { x ∈ I(n) | A ⊗ x = x }.
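The operations just introduced can be sketched in Python (our own helper names; the paper contains no code): the max-T product computes (A ⊗ x)_i = max_j T(a_ij, x_j), and the eigenvector test checks A ⊗ x = x up to a small tolerance.

```python
# Max-T matrix-vector product and eigenvector test, generic in the t-norm `t`.
def max_t_product(A, x, t):
    # (A (x) x)_i = max_j t(a_ij, x_j)
    return [max(t(a, xj) for a, xj in zip(row, x)) for row in A]

def is_eigenvector(A, x, t, eps=1e-9):
    # x is in F(A) iff A (x) x = x (componentwise, up to eps)
    return all(abs(u - v) <= eps for u, v in zip(max_t_product(A, x, t), x))
```

For instance, with t = min (the Gödel t-norm), A = [[1.0, 0.2], [0.3, 1.0]] and x = (0.4, 0.7), the test returns True.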

Theorem 2.1. Let ⊗_1, ⊗, ⊗_2 be three triangular norms on I, A ∈ I(n, n) and x ∈ I(n). If ⊗_1 ≤ ⊗ ≤ ⊗_2, x ∈ F_1(A) and x ∈ F_2(A), then x ∈ F(A). (Here F_1(A), F_2(A) and F(A) denote the eigenspaces with respect to ⊗_1, ⊗_2 and ⊗, respectively.)

Proof. If x ∈ F_1(A) and x ∈ F_2(A), then

A ⊗_1 x = x,   A ⊗_2 x = x.

When ⊗_1 ≤ ⊗ ≤ ⊗_2, we have that

x = A ⊗_1 x ≤ A ⊗ x ≤ A ⊗_2 x = x,

i.e., A ⊗ x = x. Thus, x ∈ F(A).

The theorem is proved.

The investigation of the eigenspace structure can be simplified by permuting any vector x ∈ I(n) to a non-decreasing form.

For given permutations φ, ψ ∈ P_n, we denote by A_φψ the matrix with rows permuted by φ and columns permuted by ψ, and we denote by x_φ the vector permuted by φ.

Theorem 2.2. (Gavalec [6]). Let A ∈ I(n, n), x ∈ I(n) and φ ∈ P_n. Then x ∈ F(A) if and only if x_φ ∈ F(A_φφ).

We say a vector x ∈ I(n) is increasing if x_i ≤ x_j holds for any i, j ∈ N with i ≤ j, and strictly increasing if x_i < x_j whenever i < j. The set of all increasing eigenvectors of a matrix A is denoted by F^≤(A) and the set of all strictly increasing eigenvectors of A is denoted by F^<(A). The similar notations I^≤(n) and I^<(n) will be used without the condition A ⊗ x = x.

Theorem 2.3. Let A ∈ I(n, n) and x ∈ I^<(n). Then x ∈ F^<(A) if and only if for every i ∈ N the following hold:

a_ij ⊗ x_j ≤ x_i for every j ∈ N, and a_ij ⊗ x_j = x_i for some j ∈ N.

Proof. By definition, x ∈ F^<(A) is equivalent with the condition

max{ a_i1 ⊗ x_1, …, a_in ⊗ x_n } = x_i for every i ∈ N,

which is equivalent to a_ij ⊗ x_j ≤ x_i for every j ∈ N and a_ij ⊗ x_j = x_i for some j ∈ N.

The theorem is proved.

Theorem 2.4. Let A ∈ I(n, n) and x ∈ I^<(n). If x ∈ F^<(A), then

1) a_ij < 1 for all i, j ∈ N with i < j,

2) a_nn ≥ x_n.

Proof. If x ∈ F^<(A), then it follows from Theorem 2.3 that

a_ij ⊗ x_j ≤ x_i for every j > i.

If a_ij = 1, then a_ij ⊗ x_j = x_j > x_i, a contradiction. Thus, a_ij < 1 for all i, j ∈ N with i < j. Noting that ⊗_G is the largest triangular norm on I, we see that

x_n = a_nn ⊗ x_n ≤ min{a_nn, x_n} ≤ a_nn

and hence a_nn ≥ x_n.

The theorem is proved.

In particular, if x ∈ F^<(A) with x_n = 1, then

a_nn = 1.

3. Eigenvectors in Max-Łukasiewicz Algebra

The following theorem contains several logical consequences of the definition of Łukasiewicz triangular norm.

Theorem 3.1. (Rashid et al. [7]). Let a, b, c ∈ I. Then

1) a ⊗_L b = b if and only if a = 1 or b = 0,

2) a ⊗_L c = b if and only if a = 1 + b − c or (a ≤ 1 − c and b = 0),

3) a ⊗_L c ≤ b if and only if a ≤ 1 + b − c,

4) a ⊗_L c > b if and only if a > 1 + b − c,

5) if c < b, then a ⊗_L c < b.

Combining Theorem 2.3 with Theorem 3.1, we have the following theorem.

Theorem 3.2. (Rashid et al. [7]). Let A ∈ I(n, n) and x ∈ I^<(n). Then x ∈ F_L^<(A) if and only if for every i ∈ N the following hold:

1) a_ij ≤ 1 + x_i − x_j for every j ∈ N with j ≠ i,

2) if i = 1, then x_1 = 0 or a_1j = 1 + x_1 − x_j for some j ∈ N,

3) if i > 1, then a_ij = 1 + x_i − x_j for some j ∈ N.

The following theorem describes necessary conditions under which a given square matrix can have a strictly increasing eigenvector.

Theorem 3.3. (Rashid et al. [7]). Let A ∈ I(n, n). If F_L^<(A) ≠ ∅, then the following conditions are satisfied:

1) a_ij < 1 for all i, j ∈ N with i < j,

2) a_nn = 1.

The following theorem describes necessary and sufficient conditions under which a three-dimensional fuzzy matrix has a strictly increasing eigenvector.

Theorem 3.4. (Rashid et al. [7]). Let A ∈ I(3, 3). Then F_L^<(A) ≠ ∅ if and only if the following conditions are satisfied:

1) a_12 < 1, a_13 < 1, a_23 < 1,

2) a_22 = 1 or a_13 < a_23,

3) a_33 = 1.

Example 3.1. Let us consider the matrix

A = [ 0.6 0.8 0.3
      0.5 0.9 0.4
      0.3 0.7 1.0 ].

Matrix A satisfies conditions (1)-(3) in Theorem 3.4, hence F_L^<(A) ≠ ∅ and

F_L^<(A) = { (0, x_2, x_3) | 0 < x_2 ≤ 0.1, x_3 = x_2 + 0.6 } ∪ { (x_1, x_2, x_3) | 0 < x_1 ≤ 0.3, x_2 = x_1 + 0.1, x_3 = x_1 + 0.7 }.
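The membership claims can be spot-checked numerically; below is a minimal sketch (our own code) testing one vector from each family of the displayed union against A ⊗_L x = x.

```python
# Example 3.1: check two candidate eigenvectors under the max-Lukasiewicz product.
A = [[0.6, 0.8, 0.3],
     [0.5, 0.9, 0.4],
     [0.3, 0.7, 1.0]]

def max_luk(A, x):
    # (A (x)_L x)_i = max_j max(a_ij + x_j - 1, 0)
    return [max(max(a + xj - 1, 0) for a, xj in zip(row, x)) for row in A]

for x in ([0.0, 0.1, 0.7],    # first family:  x2 = 0.1, x3 = x2 + 0.6
          [0.2, 0.3, 0.9]):   # second family: x1 = 0.2, x2 = 0.3, x3 = 0.9
    assert all(abs(u - v) < 1e-9 for u, v in zip(max_luk(A, x), x))
```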

4. Eigenvectors in Max-Min Algebra

In the case of the max-min algebra (also called bottleneck algebra), the eigenproblem has been studied by many authors, and interesting results describing the structure of the eigenspace have been found (see [3] [8] [9] [10] [11] [12]). In particular, algorithms have been suggested for computing the maximal eigenvector of a given max-min matrix (see [13]).

If the binary operation ⊗ coincides with the minimum operation (the Gödel t-norm ⊗_G), then the strictly increasing eigenspace F_G^<(A) can be described as an interval of strictly increasing eigenvectors, where the bounds m*(A), M*(A) ∈ I(n) of the interval are defined as follows:

m^(j)(A) = max{ a_jk | k > j },   M^(j)(A) = max{ a_jk | k ≥ j },   m_i*(A) = max{ m^(j)(A) | j ≤ i },   M_i*(A) = min{ M^(j)(A) | j ≥ i }.

If a maximum of an empty set occurs in the above definition of m*(A), then we use the usual convention max ∅ = 0.

The following theorem has been proved in [6] .

Theorem 4.1. (Gavalec [6]). Let A ∈ I(n, n) and x ∈ I(n) be a strictly increasing vector. Then x ∈ F_G^<(A) if and only if m*(A) ≤ x ≤ M*(A), i.e.,

F_G^<(A) = [ m*(A), M*(A) ] ∩ I^<(n).

Hence, in view of Theorem 4.1, the structure of F_G^<(A) has been completely described for any A ∈ I(n, n).
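The bound vectors can be computed directly from the definitions; the sketch below (the function name `bounds` is ours) uses max ∅ = 0 for the last row bound.

```python
# Bounds m*(A) and M*(A) of Theorem 4.1 for a square matrix A over [0, 1].
def bounds(A):
    n = len(A)
    # m^(j) = max{a_jk | k > j} (max of empty set = 0), M^(j) = max{a_jk | k >= j}
    m_row = [max(A[j][j + 1:], default=0) for j in range(n)]
    M_row = [max(A[j][j:]) for j in range(n)]
    # m_i* = max{m^(j) | j <= i},  M_i* = min{M^(j) | j >= i}
    m_star = [max(m_row[: i + 1]) for i in range(n)]
    M_star = [min(M_row[i:]) for i in range(n)]
    return m_star, M_star
```

By Theorem 4.1, a strictly increasing x is a max-min eigenvector exactly when it lies componentwise between the two returned vectors.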

5. Eigenvectors in Max-Nilpotent-Min Algebra

We know that the nilpotent minimum t-norm ⊗_nM is left-continuous, and the R-implication generated from ⊗_nM (the Fodor implication →_FD) is defined by

a →_FD b = sup{ x ∈ I | a ⊗_nM x ≤ b }.

Moreover, it follows from Proposition 2.5.2 in [14] that ⊗_nM and →_FD form an adjoint pair, i.e., they satisfy the following residuation principle

a ⊗_nM x ≤ b ⇔ x ≤ a →_FD b for all a, b, x ∈ I,

and

a →_FD b = max{ x ∈ I | a ⊗_nM x ≤ b }.

If A ∈ I(n, n) and x ∈ I^<(n), then x ∈ F_nM^<(A) is equivalent with the following two conditions holding for every i ∈ N:

1) for any j ∈ N, a_ij ⊗_nM x_j ≤ x_i,

2) there exists j ≥ i such that a_ij ⊗_nM x_j = x_i.

For j ≥ i,

1) if a_ij < x_i, then a_ij ⊗_nM x_j ≤ a_ij < x_i;

2) if a_ij > x_i, then a_ij ⊗_nM x_j ≤ x_i if and only if

x_j ≤ a_ij →_FD x_i = max{ 1 − a_ij, x_i };

a) when j > i, the inequality x_j > x_i forces x_j ≤ 1 − a_ij, and then a_ij ⊗_nM x_j = 0,

b) when j = i, a_ii ⊗_nM x_i ≤ x_i and

a_ii ⊗_nM x_i = x_i ⇔ x_i > 1 − a_ii,

i.e., 1 − a_ii < x_i < a_ii and hence a_ii > 1/2;

3) if a_ij = x_i, then a_ij ⊗_nM x_j ≤ a_ij = x_i and

a_ij ⊗_nM x_j = x_i ⇔ x_j > 1 − x_i ⇔ x_i + x_j > 1.
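The residuation principle can be checked exhaustively on a rational grid. The sketch below (our own code) uses exact fractions to avoid floating-point boundary effects at a + x = 1, together with the known closed form of the Fodor implication: a →_FD b = 1 if a ≤ b, and max{1 − a, b} otherwise.

```python
# Exhaustive check of  a (x)_nM x <= b  iff  x <= a ->_FD b  on a rational grid.
from fractions import Fraction as F

def t_nm(a, x):
    # nilpotent minimum t-norm
    return min(a, x) if a + x > 1 else F(0)

def fodor(a, b):
    # residuum of the nilpotent minimum (Fodor implication)
    return F(1) if a <= b else max(1 - a, b)

grid = [F(i, 20) for i in range(21)]
for a in grid:
    for b in grid:
        for x in grid:
            assert (t_nm(a, x) <= b) == (x <= fodor(a, b))
```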

6. Eigenvectors in Max-Product Algebra

For every vector x ∈ I(n), define the quotient vector q = q(x) ∈ I(n) by

q_i = { x_i / x_{i+1}, if i ∈ N \ {n} and x_{i+1} ≠ 0; 1, if i ∈ N \ {n} and x_{i+1} = 0; x_n, if i = n }.

Then x ∈ I^<(n) if and only if q = q(x) fulfills the following inequalities:

0 ≤ q_1 < 1,   0 < q_n ≤ 1,   0 < q_i < 1 for every i ∈ N \ {1, n}.
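The quotient construction can be sketched as follows (helper names are ours); the strict-increase test simply checks the three displayed inequalities on q(x).

```python
# Quotient vector q(x) and the strict-increase characterization via q.
def quotient(x):
    n = len(x)
    q = [x[i] / x[i + 1] if x[i + 1] != 0 else 1 for i in range(n - 1)]
    q.append(x[-1])                               # q_n = x_n
    return q

def strictly_increasing(x):
    q = quotient(x)
    return (0 <= q[0] < 1                          # 0 <= q_1 < 1
            and 0 < q[-1] <= 1                     # 0 < q_n <= 1
            and all(0 < qi < 1 for qi in q[1:-1])) # middle components
```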

Note that for any i < j,

a_ij ⊗_P x_j ≤ x_i if and only if a_ij ≤ x_i / x_j = (x_i / x_{i+1}) (x_{i+1} / x_{i+2}) ⋯ (x_{j−1} / x_j) = q_i q_{i+1} ⋯ q_{j−1}.

Thus, it follows from Theorem 2.3 that

Theorem 6.1. (Rashid et al. [4]). Suppose that A ∈ I(n, n) and x ∈ I^<(n). Then x ∈ F_P^<(A) if and only if for every i ∈ N the following two conditions hold:

1) a_ij ≤ q_i q_{i+1} ⋯ q_{j−1} for every j ∈ N, j > i,

2) a_ii = 1 or a_ij = q_i q_{i+1} ⋯ q_{j−1} for some j ∈ N, j > i.

When x ∈ F_P^<(A), we have x_n > 0, and it follows from the proof of Theorem 2.4 that x_n = a_nn · x_n, i.e., a_nn = 1.

Thus, the following theorem is a corollary of Theorem 2.4.

Theorem 6.2. (Rashid et al. [4]). If A ∈ I(n, n) and F_P^<(A) ≠ ∅, then the following conditions are satisfied:

1) a_ij < 1 for all i, j ∈ N with i < j,

2) a_nn = 1.

This theorem gives necessary conditions under which a square matrix can have a strictly increasing eigenvector.

7. Eigenvectors in Max-Drast Algebra

For x, y ∈ I(n), we have

(x ⊕ y)_i = max{x_i, y_i},   (x ⊗_D y)_i = { min{x_i, y_i}, if max{x_i, y_i} = 1; 0, if max{x_i, y_i} < 1 }

for every i ∈ N.

Theorem 7.1. (Gavalec et al. [5]). Let A ∈ I(n, n) and x ∈ I^<(n). Then x ∈ F_D^<(A) if and only if for every i ∈ N the following conditions hold:

1) a_ij < 1 for every j ∈ N, j > i,

2) if x_n = 1, then a_in ≤ x_i,

3) for some j ∈ {i, …, n}, a_ij ⊗_D x_j = x_i.

Moreover, the following theorem gives necessary and sufficient conditions under which a square matrix possesses a strictly increasing eigenvector.

Theorem 7.2. (Gavalec et al. [5]). Let A ∈ I(n, n). Then F_D^<(A) ≠ ∅ if and only if the following conditions are satisfied:

1) a_ij < 1 for all i, j ∈ N with i < j,

2) 0 < a_in for all i ∈ N \ {1} with a_ii < 1,

3) a_kn < a_in for all i, k ∈ N, k < i, with a_ii < 1,

4) a_nn = 1.

When x ∈ F_D^<(A), x_n > 0 and

x_n = a_nn ⊗_D x_n = { a_nn, if x_n = 1; x_n, if x_n ≠ 1 and a_nn = 1; 0, otherwise },

i.e., a_nn = 1. This shows that conditions (1) and (4) in Theorem 7.2 are also straightforward consequences of Theorem 2.4.

The next theorem characterizes all the eigenvectors of a given matrix. In other words, the theorem completely describes the eigenspace structure.

Theorem 7.3. (Gavalec et al. [5]). Let A ∈ I(n, n), F_D^<(A) ≠ ∅, and x ∈ I^<(n). Then x ∈ F_D^<(A) if and only if the following conditions are satisfied:

1) x_i = a_in for all i ∈ N with a_ii < 1,

2) if x_n = 1, then x_i ≥ a_in for all i ∈ N \ {n} with a_ii = 1,

3) if a_11 < 1 and 0 < a_1n, then x_n = 1,

4) if a_ii < 1 for some i ∈ N \ {1}, then x_n = 1.

8. The Relations among These Eigenspaces

Now we discuss the relation between the monotone eigenvectors in max-T algebra and max-drast algebra.

Theorem 8.1. Let A = (a_ij) ∈ I(n, n) and x ∈ I^<(n). If x ∈ F^<(A) and a_ii = 1 for i = 1, …, n, then

x ∈ F_D^<(A).

Proof. Assume that x ∈ F^<(A). For each i ∈ N and j > i, it follows from Theorem 2.4 that a_ij < 1. If x_n = 1, then

a_i1 ⊗ x_1 ⊕ ⋯ ⊕ a_in ⊗ x_n = x_i for i = 1, …, n.

Thus, a_in = a_in ⊗ x_n ≤ x_i for i = 1, …, n.

Since a_ii = 1 for i = 1, …, n,

a_ii ⊗_D x_i = min{1, x_i} = x_i,

i.e., there exists some j ∈ {i, …, n} such that

a_ij ⊗_D x_j = x_i.

Therefore, x ∈ F_D^<(A) by Proposition 3.3 in [5].

The theorem is proved.

Finally, we illustrate the relations among eigenspaces in these algebras by two examples.

Example 8.1. Let

A = [ 1/3 1/2 1/2
      1/3 1/2 2/3
      1/4 1/3 1 ],   x = ( 1/4, 1/3, 1/2 )^T.

Then

A ⊗_P x = ( 1/4, 1/3, 1/2 )^T = x,

A ⊗_D x = ( 0, 0, 1/2 )^T.

Thus, x ∈ F_P^<(A), but x ∉ F_D^<(A). This illustrates that the condition a_ii = 1 (i = 1, …, n) cannot be dropped in Theorem 8.1. Moreover, by a simple computation, we see that

F_G^<(A) = ( {1/2} × {2/3} × [2/3, 1] ) ∩ I^<(3),

i.e., (1/4, 1/3, 1/2) ∉ F_G^<(A).

This example shows that

F_1^<(A) ⊄ F_2^<(A)

even if ⊗_1 ≤ ⊗_2.
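Both products in Example 8.1 can be re-checked with exact rational arithmetic (a sketch in our own code):

```python
# Example 8.1: x is a max-product eigenvector of A, but not a max-drast one.
from fractions import Fraction as F

A = [[F(1, 3), F(1, 2), F(1, 2)],
     [F(1, 3), F(1, 2), F(2, 3)],
     [F(1, 4), F(1, 3), F(1, 1)]]
x = [F(1, 4), F(1, 3), F(1, 2)]

def t_prod(a, b):
    return a * b

def t_drast(a, b):
    return min(a, b) if max(a, b) == 1 else F(0)

def max_t(A, x, t):
    # (A (x) x)_i = max_j t(a_ij, x_j)
    return [max(t(a, xj) for a, xj in zip(row, x)) for row in A]

assert max_t(A, x, t_prod) == x                       # A (x)_P x = x
assert max_t(A, x, t_drast) == [0, 0, F(1, 2)]        # A (x)_D x != x
```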

Example 8.2. Let

A = [ 1/4 1/3 1/2 1/4
      1/4 1/3 2/3 1/3
      1/4 1/3 1/2 1/2
      1/2 1/3 1/4 1 ].

Then conditions (1)-(4) of Theorem 7.2 (i.e., Theorem 3.4 in [5]) hold. Thus, F_D^<(A) ≠ ∅. But,

m_1*(A) = 1/2,   m_2*(A) = m_3*(A) = m_4*(A) = 2/3,   M_1*(A) = M_2*(A) = M_3*(A) = 1/2,   M_4*(A) = 1,

i.e., [ m*(A), M*(A) ] = ∅ and F_G^<(A) = ∅.
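The bound computation for this matrix can likewise be done exactly (our own code, following the definitions of Section 4); already m_2*(A) > M_2*(A), so the interval is empty.

```python
# Example 8.2: exact max-min bounds; the interval [m*(A), M*(A)] is empty.
from fractions import Fraction as F

A = [[F(1, 4), F(1, 3), F(1, 2), F(1, 4)],
     [F(1, 4), F(1, 3), F(2, 3), F(1, 3)],
     [F(1, 4), F(1, 3), F(1, 2), F(1, 2)],
     [F(1, 2), F(1, 3), F(1, 4), F(1, 1)]]

n = len(A)
m_row = [max(A[j][j + 1:], default=F(0)) for j in range(n)]  # max of empty set = 0
M_row = [max(A[j][j:]) for j in range(n)]
m_star = [max(m_row[: i + 1]) for i in range(n)]
M_star = [min(M_row[i:]) for i in range(n)]

# 2/3 > 1/2: no strictly increasing max-min eigenvector exists
assert m_star[1] > M_star[1]
```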

9. Conclusions and Further Works

The eigenproblem for a fuzzy matrix corresponds to finding a stable state of the complex discrete-event system described by the given transition matrix and fuzzy state vectors, and the investigation of the eigenspace structure in fuzzy algebras is important for applications. Gavalec et al. [2] [3] have investigated the steady states of max-Łukasiewicz fuzzy systems, and Rashid et al. [4] and Gavalec et al. [5] have discussed the eigenspace structure of a max-product fuzzy matrix and a max-drast fuzzy matrix, respectively.

In this paper, we investigated the eigenvectors in a max-T algebra, discussed monotone eigenvectors in max-nilpotent-min algebra, and studied the relation between the monotone eigenvectors in max-T algebra and max-drast algebra.

In a forthcoming paper, we will further investigate monotone eigenvectors in max-nilpotent-min algebra and max-T algebra.

Acknowledgements

This work is funded by College Students Practice Innovation Training Program (201610324027Y).

Cite this paper: Wang, Q., Qin, N., Yang, Z., Sun, L., Peng, L. and Wang, Z. (2018) On Monotone Eigenvectors of a Max-T Fuzzy Matrix. Journal of Applied Mathematics and Physics, 6, 1076-1085. doi: 10.4236/jamp.2018.65093.
References

[1]   Klement, E.P., Mesiar, R. and Pap, E. (2000) Triangular Norms. Trends in Logic-Studia Logica Library, Vol. 8. Kluwer Academic Publishers, Dordrecht.

[2]   Gavalec, M. and Nemcova, Z. (2017) Steady States of Max-Lukasiewicz Fuzzy Systems. Fuzzy Sets and Systems, 325, 58-68.
https://doi.org/10.1016/j.fss.2017.02.005

[3]   Gavalec, M. and Plavka, J. (2010) Monotone Interval Eigenproblem in Max-Min Algebra. Kybernetika, 46, 387-396.

[4]   Rashid, I., Gavalec, M. and Cimler, R. (2016) Eigenspace Structure of a Max-Prod Fuzzy Matrix. Fuzzy Sets and Systems, 303, 136-148.
https://doi.org/10.1016/j.fss.2015.09.023

[5]   Gavalec, M., Rashid, I. and Cimler, R. (2014) Eigenspace Structure of a Max-Drast Fuzzy Matrix. Fuzzy Sets and Systems, 249, 100-113.
https://doi.org/10.1016/j.fss.2013.10.008

[6]   Gavalec, M. (2002) Monotone Eigenspace Structure in Max-Min Algebra. Linear Algebra and Its Applications, 345, 149-167.
https://doi.org/10.1016/S0024-3795(01)00488-8

[7]   Rashid, I., Gavalec, M. and Sergeev, S. (2012) Eigenspace of a Three-Dimensional Max-Lukasiewicz Fuzzy Matrix. Kybernetika, 48, 309-328.

[8]   Cechlarova, K. (1992) Eigenvectors in Bottleneck Algebra. Linear Algebra and Its Applications, 175, 63-73.
https://doi.org/10.1016/0024-3795(92)90302-Q

[9]   Cuninghame-Green, R.A. (1991) Minimax Algebra and Applications. Fuzzy Sets and Systems, 41, 251-267.
https://doi.org/10.1016/0165-0114(91)90130-I

[10]   Gavalec, M., Plavka, J. and Ponce, D. (2016) Tolerance Types of Interval Eigenvectors in Max-Plus Algebra. Information Sciences, 367-368, 14-27.
https://doi.org/10.1016/j.ins.2016.05.023

[11]   Gavalec, M., Plavka, J. and Tomaskova, H. (2014) Interval Eigenproblem in Max-Min Algebra. Linear Algebra and Its Applications, 440, 24-33.
https://doi.org/10.1016/j.laa.2013.10.034

[12]   Gavalec, M. and Tomaskova, H. (2010) Eigenspace of a Circulant Max-Min Matrix. Kybernetika, 46, 397-404.

[13]   Cechlarova, K. (1997) Efficient Computation of the Greatest Eigenvector in Fuzzy Algebra. Tatra Mountains Mathematical Publications, 12, 73-79.

[14]   Baczynski, M. and Jayaram, B. (2008) Fuzzy Implications. Studies in Fuzziness and Soft Computing, Vol. 231. Springer, Berlin.
