APM Vol.10 No.7, July 2020
A New Hesitant Fuzzy Multiple Attribute Decision Making Method with Unknown Weight Information
Abstract: In this paper, we focus on a new approach based on new generalized hesitant fuzzy hybrid weighted aggregation operators, in which the evaluation information provided by decision makers is expressed in hesitant fuzzy elements (HFEs) and the information about attribute weights and the aggregation-associated vector is unknown. More explicitly, some new generalized hesitant fuzzy hybrid weighted aggregation operators are proposed, such as the new generalized hesitant fuzzy hybrid weighted averaging (NGHFHWA) operator and the new generalized hesitant fuzzy hybrid weighted geometric (NGHFHWG) operator. Some desirable properties and the relationships between them are discussed. Then, a new algorithm for hesitant fuzzy multi-attribute decision making (HF-MADM) problems with unknown weight information is introduced. Further, a practical example is used to illustrate the detailed implementation process of the proposed approach. The sensitivity of the decision results to different parameter values is analyzed. Finally, comparative studies are given to verify the advantages of our method.

1. Introduction

Hesitant fuzzy multiple attribute decision making (HF-MADM) can be characterized as a process of choosing or ranking a finite number of alternatives to attain the best one(s), in which the evaluations of the alternatives are expressed in HFEs by decision makers. It has been successfully applied in various areas, such as risk investment [1], pattern recognition [2], assessing the risk of rockbursting [3], and energy storage technologies [4]. Due to the increasing complexity of decision making problems, Torra and Narukawa [5] [6] introduced the concept of hesitant fuzzy sets (HFSs), which permit the membership degree of an element to a set to be a set of several possible values between 0 and 1. The concept of HFSs is more objective and effective in expressing decision makers’ inherent hesitancy. Many theoretical studies on HF-MADM problems have been put forward in recent years. Xu and Xia [7] [8] and Li et al. [9] investigated a variety of distance measures for HFSs. Chen et al. [10] derived some correlation coefficient formulas for HFSs and applied them to clustering analysis. Sun et al. [11] constructed an innovative TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) based on the hesitant fuzzy correlation coefficient. The best worst method (BWM) was extended to the hesitant fuzzy environment by Mi and Liao [12]. In addition to the aforementioned research on hesitant fuzzy decision making methods, Xu and Zhang [13] developed a novel approach based on TOPSIS for solving HF-MADM problems, in which the information about attribute weights was obtained by the maximizing deviation method. The extended hesitant fuzzy set, built from the Cartesian product of HFSs, was re-defined by Farhadinia [14], and a HF-MADM method with unknown weight information was proposed.

Aggregation operators are widely used in HF-MADM problems, since they can calculate the actual aggregation values of the alternatives. Xia et al. [15] [16] gave an intensive study of hesitant fuzzy aggregation techniques. They introduced a series of hesitant fuzzy aggregation operators, such as the hesitant fuzzy weighted averaging (HFWA) operator, the hesitant fuzzy hybrid averaging (HFHA) operator, and the hesitant fuzzy hybrid geometric (HFHG) operator, and utilized these operators to develop an approach for solving decision making problems. Some new hesitant fuzzy hybrid weighted aggregation operators and extended hesitant fuzzy hybrid weighted aggregation operators were developed in [17] [18], and the properties of these operators were investigated. Prioritized operators, power aggregation operators, the Bonferroni mean, the Heronian mean, and Choquet integral aggregation operators were extended to the hesitant fuzzy environment by Wei [19], Jin et al. [20], Zhang [21], Zhu et al. [22] [23], Yu [24], Yu et al. [25], and Liao et al. [26], respectively. Qin et al. [27] developed some hesitant fuzzy aggregation operators based on Frank triangular norms. So far, the research on hesitant fuzzy aggregation operators has been well explored.

It is noted that the weight vector of these hesitant fuzzy aggregation operators plays an important part in decision making problems. An important issue related to hesitant fuzzy aggregation operators is therefore to choose an optimal method to obtain their associated weights. For example, Xu and Zhang [13] determined objective attribute weights by the maximizing deviation method under the hesitant fuzzy environment. Xu [28] obtained the OWA weights by a normal distribution based method. Zhou [29] proposed the accurate weighted method to calculate the weights of HFEs and the aggregation operator. Motivated by the above ideas, the purpose of this paper is to give a new algorithm to deal with HF-MADM problems based on new generalized hesitant fuzzy hybrid weighted aggregation operators, in which the aggregation-associated weight vector and the attribute weights are unknown. The main advantages of our approach can be summarized as follows:

1) The new generalized hesitant fuzzy hybrid weighted aggregation operators satisfy some desirable properties, including the properties of idempotency and boundedness.

2) The new algorithm can deal with HF-MADM problems with unknown weight information. In particular, the aggregation-associated weight vector and the attribute weights are calculated from the known HFEs.

This paper is organized as follows. In Section 2, we review some basic concepts of HFSs, the distance measure of HFEs, the generalized hesitant fuzzy hybrid averaging (GHFHA) operator, the generalized hesitant fuzzy hybrid geometric (GHFHG) operator, the generalized hesitant fuzzy hybrid weighted averaging (GHFHWA) operator and the generalized hesitant fuzzy hybrid weighted geometric (GHFHWG) operator. Section 3 proposes some new generalized hesitant fuzzy hybrid weighted aggregation operators and investigates the properties and relationships of these operators. Section 4 presents a new algorithm that applies the proposed operators to MADM, in which the aggregation-associated weight vector and the attribute weights are unknown. In Section 5, a practical example is presented to verify the effectiveness and practicality of our approach. In Section 6, comparative studies are given to clarify the advantages of our proposed method. Conclusions and directions for future work are given in Section 7.

2. Preliminaries

Definition 1. ( [15]) Let X be a fixed set. A hesitant fuzzy set (HFS) A on X is defined in terms of a function $h_A: X \to P([0,1])$, where $P([0,1])$ denotes the family of all subsets of $[0,1]$. An HFS can be represented by the following mathematical symbol:

$A = \{\langle x, h_A(x)\rangle \mid x \in X\},$

where $h_A(x)$ is a set of values in $[0,1]$, denoting the possible membership degrees of the element $x \in X$ to the set A. For convenience, we call $h_A(x)$ a hesitant fuzzy element (HFE), denoted by h.

In many decision making problems, the memberships of HFSs are nonempty and finite subsets of $[0,1]$; such sets are called typical hesitant fuzzy sets (THFSs) [30]. In this paper, we utilize THFSs to deal with decision making problems. In fact, much of the aforementioned research, such as [7] - [13], assumed explicitly or implicitly that the memberships of HFSs are nonempty and finite subsets of $[0,1]$, so without risk of confusion we still write HFS. It is customary to assume that all the elements in each HFE $h = \bigcup_{\gamma \in h}\{\gamma\}$ are arranged in ascending order, i.e., $h = \{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(l)}\} = \bigcup_{i=1}^{l}\{\gamma^{(i)}\}$, where $\gamma^{(1)} < \gamma^{(2)} < \cdots < \gamma^{(l)}$ and l is the number of elements in h.

Definition 2. ( [15] [31]) For a HFE $h = \bigcup_{i=1}^{l}\{\gamma^{(i)}\}$, the score function of h is defined as

$s(h) = \frac{1}{l}\sum_{i=1}^{l}\gamma^{(i)}$. (1)

The hesitant fuzzy order central polymerization degree function of h is defined as

$p(h) = 1 - \frac{1}{l}\sum_{i=1}^{l}\left|\gamma^{(i)} - s(h)\right|$, (2)

where l is the number of elements in h. Based on the score function $s(h)$ and the hesitant fuzzy order central polymerization degree function $p(h)$, any two HFEs can be compared by the following scheme:

If $s(h_1) < s(h_2)$, then $h_1 < h_2$;

If $s(h_1) = s(h_2)$, then

If $p(h_1) < p(h_2)$, then $h_1 < h_2$;

If $p(h_1) = p(h_2)$, then $h_1 = h_2$.

Definition 3. ( [15]) Let $h, h_1, h_2$ be three HFEs, and let k be a positive number. Then

1) $h_1 \oplus h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2}\{\gamma_1 + \gamma_2 - \gamma_1\gamma_2\}$,

2) $h_1 \otimes h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2}\{\gamma_1\gamma_2\}$,

3) $h^k = \bigcup_{\gamma \in h}\{\gamma^k\}$,

4) $kh = \bigcup_{\gamma \in h}\{1 - (1-\gamma)^k\}$.

The above operations are the basic operational laws. However, they sometimes have drawbacks. We find that if $h_1 = h_2 = h$, then $h_1 \oplus h_2 \neq 2h_1$ and $h_1 \oplus h_2 \neq 2h_2$. For example, suppose that $h_1, h_2$ are two HFEs with $h_1 = h_2 = \{0.2, 0.4, 0.5\}$. Then we get $h_1 \oplus h_2 = \{0.36, 0.52, 0.6, 0.52, 0.64, 0.7, 0.6, 0.7, 0.75\}$ and $2h_1 = 2h_2 = \{0.36, 0.64, 0.75\}$, so $h_1 \oplus h_2 \neq 2h_1$ and $h_1 \oplus h_2 \neq 2h_2$. Both the addition and multiplication operations of HFEs can increase the number of elements in the derived HFE, and they also make the calculation process complicated. Liao et al. [32] adjusted them as follows.
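The element blow-up described above is easy to reproduce. A minimal sketch of the Definition 3 operations (function names are our own), using the example $h_1 = h_2 = \{0.2, 0.4, 0.5\}$:

```python
from itertools import product

def hfe_add(h1, h2):
    """h1 ⊕ h2 = ∪ {γ1 + γ2 − γ1·γ2} over all pairs (Definition 3, item 1)."""
    return [g1 + g2 - g1 * g2 for g1, g2 in product(h1, h2)]

def hfe_scalar(k, h):
    """k·h = ∪ {1 − (1 − γ)^k} (Definition 3, item 4)."""
    return [1 - (1 - g) ** k for g in h]

h = [0.2, 0.4, 0.5]
print(len(hfe_add(h, h)))    # 9: the ⊕ of two 3-element HFEs has 3×3 values
print(hfe_scalar(2, h))      # 3 values: 2h = {0.36, 0.64, 0.75}
```

This makes concrete why $h \oplus h$ (9 values) cannot equal $2h$ (3 values), even though both are built from the same HFE.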

Definition 4. ( [32]) Let $h_1 = \bigcup_{i=1}^{l_1}\{\gamma_1^{(i)}\}$ and $h_2 = \bigcup_{i=1}^{l_2}\{\gamma_2^{(i)}\}$ be two HFEs. Then

1) $h_1 \dot{\oplus} h_2 = \bigcup_{i=1}^{l}\{\gamma_1^{(i)} + \gamma_2^{(i)} - \gamma_1^{(i)}\gamma_2^{(i)}\}$,

2) $h_1 \dot{\otimes} h_2 = \bigcup_{i=1}^{l}\{\gamma_1^{(i)}\gamma_2^{(i)}\}$,

where $l = \max\{l_1, l_2\}$. If $l_1 < l_2$, an optimistic extension of $h_1$ should be considered by repeating its maximum element until it has the same length as $h_2$.

Definition 5. ( [7]) Let $h_1 = \bigcup_{i=1}^{l_1}\{\gamma_1^{(i)}\}$ and $h_2 = \bigcup_{i=1}^{l_2}\{\gamma_2^{(i)}\}$ be two HFEs, and $l = \max\{l_1, l_2\}$. Then the hesitant Hamming distance is defined as follows:

$d(h_1, h_2) = \frac{1}{l}\sum_{i=1}^{l}\left|\gamma_1^{(i)} - \gamma_2^{(i)}\right|$. (3)
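Definitions 4 and 5 both rely on the optimistic extension. A minimal sketch (function names are our own choices) showing that the adjusted operation $\dot{\oplus}$ keeps the number of elements at $l = \max\{l_1, l_2\}$ rather than $l_1 \cdot l_2$:

```python
def extend(h, l):
    """Optimistic extension: repeat the maximum element until length l."""
    return sorted(h) + [max(h)] * (l - len(h))

def dot_add(h1, h2):
    """h1 ⊕̇ h2, Definition 4: element-wise ⊕ after optimistic extension."""
    l = max(len(h1), len(h2))
    a, b = extend(h1, l), extend(h2, l)
    return [x + y - x * y for x, y in zip(a, b)]

def hamming(h1, h2):
    """Hesitant Hamming distance, Eq. (3)."""
    l = max(len(h1), len(h2))
    a, b = extend(h1, l), extend(h2, l)
    return sum(abs(x - y) for x, y in zip(a, b)) / l

h1, h2 = [0.2, 0.4, 0.5], [0.3, 0.6]        # h2 is extended to {0.3, 0.6, 0.6}
print(dot_add(h1, h2))                       # 3 values, not 6
print(hamming(h1, h2))                       # (0.1 + 0.2 + 0.1) / 3
```

The derived HFE has the same length as the longer operand, which is what keeps the later hybrid operators computationally cheap.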

Definition 6. ( [15] [18]) For a collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, let $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ be the weight vector of the HFEs with $\lambda_j \in [0,1]$ and $\sum_{j=1}^{n}\lambda_j = 1$, and let $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ be the aggregation-associated vector such that $\omega_j \in [0,1]$ and $\sum_{j=1}^{n}\omega_j = 1$. Then

1) the generalized hesitant fuzzy hybrid averaging (GHFHA) operator:

$\mathrm{GHFHA}(h_1, h_2, \ldots, h_n) = \left(\bigoplus_{j=1}^{n} \omega_j \dot{h}_{\sigma(j)}^{p}\right)^{1/p} = \bigcup_{\dot{\gamma}_{\sigma(1)} \in \dot{h}_{\sigma(1)}, \ldots, \dot{\gamma}_{\sigma(n)} \in \dot{h}_{\sigma(n)}} \left\{\left(1 - \prod_{j=1}^{n}\left(1 - \dot{\gamma}_{\sigma(j)}^{p}\right)^{\omega_j}\right)^{1/p}\right\}$, (4)

where $p > 0$ and $\dot{h}_{\sigma(j)}$ is the jth largest of $\dot{h}_k = n\lambda_k h_k$ $(k = 1, 2, \ldots, n)$.

2) the generalized hesitant fuzzy hybrid geometric (GHFHG) operator:

$\mathrm{GHFHG}(h_1, h_2, \ldots, h_n) = \frac{1}{p}\left(\bigotimes_{j=1}^{n}\left(p\ddot{h}_{\sigma(j)}\right)^{\omega_j}\right) = \bigcup_{\ddot{\gamma}_{\sigma(1)} \in \ddot{h}_{\sigma(1)}, \ldots, \ddot{\gamma}_{\sigma(n)} \in \ddot{h}_{\sigma(n)}} \left\{1 - \left(1 - \prod_{j=1}^{n}\left(1 - \left(1 - \ddot{\gamma}_{\sigma(j)}\right)^{p}\right)^{\omega_j}\right)^{1/p}\right\}$, (5)

where $p > 0$ and $\ddot{h}_{\sigma(j)}$ is the jth largest of $\ddot{h}_k = h_k^{n\lambda_k}$ $(k = 1, 2, \ldots, n)$.

3) the generalized hesitant fuzzy hybrid weighted averaging (GHFHWA) operator:

$\mathrm{GHFHWA}(h_1, h_2, \ldots, h_n) = \left(\bigoplus_{j=1}^{n} \frac{\lambda_j \omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}} h_j^{p}\right)^{1/p} = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2, \ldots, \gamma_n \in h_n} \left\{\left(1 - \prod_{j=1}^{n}\left(1 - \gamma_j^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}\right\}$, (6)

where $\varepsilon: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a permutation such that $h_j$ is the $\varepsilon(j)$th largest element of the collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, and p is a parameter such that $p \in (-\infty, +\infty)$.

4) the generalized hesitant fuzzy hybrid weighted geometric (GHFHWG) operator:

$\mathrm{GHFHWG}(h_1, h_2, \ldots, h_n) = \left(\bigotimes_{j=1}^{n}\left(h_j^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p} = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2, \ldots, \gamma_n \in h_n} \left\{\left(\prod_{j=1}^{n}\gamma_j^{\frac{p\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}\right\}$, (7)

where $\varepsilon: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a permutation such that $h_j$ is the $\varepsilon(j)$th largest element of the collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, and p is a parameter such that $p \in (-\infty, +\infty)$.

3. New Generalized Hesitant Fuzzy Hybrid Weighted Aggregation Operators

In this section, we replace the operations $\oplus$ and $\otimes$ in Equation (6) and Equation (7) by $\dot{\oplus}$ and $\dot{\otimes}$, respectively, and obtain the new generalized hesitant fuzzy hybrid weighted averaging (NGHFHWA) operator and the new generalized hesitant fuzzy hybrid weighted geometric (NGHFHWG) operator.

Definition 7. For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$, $l = \max_j\{l_j\}$ $(j = 1, 2, \ldots, n)$, the following new generalized hesitant fuzzy hybrid weighted aggregation operators are defined by the mapping $H^n \to H$ with an associated weight vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ such that $\omega_j \in [0,1]$ and $\sum_{j=1}^{n}\omega_j = 1$. Then

1) the NGHFHWA operator:

$\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) = \left(\dot{\bigoplus}_{j=1}^{n} \frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}} h_j^{p}\right)^{1/p}$ (8)

2) the NGHFHWG operator:

$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) = \frac{1}{p}\left(\dot{\bigotimes}_{j=1}^{n}\left(p h_j\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)$ (9)

where $\varepsilon: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a permutation such that $h_j$ is the $\varepsilon(j)$th largest element of the collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ is the weight vector of the HFEs with $\lambda_j \in [0,1]$ and $\sum_{j=1}^{n}\lambda_j = 1$, and p is a parameter such that $p > 0$.

Notice that p is a positive parameter, since a negative scalar multiple of an HFE $h_j$ has no meaning.

Especially, if p = 1 , then the NGHFHWA and NGHFHWG operators reduce to the new hesitant fuzzy hybrid weighted averaging (NHFHWA) operator and the new hesitant fuzzy hybrid weighted geometric (NHFHWG) operator, respectively:

$\mathrm{NHFHWA}(h_1, h_2, \ldots, h_n) = \dot{\bigoplus}_{j=1}^{n} \frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}} h_j$

$\mathrm{NHFHWG}(h_1, h_2, \ldots, h_n) = \dot{\bigotimes}_{j=1}^{n} h_j^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}$

Remark 1. In some decision making problems, t HFEs in a collection $h_j$ $(j = 1, 2, \ldots, n)$ are identical, denoted by $h_{i_1} = h_{i_2} = \cdots = h_{i_t}$. According to Definition 7, we get $\varepsilon(i_1) = \varepsilon(i_2) = \cdots = \varepsilon(i_t)$, thus $\omega_{\varepsilon(i_1)} = \omega_{\varepsilon(i_2)} = \cdots = \omega_{\varepsilon(i_t)} = a$, but then $\sum_{j=1}^{n}\omega_{\varepsilon(j)} = b \neq 1$. In order to satisfy $\sum_{j=1}^{n}\omega_{\varepsilon(j)} = 1$, and noticing that $h_{i_1} = h_{i_2} = \cdots = h_{i_t}$, we distribute $1-b$ equally among $\omega_{\varepsilon(i_1)}, \omega_{\varepsilon(i_2)}, \ldots, \omega_{\varepsilon(i_t)}$, so that $\omega_{\varepsilon(i_1)} = \omega_{\varepsilon(i_2)} = \cdots = \omega_{\varepsilon(i_t)} = a + \frac{1-b}{t}$.
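The tie-handling rule of Remark 1 can be sketched as follows (function names and the 0-based positions are our own conventions; HFEs are ranked by score and then by $p(h)$, and identical HFEs share one position). As a check, we use the normalized HFEs of alternative $A_2$ and the $n = 6$ associated weights from Section 5, where the last two HFEs are identical:

```python
def score(h):
    return sum(h) / len(h)

def poly(h):
    s = score(h)
    return 1 - sum(abs(g - s) for g in h) / len(h)

def adjusted_assoc_weights(hfes, omega):
    """Positions ε(j) (0-based) and adjusted ω_{ε(j)} per Remark 1."""
    n = len(hfes)
    key = lambda h: (score(h), poly(h))
    order = sorted(range(n), key=lambda j: key(hfes[j]), reverse=True)
    eps = [0] * n
    r = 0
    while r < n:                       # tied HFEs share the first position of their run
        q = r
        while q + 1 < n and key(hfes[order[q + 1]]) == key(hfes[order[r]]):
            q += 1
        for j in order[r:q + 1]:
            eps[j] = r
        r = q + 1
    w = [omega[eps[j]] for j in range(n)]
    tied = [j for j in range(n) if eps.count(eps[j]) > 1]
    if tied:                           # spread the deficit 1 - b over the t tied weights
        d = (1 - sum(w)) / len(tied)
        for j in tied:
            w[j] += d
    return eps, w

hfes = [[0.6, 0.7, 0.8], [0.5, 0.6, 0.8], [0.5, 0.7, 0.7],
        [0.6, 0.7, 0.7], [0.4, 0.5, 0.6], [0.4, 0.5, 0.6]]
omega = [0.0865, 0.1716, 0.2419, 0.2419, 0.1716, 0.0865]
eps, w = adjusted_assoc_weights(hfes, omega)
print([round(x, 4) for x in w])   # ≈ (0.0865, 0.2419, 0.2419, 0.1716, 0.1290, 0.1290)
```

The adjusted weights sum to 1, and the two identical HFEs receive the same weight, matching $\omega_{\varepsilon(2)}$ as computed in Section 5.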

Theorem 1. For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$, $l = \max_j\{l_j\}$ $(j = 1, 2, \ldots, n)$, the aggregated value obtained by the NGHFHWA operator or the NGHFHWG operator is also an HFE, and

$\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) = \bigcup_{i=1}^{l}\left\{\left(1 - \prod_{j=1}^{n}\left(1 - \left(\gamma_j^{(i)}\right)^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}\right\}$, (10)

$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) = \bigcup_{i=1}^{l}\left\{1 - \left(1 - \prod_{j=1}^{n}\left(1 - \left(1 - \gamma_j^{(i)}\right)^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}\right\}$, (11)

where $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ is an associated weight vector with $\omega_j \in [0,1]$ and $\sum_{j=1}^{n}\omega_j = 1$, $\varepsilon: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a permutation such that $h_j$ is the $\varepsilon(j)$th largest element of the collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ is the weight vector of the HFEs with $\lambda_j \in [0,1]$ and $\sum_{j=1}^{n}\lambda_j = 1$, and p is a parameter such that $p > 0$.
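The closed forms (10) and (11) are straightforward to implement. A minimal Python sketch, assuming the input HFEs are already extended to a common length l and that the adjusted $\omega_{\varepsilon(j)}$ values are supplied; the normalized weights $\lambda_j\omega_{\varepsilon(j)}/\sum\lambda_j\omega_{\varepsilon(j)}$ are computed inside (function names are our own):

```python
from math import prod

def _norm_weights(lam, w_eps):
    """Normalized exponents λ_j·ω_{ε(j)} / Σ λ_j·ω_{ε(j)}."""
    t = sum(a * b for a, b in zip(lam, w_eps))
    return [a * b / t for a, b in zip(lam, w_eps)]

def nghfhwa(hfes, lam, w_eps, p=1.0):
    """NGHFHWA, Eq. (10): element-wise over equal-length HFEs, p > 0."""
    wt = _norm_weights(lam, w_eps)
    l = len(hfes[0])
    return [(1 - prod((1 - h[i] ** p) ** wj for h, wj in zip(hfes, wt))) ** (1 / p)
            for i in range(l)]

def nghfhwg(hfes, lam, w_eps, p=1.0):
    """NGHFHWG, Eq. (11)."""
    wt = _norm_weights(lam, w_eps)
    l = len(hfes[0])
    return [1 - (1 - prod((1 - (1 - h[i]) ** p) ** wj
                          for h, wj in zip(hfes, wt))) ** (1 / p)
            for i in range(l)]

# Idempotency check (Theorem 2): aggregating identical HFEs returns the HFE.
# The weights below are illustrative values of our own choosing.
h = [0.2, 0.4, 0.5]
print(nghfhwa([h, h, h], [0.2, 0.5, 0.3], [0.3, 0.4, 0.3], p=2))
```

Because the normalized exponents sum to 1, identical inputs collapse the product to $1-\gamma^p$ (resp. $1-(1-\gamma)^p$), which is exactly the idempotency claimed in Theorem 2 below.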

In the following, we show that both the NGHFHWA operator and the NGHFHWG operator satisfy the properties of idempotency and boundedness, and other desirable properties.

Theorem 2. (Idempotency) If $h_j = h$ $(j = 1, 2, \ldots, n)$, then

$\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) = h$,

$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) = h$.

Theorem 3. (Boundedness) For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$, $l = \max_j\{l_j\}$ $(j = 1, 2, \ldots, n)$, the following inequalities hold:

$h^{-} \leq \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) \leq h^{+}$, (12)

$h^{-} \leq \mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) \leq h^{+}$, (13)

where $h^{-} = \min_{1\leq i\leq l}\min_{1\leq j\leq n}\gamma_j^{(i)}$ and $h^{+} = \max_{1\leq i\leq l}\max_{1\leq j\leq n}\gamma_j^{(i)}$.

Lemma 1. ( [33]) If $x_j > 0$, $\omega_j > 0$ $(j = 1, 2, \ldots, n)$ and $\sum_{j=1}^{n}\omega_j = 1$, then $\prod_{j=1}^{n} x_j^{\omega_j} \leq \sum_{j=1}^{n}\omega_j x_j$, with equality if and only if $x_1 = x_2 = \cdots = x_n$.

Theorem 4. For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$, $l = \max_j\{l_j\}$ $(j = 1, 2, \ldots, n)$,

$\mathrm{NHFHWG}(h_1, h_2, \ldots, h_n) \leq \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n)$,

$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) \leq \mathrm{NHFHWA}(h_1, h_2, \ldots, h_n)$.

Theorem 5. For a collection of HFEs h j = i = 1 l { γ j ( i ) } , l = max j { l j } , ( j = 1 , 2 , , n ) , the NGHFHWA operator is monotonically increasing and the NGHFHWG operator is monotonically decreasing with respect to the parameter p.

Lemma 2. For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$ ($l = \max_j\{l_j\}$, $j = 1, 2, \ldots, n$) and a parameter $p > 0$, let

$f(p) = \left(1 - \prod_{j=1}^{n}\left(1 - \left(\gamma_j^{(i)}\right)^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}$,

$g(p) = \left(1 - \prod_{j=1}^{n}\left(1 - \left(1 - \gamma_j^{(i)}\right)^{p}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{1/p}$.

Then

$\lim_{p \to 0^{+}} f(p) = e^{-\prod_{j=1}^{n}\left(-\ln \gamma_j^{(i)}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}}$ (14)

$\lim_{p \to 0^{+}} g(p) = e^{-\prod_{j=1}^{n}\left(-\ln\left(1 - \gamma_j^{(i)}\right)\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}}$ (15)

where $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ is an associated weight vector with $\omega_j \in [0,1]$ and $\sum_{j=1}^{n}\omega_j = 1$, $\varepsilon: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a permutation such that $h_j$ is the $\varepsilon(j)$th largest element of the collection of HFEs $h_j$ $(j = 1, 2, \ldots, n)$, and $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ is the weight vector of the HFEs with $\lambda_j \in [0,1]$ and $\sum_{j=1}^{n}\lambda_j = 1$.

Theorem 6. For a collection of HFEs $h_j = \bigcup_{i=1}^{l}\{\gamma_j^{(i)}\}$, $l = \max_j\{l_j\}$ $(j = 1, 2, \ldots, n)$,

$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) \leq \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n)$.

The proofs of Theorems 1-6 and Lemma 2 can be found in the Appendix.

4. Decision Making Based on New Generalized Hesitant Fuzzy Hybrid Weighted Aggregation Operators with Unknown Weight Information

4.1. Problem Description

Consider that decision makers intend to evaluate a collection of alternatives $A = \{A_1, A_2, \ldots, A_m\}$ with respect to the attributes $G = \{G_1, G_2, \ldots, G_n\}$. Suppose that $h_{ij}$, an HFE, is the attribute value given by the decision makers for alternative $A_i$ with respect to attribute $G_j$. All $h_{ij}$ $(i = 1, 2, \ldots, m; j = 1, 2, \ldots, n)$ form the hesitant fuzzy decision matrix $H = (h_{ij})_{m\times n}$, $h_{ij} = \bigcup_{t=1}^{l_{ij}}\{\gamma_{ij}^{(t)}\}$. We assume that all the decision makers are optimistic: optimists anticipate desirable outcomes, so shorter HFEs are extended by repeating the maximum membership degree. We thus obtain a normalized decision matrix $\tilde{H} = (\tilde{h}_{ij})_{m\times n}$, $\tilde{h}_{ij} = \bigcup_{t=1}^{l}\{\tilde{\gamma}_{ij}^{(t)}\}$, $l = \max_{i,j}\{l_{ij}\}$ $(i = 1, 2, \ldots, m; j = 1, 2, \ldots, n)$. The weight vector $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ gives the importance degree of each attribute, with $\lambda_j \in [0,1]$ and $\sum_{j=1}^{n}\lambda_j = 1$. Meanwhile, $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ is the aggregation-associated weight vector, with $\omega_j \in [0,1]$ and $\sum_{j=1}^{n}\omega_j = 1$.

In the following, we first determine the weights of attributes and the aggregation-associated weight vector by optimal methods, then give an algorithm for MADM problems based on new generalized hybrid weighted aggregation operators under unknown weight information.

4.2. Obtaining the Attribute Weight Vector

The attribute weight vector plays an important role in MADM: it not only represents the relative importance of the attributes but also reflects the preferences of the decision makers. In order to obtain the optimal attribute weight vector $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)^T$ under completely unknown weight information, we extend the maximizing deviation method [34] to the hesitant fuzzy environment based on the hesitant Hamming distance.

Construct the nonlinear programming model (M-1):

(M-1) $\max D(\lambda) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m}\lambda_j\left(\frac{1}{l}\sum_{t=1}^{l}\left|\tilde{\gamma}_{ij}^{(t)} - \tilde{\gamma}_{kj}^{(t)}\right|\right)$, s.t. $\lambda_j \geq 0$, $j = 1, 2, \ldots, n$, $\sum_{j=1}^{n}\lambda_j^2 = 1$.

As the calculation in [13], weight vector λ = ( λ 1 , λ 2 , , λ n ) T is obtained as follows:

$\lambda_j = \frac{Y_j}{\sum_{j=1}^{n} Y_j}$, $j = 1, 2, \ldots, n$, (16)

where $Y_j = \sum_{i=1}^{m}\sum_{k=1}^{m}\left(\frac{1}{l}\sum_{t=1}^{l}\left|\tilde{\gamma}_{ij}^{(t)} - \tilde{\gamma}_{kj}^{(t)}\right|\right)$, $j = 1, 2, \ldots, n$.
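Equation (16) can be sketched as follows. A minimal Python sketch (function name is our own), applied to a small hypothetical normalized matrix of two alternatives and two attributes, invented purely for illustration:

```python
def attribute_weights(matrix):
    """Maximizing-deviation attribute weights, Eq. (16).
    matrix[i][j] is the normalized HFE (a list of equal length l) of
    alternative i under attribute j."""
    m, n = len(matrix), len(matrix[0])
    l = len(matrix[0][0])
    Y = []
    for j in range(n):
        # Y_j: total pairwise hesitant Hamming deviation under attribute j
        yj = sum(sum(abs(matrix[i][j][t] - matrix[k][j][t]) for t in range(l)) / l
                 for i in range(m) for k in range(m))
        Y.append(yj)
    s = sum(Y)
    return [y / s for y in Y]

# Hypothetical 2x2 normalized decision matrix, l = 2.
matrix = [[[0.2, 0.4], [0.5, 0.5]],
          [[0.6, 0.8], [0.5, 0.6]]]
lam = attribute_weights(matrix)
print(lam)   # attribute 1 discriminates more between alternatives → larger weight
```

Attributes whose evaluations differ more strongly across alternatives receive larger weights, which is exactly the rationale of the maximizing deviation method.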

4.3. Obtaining the Aggregation-Associated Weight Vector

Determining the aggregation-associated weight vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ by a proper method is also an important step in constructing the new generalized hesitant fuzzy hybrid weighted aggregation operators. The normal distribution based method was introduced by Xu [28]; it can relieve the influence of unfair arguments on the decision results by assigning small weights to those arguments. We therefore choose the normal distribution based method [28] to obtain the aggregation-associated weight vector. The weights were calculated in [28] for n from 2 to 20 (Table 1 lists the aggregation-associated weight vector ω for n from 2 to 10).

4.4. An Approach to HF-MADM Based on New Generalized Hesitant Fuzzy Hybrid Weighted Aggregation Operators with Unknown Weight Information

Algorithm:

Step 1. Obtain a normalized decision matrix H ˜ = ( h ˜ i j ) m × n from H = ( h i j ) m × n .

Step 2. Determine $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ according to the value of n using Table 1.

Step 3. Utilize Equation (16) to obtain the weights of attributes λ = ( λ 1 , λ 2 , , λ n ) T .

Step 4. Utilize the new generalized hybrid weighted aggregation operators, such as NGHFHWA, NGHFHWG, to synthesize h ˜ i j into overall h ˜ i ( i = 1 , 2 , , m ) for alternatives A i ( i = 1 , 2 , , m ) .

Step 5. Calculate the scores s ( h ˜ i ) of the overall hesitant fuzzy values h ˜ i by Equation (1). If any two scores of alternatives are the same, calculate their p ( h ˜ i ) functions according to Equation (2) ( i = 1 , 2 , , m ) .

Step 6. Rank all the alternatives A i ( i = 1 , 2 , , m ) in accordance with s ( h ˜ i ) and p ( h ˜ i ) ( i = 1 , 2 , , m ) .

Table 1. Aggregation-associated weight vector ω = ( ω 1 , ω 2 , , ω n ) T for n from 2 to 10.

5. Illustrative Example

5.1. Case Study

With the development of internet technology, more and more people tend to use smartphones to get information rather than reading newspapers. Newspapers, as a traditional industry, must expand their business through new media to keep pace with the times. As a government procurement function department, the Public Resource Trading Center decided to purchase a WeChat live broadcasting system for the Haimen Daily newspaper. The aim of our example is to help the government decision makers select a proper supplier according to the following six attributes: 1) $G_1$ is the price; 2) $G_2$ is the quality; 3) $G_3$ is the technology; 4) $G_4$ is the green development degree; 5) $G_5$ is the reputation; and 6) $G_6$ is the after-sales service. It is assumed that four suppliers $A_i$ $(i = 1, 2, 3, 4)$ are participating in the tender according to the tender request. In real-world applications, decision makers find it hard to decide which supplier should be selected due to their limited knowledge, and hesitant fuzzy sets can represent such evaluation data. The evaluation values of the four suppliers with respect to the six attributes are shown in the hesitant fuzzy decision matrix $H = (h_{ij})_{4\times 6}$ (see Table 2).

In what follows, we utilize the developed method to select the most desirable supplier.

Step 1. The normalized decision matrix H ˜ = ( h ˜ i j ) 4 × 6 from H = ( h i j ) 4 × 6 is shown in Table 3.

Step 2. According to Table 1, since $n = 6$, the aggregation-associated weight vector is $\omega = (0.0865, 0.1716, 0.2419, 0.2419, 0.1716, 0.0865)^T$.

Step 3. Utilizing Equation (16), we obtain the attribute weights $\lambda = (0.2389, 0.1327, 0.2743, 0.1416, 0.0885, 0.1239)^T$.

Step 4. Utilize the NGHFHWA operator to obtain the HFEs $\tilde{h}_i$ for the suppliers $A_1, A_2, A_3, A_4$. We take $A_2$ as an example.

Table 2. Hesitant fuzzy decision matrix.

Table 3. Normalized hesitant fuzzy decision matrix.

$\tilde{h}_2 = \mathrm{NGHFHWA}(\tilde{h}_{21}, \tilde{h}_{22}, \tilde{h}_{23}, \tilde{h}_{24}, \tilde{h}_{25}, \tilde{h}_{26}) = \mathrm{NGHFHWA}(\{0.6, 0.7, 0.8\}, \{0.5, 0.6, 0.8\}, \{0.5, 0.7, 0.7\}, \{0.6, 0.7, 0.7\}, \{0.4, 0.5, 0.6\}, \{0.4, 0.5, 0.6\})$.

According to Equation (1), we get $s(\tilde{h}_{21}) = \frac{0.6+0.7+0.8}{3} = 0.7$, $s(\tilde{h}_{22}) = 0.6333$, $s(\tilde{h}_{23}) = 0.6333$, $s(\tilde{h}_{24}) = 0.6667$, $s(\tilde{h}_{25}) = 0.5$, $s(\tilde{h}_{26}) = 0.5$. Noticing $s(\tilde{h}_{22}) = s(\tilde{h}_{23})$, by Equation (2) we obtain

$p(\tilde{h}_{22}) = 1 - \frac{1}{3}\left(|0.5-0.6333| + |0.6-0.6333| + |0.8-0.6333|\right) = 0.8889$,

$p(\tilde{h}_{23}) = 1 - \frac{1}{3}\left(|0.5-0.6333| + |0.7-0.6333| + |0.7-0.6333|\right) = 0.9111$.

Hence, $\tilde{h}_{21} > \tilde{h}_{24} > \tilde{h}_{23} > \tilde{h}_{22} > \tilde{h}_{25} = \tilde{h}_{26}$. Thus, $\varepsilon(21) = 1$, $\varepsilon(24) = 2$, $\varepsilon(23) = 3$, $\varepsilon(22) = 4$, $\varepsilon(25) = \varepsilon(26) = 5$. By Remark 1, we get $\omega_{\varepsilon(2)} = (0.0865, 0.2419, 0.2419, 0.1716, 0.1290, 0.1290)^T$. Therefore,

$\frac{\lambda_1\omega_{\varepsilon(21)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1210$, $\frac{\lambda_2\omega_{\varepsilon(22)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1879$, $\frac{\lambda_3\omega_{\varepsilon(23)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.3884$,

$\frac{\lambda_4\omega_{\varepsilon(24)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1422$, $\frac{\lambda_5\omega_{\varepsilon(25)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.0668$, $\frac{\lambda_6\omega_{\varepsilon(26)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.0936$.

According to Equation (10), choosing $p = 1$, we can calculate

$\tilde{h}_2 = \mathrm{NGHFHWA}(\tilde{h}_{21}, \tilde{h}_{22}, \tilde{h}_{23}, \tilde{h}_{24}, \tilde{h}_{25}, \tilde{h}_{26}) = \bigcup_{i=1}^{3}\left\{\left(1 - \left(1-\left(\tilde{\gamma}_{21}^{(i)}\right)^{p}\right)^{0.1210}\left(1-\left(\tilde{\gamma}_{22}^{(i)}\right)^{p}\right)^{0.1879}\left(1-\left(\tilde{\gamma}_{23}^{(i)}\right)^{p}\right)^{0.3884}\left(1-\left(\tilde{\gamma}_{24}^{(i)}\right)^{p}\right)^{0.1422}\left(1-\left(\tilde{\gamma}_{25}^{(i)}\right)^{p}\right)^{0.0668}\left(1-\left(\tilde{\gamma}_{26}^{(i)}\right)^{p}\right)^{0.0936}\right)^{1/p}\right\} = \{0.5145, 0.6563, 0.7228\}$.
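The Step 4 computation for $A_2$ can be reproduced numerically; the normalized exponents below are the values $\lambda_j\omega_{\varepsilon(2j)}/\sum_j\lambda_j\omega_{\varepsilon(2j)}$ computed above:

```python
from math import prod

# Normalized HFEs of alternative A2 (Table 3) and normalized weights from Step 4.
h2 = [[0.6, 0.7, 0.8], [0.5, 0.6, 0.8], [0.5, 0.7, 0.7],
      [0.6, 0.7, 0.7], [0.4, 0.5, 0.6], [0.4, 0.5, 0.6]]
wt = [0.1210, 0.1879, 0.3884, 0.1422, 0.0668, 0.0936]
p = 1

# Eq. (10), element-wise over the common length l = 3.
agg = [(1 - prod((1 - h[i] ** p) ** w for h, w in zip(h2, wt))) ** (1 / p)
       for i in range(3)]
print([round(x, 4) for x in agg])   # [0.5145, 0.6563, 0.7228]
print(round(sum(agg) / len(agg), 4))  # score s(h̃2) ≈ 0.6312
```

Both the aggregated HFE and its score agree with the values reported in Steps 4 and 5.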

Similarly, we can calculate $\tilde{h}_1 = \{0.4677, 0.6165, 0.6355\}$, $\tilde{h}_3 = \{0.4894, 0.6079, 0.6837\}$, and $\tilde{h}_4 = \{0.5893, 0.7318, 0.7605\}$ by using the NGHFHWA operator for the alternatives $A_1, A_3, A_4$, respectively.

Step 5. Calculate the scores s ( h ˜ i ) of h ˜ i for A i ( i = 1 , 2 , 3 , 4 ) :

s ( h ˜ 1 ) = 0.5732 , s ( h ˜ 2 ) = 0.6312 , s ( h ˜ 3 ) = 0.5937 , s ( h ˜ 4 ) = 0.6939 .

Step 6. Rank alternatives A i ( i = 1 , 2 , 3 , 4 ) in accordance with s ( h ˜ i ) : h ˜ 4 > h ˜ 2 > h ˜ 3 > h ˜ 1 , thus A 4 is the best supplier.

5.2. Sensitivity Analysis

In this subsection, the influence of parameter p on the ranking results is investigated and discussed. The detailed results are shown in Table 4.

From Table 4, it is obvious that the scores obtained by the NGHFHWA operator

Table 4. Scores and ranking orders under different parameter.

are increasing with respect to p, while those obtained by the NGHFHWG operator are decreasing. On the other hand, the ranking orders obtained by the NGHFHWA operator change somewhat as p increases. For a more detailed investigation, Figure 1 and Figure 2 present the influence of the parameter p on the NGHFHWA operator and the NGHFHWG operator, respectively. From Figure 1, we can see that the ranking orders of the four alternatives change as p increases, and the score of each alternative is monotonically increasing with respect to p, which verifies Theorem 5. More explicitly, we find that:

i) If $p \in (0, 8.9]$, the ranking of the four alternatives is $A_4 \succ A_2 \succ A_3 \succ A_1$.

ii) If $p \in (8.9, 10.9]$, the ranking of the four alternatives is $A_4 \succ A_2 \succ A_1 \succ A_3$.

iii) If $p \in (10.9, 12.3]$, the ranking of the four alternatives is $A_4 \succ A_1 \succ A_2 \succ A_3$.

iv) If $p \in (12.3, 30]$, the ranking of the four alternatives is $A_4 \succ A_1 \succ A_3 \succ A_2$.

In summary, we conclude that the selection of values for the parameter p mainly depends on the decision makers’ risk preferences. Pessimists anticipate undesirable outcomes and may choose small values of the parameter p, while optimistic experts may choose large values. For computational simplicity in HF-MADM problems, the decision makers can select p = 1 (or 2), which is simple and straightforward.

6. Comparative Studies

In subsection 5.1, we utilized the proposed method to solve the example successfully, which demonstrates the availability of our method. In addition, we analyzed the impact of the parameter p on the ranking results in subsection 5.2; the sensitivity analysis illustrates the high flexibility of our approach. In order to further demonstrate the advantages of the algorithm, we use the GHFHWA, GHFHWG [18], GHFHA and GHFHG [15] operators to solve the example, in which the aggregation-associated weight vector $\omega = (0.09, 0.17, 0.24, 0.24, 0.17, 0.09)^T$ and the attribute weights $\lambda = (0.15, 0.25, 0.14, 0.16, 0.20, 0.10)^T$ are given by the decision makers.

Figure 1. Trends of scores for four alternatives by the NGHFHWA operator.

Figure 2. Trends of scores for four alternatives by the NGHFHWG operator.

1) Compared with the approach based on GHFHWA, GHFHWG operators [18]

We begin our comparison by employing the method based on the GHFHWA and GHFHWG operators [18] on the example. First, we review the approach of Liao and Xu [18].

Step 1’. Based on the hesitant fuzzy decision matrix $H = (h_{ij})_{4\times 6}$ (Table 2), we use the GHFHWA operator to aggregate all HFEs $h_{ij}$ $(j = 1, 2, \ldots, 6)$ into the collective HFEs $h_i$ $(i = 1, 2, 3, 4)$. Taking $A_2$ as an example, we have

$h_2 = \mathrm{GHFHWA}(h_{21}, h_{22}, h_{23}, h_{24}, h_{25}, h_{26}) = \mathrm{GHFHWA}(\{0.6, 0.7, 0.8\}, \{0.5, 0.6, 0.8\}, \{0.5, 0.7\}, \{0.6, 0.7\}, \{0.4, 0.5, 0.6\}, \{0.4, 0.5, 0.6\})$.

Since $s(h_{21}) = 0.7$, $s(h_{22}) = 0.6333$, $s(h_{23}) = 0.6$, $s(h_{24}) = 0.65$, $s(h_{25}) = 0.5$, $s(h_{26}) = 0.5$, we have $h_{21} > h_{24} > h_{22} > h_{23} > h_{25} = h_{26}$. Thus, by Remark 1, we get $\omega_{\varepsilon(2)} = (0.09, 0.24, 0.24, 0.17, 0.13, 0.13)^T$. Therefore,

$\frac{\lambda_1\omega_{\varepsilon(21)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.0779$, $\frac{\lambda_2\omega_{\varepsilon(22)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.3462$, $\frac{\lambda_3\omega_{\varepsilon(23)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1939$,

$\frac{\lambda_4\omega_{\varepsilon(24)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1570$, $\frac{\lambda_5\omega_{\varepsilon(25)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.1500$, $\frac{\lambda_6\omega_{\varepsilon(26)}}{\sum_{j=1}^{6}\lambda_j\omega_{\varepsilon(2j)}} = 0.0750$.

According to Equation (6), choosing $p = 1$, we can calculate

$h_2 = \mathrm{GHFHWA}(h_{21}, h_{22}, h_{23}, h_{24}, h_{25}, h_{26}) = \bigcup_{\gamma_{2j} \in h_{2j}}\left\{\left(1 - \left(1-\gamma_{21}^{p}\right)^{0.0779}\left(1-\gamma_{22}^{p}\right)^{0.3462}\left(1-\gamma_{23}^{p}\right)^{0.1939}\left(1-\gamma_{24}^{p}\right)^{0.1570}\left(1-\gamma_{25}^{p}\right)^{0.1500}\left(1-\gamma_{26}^{p}\right)^{0.0750}\right)^{1/p}\right\}$,

where we do not list all the values of $h_2$, since $h_2$ contains $3 \times 3 \times 2 \times 2 \times 3 \times 3 = 324$ values. Similarly, we can calculate $h_1, h_3, h_4$ by using the GHFHWA operator for the alternatives $A_1, A_3, A_4$, respectively. Notice that the numbers of values in $h_1, h_3, h_4$ are 96, 216 and 144, respectively.

Step 3’. Calculate the scores s ( h i ) ( i = 1 , 2 , 3 , 4 ) of h i ( i = 1 , 2 , 3 , 4 ) :

s ( h 1 ) = 0.5786 , s ( h 2 ) = 0.6201 , s ( h 3 ) = 0.6067 , s ( h 4 ) = 0.6619 .

Step 4’. Rank all of the alternatives A i ( i = 1 , 2 , 3 , 4 ) in accordance with s ( h i ) : h 4 > h 2 > h 3 > h 1 , thus A 4 is the most desirable supplier.

If the parameter p changes, taking the GHFHWA operator as an example, the trends of the scores for the alternatives can be obtained, as shown in Figure 3. From Figure 3, we can see that:

i) If $p \in (0, 6.5]$, the ranking of the four alternatives is $A_4 \succ A_2 \succ A_3 \succ A_1$.

ii) If $p \in (6.5, 9.6]$, the ranking of the four alternatives is $A_4 \succ A_3 \succ A_2 \succ A_1$.

iii) If $p \in (9.6, 11.1]$, the ranking of the four alternatives is $A_4 \succ A_3 \succ A_1 \succ A_2$.

iv) If $p \in (11.1, 30]$, the ranking of the four alternatives is $A_4 \succ A_1 \succ A_3 \succ A_2$.

Comparing with our proposed method based on NGHFHWA and NGHFHWG operators in this article, we can conclude that:

Figure 3. Trends of scores for four alternatives by the GHFHWA operator.

i) Our new generalized hybrid weighted aggregation operators have the property of idempotency, which is one of the most important properties for aggregation operators;

ii) The dimensions of the overall hesitant values aggregated by our new generalized hybrid weighted aggregation operators are smaller than those of the GHFHWA and GHFHWG operators. Hence the computation of our proposed method is simpler than that of [18]. This demonstrates that the new generalized hesitant fuzzy hybrid weighted aggregation operators are effective and reasonable for dealing with decision making problems.

2) Compared with the approach based on GHFHA and GHFHG operators [15]

In the following, we utilize Xia and Xu’s GHFHA and GHFHG operators to obtain the ranking result for our example, in which the weight vectors ω and λ are given by the decision makers.

Step 1’’. Based on the hesitant fuzzy decision matrix $H = (h_{ij})_{4\times 6}$ (Table 2), calculate $\dot{h}_{\sigma(ij)}$ $(j = 1, 2, \ldots, 6)$ for each $A_i$ $(i = 1, 2, 3, 4)$, where $\dot{h}_{\sigma(ij)}$ is the jth largest of $\dot{h}_{ik} = 6\lambda_k h_{ik}$ $(k = 1, 2, \ldots, 6)$. We again take $A_2$ as an example. First, we calculate $\dot{h}_{2j}$ $(j = 1, 2, \ldots, 6)$:

$$\dot{h}_{21} = (6\times 0.15)\,h_{21} = \left\{1-(1-0.6)^{0.9},\; 1-(1-0.7)^{0.9},\; 1-(1-0.8)^{0.9}\right\} = \{0.5616, 0.6616, 0.7651\}.$$

Similarly, $\dot{h}_{22} = \{0.6464, 0.7470, 0.9106\}$, $\dot{h}_{23} = \{0.4414, 0.6363\}$, $\dot{h}_{24} = \{0.5851, 0.6852\}$, $\dot{h}_{25} = \{0.4583, 0.5647, 0.6670\}$, $\dot{h}_{26} = \{0.2640, 0.3402, 0.4229\}$.
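The building block of Step 1’’ is the HFE scalar multiple $\lambda h = \{1-(1-\gamma)^{\lambda} : \gamma \in h\}$. A minimal check of the $\dot{h}_{21}$ computation above (the values of $h_{21}$ come from the example's Table 2):

```python
def scalar_mult(lam, hfe):
    """Scalar multiplication of an HFE: lam * h = {1 - (1 - g)**lam for g in h}."""
    return [1 - (1 - g)**lam for g in hfe]

h21 = [0.6, 0.7, 0.8]                 # evaluation of A2 under the first attribute
h21_dot = scalar_mult(6 * 0.15, h21)  # 6 * lambda_1 with lambda_1 = 0.15
print([round(g, 4) for g in h21_dot])  # → [0.5616, 0.6616, 0.7651]
```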

Next, calculate the scores $s(\dot{h}_{2j})$ $(j = 1, 2, \ldots, 6)$: $s(\dot{h}_{21}) = 0.6628$, $s(\dot{h}_{22}) = 0.7680$, $s(\dot{h}_{23}) = 0.5388$, $s(\dot{h}_{24}) = 0.6351$, $s(\dot{h}_{25}) = 0.5633$, $s(\dot{h}_{26}) = 0.3424$; then $\dot{h}_{22} > \dot{h}_{21} > \dot{h}_{24} > \dot{h}_{25} > \dot{h}_{23} > \dot{h}_{26}$. Hence, we have $\dot{h}_{\sigma(2j)}$ as follows:

$$\dot{h}_{\sigma(21)} = \dot{h}_{22} = \{0.6464, 0.7470, 0.9106\}, \quad \dot{h}_{\sigma(22)} = \dot{h}_{21} = \{0.5616, 0.6616, 0.7651\},$$

$$\dot{h}_{\sigma(23)} = \dot{h}_{24} = \{0.5851, 0.6852\}, \quad \dot{h}_{\sigma(24)} = \dot{h}_{25} = \{0.4583, 0.5647, 0.6670\},$$

$$\dot{h}_{\sigma(25)} = \dot{h}_{23} = \{0.4414, 0.6363\}, \quad \dot{h}_{\sigma(26)} = \dot{h}_{26} = \{0.2640, 0.3402, 0.4229\}.$$
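The reordering step only needs the score function $s(h)$ (the arithmetic mean of the values in $h$); sorting the six weighted HFEs by score reproduces the permutation $\sigma$ used above:

```python
def score(hfe):
    """Score of an HFE: the arithmetic mean of its membership values."""
    return sum(hfe) / len(hfe)

h2_dot = {   # the six weighted HFEs computed in Step 1''
    "h21": [0.5616, 0.6616, 0.7651], "h22": [0.6464, 0.7470, 0.9106],
    "h23": [0.4414, 0.6363],         "h24": [0.5851, 0.6852],
    "h25": [0.4583, 0.5647, 0.6670], "h26": [0.2640, 0.3402, 0.4229],
}
order = sorted(h2_dot, key=lambda k: score(h2_dot[k]), reverse=True)
print(order)  # → ['h22', 'h21', 'h24', 'h25', 'h23', 'h26']
```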

Step 2’’. According to Equation (4), we can use the GHFHA operator to aggregate all the HFEs $\dot{h}_{\sigma(2j)}$ $(j = 1, 2, \ldots, 6)$ into the collective HFEs $\dot{h}_i$ $(i = 1, 2, 3, 4)$. Take $\dot{h}_2$ as an example, choosing $p = 1$:

$$\dot{h}_2 = \mathrm{GHFHA}\left(\dot{h}_{\sigma(21)}, \dot{h}_{\sigma(22)}, \ldots, \dot{h}_{\sigma(26)}\right) = \bigcup_{\dot{\gamma}_{\sigma(2j)} \in \dot{h}_{\sigma(2j)}} \left\{ \left(1 - \left(1-\dot{\gamma}_{\sigma(21)}^p\right)^{0.09} \left(1-\dot{\gamma}_{\sigma(22)}^p\right)^{0.17} \left(1-\dot{\gamma}_{\sigma(23)}^p\right)^{0.24} \left(1-\dot{\gamma}_{\sigma(24)}^p\right)^{0.24} \left(1-\dot{\gamma}_{\sigma(25)}^p\right)^{0.17} \left(1-\dot{\gamma}_{\sigma(26)}^p\right)^{0.09} \right)^{\frac{1}{p}} \right\},$$

where we do not list all of the values in $\dot{h}_2$, since $\dot{h}_2$ contains 324 values. Similarly, we can calculate $\dot{h}_1$, $\dot{h}_3$, $\dot{h}_4$ by using the GHFHA operator for the alternatives $A_1$, $A_3$, $A_4$, respectively.
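The 324-element union can be enumerated directly ($3\times 3\times 2\times 3\times 2\times 3 = 324$). A sketch of the GHFHA aggregation for $\dot{h}_2$ with $p=1$, using the ordered HFEs and the aggregation-associated weights $\omega = (0.09, 0.17, 0.24, 0.24, 0.17, 0.09)$ from the worked example:

```python
import itertools
import math

omega = [0.09, 0.17, 0.24, 0.24, 0.17, 0.09]  # aggregation-associated weights
h_sigma = [                                   # h_dot_sigma(2j), j = 1..6
    [0.6464, 0.7470, 0.9106], [0.5616, 0.6616, 0.7651],
    [0.5851, 0.6852],         [0.4583, 0.5647, 0.6670],
    [0.4414, 0.6363],         [0.2640, 0.3402, 0.4229],
]
# GHFHA with p = 1: union over the Cartesian product of 1 - prod_j (1 - g_j)**omega_j
h2 = [1 - math.prod((1 - g)**w for g, w in zip(combo, omega))
      for combo in itertools.product(*h_sigma)]
print(len(h2))                      # → 324
print(round(sum(h2) / len(h2), 4))  # s(h2); the example reports 0.6139
```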

Step 3’’. Calculate the scores $s(\dot{h}_i)$ $(i = 1, 2, 3, 4)$:

$$s(\dot{h}_1) = 0.5764, \quad s(\dot{h}_2) = 0.6139, \quad s(\dot{h}_3) = 0.6625, \quad s(\dot{h}_4) = 0.7081.$$

Step 4’’. Rank the alternatives $A_i$ $(i = 1, 2, 3, 4)$ according to $s(\dot{h}_i)$: $\dot{h}_4 > \dot{h}_3 > \dot{h}_2 > \dot{h}_1$; thus $A_4$ is the most desirable supplier.

The ranking result obtained by the GHFHA operator is different from that of our method. Notice that the weight vectors $\omega$ and $\lambda$ in [15] [18] are given by the decision makers, which is subjective and may lead to unreasonable ranking results.

From the above comparison, we can see that weight vectors given by the decision makers may lead to different ranking results. In the following Table 5, we compare these three methods, in which the aggregation-associated vector is calculated by the normal distribution based method and the attribute weights are obtained by the maximizing deviation method, as in our method.

From Table 5, we can see that the proposed NGHFHWA and NGHFHWG operators yield the same ranking result as Liao and Xu's GHFHWA and GHFHWG operators and Xia and Xu's GHFHA and GHFHG operators. This demonstrates that weights determined from the evaluation information itself can reduce the influence of the decision makers' subjectivity. This is an important advantage of our proposed method.

7. Conclusions

In this paper, we first proposed some new generalized hesitant fuzzy hybrid weighted aggregation operators, namely the NGHFHWA and NGHFHWG operators, and investigated some of their properties. Then, we applied the proposed operators to HF-MADM problems in which the aggregation-associated weight vector and the attribute weights are unknown. Furthermore, an illustrative example was given to show the effectiveness and validity of the proposed decision making method. Compared with Liao and Xu's method [18] and Xia and Xu's method [15], our proposed algorithm has the following advantages:

1) the new generalized hesitant fuzzy hybrid weighted aggregation operators satisfy idempotency;

Table 5. The ranking results calculated by the NGHFHWA, GHFHWA, GHFHA operators.

2) the calculating procedure is simpler than that of the other methods;

3) the aggregation-associated weight vector and the attribute weights are calculated from the evaluation information itself.

In the future, we will develop applications of these operators within different hesitant fuzzy decision making or group decision making methods, such as TOPSIS, VIKOR, ELECTRE, and PROMETHEE. In addition, we can also extend our method to hesitant fuzzy linguistic sets, Pythagorean hesitant fuzzy sets, and so on.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 11571175), Natural Science Foundation of Higher Education of Jiangsu Province (No. 18KJB110024), High Training Funded for Professional Leaders of Higher Vocational Colleges in Jiangsu Province (No. 2018GRFX038), Science and Technology Research Project of Nantong Shipping College (No. HYKY/2018A03).

Appendix A

A.1. Proof of Theorem 1

Proof We first prove that

$$\dot{\bigoplus}_{j=1}^{n} \lambda_j \omega_{\varepsilon(j)} h_j^p = \bigcup_{i=1}^{l} \left\{ 1 - \prod_{j=1}^{n} \left(1 - (\gamma_j^{(i)})^p\right)^{\lambda_j \omega_{\varepsilon(j)}} \right\} \tag{17}$$

by using mathematical induction on n.

For n = 2 , we show that

$$\dot{\bigoplus}_{j=1}^{2} \lambda_j \omega_{\varepsilon(j)} h_j^p = \bigcup_{i=1}^{l} \left\{ 1 - \left(1 - (\gamma_1^{(i)})^p\right)^{\lambda_1 \omega_{\varepsilon(1)}} \left(1 - (\gamma_2^{(i)})^p\right)^{\lambda_2 \omega_{\varepsilon(2)}} \right\}. \tag{18}$$

Since $\lambda_j \omega_{\varepsilon(j)} h_j^p = \bigcup_{i=1}^{l} \left\{ 1 - \left(1 - (\gamma_j^{(i)})^p\right)^{\lambda_j \omega_{\varepsilon(j)}} \right\}$ $(j = 1, 2)$, then

$$\begin{aligned} \dot{\bigoplus}_{j=1}^{2} \lambda_j \omega_{\varepsilon(j)} h_j^p &= \bigcup_{i=1}^{l} \left\{ 1 - \left(1-(\gamma_1^{(i)})^p\right)^{\lambda_1\omega_{\varepsilon(1)}} + 1 - \left(1-(\gamma_2^{(i)})^p\right)^{\lambda_2\omega_{\varepsilon(2)}} - \left(1 - \left(1-(\gamma_1^{(i)})^p\right)^{\lambda_1\omega_{\varepsilon(1)}}\right)\left(1 - \left(1-(\gamma_2^{(i)})^p\right)^{\lambda_2\omega_{\varepsilon(2)}}\right) \right\} \\ &= \bigcup_{i=1}^{l} \left\{ 1 - \left(1-(\gamma_1^{(i)})^p\right)^{\lambda_1\omega_{\varepsilon(1)}} \left(1-(\gamma_2^{(i)})^p\right)^{\lambda_2\omega_{\varepsilon(2)}} \right\}, \end{aligned}$$

which means Equation (18) holds.

If Equation (17) holds for n = k , i.e.,

$$\dot{\bigoplus}_{j=1}^{k} \lambda_j \omega_{\varepsilon(j)} h_j^p = \bigcup_{i=1}^{l} \left\{ 1 - \prod_{j=1}^{k} \left(1-(\gamma_j^{(i)})^p\right)^{\lambda_j \omega_{\varepsilon(j)}} \right\}.$$

Then if n = k + 1 , based on Definition 7, we can deduce that

$$\begin{aligned} \dot{\bigoplus}_{j=1}^{k+1} \lambda_j \omega_{\varepsilon(j)} h_j^p &= \left(\dot{\bigoplus}_{j=1}^{k} \lambda_j \omega_{\varepsilon(j)} h_j^p\right) \,\dot{\oplus}\, \left(\lambda_{k+1}\omega_{\varepsilon(k+1)} h_{k+1}^p\right) \\ &= \bigcup_{i=1}^{l} \left\{ 1 - \prod_{j=1}^{k}\left(1-(\gamma_j^{(i)})^p\right)^{\lambda_j\omega_{\varepsilon(j)}} + 1 - \left(1-(\gamma_{k+1}^{(i)})^p\right)^{\lambda_{k+1}\omega_{\varepsilon(k+1)}} - \left(1 - \prod_{j=1}^{k}\left(1-(\gamma_j^{(i)})^p\right)^{\lambda_j\omega_{\varepsilon(j)}}\right)\left(1 - \left(1-(\gamma_{k+1}^{(i)})^p\right)^{\lambda_{k+1}\omega_{\varepsilon(k+1)}}\right) \right\} \\ &= \bigcup_{i=1}^{l} \left\{ 1 - \prod_{j=1}^{k+1}\left(1-(\gamma_j^{(i)})^p\right)^{\lambda_j\omega_{\varepsilon(j)}} \right\}, \end{aligned}$$

i.e., Equation (17) holds for n = k + 1 . Hence, Equation (17) holds for all n. Furthermore, using Definition 3, we have

$$\frac{\dot{\bigoplus}_{j=1}^{n} \lambda_j \omega_{\varepsilon(j)} h_j^p}{\sum_{j=1}^{n} \lambda_j \omega_{\varepsilon(j)}} = \bigcup_{i=1}^{l} \left\{ 1 - \left(1 - \left(1 - \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\lambda_j\omega_{\varepsilon(j)}}\right)\right)^{\frac{1}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \right\} = \bigcup_{i=1}^{l} \left\{ 1 - \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \right\}.$$

Therefore,

$$\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) = \left(\frac{\dot{\bigoplus}_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)} h_j^p}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}\right)^{\frac{1}{p}} = \bigcup_{i=1}^{l}\left\{ \left(1 - \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}} \right\},$$

i.e., Equation (10) holds. The proof of Equation (11) is similar.

A.2. Proof of Theorem 2

Proof Suppose $h_1 = h_2 = \cdots = h_n = h = \{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(l)}\}$. Then we can get

$$\begin{aligned} \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) &= \bigcup_{i=1}^{l}\left\{\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}\right\} = \bigcup_{i=1}^{l}\left\{\left(1-\prod_{j=1}^{n}\left(1-(\gamma^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}\right\} \\ &= \bigcup_{i=1}^{l}\left\{\left(1-\left(1-(\gamma^{(i)})^p\right)^{\frac{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}\right\} = \bigcup_{i=1}^{l}\left\{\left(1-\left(1-(\gamma^{(i)})^p\right)\right)^{\frac{1}{p}}\right\} = \bigcup_{i=1}^{l}\left\{\gamma^{(i)}\right\} = h.</\!\!\begin{aligned}\end{aligned}\end{aligned}$$
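Idempotency is easy to confirm numerically. The sketch below implements the aggregated values $\dot{\gamma}^{(i)}$ position-wise for equal-length (extended) HFEs, with `weights[j]` standing for the product $\lambda_j\omega_{\varepsilon(j)}$ (an assumption for illustration); feeding it $n$ copies of the same HFE returns that HFE:

```python
import math

def nghfhwa(hfes, weights, p):
    """Position-wise NGHFHWA over equal-length HFEs:
    g_i = (1 - prod_j (1 - g_ji**p)**(w_j / sum(w)))**(1/p)."""
    W = sum(weights)
    return [(1 - math.prod((1 - h[i]**p)**(w / W) for h, w in zip(hfes, weights)))**(1 / p)
            for i in range(len(hfes[0]))]

h = [0.3, 0.5, 0.8]
agg = nghfhwa([h, h, h], weights=[0.2, 0.5, 0.3], p=3.0)
assert all(abs(a - g) < 1e-12 for a, g in zip(agg, h))  # NGHFHWA(h, ..., h) = h
```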

The proof of NGHFHWG ( h 1 , h 2 , , h n ) = h is similar.

A.3. Proof of Theorem 3

Proof Let $\dot{h} = \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n)$ and $\dot{\gamma}^{(i)} = \left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}$ $(i = 1, 2, \ldots, l)$.

For any $i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, n$, we have $h^{-} \le \gamma_j^{(i)} \le h^{+}$. Since $y = x^{a}$ $(a > 0)$ is monotonically increasing for $x > 0$, we get

$$\prod_{j=1}^{n}\left(1-(h^{-})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \ge \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \ge \prod_{j=1}^{n}\left(1-(h^{+})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}},$$

i.e.,

$$1-(h^{-})^p \ge \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \ge 1-(h^{+})^p.$$

Then we get

$$h^{-} \le \left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}} \le h^{+},$$

i.e.,

$$h^{-} \le \dot{\gamma}^{(i)} \le h^{+}, \quad i = 1, 2, \ldots, l.$$

According to Definition 2, we have

$$h^{-} = \frac{1}{l}\sum_{i=1}^{l} h^{-} \le \frac{1}{l}\sum_{i=1}^{l}\dot{\gamma}^{(i)} \le \frac{1}{l}\sum_{i=1}^{l} h^{+} = h^{+},$$

i.e.,

$$s(h^{-}) \le s(\dot{h}) \le s(h^{+}).$$

Thus

$$h^{-} \le \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) \le h^{+},$$

which completes the proof of Equation (12). Similarly, we can prove Equation (13).
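The bounds can be checked numerically as well. The sketch is self-contained; `weights[j]` again stands for the product $\lambda_j\omega_{\varepsilon(j)}$, and the HFEs are hypothetical, already extended to a common length:

```python
import math

def nghfhwa(hfes, weights, p):
    """Position-wise NGHFHWA over equal-length HFEs."""
    W = sum(weights)
    return [(1 - math.prod((1 - h[i]**p)**(w / W) for h, w in zip(hfes, weights)))**(1 / p)
            for i in range(len(hfes[0]))]

hfes = [[0.2, 0.4], [0.5, 0.7], [0.3, 0.9]]
lo = min(g for h in hfes for g in h)   # h^-
hi = max(g for h in hfes for g in h)   # h^+
for p in (0.5, 1.0, 2.0, 6.0):
    agg = nghfhwa(hfes, [0.3, 0.4, 0.3], p)
    assert all(lo - 1e-12 <= g <= hi + 1e-12 for g in agg)  # h^- <= NGHFHWA <= h^+
```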

A.4. Proof of Theorem 4

Proof For any $\gamma_j^{(i)} \in h_j$, by Lemma 1, we have

$$\prod_{j=1}^{n}\left(\gamma_j^{(i)}\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} = \left(\prod_{j=1}^{n}\left((\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}} \le \left(\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}(\gamma_j^{(i)})^p\right)^{\frac{1}{p}} = \left(1-\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}\left(1-(\gamma_j^{(i)})^p\right)\right)^{\frac{1}{p}} \le \left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}.$$

By Definition 2, we can conclude that $s(\mathrm{NHFHWG}(h_1, h_2, \ldots, h_n)) \le s(\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n))$, which implies that $\mathrm{NHFHWG}(h_1, h_2, \ldots, h_n) \le \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n)$. Similarly, we can prove $\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) \le \mathrm{NHFHWA}(h_1, h_2, \ldots, h_n)$.
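Theorem 4 can also be spot-checked numerically. Below, the plain hybrid weighted geometric is modeled as the position-wise product $\prod_j (\gamma_j^{(i)})^{w_j}$ (a sketch of NHFHWG under the same weighting assumptions, with `weights[j]` standing for $\lambda_j\omega_{\varepsilon(j)}$); it never exceeds NGHFHWA for any $p > 0$:

```python
import math

def nhfhwg(hfes, weights):
    """Position-wise hybrid weighted geometric: prod_j g_ji**(w_j / sum(w))."""
    W = sum(weights)
    return [math.prod(h[i]**(w / W) for h, w in zip(hfes, weights))
            for i in range(len(hfes[0]))]

def nghfhwa(hfes, weights, p):
    W = sum(weights)
    return [(1 - math.prod((1 - h[i]**p)**(w / W) for h, w in zip(hfes, weights)))**(1 / p)
            for i in range(len(hfes[0]))]

hfes = [[0.2, 0.6], [0.5, 0.7], [0.4, 0.9]]
w = [1.0, 2.0, 1.5]
geo = nhfhwg(hfes, w)
for p in (0.5, 1.0, 3.0, 10.0):
    assert all(g <= a + 1e-12 for g, a in zip(geo, nghfhwa(hfes, w, p)))
```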

A.5. Proof of Theorem 5

Proof By Theorem 1, we have

$$\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n) = \bigcup_{i=1}^{l}\left\{\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}\right\}.$$

For any $\gamma_j^{(i)} \in h_j$ $(i = 1, 2, \ldots, l;\ j = 1, 2, \ldots, n)$, let $f(p) = \left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right)^{\frac{1}{p}}$. In order to prove that $f(p)$ is monotonically increasing with respect to the parameter $p$, we calculate the derivative of $f(p)$ with respect to $p$ as follows:

Writing $w_j = \frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}$ for brevity,

$$f'(p) = \left[e^{\frac{1}{p}\ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)}\right]' = \frac{f(p)}{p^2}\left[\frac{\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}{1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}\sum_{j=1}^{n} w_j\frac{(\gamma_j^{(i)})^p\ln(\gamma_j^{(i)})^p}{1-(\gamma_j^{(i)})^p} - \ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)\right]$$

$$= \frac{f(p)}{p^2}\cdot\frac{\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}{1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}\left[\sum_{j=1}^{n} w_j\frac{(\gamma_j^{(i)})^p\ln(\gamma_j^{(i)})^p}{1-(\gamma_j^{(i)})^p} - \frac{\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)\ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)}{\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}\right] = \frac{f(p)}{p^2}\cdot\frac{1-x_0}{x_0}\left[\sum_{j=1}^{n} w_j\, g(x_j) - g(x_0)\right],$$

where $g(x) = \frac{x\ln x}{1-x}$, $x_j = (\gamma_j^{(i)})^p$, and $x_0 = 1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}$.

Next, we take the first and second derivatives of g ( x ) :

$$g'(x) = \left[\frac{x\ln x}{1-x}\right]' = \frac{(1+\ln x)(1-x)+x\ln x}{(1-x)^2} = \frac{1-x+\ln x}{(1-x)^2},$$

$$g''(x) = \left[\frac{1-x+\ln x}{(1-x)^2}\right]' = \frac{\left(\frac{1}{x}-1\right)(1-x)^2+(1-x+\ln x)\cdot 2(1-x)}{(1-x)^4} = \frac{1-x^2+2x\ln x}{x(1-x)^3}.$$

Let $h(x) = 1-x^2+2x\ln x$ for $x \in (0,1]$; note $h(1) = 0$. We also calculate the first and second derivatives of $h(x)$: $h'(x) = 2-2x+2\ln x$, with $h'(1) = 0$, and $h''(x) = \frac{2(1-x)}{x}$. When $x \in (0,1)$, we have $h''(x) > 0$. Hence $h'(x)$ is monotonically increasing, i.e., $h'(x) < h'(1) = 0$ for any $x \in (0,1)$, which implies that $h(x)$ is monotonically decreasing. Therefore $h(x) > h(1) = 0$, and thus $g''(x) > 0$ for any $x \in (0,1)$.

Because $g''(x) > 0$ for any $x \in (0,1)$, $g'(x)$ is monotonically increasing in $(0,1)$, i.e.,

$$g'(x) < \lim_{x\to 1^{-}}\frac{1-x+\ln x}{(1-x)^2} = \lim_{x\to 1^{-}}\frac{\left[1-x+\ln x\right]'}{\left[(1-x)^2\right]'} = \lim_{x\to 1^{-}}\frac{-1+\frac{1}{x}}{-2(1-x)} = \lim_{x\to 1^{-}}\left(-\frac{1}{2x}\right) = -\frac{1}{2}.$$

Because $g''(x) > 0$ for any $x \in (0,1)$, $g(x)$ is strictly convex on $(0,1)$, and the inequality $g(x_j) > g(x_0) + (x_j - x_0)g'(x_0)$ holds for all $x_0, x_j \in (0,1)$ with $x_0 \neq x_j$. Therefore, we have

$$\begin{aligned} \sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}g(x_j) &> \sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}g(x_0) + \sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}(x_j-x_0)g'(x_0) \\ &= g(x_0) + g'(x_0)\left[\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}x_j - x_0\right] \\ &= g(x_0) + g'(x_0)\left[\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}(\gamma_j^{(i)})^p - 1 + \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right]. \end{aligned}$$

By Lemma 1, we get

$$\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}} \le \sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}\left(1-(\gamma_j^{(i)})^p\right) = 1-\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}(\gamma_j^{(i)})^p.$$

Noticing that $g'(x_0) < -\frac{1}{2} < 0$, we have

$$g'(x_0)\left[\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}(\gamma_j^{(i)})^p - 1 + \prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}}\right] \ge 0.$$

Hence $\sum_{j=1}^{n}\frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}g(x_j) - g(x_0) > 0$, and thus $f'(p) > 0$, which indicates

that f ( p ) is monotonically increasing with respect to the parameter p. Therefore, the NGHFHWA operator is monotonically increasing with respect to the parameter p. Similarly, the NGHFHWG operator is monotonically decreasing with respect to the parameter p.
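The monotonicity just proved is easy to observe numerically for a single aggregated value $\dot{\gamma}^{(i)}$ viewed as the function $f(p)$ (here `weights[j]` stands for the product $\lambda_j\omega_{\varepsilon(j)}$, an assumption for illustration):

```python
import math

def f(p, gammas, weights):
    """One aggregated NGHFHWA value as a function of p."""
    W = sum(weights)
    return (1 - math.prod((1 - g**p)**(w / W) for g, w in zip(gammas, weights)))**(1 / p)

gammas, w = [0.3, 0.6, 0.8], [0.2, 0.5, 0.3]
vals = [f(p, gammas, w) for p in (0.5, 1, 2, 5, 10, 20)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # f is strictly increasing in p
```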

A.6. Proof of Lemma 2

Proof By L'Hôpital's rule (writing $w_j = \frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}$ for brevity, so that $\sum_{j=1}^{n} w_j = 1$), we have

$$\lim_{p\to 0} f(p) = e^{\lim_{p\to 0}\frac{\ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)}{p}},$$

in which

$$\begin{aligned} \lim_{p\to 0}\frac{\ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)}{p} &= \lim_{p\to 0}\left[\ln\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)\right]' \\ &= \lim_{p\to 0}\frac{1}{1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}}\sum_{j=1}^{n}\frac{\prod_{k=1}^{n}\left(1-(\gamma_k^{(i)})^p\right)^{w_k}}{1-(\gamma_j^{(i)})^p}\, w_j\, (\gamma_j^{(i)})^p \ln\gamma_j^{(i)} \\ &= \lim_{p\to 0}\sum_{j=1}^{n} w_j \prod_{k=1}^{n}\left(\frac{1-(\gamma_k^{(i)})^p}{1-(\gamma_j^{(i)})^p}\right)^{w_k}\ln\gamma_j^{(i)} \\ &= \sum_{j=1}^{n} w_j \prod_{k=1}^{n}\left(\lim_{p\to 0}\frac{1-(\gamma_k^{(i)})^p}{1-(\gamma_j^{(i)})^p}\right)^{w_k}\ln\gamma_j^{(i)} = \sum_{j=1}^{n} w_j \prod_{k=1}^{n}\left(\lim_{p\to 0}\frac{(\gamma_k^{(i)})^p\ln\gamma_k^{(i)}}{(\gamma_j^{(i)})^p\ln\gamma_j^{(i)}}\right)^{w_k}\ln\gamma_j^{(i)} \\ &= \sum_{j=1}^{n} w_j \prod_{k=1}^{n}\left(\frac{\ln\gamma_k^{(i)}}{\ln\gamma_j^{(i)}}\right)^{w_k}\ln\gamma_j^{(i)} = -\sum_{j=1}^{n} w_j \prod_{k=1}^{n}\left(-\ln\gamma_k^{(i)}\right)^{w_k} = -\prod_{k=1}^{n}\left(-\ln\gamma_k^{(i)}\right)^{w_k}, \end{aligned}$$

where the penultimate equality uses $\sum_{k=1}^{n} w_k = 1$, so that $\prod_{k=1}^{n}\left(\frac{\ln\gamma_k^{(i)}}{\ln\gamma_j^{(i)}}\right)^{w_k}\ln\gamma_j^{(i)} = -\prod_{k=1}^{n}\left(-\ln\gamma_k^{(i)}\right)^{w_k}$. Thus

$$\lim_{p\to 0}\left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)^{\frac{1}{p}} = e^{-\prod_{j=1}^{n}\left(-\ln\gamma_j^{(i)}\right)^{w_j}},$$

which completes the proof of Equation (14). Similarly, we can prove Equation (15).
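The limit in Equation (14) can be verified numerically against small p, using the closed form $e^{-\prod_j(-\ln\gamma_j^{(i)})^{w_j}}$ reconstructed above (with `weights` standing for the products $\lambda_j\omega_{\varepsilon(j)}$, normalized inside the function):

```python
import math

def f(p, gammas, weights):
    """One aggregated NGHFHWA value as a function of p."""
    W = sum(weights)
    return (1 - math.prod((1 - g**p)**(w / W) for g, w in zip(gammas, weights)))**(1 / p)

gammas, w = [0.4, 0.9], [0.5, 0.5]
W = sum(w)
limit = math.exp(-math.prod((-math.log(g))**(wi / W) for g, wi in zip(gammas, w)))
assert abs(f(1e-6, gammas, w) - limit) < 1e-4  # f(p) -> limit as p -> 0+
```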

A.7. Proof of Theorem 6

Proof Let $f(p) = \left(1-\prod_{j=1}^{n}\left(1-(\gamma_j^{(i)})^p\right)^{w_j}\right)^{\frac{1}{p}}$ and $g(p) = \left(1-\prod_{j=1}^{n}\left(1-(1-\gamma_j^{(i)})^p\right)^{w_j}\right)^{\frac{1}{p}}$, where $w_j = \frac{\lambda_j\omega_{\varepsilon(j)}}{\sum_{j=1}^{n}\lambda_j\omega_{\varepsilon(j)}}$.

First, we will prove $f(p) \ge 1-g(p)$, i.e., $f(p)+g(p) \ge 1$. According to Theorem 5, $f(p)$ and $g(p)$ are monotonically increasing with respect to the parameter $p$. Thus $f(p)+g(p) \ge \lim_{p\to 0}f(p) + \lim_{p\to 0}g(p)$. By Lemma 2, we have

$$f(p)+g(p) \ge e^{-\prod_{j=1}^{n}\left(-\ln\gamma_j^{(i)}\right)^{w_j}} + e^{-\prod_{j=1}^{n}\left(-\ln\left(1-\gamma_j^{(i)}\right)\right)^{w_j}} = \frac{1}{e^{\prod_{j=1}^{n}\left(-\ln\gamma_j^{(i)}\right)^{w_j}}} + \frac{1}{e^{\prod_{j=1}^{n}\left(-\ln\left(1-\gamma_j^{(i)}\right)\right)^{w_j}}}.$$

By Lemma 1, we get

$$f(p)+g(p) \ge \frac{1}{e^{\sum_{j=1}^{n} w_j\left(-\ln\gamma_j^{(i)}\right)}} + \frac{1}{e^{\sum_{j=1}^{n} w_j\left(-\ln\left(1-\gamma_j^{(i)}\right)\right)}} \overset{\gamma_1^{(i)}=\gamma_2^{(i)}=\cdots=\gamma_n^{(i)}}{=} \frac{1}{e^{-\ln\gamma_j^{(i)}}} + \frac{1}{e^{-\ln\left(1-\gamma_j^{(i)}\right)}} = \gamma_j^{(i)} + \left(1-\gamma_j^{(i)}\right) = 1.$$

By Definition 2, we get

$$s(\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n)) \le s(\mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n)),$$

i.e.,

$$\mathrm{NGHFHWG}(h_1, h_2, \ldots, h_n) \le \mathrm{NGHFHWA}(h_1, h_2, \ldots, h_n).$$

This completes the proof.

Cite this paper: Jiang, S. , He, W. and Cheng, Q. (2020) A New Hesitant Fuzzy Multiple Attribute Decision Making Method with Unknown Weight Information. Advances in Pure Mathematics, 10, 405-431. doi: 10.4236/apm.2020.107025.
References

[1]   Gu, X., Wang, Y. and Yang, B. (2011) A Method for Hesitant Fuzzy Multiple Attribute Decision Making and Its Application to Risk Investment. Journal of Convergence Information Technology, 6, 282-287.
https://doi.org/10.4156/jcit.vol6.issue6.29

[2]   Zeng, W., Li, D. and Yin, Q. (2016) Distance and Similarity Measures between Hesitant Fuzzy Sets and Their Application in Pattern Recognition. Pattern Recognition Letters, 84, 267-271.
https://doi.org/10.1016/j.patrec.2016.11.001

[3]   Liang, W., Zhao, G., Wang, X., Zhao, J. and Ma, C. (2019) Assessing the Rockburst Risk for Deep Shafts via Distance-Based Multi-Criteria Decision Making Approaches with Hesitant Fuzzy Information. Engineering Geology, 260, Article ID: 105211.
https://doi.org/10.1016/j.enggeo.2019.105211

[4]   Çolak, M. and Kaya, İ. (2020) Multi-Criteria Evaluation of Energy Storage Technologies Based on Hesitant Fuzzy Information: A Case Study for Turkey. Journal of Energy Storage, 28, Article ID: 101211.
https://doi.org/10.1016/j.est.2020.101211

[5]   Torra, V. and Narukawa, Y. (2009) On Hesitant Fuzzy Sets and Decision. The 18th IEEE International Conference on Fuzzy Systems, Jeju Island, 1378-1382.
https://doi.org/10.1109/FUZZY.2009.5276884

[6]   Torra, V. (2010) Hesitant Fuzzy Sets. International Journal of Intelligent Systems, 25, 529-539.
https://doi.org/10.1002/int.20418

[7]   Xu, Z.S. and Xia, M.M. (2011) Distance and Similarity Measures for Hesitant Fuzzy Sets. Information Science, 181, 2128-2138.
https://doi.org/10.1016/j.ins.2011.01.028

[8]   Xu, Z.S. and Xia, M.M. (2011) On Distance and Correlation Measures of Hesitant Fuzzy Information. International Journal of Intelligent Systems, 26, 410-425.
https://doi.org/10.1002/int.20474

[9]   Li, D., Zeng, W. and Li, J. (2015) New Distance and Similarity Measures on Hesitant Fuzzy Sets and Their Applications in Multiple Criteria Decision Making. Engineering Applications of Artificial Intelligence, 45, 11-16.
https://doi.org/10.1016/j.engappai.2014.12.012

[10]   Chen, N., Xu, Z.S. and Xia, M.M. (2013) Correlation Coefficients of Hesitant Fuzzy Sets and Their Applications to Clustering Analysis. Applied Mathematical Modelling: Simulation and Computation for Engineering and Environmental Systems, 37, 2197-2211.
https://doi.org/10.1016/j.apm.2012.04.031

[11]   Sun, G., Guan, X., Yi, X. and Zhou, Z. (2018) An Innovative TOPSIS Approach Based on Hesitant Fuzzy Correlation Coefficient and Its Applications. Applied Soft Computing, 68, 249-267.
https://doi.org/10.1016/j.asoc.2018.04.004

[12]   Mi, X.M. and Liao, H.C. (2019) An Integrated Approach to Multiple Criteria Decision Making Based on the Average Solution and Normalized Weights of Criteria Deduced by the Hesitant Fuzzy Best Worst Method. Computer & Industrial Engineering, 133, 83-94.
https://doi.org/10.1016/j.cie.2019.05.004

[13]   Xu, Z. and Zhang, X. (2013) Hesitant Fuzzy Multi-Attribute Decision Making Based on TOPSIS with Incomplete Weight Information. Knowledge-Based Systems, 52, 53-64.
https://doi.org/10.1016/j.knosys.2013.05.011

[14]   Farhadinia, B. and Herrera-Viedma, E. (2019) Multiple Criteria Group Decision Making Method Based on Extended Hesitant Fuzzy Sets with Unknown Weight Information. Applied Soft Computing Journal, 78, 310-323.
https://doi.org/10.1016/j.asoc.2019.02.024

[15]   Xia, M.M. and Xu, Z.S. (2011) Hesitant Fuzzy Information Aggregation in Decision Making. International Journal of Approximate Reasoning, 52, 395-407.
https://doi.org/10.1016/j.ijar.2010.09.002

[16]   Xia, M.M., Xu, Z.S. and Chen, N. (2013) Some Hesitant Fuzzy Aggregation Operators with Their Application in Group Decision Making. Group Decision and Negotiation, 22, 259-279.
https://doi.org/10.1007/s10726-011-9261-7

[17]   Liao, H.C. and Xu, Z.S. (2014) Some New Hybrid Weighted Aggregation Operators under Hesitant Fuzzy Multi-Criteria Decision Making Environment. Journal of Intelligent & Fuzzy Systems, 26, 1601-1617.
https://doi.org/10.3233/IFS-130841

[18]   Liao, H.C. and Xu, Z.S. (2015) Extend Hesitant Fuzzy Hybrid Weighted Aggregation Operators and Their Application in Decision Making. Soft Computing, 19, 2551-2564.
https://doi.org/10.1007/s00500-014-1422-6

[19]   Wei, G.W. (2012) Hesitant Fuzzy Prioritized Operators and Their Application in Multiple Attribute Decision Making. Knowledge-Based Systems, 31, 176-182.
https://doi.org/10.1016/j.knosys.2012.03.011

[20]   Jin, F.F., Ni, Z.W. and Chen, H.Y. (2016) Note on Hesitant Fuzzy Prioritized Operators and Their Application in Multiple Attribute Decision Making. Knowledge-Based Systems, 96, 115-119.
https://doi.org/10.1016/j.knosys.2015.12.023

[21]   Zhang, Z. (2013) Hesitant Fuzzy Power Aggregation Operators and Their Application to Multiple Attribute Group Decision Making. Information Science, 234, 150-181.
https://doi.org/10.1016/j.ins.2013.01.002

[22]   Zhu, B., Xu, Z.S. and Xia, M.M. (2012) Hesitant Fuzzy Geometric Bonferroni Means. Information Sciences, 205, 72-85.
https://doi.org/10.1016/j.ins.2012.01.048

[23]   Zhu, B. and Xu, Z.S. (2013) Hesitant Fuzzy Bonferroni Means for Multi-Criteria Decision Making. Journal of the Operational Research Society, 64, 1831-1840.
https://doi.org/10.1057/jors.2013.7

[24]   Yu, D.J. (2015) Hesitant Fuzzy Multi-Criteria Decision Making Methods Based on Heronian Mean. Technological & Economic Development of Economy, 23, 1-20.
https://doi.org/10.3846/20294913.2015.1072755

[25]   Yu, D., Wu, Y. and Zhou, W. (2011) Multi-Criteria Decision Making Based on Choquet Integral under Hesitant Fuzzy Environment. Journal of Computational Information Systems, 7, 4506-4513.

[26]   Liao, Z.Q., Liao, H.C., Tang, M., Al-Barakati, A. and Llopis-Albert, C. (2020) A Choquet Integral-Based Hesitant Fuzzy Gained and Lost Dominance Score Method for Multi-Criteria Group Decision Making Considering the Risk Preferences of Experts: Case Study of Higher Business Education Evaluation. Information Fusion, 62, 121-133.
https://doi.org/10.1016/j.inffus.2020.05.003

[27]   Qin, J., Liu, X. and Pedrycz, W. (2016) Frank Aggregation Operators and Their Application to Hesitant Fuzzy Multiple Attribute Decision Making. Applied Soft Computing, 41, 428-452.
https://doi.org/10.1016/j.asoc.2015.12.030

[28]   Xu, Z.S. (2005) An Overview of Methods for Determining OWA Weights. International Journal of Intelligent Systems, 20, 843-865.
https://doi.org/10.1002/int.20097

[29]   Zhou, W. (2014) An Accurate Method for Determining Hesitant Fuzzy Aggregation Operator Weights and Its Application to Project Investment. International Journal of Intelligent Systems, 29, 668-686.
https://doi.org/10.1002/int.21651

[30]   Bedregal, B., Reiser, R., Bustince, H., Lopez-Molina, C. and Torra, V. (2014) Aggregation Functions for Typical Hesitant Fuzzy Elements and the Action of Automorphisms. Information Science, 255, 82-99.
https://doi.org/10.1016/j.ins.2013.08.024

[31]   He, Y., He, Z., Wang, G. and Chen, H. (2015) Hesitant Fuzzy Power Bonferroni Means and Their Application to Multiple Attribute Decision Making. IEEE Transactions on Fuzzy Systems, 23, 1655-1668.
https://doi.org/10.1109/TFUZZ.2014.2372074

[32]   Liao, H.C., Xu, Z.S. and Xia, M.M. (2014) Multiplicative Consistency of Hesitant Fuzzy Preference Relation and Its Application in Group Decision Making. International Journal of Information Technology & Decision Making, 13, 47-76.
https://doi.org/10.1142/S0219622014500035

[33]   Xu, Z.S. (2000) On Consistency of the Weighted Geometric Mean Complex Judgement Matrix in AHP. European Journal of Operational Research, 126, 683-687.
https://doi.org/10.1016/S0377-2217(99)00082-X

[34]   Wang, Y.M. (1998) Using the Method of Maximizing Deviations to Make Decision for Multiindices. System Engineering and Electronics, 7, 24-26.

 
 