On the Matrices of Pairwise Frequencies of Categorical Attributes for Objects Classification

1. Introduction

The solution to the classification problem reduces to calculating a function that divides a training sample (TRS) into classes while simultaneously achieving acceptable classification accuracy on a test sample (TS). In most existing methods, the algorithms for calculating these functions have considerable computational complexity [1] [2] [3] . In previous work [4] , the method of invariants (MI) was proposed, in which this function is a linear combination of the simplest functions of the values of each feature, which qualitatively simplifies the computation algorithm. It was shown in [5] that the MI corresponds to models of the sensory processes of animals, which aim to recognize an object’s class by searching for a prototype in the information accumulated in the brain.

The MI proceeds from the fact that in classification problems, the accuracy of the data plays a special role: the objects, their descriptions, and their classes are correlated, and each of these entities has a random component. Therefore, a given data matrix is just one possible random realization among the matrices that form a set of invariants with respect to the class. This approach is consistent with the concept proposed by L. Zadeh, which says that for most manually solved tasks, high accuracy is not required because the brain perceives only a “trickle of information” about the external world [6] . Moreover, for systems whose complexity exceeds a certain threshold, accuracy and practical sense are almost mutually exclusive characteristics.

In the MI, the range of each attribute’s values is randomized by adding a component that follows a uniform distribution and is then divided into an equal number of intervals, within which the feature values are assumed to be equiprobable. Every object falling within an interval receives an index for the corresponding attribute equal to the interval number.

For each index, one can find the lists of numbers of TRS objects of a certain class and then calculate the frequencies of the indices. With some error, these frequencies will be the same for the objects in the TRS and the TS because both samples belong to the same general population. Therefore, it is possible to estimate the probability of the individual attribute values of any object in each class. Then, using the simplest form of the total probability formula, one can estimate the probability of an object having a specific set of feature values. Finally, the class of the object is determined based on the maximum likelihood principle.

There is an obvious analogy between indices and categories, the values of which can always be described by a finite sequence of integers 1, 2... Therefore, the MI serves as the basis for this article, in which two algorithms are proposed: one implements the simplest version of the MI developed for quantitative attributes, and the other more fully takes the features of categorical attributes into account.

The efficiency of the new algorithms was tested on five databases [7] .

2. Assumptions and Preliminaries

The article is devoted to solving classification problems for which all attributes are categorical. The solution is based on two MI assumptions:

· The data matrix has a set of invariants with respect to a class of objects.

· Object classes differ in the attribute probability distributions.

For categorical attributes, the number of values, or levels, n that individual objects can take is an important characteristic of the problem. In real tasks, the value of ${n}_{q}$ for quantitative attributes, as a rule, considerably exceeds ${n}_{c}$ —the corresponding value for categorical attributes. According to the theory proposed by C. Shannon, the amount of information per feature value is proportional to ${\mathrm{log}}_{2}\left(n\right)$, so a quantitative feature carries ${\mathrm{log}}_{2}\left({n}_{q}\right)/{\mathrm{log}}_{2}\left({n}_{c}\right)$ times more information per value than a categorical one. Therefore, in tasks involving categorical features, the “information load” on each data value often increases several fold. This circumstance manifests itself as an increase in the number of objects of different classes that have the same attribute values. This reduces the difference between the attribute frequencies for objects of different classes, which can lead to an increase in the number of classification errors.

However, categorical attributes also have “favorable” features. The probability of an object of a certain class is an unknown function of its attributes that takes into account the interrelations among all the attributes. Usually, this function depends nonlinearly on the attribute values of the object. This relationship is indirectly taken into account in the accepted assumption of the MI, since the frequencies of attribute indices are calculated for a particular class of objects. The dependence then becomes linear, which greatly simplifies the algorithm’s calculations. The first algorithm takes the same approach for categorical attributes, whose values are, as noted above, analogs of indices.

The second algorithm considers the peculiarities of categorical attributes in a different way and is based on a new treatment of attribute relationships. Usually, the relationship between random variables is estimated using the Pearson correlation coefficient or a rank correlation coefficient. However, within the framework of this method, we are interested in the frequencies of attributes that take a relatively small number of values. The paper further shows that the pairwise frequencies of features allow an approximate assessment of the relationship between the features of objects of the same class (note that, as a rule, only a weak correlation exists between the categorical features of objects in the same class).

However, pairwise frequencies do not allow the determination of the class of a TS object if no object in the TRS has the same combination of attribute values. To classify such objects, this algorithm uses an analog of the k-nearest neighbors method: the object is assigned to the class for which the total number of near TRS objects, counted for each attribute, is maximized.

3. Two Algorithms for Solving the Classification Problem

3.1. Statement and Basic Algorithm

Let the vectors ${X}_{k},k\in \left(1,N\right)$ describe the values of the categorical attributes of objects, which form the TRS $\left\{\left({X}_{s},{y}_{s}\right)|s\in \left(1,M\right)\right\}$, where y is the vector of the object class labels, M is the number of objects, and missing data are excluded. Without loss of generality, we assume that the values of the attributes ${X}_{k},k\in \left(1,N\right)$ and the classes (possibly after preliminary encoding) belong to the sets of integers $j\in \left(1,{n}_{k}\right)$ and $i\in \left(1,C\right)$, respectively, where ${n}_{k}\ll M$ is the number of values of attribute ${X}_{k}$ and C is the number of classes. The problem is to classify the TS objects.

We denote object s by ${x}_{s}={\left({x}_{s1},\cdots ,{x}_{sN}\right)}^{\text{T}}$ and the data matrix by $Q={\Vert {x}_{sk}\Vert}_{M\times N}$. Consider the algorithm for the basic MI (algorithm 1). Using matrix $Q$, we find the lists ${\omega}_{i}=\left\{s|s\in \left(1,M\right),{y}_{s}=i\right\}$ of the numbers of objects of class $i\in \left(1,C\right)$. The sample probability of objects in class i is determined by the obvious dependence:

${p}_{i}\left(x\right)=p\left\{{X}_{1}={x}_{s1},\cdots ,{X}_{N}={x}_{sN}|s\in {\omega}_{i}\right\}$. (1)

This dependence allows finding, among the objects of class i, those whose attribute k takes the value ${x}_{k}=j$. Let ${r}_{kj}\ge 0$ denote the number of such objects. Then, the frequency of a value j of an attribute k for the TRS objects of class i equals ${\left({f}_{kj}\right)}_{i}={r}_{kj}/{l}_{i}$, where ${l}_{i}=\left|{\omega}_{i}\right|$.

Object $x$ arises as a result of the appearance of each attribute k with the corresponding value j. Since these events form a complete group of mutually exclusive events, the total probability formula gives an estimate of the probability that an object belongs to class i:

${p}_{i}\left(x\right)=\frac{1}{N}{\displaystyle {\sum}_{k=1}^{N}{\left({f}_{kj}\right)}_{i}}$, (2)

where j is the value of attribute k for object $x$.

Formulas (1) and (2) yield a class probability estimate for the TRS objects. Since the TRS and TS belong to a single general population, these formulas also determine the frequencies for the TS objects. According to the maximum likelihood principle, the calculated class of an object $x$ is

$I\left(x\right)=\mathrm{arg}{\mathrm{max}}_{i\in \left(1,C\right)}{p}_{i}\left(x\right).$ (3)
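As a minimal sketch, algorithm 1 can be implemented directly from formulas (1)-(3). The function names and the data layout (objects as lists of integer-coded attribute values) are illustrative assumptions, not part of the original method description:

```python
from collections import defaultdict

def train_frequencies(X, y):
    """Per-class value frequencies (f_kj)_i for each attribute k.
    X: list of objects (each a list of integer attribute values);
    y: list of class labels."""
    counts = defaultdict(lambda: defaultdict(int))  # counts[i][(k, j)] = r_kj
    sizes = defaultdict(int)                        # l_i = |omega_i|
    for obj, cls in zip(X, y):
        sizes[cls] += 1
        for k, j in enumerate(obj):
            counts[cls][(k, j)] += 1
    return {i: {kj: r / sizes[i] for kj, r in c.items()}
            for i, c in counts.items()}

def classify(freqs, obj):
    """Formulas (2)-(3): average attribute frequency per class, then argmax."""
    N = len(obj)
    scores = {i: sum(f.get((k, j), 0.0) for k, j in enumerate(obj)) / N
              for i, f in freqs.items()}
    return max(scores, key=scores.get)
```

For example, with a TRS of four two-attribute objects `[[1, 1], [1, 2], [2, 2], [2, 1]]` labeled `[0, 0, 1, 1]`, the object `[1, 1]` is assigned class 0 and `[2, 2]` class 1.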

3.2. Features of the Object Probability Density Model

Essentially, the MI is based on the assumption that a class of objects can be recognized by the probability distribution of its attributes. According to (2), the probability ${p}_{i}\left(x\right)$ receives a point estimate equal to the average frequency of the attributes of object $x$ in class i. Thus, the empirical frequency distribution of the features is transformed into a frequency distribution of the objects. Therefore, the MI considers the composition of the attribute distributions as the probability distribution for objects of a particular class.

We investigate the characteristics of this distribution in the case of two attributes that have typical forms of attribute frequency distributions. Our analysis showed that the distribution of each attribute can be considered a sample from a theoretical distribution described by a unimodal law whose maximum is located either in the middle or at the “tails” of the distribution.

Consider the following task. Let objects have two categorical attributes, the values of which describe random variables Y and Z with probability densities

${\phi}_{y}\left(y\right)=\frac{b}{{a}^{2}+{y}^{2}}$ and ${\phi}_{z}\left(z\right)=c+dz+g{z}^{2}$, respectively, where $y\in \left(0,n\right)$, $z\in \left(0,n\right)$, and $a,b,c,d,g$ and n are parameters. From formula (2), the random variable $U=\left(Y+Z\right)/2$ is the composition of Y and Z, which simulates the total distribution of the objects. We are interested in the features of this distribution.

Note that the functions ${\phi}_{y}\left(y\right)$ and ${\phi}_{z}\left(z\right)$ allow us to obtain an analytical solution for the distribution composition of the above types of attributes. Since these functions determine the corresponding density distribution, their parameters are related by the following:

${\int}_{0}^{n}{\phi}_{Y}\left(y\right)\text{d}y=1$, ${\int}_{0}^{n}{\phi}_{Z}\left(z\right)\text{d}z=1$.

Obviously, $U=\tilde{Y}+\tilde{Z}$, where $\tilde{Y}=Y/2$ and $\tilde{Z}=Z/2$ are random variables [8] . Given that the density ${\phi}_{\tilde{y}}\left(\tilde{y}\right)={\phi}_{y}\left(\mu \left(\tilde{y}\right)\right){\mu}^{\prime}\left(\tilde{y}\right)$ with $\mu \left(\tilde{y}\right)=2\tilde{y}$, we obtain ${\phi}_{\tilde{y}}\left(\tilde{y}\right)=\frac{2b}{{a}^{2}+4{\tilde{y}}^{2}}$. Similarly, we find that ${\phi}_{\tilde{z}}\left(\tilde{z}\right)=2\left(c+2d\tilde{z}+4g{\tilde{z}}^{2}\right)$.

The density ${\phi}_{U}\left(u\right)$ is the convolution of the functions ${\phi}_{\tilde{Y}}$ and ${\phi}_{\tilde{Z}}$ :

${\phi}_{U}\left(u\right)={\displaystyle {\int}_{0}^{u}{\phi}_{\tilde{Y}}\left(\tilde{y}\right){\phi}_{\tilde{Z}}\left(u-\tilde{y}\right)\text{d}\tilde{y}}$.

The range of u is divided into two segments: $0\le u\le n/2$ and $n/2<u\le n$. Because $0\le \tilde{z}\le n/2$, the lower and upper limits of the integral are 0 and u for the first segment and $u-n/2$ and u for the second segment, respectively. Then, we can obtain the formula for calculating the density:

${\phi}_{U}\left(u\right)=4b{\displaystyle {\sum}_{q=1}^{3}{A}_{q}{w}_{q}\left(u\right)}$,

where ${A}_{1}=c+2du+4g{u}^{2}$, ${A}_{2}=-\left(2d+8gu\right)$, ${A}_{3}=4g$ and

${w}_{q}\left(u\right)=\begin{cases}{\displaystyle {\int}_{0}^{u}\frac{{\tilde{y}}^{q-1}}{{a}^{2}+4{\tilde{y}}^{2}}\text{d}\tilde{y}}, & 0\le u\le n/2\\ {\displaystyle {\int}_{u-n/2}^{u}\frac{{\tilde{y}}^{q-1}}{{a}^{2}+4{\tilde{y}}^{2}}\text{d}\tilde{y}}, & n/2<u\le n\end{cases}$

The integrands are tabulated functions; for brevity, their explicit antiderivatives are not given.
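The composition density can also be checked numerically. The sketch below convolves the two scaled densities $\phi_{\tilde{Y}}$ and $\phi_{\tilde{Z}}$ on a grid and verifies that the result integrates to approximately 1; the specific parameter values (n, a, d, g) are illustrative assumptions, with b and c chosen so that each original density integrates to 1:

```python
import numpy as np

# Hypothetical parameter choice; b and c normalize the densities on (0, n)
n, a = 4.0, 1.0
b = a / np.arctan(n / a)            # makes int_0^n b/(a^2 + y^2) dy = 1
d, g = 0.0, 0.01
c = (1.0 - g * n**3 / 3.0) / n      # makes int_0^n (c + d*z + g*z^2) dz = 1

# Densities of the scaled variables Y~ = Y/2 and Z~ = Z/2 on (0, n/2)
m = 4000
t = np.linspace(0.0, n / 2.0, m)
dy = t[1] - t[0]
phi_yt = 2.0 * b / (a**2 + 4.0 * t**2)
phi_zt = 2.0 * (c + 2.0 * d * t + 4.0 * g * t**2)

# Discrete convolution approximates phi_U on (0, n)
phi_u = np.convolve(phi_yt, phi_zt) * dy
total = phi_u.sum() * dy            # should be close to 1
```

Plotting `phi_u` against the grid `np.linspace(0, n, phi_u.size)` reproduces the qualitative behavior discussed around Figure 1: the composition is pulled toward the region where $\phi_{y}$ is large.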

Calculations were performed for a wide range of parameters. The results are illustrated in Figure 1, where the density ${\phi}_{u}$ is determined for the case in which the density ${\phi}_{z}$ follows a normal distribution and the ${\phi}_{y}$ distribution is close to hyperbolic. The figure shows that, with respect to the curve ${\phi}_{z}$, the ordinates of curve ${\phi}_{u}$ increase in the region of high values of the density ${\phi}_{y}$ and decrease in sections with low values. Consequently, the function ${\phi}_{u}$ does not follow a normal distribution. However, confidence intervals for continuous random variables can be estimated only for normal distributions.

This analysis shows that the composition of the distributions of individual attributes results in a poorly predictable distribution for a given class of objects. Thus, the effectiveness of the various MI algorithms depends on the data characteristics of a particular task and can be tested only empirically.

3.3. Algorithm 2

Algorithm 1 reduces the MI assumption that the individual classes of objects come from different distributions of features to classifying objects according to the frequencies of the categorical attribute values. However, another variant of the approximate realization of this assumption is also possible.

Figure 1. Density curves of the random variables Y, Z and U ( $\phi y,\phi z$ and $\Phi $ correspond to ${\phi}_{y},{\phi}_{z}$ and ${\phi}_{u}$ ).

For any type of attribute, the probability of an arbitrary object $x={\left({x}_{1},\cdots ,{x}_{N}\right)}^{\text{T}}$ of class i is determined by the following relation:

$P\left(x\right)={p}_{1}\left({x}_{1}\right){p}_{2}\left({x}_{2}|{x}_{1}\right){p}_{3}\left({x}_{3}|{x}_{1},{x}_{2}\right)\cdots {p}_{N}\left({x}_{N}|{x}_{1},\cdots ,{x}_{N-1}\right)$, (4)

where ${p}_{k}\left({x}_{k}|{x}_{1},\cdots ,{x}_{k-1}\right)$ is the conditional probability of an attribute ${X}_{k}$ at values ${x}_{1},\cdots ,{x}_{k-1}$ of attributes ${X}_{1},\cdots ,{X}_{k-1}$. Here, ${p}_{1}\left({x}_{1}\right)$ is found by formula (1).

Consider the features of this dependence for categorical attributes. The elements of the Cartesian product of the attributes ${X}_{k}$ and ${X}_{k+1}$, $k\in \left(1,N-1\right)$, are the ordered pairs $\left({\tilde{x}}_{k},{\tilde{x}}_{k+1}\right)$, where ${\tilde{x}}_{k}\in \left(1,{n}_{k}\right)$ and ${\tilde{x}}_{k+1}\in \left(1,{n}_{k+1}\right)$. Let ${e}_{gw}^{i}$ be the number of objects of class i for which attribute k takes the value g and attribute $k+1$ takes the value w. The frequencies ${f}_{gw}^{i}={e}_{gw}^{i}/{l}_{i}$ form the matrix

${R}_{k,k+1}^{i}={\Vert {f}_{gw}^{i}\Vert}_{{n}_{k+1}\times {n}_{k}}$,

which constitutes a matrix of pairwise frequencies (MPF) for the attributes k and $k+1$ for the TRS objects of class i. There are $N-1$ MPFs for each class. Since the TRS and TS belong to the same general population, these matrices describe the properties of both the TRS and the TS objects. Then, from formula (4), we obtain the approximate dependence for estimating the probability that object $x$ belongs to class i:

${P}_{i}\left(x\right)={p}_{1}\left({x}_{1}\right){f}_{{x}_{1},{x}_{2}}^{i}{f}_{{x}_{2},{x}_{3}}^{i}\cdots {f}_{{x}_{N-1},{x}_{N}}^{i}$. (5)

In formula (5), ${f}_{{x}_{k},{x}_{k+1}}^{i}$ is the element of matrix ${R}_{k,k+1}^{i}$ that corresponds to the frequency of the attribute pair values k and $k+1$ of an object in class i. The estimated class of this object is determined by an analog of formula (3):

$\stackrel{\u02dc}{I}\left(x\right)=\mathrm{arg}{\mathrm{max}}_{i\in \left(1,C\right)}{P}_{i}\left(x\right).$ (6)
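A sparse dictionary representation is one convenient way to sketch algorithm 2. The helper names and the per-class storage scheme below are assumptions made for illustration; the scoring follows formulas (5)-(6), returning `None` when ${P}_{i}\left(x\right)=0$ for every class (the "undefined" case treated in the next subsection):

```python
from collections import defaultdict

def train_mpf(X, y):
    """Pairwise frequencies f^i_{g,w} for neighboring attributes (k, k+1),
    stored sparsely per class, plus the first-attribute frequencies p1."""
    pair = defaultdict(lambda: defaultdict(int))   # pair[i][(k, g, w)]
    first = defaultdict(lambda: defaultdict(int))  # first[i][x1]
    sizes = defaultdict(int)                       # l_i
    for obj, cls in zip(X, y):
        sizes[cls] += 1
        first[cls][obj[0]] += 1
        for k in range(len(obj) - 1):
            pair[cls][(k, obj[k], obj[k + 1])] += 1
    mpf = {i: {kgw: e / sizes[i] for kgw, e in p.items()} for i, p in pair.items()}
    p1 = {i: {j: r / sizes[i] for j, r in f.items()} for i, f in first.items()}
    return mpf, p1

def classify_mpf(mpf, p1, obj):
    """Formulas (5)-(6); returns None when P_i(x) = 0 for every class."""
    scores = {}
    for i in mpf:
        p = p1[i].get(obj[0], 0.0)
        for k in range(len(obj) - 1):
            p *= mpf[i].get((k, obj[k], obj[k + 1]), 0.0)
        scores[i] = p
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A missing pair of neighboring values in the TRS of every class zeroes all the products, which is exactly the sparsity problem that motivates Section 3.4.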

3.4. Improving the Accuracy of Algorithm 2

From formula (5), it follows that ${P}_{i}\left(x\right)=0$ if one of the factors ${f}_{{x}_{k},{x}_{k+1}}^{i}=0$. Such a case occurs when no object among the TRS objects of class i has the same pair of attribute values. The total number of possible combinations of attribute values is $v={n}_{1}{n}_{2}\cdots {n}_{N}$ and, as a rule, $v\gg {l}_{i}$. Therefore, the MPFs often contain zero elements and can be sparse.

If ${P}_{i}\left(x\right)=0$ for all i, then uncertainty arises, since formula (5) “does not work”. Note that when applying algorithm 1, such situations are practically excluded. The MI serves as the basis for eliminating this uncertainty, since it assumes that many data matrices exist that are invariant with respect to a class of objects. It can be assumed that under invariant transformations, the relative positions of the attribute values of the TRS objects are preserved near the singular points corresponding to the attribute values of an “undefined” object. Consequently, we can use the idea underlying the k-nearest neighbors method for solving classification problems.

We assume that an “undefined” object has the class to which most of the k-nearest TRS objects belong. Since the concept of distance between objects is not defined in the MI, we instead evaluate the “proximity” for each attribute value of an “undefined” object.

Let Z be the set of TS objects whose class could not be determined using formula (5), and let $z={\left({z}_{1},\cdots ,{z}_{N}\right)}^{\text{T}}\in Z$. The goal is to find the TRS objects of class i whose attributes ${X}_{k}$ lie in the h-neighborhoods of ${z}_{k}$, $k\in \left(1,N\right)$. The numbers of these objects form the set $D=\left\{t|\left|{x}_{tk}-{z}_{k}\right|\le h,t\in {\omega}_{i}\right\}$, and their frequency is ${T}_{ik}\left({z}_{k},h\right)=\frac{\left|D\right|}{\left|{\omega}_{i}\right|}$. Having calculated these frequencies, we can find the average frequency ${\bar{T}}_{i}\left(z,h\right)$ over all the attributes of object $z$ in class i. Then, the calculated class of object $z$ is

$\bar{I}\left(z,h\right)=\mathrm{arg}{\mathrm{max}}_{i\in \left(1,C\right)}{\bar{T}}_{i}\left(z,h\right),$ (7)

where h is a parameter whose domain is the set of integers $\left\{1,\cdots ,\tilde{n}\right\}$, where $\tilde{n}=\mathrm{min}\left({n}_{k}\right)$.

Let ${1}_{i}\left(z,h\right)$ be an indicator that equals 1 when the calculated class is not equal to the class of object $z$ and 0 otherwise. Then, the number of incorrectly classified objects in the set Z equals

$F\left(h\right)={\displaystyle {\sum}_{z\in Z}{1}_{i}\left(z,h\right)}$. (8)

The calculated value of the parameter h, denoted by $\tilde{h}$, and the corresponding value $\bar{I}\left(z,\tilde{h}\right)$ are found by minimizing the number of incorrectly classified “undefined” objects:

$\tilde{h}=\mathrm{arg}{\mathrm{min}}_{h\in \left(1,\tilde{n}\right)}F\left(h\right)$. (9)
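For a fixed h, the nearest-neighbor analog of formula (7) can be sketched as follows; in practice h would then be chosen by minimizing $F\left(h\right)$ as in (9). The function name is an assumption, and the neighborhood test $\left|{x}_{tk}-{z}_{k}\right|\le h$ treats the category codes as integers, as the text does:

```python
def fallback_class(X, y, z, h):
    """Analog of formula (7): for each attribute k, count the TRS objects
    of class i whose value lies within h of z_k, average the per-attribute
    frequencies T_ik over the attributes, and take the best class."""
    best, best_score = None, -1.0
    for i in set(y):
        members = [x for x, c in zip(X, y) if c == i]  # omega_i
        score = 0.0
        for k, zk in enumerate(z):
            near = sum(1 for x in members if abs(x[k] - zk) <= h)
            score += near / len(members)               # T_ik(z_k, h)
        score /= len(z)                                # average over attributes
        if score > best_score:
            best, best_score = i, score
    return best
```

For instance, with TRS objects `[[1, 1], [2, 2], [5, 5], [6, 6]]` labeled `[0, 0, 1, 1]` and `h = 1`, the "undefined" object `[2, 1]` is assigned class 0 and `[5, 6]` class 1.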

4. The Effectiveness of New Algorithms

The MI serves as a general conceptual framework for formulas (1)-(3) and (4)-(9), which define algorithms 1 and 2, respectively, for solving the classification problem. The effectiveness of the algorithms was studied with five databases from the UCI repository; the objects in these databases have only categorical features. The characteristics of the databases, given in Table 1, cover rather wide ranges of values for the numbers of objects (267 - 20,000), features (3 - 22) and classes (2 - 26).

The dependencies in (3) and (5) are applicable not only to the TS but also to the TRS. Therefore, we calculated the test error rate, ${f}_{c}$, and the training error rate, ${f}_{l}$. All the calculations were performed using a cross-validation procedure. Each database was divided into 10 datasets of approximately equal size. The first 9 datasets were used as the TRS, and the remaining dataset was used for testing. This procedure was applied 10 times; consequently, for each database, a sequence of 10 pairs of TRS and TS variants was considered. For each partitioning variant $m\in \left(1,10\right)$, we calculated the error rates ${f}_{cm}$ and ${f}_{lm}$.
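The cross-validation procedure described above can be sketched as follows; the interleaved fold assignment and the `fit`/`predict` interface are illustrative assumptions rather than details from the paper:

```python
def ten_fold_error_rates(X, y, fit, predict):
    """10-fold cross-validation: split the data into 10 roughly equal parts,
    train on 9, test on 1, and record the test (f_cm) and training (f_lm)
    error rates for each of the 10 partitioning variants."""
    M = len(X)
    folds = [list(range(m, M, 10)) for m in range(10)]  # interleaved folds
    rates = []
    for test_idx in folds:
        test = set(test_idx)
        Xtr = [X[s] for s in range(M) if s not in test]
        ytr = [y[s] for s in range(M) if s not in test]
        model = fit(Xtr, ytr)
        f_c = sum(predict(model, X[s]) != y[s] for s in test_idx) / len(test_idx)
        f_l = sum(predict(model, x) != c for x, c in zip(Xtr, ytr)) / len(Xtr)
        rates.append((f_c, f_l))
    return rates
```

Either algorithm can be plugged in through `fit` and `predict`; averaging the 10 pairs in `rates` gives the values E reported in Table 1, and their spread gives St.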

The ${f}_{cm}$ and ${f}_{lm}$ curves for the different databases are shown in Figure 2 and Figure 3, respectively. The graphs are identified by an ordered pair a_b, where a is the first letter of the database name and b is the algorithm identifier. For these rates, the average values E and the standard deviations St are given in Table 1.

The Car Evaluation and Spect databases have no “undefined” objects; for them, the functions $F\left(h\right)$ were not calculated. Figure 4 depicts the curves $F{b}_{h}$, $F{h}_{h}$ and $F{l}_{h}$, which reflect the features of these functions for the Breast Cancer, Haberman’s Survival and Letter Image databases, respectively.

Table 1. Database characteristics and calculation results.

Figure 2. Frequency distributions of the test errors ${f}_{cm}$ for algorithms 1 and 2.

Figure 3. Frequency distributions of the training errors ${f}_{lm}$ for algorithms 1 and 2.

Figure 4. Graphs of the function $F\left(h\right)$ for the Breast Cancer, Haberman’s Survival and Letter Image databases.

Below, we summarize the main results of the calculations:

1) With some exceptions, the error rate curves do not undergo drastic changes under the sequential changes in the composition of the TRS and TS objects during cross-validation. Both algorithms yield fairly stable results: in most cases, the error variances for the TS and TRS are relatively small ( $St/E<1$ ). The most stable results were obtained for algorithm 2, where $St/E<0.4$ for the TS. We note that the number of test errors is typically considerably higher than the number of training errors.

2) Algorithm 2, as a rule, is much more accurate than algorithm 1. This is well illustrated in Figure 2, where almost all the dotted lines corresponding to algorithm 1 are concentrated in the upper part. We conclude that considering the pairwise frequencies of attributes makes it possible to more accurately differentiate the latent properties of objects of different classes. For algorithm 2, the minimum values of the mean error E are 0.076 and 0.016 for the test and training samples, respectively.

3) In many cases, the introduction of the function $F\left(h\right)$ and the corresponding reduction in the number of “undefined” objects can lead to significant increases in the efficiency of the MPF and in the accuracy of the solution.

We can conclude that these experiments confirm the operability of both algorithms.

5. Conclusions

The paper proposes two new algorithms based on the MI for classifying objects with categorical features. Both algorithms originate from the same assumption: that the objects in each class differ in their attribute probability distributions, but the algorithms use different models to approximate these distributions. Under this assumption, an object’s class is defined by the individual frequencies of its attribute values rather than by the nonlinear functions of attribute values used in most existing methods. This characteristic explains the comparative simplicity of the proposed algorithms.

It has been established that, along with the correlation between categorical attributes, a functional relationship exists between the attribute values of objects belonging to one class, which is characterized by the frequencies of pairwise attribute values. This set of frequencies forms an MPF, which is calculated from the TRS objects for each class and pair of neighboring attributes. In one of the algorithms, the MPF is used in conjunction with an analog of the k-nearest neighbors method. This addition allows one to determine the class of a TS object when the TRS contains no object with the same combination of attribute values.

It can be expected that the MPF can also be applied to solve problems with quantitative attributes because the values (with some error) can be represented by integers corresponding to the data description with a coarser measuring scale.

An experimental examination has shown that algorithm 2, using the MPF, provides more reliable results than does algorithm 1.

References

[1] Bishop, C. (2006) Pattern Recognition and Machine Learning. Springer, Berlin, 738.

[2] Hastie, T., Tibshirani, R. and Friedman, J. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd Edition, Springer, Berlin, 764.

[3] Murphy, K. (2012) Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge, Massachusetts, 1098.

[4] Shats, V.N. (2017) Classification Based on Invariants of the Data Matrix. Journal of Intelligent Learning Systems and Applications, 9, 35-46. https://doi.org/10.4236/jilsa.2017.93004

[5] Shats, V.N. (2018) The Classification of Objects Based on a Model of Perception. Studies in Computational Intelligence, 736, 125-131. https://doi.org/10.1007/978-3-319-66604-4_19

[6] Zadeh, L. (1979) Fuzzy Sets and Information Granularity. In: Gupta, N., Ragade, R. and Yager, R., Eds., Advances in Fuzzy Set Theory and Applications, World Science Publishing, Amsterdam, 3-18.

[7] Asuncion, A. and Newman, D.J. (2007) UCI Machine Learning Repository. University of California, Irvine.

[8] Hogg, R.V., Tanis, E.A. and Zimmerman, D. (2015) Probability and Statistical Inference. 9th Edition, Pearson, London, 557.