Modeling Methods in Clustering Analysis for Time Series Data

1. Introduction

Cluster analysis is a family of statistical methods for partitioning data elements into groups that are homogeneous within each group (cluster) and heterogeneous between groups. Everitt and Skrondal [1] define it as "a set of methods for constructing a (hopefully) sensible and informative classification of an initially unclassified set of data, using the variable values observed on each individual," noting that all such methods essentially try to imitate what the eye-brain system does so well in two dimensions. Because of this characteristic, cluster analysis has been used in many applied fields: it divides and classifies data into aggregates, which helps select an appropriate statistical analysis of those data as a decision-making tool. The objective is to partition a data matrix containing n samples on p variables into k homogeneous subgroups by collecting similar, convergent sample items into clusters. Thereafter, criteria and measures are needed to compare competing clusterings on two main points: the similarity of the data elements within clusters and the optimal number of clusters. This is done through objective functions known as cluster validity criteria. In this paper, one of the most important model-based clustering techniques, Gaussian mixture model-based clustering, is used. Mixture-model clustering can predict accurately if an appropriate covariance model is chosen; here it is applied with four covariance parameterizations. The covariance matrices of the Gaussian mixture model are unknown, so these parameters are estimated by maximizing the log-likelihood function.
Direct maximization of the log-likelihood function is complicated, so the maximum likelihood estimator (MLE) of a finite mixture model is usually obtained via the EM algorithm (Dempster et al. [2]).

Banfield and Raftery [3] proposed a model-based clustering method based on constraining these geometric features of components using the eigenvalue decomposition of the covariance matrix.

Different constraints on the covariance matrix provide different models suited to different data structures, which is another advantage of model-based clustering. In 1995, Celeux and Govaert [4] classified these models into three main families: spherical, diagonal, and general. They gave the definitions and derivations of all 14 available models, along with the covariance-matrix update equations used in the EM algorithm. However, only nine of those models have a closed-form solution to the covariance update equation, which is evaluated in the M-step of the EM algorithm.

Later, in 2016, Chi et al. [5] showed that the population likelihood function has bad local maxima even in the special case of equally weighted mixtures of well-separated, spherical Gaussians. They proved that the log-likelihood value at these bad local maxima can be arbitrarily worse than that of any global optimum. They also showed that the EM algorithm with random initialization converges to bad critical points with high probability. They further established that the first-order variant of EM almost surely does not converge to strict saddle points, indicating that the poor performance of the first-order method is attributable to bad local maxima rather than to bad saddle points.

Cluster analysis is used in various fields of science. Tóth et al. [6] described gamma-ray bursts (GRBs) using clustering: they analyzed the Final BATSE Catalog with Gaussian-mixture-model-based clustering on six variables (durations, peak flux, total fluence, and spectral hardness ratios) that carry clustering information.

In 2000, Bozdogan [7] studied the basic idea of Akaike’s [8] information criterion (AIC). Then, he presented some recent developments on a new entropic or information complexity (ICOMP) criterion of Bozdogan [9] for model selection.

The main contribution of the present paper is to apply the mixture-model cluster-analysis technique under different covariance structures of the component densities, and to determine the optimal number of clusters by selecting the number corresponding to the lowest values of the different criteria. Four covariance-structure models that have not been applied in previous studies are examined using three information-complexity criteria.

This paper is organized as follows. Section two discusses Gaussian Mixture Model-based Clustering (GMMC). Section three introduces the Expectation-Maximization (EM) algorithm. Section four presents the model selection criteria. Finally, sections five and six contain the numerical results and the conclusion, respectively (Table 1).

2. The Gaussian Mixture Model-Based Clustering (GMMC)

The Gaussian mixture model is a powerful clustering algorithm used in cluster analysis; learning a mixture of Gaussians is the most widely used clustering method of this kind. It assumes that the data come from a certain number of Gaussian distributions, each of which represents a cluster; hence a Gaussian mixture model tends to group together the data points belonging to a single distribution. Gaussian mixture models are probabilistic models and use a soft-clustering approach to distribute the points among clusters. Since the model parameters are difficult to determine directly, the Expectation-Maximization method is used to estimate them.

Given data $X\in {\mathbb{R}}^{n\times p}$ (p-dimensional data of size n), we are interested in estimating the number of clusters K. The observations ${x}_{ij}$ ( $i=1,\cdots ,n$, $j=1,\cdots ,p$ ) are assumed to be drawn from the following mixture of K distributions, each corresponding to a different cluster:

Table 1. Nomenclatures of used parameters.

$f\left(x;\pi ,\theta \right)={\displaystyle \underset{k=1}{\overset{K}{\sum}}{\pi}_{k}{g}_{k}\left(x;{\theta}_{k}\right)}$

Here ${\pi}_{1},\cdots ,{\pi}_{K}$ are the mixing proportions, satisfying ${\pi}_{k}>0$ and ${\sum}_{k=1}^{K}{\pi}_{k}=1$. ${\theta}_{k}$ is the vector of unknown parameters of the kth component, and ${\pi}_{k}$ represents the probability that an observation belongs to the kth component. The Gaussian mixture model assumes that the components of the mixture are multivariate normal distributions, so the density function becomes:

$f\left(x;\pi ,\mu ,\Sigma \right)={\displaystyle \underset{k=1}{\overset{K}{\sum}}{\pi}_{k}{g}_{k}\left(x;{\mu}_{k},{\Sigma}_{k}\right)}$

The mixture components (i.e. clusters) are ellipsoids centered at ${\mu}_{k}$ with other geometric features, such as volume, shape, and orientation, determined by the covariance matrix ${\Sigma}_{k}$. (Titterington et al. [10]).

In this case, the component densities ${g}_{k}$ are given by:

${g}_{k}\left(x;{\mu}_{k},{\Sigma}_{k}\right)={\left(2\pi \right)}^{\frac{-p}{2}}{\left|{\Sigma}_{k}\right|}^{\frac{-1}{2}}\mathrm{exp}\left\{-\frac{1}{2}{\left(x-{\mu}_{k}\right)}^{\text{T}}{\Sigma}_{k}^{-1}\left(x-{\mu}_{k}\right)\right\}$
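The mixture density defined above can be evaluated directly. The paper's computations were carried out in MATLAB (Section 5); the following is an illustrative NumPy sketch, with function names of my own choosing:

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Component density g_k(x; mu_k, Sigma_k)."""
    p = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(sigma, diff)  # (x - mu)' Sigma^{-1} (x - mu)
    return (2 * np.pi) ** (-p / 2) * np.linalg.det(sigma) ** (-0.5) * np.exp(-0.5 * quad)

def mixture_density(x, weights, means, covs):
    """f(x; pi, mu, Sigma) = sum_k pi_k g_k(x; mu_k, Sigma_k)."""
    return sum(w * gaussian_density(x, m, s)
               for w, m, s in zip(weights, means, covs))
```

Using `np.linalg.solve` instead of forming the inverse explicitly keeps the quadratic form numerically stable.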

Parsimonious parameterizations of the covariance matrices can be obtained by using the eigenvalue decomposition of the covariance matrix. The eigenvalue decomposition of the kth covariance matrix is given as:

${\Sigma}_{k}={\lambda}_{k}{D}_{k}{A}_{k}{D}_{k}^{\text{T}}$

where: ${\lambda}_{k}$ is a scalar controlling the volume of the ellipsoid.

${A}_{k}$ is a diagonal matrix specifying the shape of the density contours, with $\mathrm{det}\left({A}_{k}\right)=1$.

${D}_{k}$ is an orthogonal matrix determining the orientation of the corresponding ellipsoid (Banfield and Raftery [3] and Celeux and Govaert [4]).
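The geometric meaning of the decomposition can be illustrated by assembling a two-dimensional component covariance from a volume, a shape, and an orientation angle. The function name and parameterization below are illustrative, not from the paper:

```python
import numpy as np

def make_covariance(volume, shape, angle):
    """Sigma_k = lambda_k * D_k * A_k * D_k' for a 2-d component:
    volume -> scalar lambda_k; shape -> diagonal A_k normalised so det(A_k)=1;
    angle  -> rotation matrix D_k (orthogonal)."""
    a = np.asarray(shape, dtype=float)
    A = np.diag(a / np.prod(a) ** (1 / len(a)))   # rescale so det(A) = 1
    c, s = np.cos(angle), np.sin(angle)
    D = np.array([[c, -s], [s, c]])               # orthogonal orientation matrix
    return volume * D @ A @ D.T
```

Because det(A) = det(D) = 1, the determinant of the resulting covariance is λ^p, so λ alone controls the volume of the density ellipsoid.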

In one dimension, there are just two models: E for equal variance and V for varying variance. In the multivariate setting, the volume, shape, and orientation of the covariance can be constrained to be equal or variable across groups. Thus, 14 possible models with different geometric characteristics can be specified. Table 2 reports all such models with the corresponding distribution structure type, volume, shape, orientation, and associated model names. See (Erar [11], Gupta and Bhatia [12], Chi et al., [5], Scrucca et al., [13], Malsiner-Walli et al., [14] and Tóth, et al., [6]).

Approaching the clustering problem from this probabilistic standpoint reduces the whole problem to the parameter estimation of a mixture density. The unknown parameters of the Gaussian mixture density are the mixing proportions ${\pi}_{k}$, the mean vectors ${\mu}_{k}$, and the covariance matrices ${\Sigma}_{k}$. Therefore, to estimate these parameters, we need to maximize the log-likelihood given by:

$\mathrm{log}L\left(\theta |x\right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}\mathrm{log}\left[{\displaystyle \underset{k=1}{\overset{K}{\sum}}{\pi}_{k}{g}_{k}\left({x}_{i}|{\mu}_{k},{\Sigma}_{k}\right)}\right]}$

The estimates of the mixing proportion, ${\pi}_{k}$, the mean vector ${\mu}_{k}$, and the covariance matrix ${\Sigma}_{k}$ for the kth population are given as:

Table 2. Parameterizations of the covariance matrix and the corresponding geometric features.

${\stackrel{^}{\pi}}_{k}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{I}_{k}\left({\stackrel{^}{y}}_{i}\right)}$

${\stackrel{^}{\mu}}_{k}=\frac{1}{{\stackrel{^}{\pi}}_{k}n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{x}_{i}{I}_{k}\left({\stackrel{^}{y}}_{i}\right)}$

${\stackrel{^}{\Sigma}}_{k}=\frac{1}{{\stackrel{^}{\pi}}_{k}n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\left({x}_{i}-{\stackrel{^}{\mu}}_{k}\right)}^{\prime}\left({x}_{i}-{\stackrel{^}{\mu}}_{k}\right){I}_{k}\left({\stackrel{^}{y}}_{i}\right)}$

where ${I}_{k}\left({\stackrel{^}{y}}_{i}\right)=\{\begin{array}{ll}1,& {\stackrel{^}{y}}_{i}=k\\ 0,& {\stackrel{^}{y}}_{i}\ne k\end{array}$ is the indicator of the estimated group label ${\stackrel{^}{y}}_{i}$.

This estimation requires the non-linear optimization of the mixture likelihood for high-dimensional data sets. However, there is no closed-form solution to $\frac{\partial}{\partial \theta}\mathrm{log}L\left(\stackrel{^}{\theta}|x\right)=0$ for any mixture density, so the likelihood has to be maximized numerically. For this numerical optimization, the Expectation-Maximization (EM) algorithm of Dempster et al. [2] is used, which treats the data as incomplete and the group labels ${y}_{i}$ as missing.

3. The Expectation-Maximization (EM) Algorithm

The expectation-maximization (EM) algorithm is an iterative procedure for finding maximum likelihood estimates when data are incomplete or are treated as incomplete. The standard citation for the EM algorithm is the famous paper by Dempster et al. [2]. The algorithm operates on the "complete data", i.e., the observed data plus the missing data, and iterates E and M steps until convergence. In the E-step, the expected value of the complete-data log-likelihood, say Q, is computed; in the M-step, Q is maximized with respect to the model parameters. The EM algorithm is easy to implement and numerically stable, with reliable global convergence under fairly general conditions. However, the likelihood surface in mixture models tends to have multiple modes, so initialization is crucial: EM usually produces sensible results only when started from reasonable starting values (Wu [15]). In one common initialization approach, hierarchical clusters are obtained by recursively merging the two clusters that yield the smallest decrease in the classification likelihood for the Gaussian mixture model (Banfield and Raftery [3], Xu et al. [16]).

The EM algorithm is an iterative procedure consisting of two alternating steps, given some starting values for all parameters ( ${\stackrel{^}{\pi}}_{k}$, ${\stackrel{^}{\mu}}_{k}$ and ${\stackrel{^}{\Sigma}}_{k}$ ). The algorithm can be summarized as follows at iteration (t + 1):

1) In the E-step, the posterior probability, ${\stackrel{^}{T}}_{ik}$ of the ith observation belonging to the kth component is estimated, given the current parameter estimates.

${\stackrel{^}{T}}_{ik}=\frac{{\stackrel{^}{\pi}}_{k}^{\left(t\right)}{g}_{k}\left({x}_{i}|{\stackrel{^}{\mu}}_{k}^{\left(t\right)},{\stackrel{^}{\Sigma}}_{k}^{\left(t\right)}\right)}{{\displaystyle {\sum}_{{k}^{\prime}=1}^{K}{\stackrel{^}{\pi}}_{{k}^{\prime}}^{\left(t\right)}{g}_{{k}^{\prime}}\left({x}_{i}|{\stackrel{^}{\mu}}_{{k}^{\prime}}^{\left(t\right)},{\stackrel{^}{\Sigma}}_{{k}^{\prime}}^{\left(t\right)}\right)}}.$

2) In the M-step, the parameter estimates of ${\pi}_{k}$, ${\mu}_{k}$ and ${\Sigma}_{k}$ are updated given the estimated posterior probabilities, using the update equations

${\stackrel{^}{\pi}}_{k}^{\left(t+1\right)}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\stackrel{^}{T}}_{ik}}$

${\stackrel{^}{\mu}}_{k}^{\left(t+1\right)}=\frac{1}{n{\stackrel{^}{\pi}}_{k}^{\left(t+1\right)}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{x}_{i}{\stackrel{^}{T}}_{ik}}$

${\stackrel{^}{\Sigma}}_{k}^{\left(t+1\right)}=\frac{1}{n{\stackrel{^}{\pi}}_{k}^{\left(t+1\right)}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\stackrel{^}{T}}_{ik}{\left({x}_{i}-{\stackrel{^}{\mu}}_{k}^{\left(t+1\right)}\right)}^{\prime}\left({x}_{i}-{\stackrel{^}{\mu}}_{k}^{\left(t+1\right)}\right)}$

3) Iterate the first two steps until convergence.
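The two alternating steps above can be sketched end-to-end. The paper's implementation is in MATLAB; below is an illustrative NumPy version for the unconstrained-covariance case (the constrained models of Section 2 would modify only the Σ update in the M-step). The function name and the deliberately crude deterministic initialization are my own assumptions, not the paper's:

```python
import numpy as np

def em_gmm(X, K, n_iter=200, tol=1e-6):
    """EM for a K-component Gaussian mixture with unconstrained covariances.
    Returns mixing proportions, means, covariances and the log-likelihood."""
    n, p = X.shape
    # crude initialization: K points spread evenly through the data as means,
    # the overall covariance for every component, equal mixing proportions
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()
    sigma = np.tile(np.cov(X.T).reshape(p, p), (K, 1, 1))
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probabilities T_ik
        dens = np.empty((n, K))
        for k in range(K):
            d = X - mu[k]
            quad = np.sum(d @ np.linalg.inv(sigma[k]) * d, axis=1)
            dens[:, k] = ((2 * np.pi) ** (-p / 2)
                          * np.linalg.det(sigma[k]) ** (-0.5)
                          * np.exp(-0.5 * quad))
        weighted = dens * pi
        total = weighted.sum(axis=1, keepdims=True)
        T = weighted / total
        ll = np.log(total).sum()
        if ll - prev_ll < tol:          # convergence on the log-likelihood
            break
        prev_ll = ll
        # M-step: update pi_k, mu_k and Sigma_k from the responsibilities
        pi = T.mean(axis=0)
        mu = (T.T @ X) / (n * pi)[:, None]
        for k in range(K):
            d = X - mu[k]
            sigma[k] = (T[:, k, None] * d).T @ d / (n * pi[k]) + 1e-8 * np.eye(p)
    return pi, mu, sigma, ll
```

The small ridge added to each covariance is a numerical safeguard against singular updates; a production implementation would also restart from several initializations, given the bad local maxima discussed in the introduction.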

The EM algorithm requires two issues to be addressed; determining the number of components, K, and initialization of the parameters.

4. The Model Selection Criteria

After estimating the parameters for each covariance structure, the next step in determining the optimal cluster structure is selecting the best model. Despite the vast number of model selection criteria in the literature, Schwarz's Bayesian Criterion (SBC) (Schwarz [17]) is no doubt the most widely used in model-based clustering. Besides SBC, two other selection criteria are used here, namely AIC (Akaike [8]) and the information complexity (ICOMP) criterion (Bozdogan [18]). When using any information criterion for model selection, the model with the lowest score is chosen as providing the best balance between good fit and parsimony. Using the likelihood function, the AIC and SBC for the Gaussian mixture model are defined as follows:

$\text{AIC}=-2\mathrm{log}L\left(\stackrel{^}{\theta}|x\right)+2m$

$\text{SBC}=-2\mathrm{log}L\left(\stackrel{^}{\theta}|x\right)+m\mathrm{log}\left(n\right)$

where: $L\left(\stackrel{^}{\theta}|x\right)$ is the likelihood function.

m is the number of independent parameters to be estimated.

$\stackrel{^}{\theta}$ is the maximum likelihood estimate for parameter θ.
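As a concrete illustration, both criteria can be computed from the maximized log-likelihood. The parameter count shown is for the unconstrained covariance model; the constrained models of Table 2 have fewer free parameters. Function names are my own:

```python
import numpy as np

def aic_sbc(log_lik, m, n):
    """AIC = -2 log L + 2m ;  SBC = -2 log L + m log(n)."""
    aic = -2.0 * log_lik + 2.0 * m
    sbc = -2.0 * log_lik + m * np.log(n)
    return aic, sbc

def n_params_unconstrained(K, p):
    """Free parameters of a K-component Gaussian mixture in p dimensions with
    unconstrained covariances: (K - 1) mixing proportions, K*p means and
    K*p*(p+1)/2 distinct covariance entries."""
    return (K - 1) + K * p + K * p * (p + 1) // 2
```

Because SBC multiplies m by log(n) rather than 2, it penalizes extra components more heavily than AIC for any n > e².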

ICOMP, originally introduced by Bozdogan [7] [9] [18] [19], is a logical extension of AIC and SBC, based on the structural complexity of an element or set of random vectors via a generalization of the information-based covariance complexity index of Van Emden [20]. Like AIC and SBC, ICOMP penalizes the lack of fit of a model with twice the negative of the maximized log-likelihood. However, ICOMP also simultaneously penalizes a combination of lack of parsimony and profusion of complexity through a scalar complexity measure, C, of the model covariance matrix, whereas AIC and SBC penalize only the lack of parsimony via the number of parameters. In general, ICOMP is defined as:

$\text{ICOMP}=-2\mathrm{log}L\left(\stackrel{^}{\theta}|x\right)+2C\left(\stackrel{^}{Cov\left(\theta \right)}\right)$

where: $L\left(\stackrel{^}{\theta}|x\right)$ is the likelihood function.

C is a real-valued complexity measure.

$\stackrel{^}{Cov\left(\theta \right)}$ is the estimated model covariance matrix.

The covariance matrix is estimated by the estimated inverse Fisher information matrix (IFIM), ${\stackrel{^}{\mathcal{F}}}^{-1}$, given by:

${\stackrel{^}{\mathcal{F}}}^{-1}={\left\{-E\left[\frac{{\partial}^{2}\mathrm{log}L\left(\stackrel{^}{\theta}\right)}{\partial \theta \partial {\theta}^{\prime}}\right]\right\}}^{-1}$

That is to say, IFIM is the negative expectation of the matrix of the second partial derivatives of the maximized log-likelihood of the fitted model, evaluated at the maximum likelihood estimators $\stackrel{^}{\theta}$.

For a multivariate normal model, the general form of ICOMP is defined as:

${\text{ICOMP}}_{\text{PEU}}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)=-2\mathrm{log}L\left(\stackrel{^}{\theta}|x\right)+m+\mathrm{log}\left(n\right){C}_{1}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)$

where:

${C}_{1}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)=\frac{s}{2}\mathrm{log}\left[\frac{\text{tr}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)}{s}\right]-\frac{1}{2}\mathrm{log}\left|{\stackrel{^}{\mathcal{F}}}^{-1}\right|$

$s=\mathrm{dim}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)=\text{rank}\left({\stackrel{^}{\mathcal{F}}}^{-1}\right)$
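Given an estimated inverse Fisher information matrix, the $C_1$ measure and the ${\text{ICOMP}}_{\text{PEU}}$ score can be computed as follows (an illustrative NumPy sketch; function names are my own):

```python
import numpy as np

def c1_complexity(F_inv):
    """C_1(F^-1) = (s/2) log( tr(F^-1)/s ) - (1/2) log|F^-1|, s = dim(F^-1).
    Non-negative, and zero exactly when F^-1 is proportional to the identity."""
    s = F_inv.shape[0]
    sign, logdet = np.linalg.slogdet(F_inv)     # numerically stable log-determinant
    return 0.5 * s * np.log(np.trace(F_inv) / s) - 0.5 * logdet

def icomp_peu(log_lik, m, n, F_inv):
    """ICOMP_PEU = -2 log L + m + log(n) * C_1(F^-1)."""
    return -2.0 * log_lik + m + np.log(n) * c1_complexity(F_inv)
```

$C_1$ compares the arithmetic and geometric means of the eigenvalues of ${\stackrel{^}{\mathcal{F}}}^{-1}$, so it grows as the estimated parameter covariances become more unbalanced or correlated.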

For all the above criteria, the decision rule is to select the model that gives the minimum score for the loss function.

5. The Numerical Results

All results were obtained using MATLAB.

Gaussian mixture-model-based clustering, which implements the EM algorithm for inference, is applied to four simulated data sets. The maximum number of clusters is Kmax = 6 for all examples. The convergence tolerance of the EM algorithm is set to $\epsilon ={10}^{-6}$ and a maximum of 1000 iterations is allowed. After confirming the validity of the mathematical equations and the program, four covariance-matrix models were applied. These models are:

Model: EVV with the covariance matrix ( $\lambda {D}_{k}{A}_{k}{D}_{k}^{\text{T}}$ ).

Model: VII with the covariance matrix ( ${\lambda}_{k}I$ ).

Model: VEE with the covariance matrix ( ${\lambda}_{k}DA{D}^{\text{T}}$ ).

Model: VVE with the covariance matrix ( ${\lambda}_{k}D{A}_{k}{D}^{\text{T}}$ ).

These models were selected because they represent distinct cases of the covariance matrix: the models EVV, VEE and VVE belong to the general family (Celeux and Govaert [4]), while the model VII belongs to the spherical family. For all models, the AIC, SBC and ${\text{ICOMP}}_{\text{PEU}}$ criteria were calculated, and the optimal number of clusters was determined as the one attaining the lowest values. The values of the complexity criteria were as follows:

1) Model: EVV with the covariance matrix ( $\lambda {D}_{k}{A}_{k}{D}_{k}^{\text{T}}$ ) (Figure 1 & Figure 2)

From Table 3, the optimal number of clusters was determined by finding the lowest values of all the criteria simultaneously; the fitted number of clusters was two.

Table 4 gives the parameter values estimated for the best simulation.

For the selected model, GMMC identifies the cluster labels with a misclassification rate of 1%. The misclassification rate is calculated as follows:

Figure 1. Scatterplot of the actual dataset labeled by groups (Model EVV).

Figure 2. Scatterplot of the estimated dataset labeled by groups (Model EVV).

Table 3. Values of the criteria for selecting the model to reach the best simulation for the model (EVV) for the number of clusters k = 1, ..., 6.

Table 4. The resulting confusion matrix for model (EVV).

$\begin{array}{c}\left(1-\frac{{a}_{11}+{a}_{22}}{n}\right)\times 100=\left(1-\frac{174+75}{250}\right)\times 100=\left(1-\frac{249}{250}\right)\times 100\\ =\left(1-0.99\right)\times 100=1\end{array}$
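The same computation in general form, for any confusion matrix, sums the diagonal and divides by the total count. A NumPy sketch (the matrix entries in the test are illustrative, not the paper's exact table):

```python
import numpy as np

def misclassification_rate(confusion):
    """Percentage of observations falling off the diagonal of the confusion
    matrix: (1 - sum_k a_kk / n) * 100, where n is the total count."""
    confusion = np.asarray(confusion, dtype=float)
    return (1.0 - np.trace(confusion) / confusion.sum()) * 100.0
```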

2) Model: VII with the covariance matrix ( ${\lambda}_{k}I$ ) (Figure 3 & Figure 4)

Using Table 5, the optimal number of clusters was found to be two. GMMC achieves a misclassification rate of 2% for model VII. The resulting confusion matrix is shown in Table 6.

3) Model: VEE with the covariance matrix ( ${\lambda}_{k}DA{D}^{\text{T}}$ ) (Figure 5 & Figure 6)

From the results in Table 7, the optimal number of clusters was found to be three, so the number of clusters was increased. To achieve greater clarity, the sample size was increased from 250 to 500 and divided into three groups, as follows (Table 8).

For this model, the misclassification rate was 15%.

4) Model: VVE with the covariance matrix ( ${\lambda}_{k}D{A}_{k}{D}^{\text{T}}$ ) (Figure 7 & Figure 8)

Figure 3. Scatterplot of the actual dataset labeled by groups (Model VII).

Figure 4. Scatterplot of the estimated dataset labeled by groups (Model VII).

Figure 5. Scatterplot of the actual dataset labeled by groups (Model VEE).

Table 5. Values of the criteria for selecting the model to reach the best simulation for the model (VII) for the number of clusters k = 1, ..., 6.

Table 6. The resulting confusion matrix for model (VII).

Figure 6. Scatterplot of the estimated dataset labeled by groups (Model VEE).

Figure 7. Scatterplot of the actual dataset labeled by groups (Model VVE).

Figure 8. Scatterplot of the estimated dataset labeled by groups (Model VVE).

Table 7. Values of the criteria for selecting the model to reach the best simulation for the model (VEE) for the number of clusters k = 1, ..., 6.

Table 8. The resulting confusion matrix for model (VEE).

The fitted number of clusters for this model was two (Table 9).

The data in Table 10 show that the misclassification rate was 0%.

6. Conclusion

In this paper, Gaussian mixture model-based clustering is used. Mixture models are able to predict accurately if the appropriate covariance-matrix model is selected. The method was applied using four models:

Table 9. Values of the criteria for selecting the model to reach the best simulation for the model (VVE) for the number of clusters k = 1, ..., 6.

Table 10. The resulting confusion matrix for model (VVE).

1) Model [EVV] ( $\lambda {D}_{k}{A}_{k}{D}_{k}^{\text{T}}$ ) represents the case of equal volume with variable shape and orientation. The optimal number of clusters was shown to equal two. From the values of the complexity criteria in Table 3, the ${\text{ICOMP}}_{\text{PEU}}$ criterion attains the lowest value compared to the other two criteria, and the misclassification rate was 1%.

2) Model [VII] ( ${\lambda}_{k}I$ ) represents the spherical case with variable volume. In this model as well, the optimal number of clusters equals two, and the ${\text{ICOMP}}_{\text{PEU}}$ criterion attains the lowest value compared to the other two criteria (Table 5). The misclassification rate was 2%.

3) Model [VEE] ( ${\lambda}_{k}DA{D}^{\text{T}}$ ) represents the case of variable volume with equal shape and orientation. From Table 7, the optimal number of clusters, given by the number corresponding to the lowest values of the information-complexity criteria, was found to be three. The misclassification rate was 15%.

4) Model [VVE] ( ${\lambda}_{k}D{A}_{k}{D}^{\text{T}}$ ) represents the case of variable volume and shape with equal orientation. As in the first and second models, the optimal number of clusters equals two, and the ${\text{ICOMP}}_{\text{PEU}}$ criterion attains the lowest value compared to the other two criteria (Table 9), while the misclassification rate was 0%.

The results showed that the ${\text{ICOMP}}_{\text{PEU}}$ criterion was superior to the other criteria, and that Gaussian mixture model-based clustering succeeded in prediction under the covariance-matrix models considered. The study also showed that the optimal number of clusters can be determined by selecting the number of clusters corresponding to the lowest values of the different criteria.

For the number of clusters k = 1, ..., 6, the three selection criteria chose the VVE model with two clusters as the optimal model. For this model, Gaussian Mixture Model-based Clustering (GMMC) recovers the cluster labels with a 0% misclassification rate.

References

[1] Everitt, B. and Skrondal, A. (2010) The Cambridge Dictionary of Statistics. Cambridge University Press, Cambridge.

https://doi.org/10.1017/CBO9780511779633

[2] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39, 1-38.

https://doi.org/10.1111/j.2517-6161.1977.tb01600.x

[3] Banfield, J.D. and Raftery, A.E. (1993) Model-Based Gaussian and Non-Gaussian Clustering. Biometrics, 49, 803-821.

https://doi.org/10.2307/2532201

[4] Celeux, G. and Govaert, G. (1995) Gaussian Parsimonious Clustering Models. Pattern Recognition, 28, 781-793.

https://doi.org/10.1016/0031-3203(94)00125-6

[5] Chi, J., Yuchen, Z., Sivaraman, B., Martin, J. and Michael, I. (2016) Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences. Advances in Neural Information Processing Systems, Barcelona, 5-10 December 2016, 4116-4124.

[6] Tóth, B.G., Rácz, I.I. and Horváth, I. (2019) Gaussian-Mixture-Model-Based Cluster Analysis of Gamma-Ray Bursts in the BATSE Catalog. Monthly Notices of the Royal Astronomical Society, 486, 4823-4828.

https://doi.org/10.1093/mnras/stz1188

[7] Bozdogan, H. (2000) Akaike’s Information Criterion and Recent Developments in Information Complexity. Journal of Mathematical Psychology, 44, 62-91.

https://doi.org/10.1006/jmps.1999.1277

[8] Akaike, H. (1973) Information Theory and an Extension of the Maximum Likelihood Principle. In: Petrox, B. and Csaki, F., Eds., Second International Symposium on Information Theory, Academiai Kiado, Budapest, 267-281.

[9] Bozdogan, H. (1988) ICOMP: A New Model-Selection Criteria. In: Bock, H., Ed., Classification and Related Methods of Data Analysis, North-Holland, Amsterdam, 599-608.

[10] Titterington, D., Smith, A. and Makov, U. (1985) Statistical Analysis of Finite Mixture Distributions. Wiley Series in Probability and Mathematical Statistics. Applied Probability and Statistics. Wiley, Hoboken.

[11] Erar, B. (2011) Mixture model Cluster Analysis under Different Covariance Structures Using Information Complexity. Master’s Thesis.

[12] Gupta, S. and Bhatia, V. (2015) Gaussian Mixture Model Based Clustering Hierarchy Protocol in Wireless Sensor Network. International Journal of Scientific Engineering and Research, 3, 2347-3878.

[13] Scrucca, L., Fop, M., Murphy, T.B. and Raftery, A.E. (2016) Mclust 5: Clustering, Classification and Density Estimation Using Gaussian Finite Mixture Models. The R Journal, 8, 205-233.

https://doi.org/10.32614/RJ-2016-021

[14] Malsiner-Walli, G., Frühwirth-Schnatter, S. and Grün, B. (2016) Model-Based Clustering Based on Sparse Finite Gaussian Mixtures. Statistics and Computing, 26, 303-324.

https://doi.org/10.1007/s11222-014-9500-2

[15] Wu, C.J. (1983) On the Convergence Properties of the EM Algorithm. The Annals of Statistics, 11, 95-103.

https://doi.org/10.1214/aos/1176346060

[16] Xu, J., Hsu, D. and Maleki, A. (2018) Benefits of Over-Parameterization with EM. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, p. 35.

[17] Schwarz, G. (1978) Estimating the Dimension of a Model. Annals of Statistics, 6, 461-464.

https://doi.org/10.1214/aos/1176344136

[18] Bozdogan, H. (1994) Mixture-Model Cluster Analysis Using Model Selection Criteria and a New Informational Measure of Complexity. In: Bozdogan, H., Ed., Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An Informational Approach, Kluwer Academic Publishers, Dordrecht, Vol. 2, 69-113.

https://doi.org/10.1007/978-94-011-0800-3_3

[19] Bozdogan, H. (2000) On the Information-Based Measure of Covariance Complexity and Its Application to the Evaluation of Multivariate Linear Models. Communications in Statistics—Theory and Methods, 19, 221-278.

https://doi.org/10.1080/03610929008830199

[20] Van Emden, M. (1971) An Analysis of Complexity. In: Mathematical Centre Tracts, Mathematisch Centrum, Amsterdam, 35.