Maximum Likelihood Estimation of the Parameters of Exponentiated Generalized Weibull Based on Progressive Type II Censored Data
ABSTRACT
The Exponentiated Generalized Weibull distribution is a probability distribution that generalizes the Weibull distribution by introducing two additional shape parameters, so as to better accommodate non-monotonic hazard shapes. The parameters of the new probability density function are estimated by the maximum likelihood method under progressive Type II censored data via the expectation-maximization algorithm.

1. Introduction

Various probability density functions have been proposed for the statistical analysis of lifetime data. The Weibull distribution is one of the most widely used distributions in the analysis of lifetime data. It was introduced by the French mathematician Fréchet (1928). Indeed, in the 1920s Fréchet developed, as an extreme value distribution, the distribution to which he gave his name, the Fréchet distribution; this distribution is in fact equal to the reciprocal of the Weibull distribution. Rosin and Rammler (1933) applied Fréchet's distribution to describe the particle size distribution generated by grinding, milling and crushing operations on materials. This probability distribution has been widely used as a probabilistic model in studies of lifetimes. Mudholkar and Srivastava (1993) introduced the exponentiated Weibull distribution to analyse bathtub failure rate data, which the regular Weibull cannot handle well because its hazard rate is monotonic. Zhang and Xie (2011) also worked on bathtub failure data using the truncated Weibull distribution. Soumaya and Soufiane (2014) gave estimates of the parameters of the exponentiated Weibull distribution and the additive Weibull distribution, two specific generalizations of the Weibull distribution.

Cordeiro et al. (2013) introduced the exponentiated generalized class of distributions, which is more general than the two classes of Lehmann (1953) alternatives: it combines the Lehmann type I and type II alternatives. Indeed, for any baseline (or parent) distribution it is possible to define the corresponding Exponentiated Generalized family of distributions. Cordeiro et al. (2013) discussed four special models, namely the Exponentiated Generalized Fréchet, Exponentiated Generalized Normal, Exponentiated Generalized Gamma and Exponentiated Generalized Gumbel distributions. Oguntunde et al. (2015) discussed the special case of the Exponentiated Generalized Weibull distribution, obtained by using the Weibull distribution as baseline. The proposed distribution has four parameters (three shape parameters and one scale parameter). The work of Oguntunde et al. focuses mainly on the mathematical properties of the distribution, such as the moments, the limiting behaviour of the pdf and cdf, the reliability analysis, and the quantile function.

The probability density function and the cumulative distribution function of the Exponentiated Generalized Weibull are respectively given by:

$f\left(x;a,b,\alpha ,\beta \right)=ab\frac{\alpha }{\beta }{\left(\frac{x}{\beta }\right)}^{\alpha -1}{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}{\left[1-{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}\right]}^{b-1}$ (1)

and

$F\left(x;a,b,\alpha ,\beta \right)={\left[1-{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}\right]}^{b}$ (2)

where $x>0$ , $a>0$ , $b>0$ , $\alpha >0$ , $\beta >0$ .

The Exponentiated Generalized Weibull generalizes the following distributions:

For $a=1$ , Generalized Weibull;

For $b=1$ , Exponentiated Weibull;

For $a=b=1$ , Weibull distribution;

For $a=b=\alpha =1$ , Exponential distribution.
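As a quick numerical sanity check of (1) and (2), the following Python sketch evaluates both functions and verifies that the special case $a=b=\alpha =1$ reduces to the Exponential distribution (the helper names `egw_pdf` and `egw_cdf` are ours, not from the paper):

```python
import math

def egw_pdf(x, a, b, alpha, beta):
    """Density of the Exponentiated Generalized Weibull, Eq. (1)."""
    t = math.exp(-((x / beta) ** alpha))          # e^{-(x/beta)^alpha}
    return (a * b * alpha / beta) * (x / beta) ** (alpha - 1) \
           * t ** a * (1.0 - t ** a) ** (b - 1)

def egw_cdf(x, a, b, alpha, beta):
    """Distribution function, Eq. (2)."""
    t = math.exp(-((x / beta) ** alpha))
    return (1.0 - t ** a) ** b

# Special case a = b = alpha = 1: reduces to the Exponential distribution
x = 0.7
assert abs(egw_pdf(x, 1, 1, 1, 1) - math.exp(-x)) < 1e-12
assert abs(egw_cdf(x, 1, 1, 1, 1) - (1 - math.exp(-x))) < 1e-12
```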

The survival function and the hazard function have respectively the following expressions:

$S\left(x;a,b,\alpha ,\beta \right)=1-F\left(x;a,b,\alpha ,\beta \right)=1-{\left[1-{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}\right]}^{b}$ (3)

and

$h\left(x;a,b,\alpha ,\beta \right)=\frac{ab\frac{\alpha }{\beta }{\left(\frac{x}{\beta }\right)}^{\alpha -1}{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}{\left[1-{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}\right]}^{b-1}}{1-{\left[1-{\left\{{\text{e}}^{-{\left(x/\beta \right)}^{\alpha }}\right\}}^{a}\right]}^{b}}$ (4)
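Since the hazard in (4) is simply $f/S$, one can check it numerically; in the sketch below (hypothetical helper name `egw_hazard`) we also verify that for $a=b=1$ it reduces to the familiar Weibull hazard $\frac{\alpha }{\beta }{\left(x/\beta \right)}^{\alpha -1}$:

```python
import math

def egw_hazard(x, a, b, alpha, beta):
    """Hazard of Eq. (4), computed as density over survival."""
    t = math.exp(-((x / beta) ** alpha))
    f = (a * b * alpha / beta) * (x / beta) ** (alpha - 1) \
        * t ** a * (1.0 - t ** a) ** (b - 1)        # Eq. (1)
    S = 1.0 - (1.0 - t ** a) ** b                   # survival, Eq. (3)
    return f / S

# For a = b = 1 the hazard reduces to the Weibull hazard
x, alpha, beta = 1.3, 2.0, 1.5
weibull_h = (alpha / beta) * (x / beta) ** (alpha - 1)
assert abs(egw_hazard(x, 1, 1, alpha, beta) - weibull_h) < 1e-9
```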

2. Parameters Estimation

2.1. The Model

Let us assume that n independent units are placed on a life test, and that the ordered m failures are observed under the progressive Type-II censoring plan $R=\left({R}_{1},\cdots ,{R}_{m}\right)$ , where ${R}_{j}\ge 0$ for $j=1,\cdots ,m$ and ${\sum }_{j=1}^{m}{R}_{j}+m=n$ . Let the observed and censored data be $Y=\left({Y}_{1},\cdots ,{Y}_{m}\right)$ and $Z=\left({Z}_{1},\cdots ,{Z}_{m}\right)$ respectively, where ${Z}_{j}=\left({Z}_{j1},\cdots ,{Z}_{j{R}_{j}}\right)$ for $j=1,\cdots ,m$ . Now consider $X=\left(Y,Z\right)$ to be the complete data (observed and censored data together). The complete-data likelihood is then given by

${L}_{c}\left(x,a,b,\alpha ,\beta \right)=\underset{j=1}{\overset{m}{\prod }}\left[f\left({y}_{j},a,b,\alpha ,\beta \right)\underset{k=1}{\overset{{R}_{j}}{\prod }}f\left({z}_{jk},a,b,\alpha ,\beta \right)\right]$ (5)

(Ng et al., 2002).

Substituting expression (1) for the pdf in (5) yields the following log-likelihood:

$\begin{aligned}\mathrm{log}{L}_{c}\left(x,a,b,\alpha ,\beta \right)={}&n\mathrm{log}a+n\mathrm{log}b+n\mathrm{log}\alpha -n\mathrm{log}\beta +\left(\alpha -1\right)\sum_{j=1}^{m}\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)\\&-a\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }+\left(b-1\right)\sum_{j=1}^{m}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\\&+\left(\alpha -1\right)\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}\mathrm{log}\left(\frac{{z}_{jk}}{\beta }\right)-a\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}{\left(\frac{{z}_{jk}}{\beta }\right)}^{\alpha }\\&+\left(b-1\right)\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({z}_{jk}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\end{aligned}$ (6)
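A direct transcription of (6) can be checked against the sum of log-densities over all n observations (observed plus censored); the sketch below uses our own helper names and a small made-up data set:

```python
import math

def egw_logpdf(x, a, b, alpha, beta):
    # log of the density in Eq. (1)
    u = (x / beta) ** alpha
    return (math.log(a) + math.log(b) + math.log(alpha) - math.log(beta)
            + (alpha - 1) * math.log(x / beta) - a * u
            + (b - 1) * math.log(1.0 - math.exp(-a * u)))

def loglik_eq6(y, z, a, b, alpha, beta):
    """Direct transcription of Eq. (6); z[j] holds the R_j censored values."""
    n = len(y) + sum(len(zj) for zj in z)
    ll = n * (math.log(a) + math.log(b) + math.log(alpha) - math.log(beta))
    for v in y + [zjk for zj in z for zjk in zj]:
        u = (v / beta) ** alpha
        ll += (alpha - 1) * math.log(v / beta) - a * u \
              + (b - 1) * math.log(1.0 - math.exp(-a * u))
    return ll

# Eq. (6) must agree with the sum of log-densities over all n observations
y = [0.3, 0.8, 1.4]
z = [[0.5, 0.9], [], [1.6]]          # censoring numbers R = (2, 0, 1), n = 6
theta = (1.5, 2.0, 1.2, 0.9)
direct = sum(egw_logpdf(v, *theta) for v in y) \
       + sum(egw_logpdf(v, *theta) for zj in z for v in zj)
assert abs(loglik_eq6(y, z, *theta) - direct) < 1e-10
```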

2.2. EM Algorithm

2.2.1. E-Step

In order to carry out the E-step, the conditional expectation of the complete-data log-likelihood given the observed sample $Y=\left({y}_{1},{y}_{2},\cdots ,{y}_{m}\right)$ is computed. Let us denote it by $Q\left(\theta \right)$ , where $\theta =\left(a,b,\alpha ,\beta \right)$ is the vector of parameters. That is,

$Q\left(\theta \right)={\mathbb{E}}_{\theta }\left(\mathrm{log}{L}_{c}\left(X,\theta \right)|Y\right)$

The conditional expectation of the above log-likelihood becomes

$\begin{aligned}Q\left(\theta \right)={}&n\mathrm{log}a+n\mathrm{log}b+n\mathrm{log}\alpha -n\mathrm{log}\beta +\left(\alpha -1\right)\sum_{j=1}^{m}\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)\\&-a\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }+\left(b-1\right)\sum_{j=1}^{m}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\\&+\left(\alpha -1\right)\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}{\mathbb{E}}_{\theta }\left(\mathrm{log}\left(\frac{{z}_{jk}}{\beta }\right)\Big|{z}_{jk}>{y}_{j}\right)\\&-a\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}{\mathbb{E}}_{\theta }\left({\left(\frac{{z}_{jk}}{\beta }\right)}^{\alpha }\Big|{z}_{jk}>{y}_{j}\right)\\&+\left(b-1\right)\sum_{j=1}^{m}\sum_{k=1}^{{R}_{j}}{\mathbb{E}}_{\theta }\left(\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({z}_{jk}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\Big|{z}_{jk}>{y}_{j}\right)\end{aligned}$ (7)

Thus, to carry out the E-step, the conditional distribution of Z given Y and the current value of the parameters needs to be determined. It is given by

${f}_{Z|Y}\left({z}_{jk}|Y=y,\theta \right)=\frac{f\left({z}_{jk},\theta \right)}{1-F\left({y}_{j},\theta \right)},\text{ }{z}_{jk}>{y}_{j}$ (8)

see Ng et al. (2002).
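As a numerical sanity check, the truncated density in (8) must integrate to one over $\left({y}_{j},\infty \right)$; a quick sketch (helper names are ours, not the paper's):

```python
import math
from scipy.integrate import quad

def egw_pdf(x, a, b, alpha, beta):
    # Density of Eq. (1)
    t = math.exp(-((x / beta) ** alpha))
    return (a * b * alpha / beta) * (x / beta) ** (alpha - 1) \
           * t ** a * (1.0 - t ** a) ** (b - 1)

def egw_cdf(x, a, b, alpha, beta):
    # Distribution function of Eq. (2)
    t = math.exp(-((x / beta) ** alpha))
    return (1.0 - t ** a) ** b

# Eq. (8): f(z)/(1 - F(y_j)) integrated over (y_j, inf) should equal 1
a, b, alpha, beta, yj = 1.5, 2.0, 1.2, 1.0, 0.8
mass, _ = quad(lambda z: egw_pdf(z, a, b, alpha, beta)
                         / (1.0 - egw_cdf(yj, a, b, alpha, beta)),
               yj, math.inf)
assert abs(mass - 1.0) < 1e-6
```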

Let us set

$A\left(\theta ,{y}_{j}\right)={\mathbb{E}}_{\theta }\left(\mathrm{log}\left(\frac{{z}_{jk}}{\beta }\right)|{z}_{jk}>{y}_{j}\right)$ (9)

$B\left(\theta ,{y}_{j}\right)={\mathbb{E}}_{\theta }\left({\left(\frac{{z}_{jk}}{\beta }\right)}^{\alpha }|{z}_{jk}>{y}_{j}\right)$ (10)

and

$\begin{array}{l}C\left(\theta ,{y}_{j}\right)\\ ={\mathbb{E}}_{\theta }\left(\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({z}_{jk}/\beta \right)}^{\alpha }}\right\}}^{a}\right]|{z}_{jk}>{y}_{j}\right)\end{array}$ (11)

Using (8)-(11) the expressions for $A\left(\theta ,{y}_{j}\right)$ , $B\left(\theta ,{y}_{j}\right)$ and $C\left(\theta ,{y}_{j}\right)$ become

$A\left(\theta ,{y}_{j}\right)=\frac{ab}{\alpha \left[1-F\left({y}_{j},\theta \right)\right]}\sum_{v=0}^{\infty }\binom{b-1}{v}{\left(-1\right)}^{v}\left[\frac{\mathrm{log}\left({y}_{j}/\beta \right)}{\alpha a\left(v+1\right)}{\text{e}}^{-a\left(v+1\right){\left({y}_{j}/\beta \right)}^{\alpha }}+\Gamma \left(0,a\left(v+1\right){\left({y}_{j}/\beta \right)}^{\alpha }\right)\right]$ (12)

where

$\Gamma \left(0,c\right)={\int }_{c}^{\infty }{z}^{-1}{\text{e}}^{-z}\text{d}z$

is the upper incomplete gamma function.
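Numerically, $\Gamma \left(0,c\right)$ coincides with the exponential integral ${E}_{1}\left(c\right)$, which SciPy exposes as `scipy.special.exp1`; a quick check against direct integration:

```python
import math
from scipy.integrate import quad
from scipy.special import exp1

# Gamma(0, c) = integral of e^{-z}/z over (c, inf) = E1(c)
c = 0.7
numeric, _ = quad(lambda z: math.exp(-z) / z, c, math.inf)
assert abs(numeric - exp1(c)) < 1e-8
```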

$B\left(\theta ,{y}_{j}\right)=\frac{ab}{1-F\left({y}_{j},\theta \right)}\sum_{v=0}^{\infty }\binom{b-1}{v}\frac{{\left(-1\right)}^{v}}{{\left(a\left(v+1\right)\right)}^{2}}\left(1+a\left(v+1\right){\left({y}_{j}/\beta \right)}^{\alpha }\right){\text{e}}^{-a\left(v+1\right){\left({y}_{j}/\beta \right)}^{\alpha }}$ (13)

$C\left(\theta ,{y}_{j}\right)=-\frac{ab}{1-F\left({y}_{j},\theta \right)}\sum_{i=1}^{\infty }\sum_{v=0}^{\infty }\frac{{\left(-1\right)}^{v}}{ia\left(v+i+1\right)}\binom{b-1}{v}{\text{e}}^{-a\left(v+i+1\right){\left({y}_{j}/\beta \right)}^{\alpha }}$ (14)

We therefore obtain the following expression for the conditional expectation of the log-likelihood:

$\begin{aligned}Q\left(\theta \right)={}&n\mathrm{log}a+n\mathrm{log}b+n\mathrm{log}\alpha -n\mathrm{log}\beta +\left(\alpha -1\right)\sum_{j=1}^{m}\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)\\&-a\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }+\left(b-1\right)\sum_{j=1}^{m}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\\&+\left(\alpha -1\right)\sum_{j=1}^{m}{R}_{j}A\left(\theta ,{y}_{j}\right)-a\sum_{j=1}^{m}{R}_{j}B\left(\theta ,{y}_{j}\right)+\left(b-1\right)\sum_{j=1}^{m}{R}_{j}C\left(\theta ,{y}_{j}\right)\end{aligned}$ (15)

where the functions A, B, and C are respectively defined in (12)-(14).
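The series expressions above can be sanity-checked by direct numerical integration of the conditional expectation. For $b=1$ the series in (13) has a single term ($v=0$), which makes a convenient test case; the sketch below (our own helper names) compares that term with a quadrature evaluation of ${\mathbb{E}}_{\theta }\left({\left({z}_{jk}/\beta \right)}^{\alpha }|{z}_{jk}>{y}_{j}\right)$:

```python
import math
from scipy.integrate import quad

a, b, alpha, beta, yj = 2.0, 1.0, 1.5, 1.0, 0.6

def egw_pdf(x):
    # Density of Eq. (1) at the fixed parameters above
    t = math.exp(-((x / beta) ** alpha))
    return (a * b * alpha / beta) * (x / beta) ** (alpha - 1) \
           * t ** a * (1.0 - t ** a) ** (b - 1)

u_y = (yj / beta) ** alpha
surv = math.exp(-a * u_y)                     # 1 - F(y_j) when b = 1

# Eq. (13), truncated at v = 0 (exact here because b - 1 = 0)
B_series = (a * b / surv) * (1.0 / (a * 1) ** 2) \
           * (1.0 + a * 1 * u_y) * math.exp(-a * 1 * u_y)

# Direct numerical evaluation of E[(Z/beta)^alpha | Z > y_j] via Eq. (8)
num, _ = quad(lambda z: ((z / beta) ** alpha) * egw_pdf(z), yj, math.inf)
B_direct = num / surv

assert abs(B_series - B_direct) < 1e-6
```

For $b=1$ the value simplifies to $\left(1+a{\left({y}_{j}/\beta \right)}^{\alpha }\right)/a$, the memoryless-exponential form, which the check above reproduces.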

2.2.2. M-Step

In the M-step at the p-th iteration of the EM algorithm, the value of $\theta$ which maximizes $Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)$ is used as the next estimate ${\theta }^{\left(p\right)}$ of $\theta$ , where ${\theta }^{\left(p\right)}=\left({a}^{\left(p\right)},{b}^{\left(p\right)},{\alpha }^{\left(p\right)},{\beta }^{\left(p\right)}\right)$ is the vector of parameters at the p-th iteration, $p\ge 1$ , and ${\theta }^{\left(0\right)}=\left({a}^{\left(0\right)},{b}^{\left(0\right)},{\alpha }^{\left(0\right)},{\beta }^{\left(0\right)}\right)$ is the initial value of the vector of parameters.

$Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)={\mathbb{E}}_{{\theta }^{\left(p-1\right)}}\left(\mathrm{log}{L}_{c}\left(X,\theta \right)|Y\right)$

Therefore, if at the p-th stage the estimate of $\theta$ is ${\theta }^{\left(p-1\right)}$ , then ${\theta }^{\left(p\right)}$ can be obtained by maximizing

$\begin{aligned}Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)={}&n\mathrm{log}a+n\mathrm{log}b+n\mathrm{log}\alpha -n\mathrm{log}\beta +\left(\alpha -1\right)\sum_{j=1}^{m}\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)\\&-a\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }+\left(b-1\right)\sum_{j=1}^{m}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}\right]\\&+\left(\alpha -1\right)\sum_{j=1}^{m}{R}_{j}A\left({\theta }^{\left(p-1\right)},{y}_{j}\right)-a\sum_{j=1}^{m}{R}_{j}B\left({\theta }^{\left(p-1\right)},{y}_{j}\right)\\&+\left(b-1\right)\sum_{j=1}^{m}{R}_{j}C\left({\theta }^{\left(p-1\right)},{y}_{j}\right)\end{aligned}$ (16)

Then ${\theta }^{\left(p\right)}$ is a solution of the following system of equations

$\begin{array}{l}\frac{\partial Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)}{\partial a}=0\hfill \\ \frac{\partial Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)}{\partial b}=0\hfill \\ \frac{\partial Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)}{\partial \alpha }=0\hfill \\ \frac{\partial Q\left(\theta ,{\theta }^{\left(p-1\right)}\right)}{\partial \beta }=0\hfill \end{array}$ (17)

which is equivalent to

$\left\{\begin{aligned}&\frac{n}{a}-\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }+\left(b-1\right)\sum_{j=1}^{m}\frac{{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}{1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}-\sum_{j=1}^{m}{R}_{j}B\left({\theta }^{\left(p-1\right)},{y}_{j}\right)=0\\&\frac{n}{b}+\sum_{j=1}^{m}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}\right]+\sum_{j=1}^{m}{R}_{j}C\left({\theta }^{\left(p-1\right)},{y}_{j}\right)=0\\&\frac{n}{\alpha }+\sum_{j=1}^{m}\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)-a\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right)\\&\quad +a\left(b-1\right)\sum_{j=1}^{m}\frac{\mathrm{log}\left(\frac{{y}_{j}}{\beta }\right){\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}{1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}+\sum_{j=1}^{m}{R}_{j}A\left({\theta }^{\left(p-1\right)},{y}_{j}\right)=0\\&-\frac{n+\left(\alpha -1\right)m}{\beta }+\frac{a\alpha }{\beta }\sum_{j=1}^{m}{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }-\frac{a\alpha \left(b-1\right)}{\beta }\sum_{j=1}^{m}\frac{{\left(\frac{{y}_{j}}{\beta }\right)}^{\alpha }{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}{1-{\left\{{\text{e}}^{-{\left({y}_{j}/\beta \right)}^{\alpha }}\right\}}^{a}}=0\end{aligned}\right.$ (18)

From the second equation in the above system we can express ${b}^{\left(p\right)}$ for known ${a}^{\left(p\right)}$ , ${\alpha }^{\left(p\right)}$ , ${\beta }^{\left(p\right)}$ as:

${b}^{\left(p\right)}=-\frac{n}{\underset{j=1}{\overset{m}{\sum }}\mathrm{log}\left[1-{\left\{{\text{e}}^{-{\left({y}_{j}/{\beta }^{\left(p\right)}\right)}^{{\alpha }^{\left(p\right)}}}\right\}}^{{a}^{\left(p\right)}}\right]+\underset{j=1}{\overset{m}{\sum }}{R}_{j}C\left({\theta }^{\left(p-1\right)},{y}_{j}\right)}$ (19)

The expressions for ${a}^{\left(p\right)}$ , ${\alpha }^{\left(p\right)}$ and ${\beta }^{\left(p\right)}$ are not available in closed form, so the M-step has no closed-form solution. For this case Dempster et al. (1977) defined what is called the generalized EM algorithm (GEM algorithm), for which the M-step only requires ${\theta }^{\left(p\right)}$ to be chosen such that

$Q\left({\theta }^{\left(p\right)},{\theta }^{\left(p-1\right)}\right)\ge Q\left({\theta }^{\left(p-1\right)},{\theta }^{\left(p-1\right)}\right)$ (20)

Since we need only to increase the likelihood, we may replace the M-step with a single iteration of the Newton-Raphson (N-R) algorithm.
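A single Newton-Raphson step updates $\theta \leftarrow \theta -{H}^{-1}g$ , where g and H are the gradient and Hessian of Q. A generic sketch using finite-difference derivatives is given below (this is our own illustration of the mechanics, not the paper's code; on a concave quadratic one step lands exactly on the maximizer):

```python
import numpy as np

def newton_raphson_step(Q, theta, h=1e-5):
    """One Newton-Raphson ascent step on Q using central finite-difference
    gradient and Hessian (a generic sketch, not the paper's exact code)."""
    theta = np.asarray(theta, dtype=float)
    d = theta.size
    g = np.zeros(d)
    H = np.zeros((d, d))
    for i in range(d):
        e_i = np.zeros(d); e_i[i] = h
        g[i] = (Q(theta + e_i) - Q(theta - e_i)) / (2 * h)
        for j in range(d):
            e_j = np.zeros(d); e_j[j] = h
            H[i, j] = (Q(theta + e_i + e_j) - Q(theta + e_i - e_j)
                       - Q(theta - e_i + e_j) + Q(theta - e_i - e_j)) / (4 * h * h)
    return theta - np.linalg.solve(H, g)

# On a concave quadratic, one step lands (up to rounding) on the maximizer
Qtoy = lambda t: -(t[0] - 1.0) ** 2 - 2.0 * (t[1] + 0.5) ** 2
new = newton_raphson_step(Qtoy, [3.0, 3.0])
assert np.allclose(new, [1.0, -0.5], atol=1e-3)
```

In the GEM setting one would take Q to be (16) with A, B and C evaluated at ${\theta }^{\left(p-1\right)}$ and accept the step whenever it increases Q, as required by (20).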

3. Simulation

3.1. Simulation

For the simulation the M-step is replaced by a single iteration of the Newton-Raphson algorithm.

For $n=30$ , $m=20$ and $\theta =\left(1,1,1,1\right)$ , a progressively Type-II censored sample was generated from the Exponentiated Generalized Weibull distribution using the algorithm in Balakrishnan and Sandhu (1995).

The algorithm is defined as follows:

• Generate m independent Uniform (0, 1) observations ${W}_{1},{W}_{2},\cdots ,{W}_{m}$

• Set ${V}_{i}={W}_{i}^{1/\left(i+{R}_{m}+{R}_{m-1}+\cdots +{R}_{m-i+1}\right)}$ for $i=1,2,\cdots ,m$

• Set ${U}_{i}=1-{V}_{m}{V}_{m-1}\cdots {V}_{m-i+1}$ for $i=1,2,\cdots ,m$ . Then ${U}_{1},{U}_{2},\cdots ,{U}_{m}$ is the required progressive Type-II censored sample from the Uniform (0, 1) distribution.

• Finally, we set ${X}_{i}={F}^{-1}\left({U}_{i},\theta \right)$ for $i=1,2,\cdots ,m$ , where ${F}^{-1}\left(.,\theta \right)$ is the inverse cdf of the Exponentiated Generalized Weibull distribution. Then ${X}_{1},{X}_{2},\cdots ,{X}_{m}$ is the required progressive Type-II censored sample from the Exponentiated Generalized Weibull distribution.

The censoring scheme used is $R=\left(1,3,1,1,1,1,1,0,0,0,0,1,0,0,0,0,0,0,0,0\right)$ .
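The steps above, combined with the inverse cdf of (2), namely $x=\beta {\left(-\mathrm{log}\left(1-{u}^{1/b}\right)/a\right)}^{1/\alpha }$ , can be sketched in Python as follows (function names are ours; the seed and any concrete draws are illustrative):

```python
import math
import random

def egw_inv_cdf(u, a, b, alpha, beta):
    # Inverse of Eq. (2): x = beta * (-log(1 - u**(1/b)) / a)**(1/alpha)
    return beta * (-math.log(1.0 - u ** (1.0 / b)) / a) ** (1.0 / alpha)

def progressive_type2_sample(R, theta, seed=0):
    """Balakrishnan-Sandhu algorithm for a progressive Type-II
    censored sample from the EGW distribution."""
    a, b, alpha, beta = theta
    m = len(R)
    rng = random.Random(seed)
    W = [rng.random() for _ in range(m)]
    # V_i = W_i ** (1 / (i + R_m + ... + R_{m-i+1})), i = 1..m
    V = []
    for i in range(1, m + 1):
        expo = i + sum(R[m - i:])
        V.append(W[i - 1] ** (1.0 / expo))
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    U, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= V[m - i]
        U.append(1.0 - prod)
    # X_i = F^{-1}(U_i); the U_i (hence X_i) are increasing
    return [egw_inv_cdf(u, a, b, alpha, beta) for u in U]

R = [1, 3, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
x = progressive_type2_sample(R, (1, 1, 1, 1))
assert len(x) == 20 and all(v > 0 for v in x)
assert x == sorted(x)          # order statistics are increasing
```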

The generated sample is:

0.0138 0.0230 0.0447 0.2401 0.3091 0.3264 0.4597 0.5448 0.5841 0.7274 0.9875 1.1164 1.2090 1.3519 1.4896 1.5041 1.6224 2.9952 3.4537 3.6385

Via the EM algorithm discussed in Section 2, the computed MLEs of the parameters are:

$\stackrel{^}{a}=0.7606$ , $\stackrel{^}{b}=0.8272$ , $\stackrel{^}{\alpha }=1.0911$ and $\stackrel{^}{\beta }=1.0365$

In Table 1, a Monte Carlo simulation with N = 500 replications was used to compute the RMSE and the mean estimates for different values of n, m and $\theta =\left(2,2,1,1\right)$ . The following formula was used to compute the RMSE:

$\text{RMSE}\left(\stackrel{^}{\lambda }\right)=\sqrt{\underset{i=1}{\overset{N}{\sum }}\frac{{\left({\stackrel{^}{\lambda }}_{i}-\lambda \right)}^{2}}{N}}$

where ${\stackrel{^}{\lambda }}_{i}$ is the i-th estimate of the parameter $\lambda$ .
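The RMSE formula above translates directly into code; a minimal sketch (helper name `rmse` is ours):

```python
import math

def rmse(estimates, true_value):
    """Root mean square error of a list of estimates around the true value."""
    N = len(estimates)
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / N)

# Deviations -1 and +1 give RMSE 1; a constant estimator gives RMSE 0
assert abs(rmse([1.0, 3.0], 2.0) - 1.0) < 1e-12
assert rmse([2.0, 2.0, 2.0], 2.0) == 0.0
```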

3.2. Remarks

• For fixed sample size n, increasing m yields smaller RMSEs.

• Increasing the sample size n yields smaller RMSEs.

• The largest value of m in each case corresponds to the complete-sample case.

Table 1. RMSE of the estimators.

4. Conclusion

The parameters of the Exponentiated Generalized Weibull distribution were estimated by the maximum likelihood method via the Expectation Maximization (EM) algorithm. The Root Mean Square Errors (RMSEs) were computed for different values of the sample size n and the number of failures (observed data) m. The RMSEs were observed to become smaller when m increases for fixed sample size n, and also when the sample size n increases.

Acknowledgements

I would like to thank my supervisors Professor Leo Odongo and Doctor Ibrahim Ly for accompanying me through this work. Sincere thanks to the African Union for giving me the opportunity to do scientific research.

Cite this paper
Sawadogo, I. , Odongo, L. , Ly, I. (2017) Maximum Likelihood Estimation of the Parameters of Exponentiated Generalized Weibull Based on Progressive Type II Censored Data. Open Journal of Statistics, 7, 956-963. doi: 10.4236/ojs.2017.76067.
References
   Fréchet, M. (1928) Sur la loi de probabilité de l'écart maximum. Annales de la Société Polonaise de Mathématique, 6, 93-116.

   Rosin, P. and Rammler, E. (1933) The Laws Governing the Fineness of Powdered Coal. Journal of the Institute of Fuel, 7, 29-36.

   Mudholkar, G.S. and Srivastava, D.K. (1993) Exponentiated Weibull Family for Analysing Bathtub Failure-Rate Data. IEEE Transactions on Reliability, 42, 299-302.
https://doi.org/10.1109/24.229504

   Zhang, T. and Xie, M. (2011) On the Upper Truncated Weibull Distribution and Its Reliability Implications. Reliability Engineering and System Safety, 96, 194-200.
https://doi.org/10.1016/j.ress.2010.09.004

   Soumaya, G. and Soufiane, G. (2014) Parameters Estimations for Some Modification of the Weibull Distribution. Open Journal of Statistics, 4, 597-610.
https://doi.org/10.4236/ojs.2014.48056

   Cordeiro, G.M., Ortega, E.M. and Da Cunha, D.C. (2013) The Exponentiated Generalized Class of Distributions. Journal of Data Science, 11, 1-27.

   Lehmann, E.L. (1953) The Power of Rank Tests. The Annals of Mathematical Statistics, 24, 23-43.
https://doi.org/10.1214/aoms/1177729080

   Oguntunde, P., Odetunmibi, O. and Adejumo, A. (2015) On the Exponentiated Generalized Weibull Distribution: A Generalization of the Weibull Distribution. Journal of Science and Technology, 8, 1-7.

   Ng, H.K.T., Chan, P.S. and Balakrishnan, N. (2002) Estimation of Parameters from Progressively Censored Data Using EM Algorithm. Computational Statistics & Data Analysis, 39, 371-389.
https://doi.org/10.1016/S0167-9473(01)00091-3

   Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39, 1-38.

   Balakrishnan, N. and Sandhu, R. (1995) A Simple Simulational Algorithm for Generating Progressive Type II Censored Samples. The American Statistician, 49, 229-230.
