A Modification of the Quasi Lindley Distribution
Abstract: In this paper, we introduce a modification of the Quasi Lindley distribution that has several advantageous properties for lifetime data. Several fundamental structural properties of the distribution are explored. Its density function can take left-skewed, symmetric, and right-skewed shapes with various ranges of tail-weight and dispersion. The failure rate function of the new distribution has the flexibility to be increasing, decreasing, constant, or bathtub-shaped. A simulation study examines the performance of the maximum likelihood and moment estimation methods for estimating its unknown parameters, based on asymptotic theory. The potential of the new distribution is illustrated by means of applications to simulated and three real-world data sets.

1. Introduction

Modeling lifetime data is crucial in many applied sciences, especially engineering, actuarial science, and medicine. Several lifetime distributions, for instance the exponential, gamma, Weibull, and log-normal distributions and their modifications, have been used to model lifetime data [1]. These distributions and their modifications have their own characteristics in terms of the shape of the failure rate function, covering tail-heaviness, horizontal symmetry, and dispersion. The tail-heaviness of a data set can be measured by the excess kurtosis (EK), defined as $\tau -3$, where $\tau$ is the kurtosis of the data set. A distribution with $EK>0$ is said to have a fatter tail (leptokurtic), and one with $EK<0$ a thinner tail (platykurtic). Further, the symmetry and dispersion of a data set can be measured by the skewness (SK) and Fano factor (FF) values, respectively, where the Fano factor is the variance-to-mean ratio.

A lifetime distribution may be modified by using a finite mixture model to handle complexity arising from heterogeneity. The Lindley distribution (LD) is one of the finite mixture models under the Bayesian framework; it was introduced by Lindley (1958) [2] and has the density function:

${f}_{Y}\left(y\right)=\frac{{\theta }^{2}}{1+\theta }\left(1+y\right){\text{e}}^{-\theta y};\text{ }y>0,\text{ }\theta >0,$ (1)

where $\theta$ is the shape parameter that controls the shape of the distribution, and y is the respective random variable. The density function of this distribution is a two-component mixture of two different continuous distributions, namely the exponential ( $\theta$ ) and gamma ( $2,\theta$ ) distributions, with the mixing proportion $p=\frac{\theta }{1+\theta }$, where p is defined by using the shape parameter(s) of

the latent variable distribution. The LD has an increasing failure rate function, while the exponential distribution has a constant failure rate function. In the statistical literature, Ghitany et al. (2008) [3] showed that the Lindley distribution is more flexible than the exponential distribution and provides a better fit for lifetime data, owing to its flexible mathematical form and failure rate behavior.
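The mixture construction above can be checked numerically. The following Python sketch (the parameter value $\theta =1.5$ and the test points are chosen only for illustration) verifies that the weighted sum of the exponential ( $\theta$ ) and gamma ( $2,\theta$ ) densities reproduces the closed form in Equation (1):

```python
import math

def lindley_pdf(y, theta):
    """Closed-form Lindley density, Equation (1)."""
    return theta**2 / (1 + theta) * (1 + y) * math.exp(-theta * y)

def lindley_pdf_mixture(y, theta):
    """The same density built as a two-component mixture:
    exponential(theta) with weight p = theta/(1+theta),
    gamma(2, theta) with weight 1 - p."""
    p = theta / (1 + theta)
    f_exp = theta * math.exp(-theta * y)            # exponential(theta) pdf
    f_gam = theta**2 * y * math.exp(-theta * y)     # gamma(shape 2, rate theta) pdf
    return p * f_exp + (1 - p) * f_gam

# the two forms agree pointwise
for y in (0.1, 1.0, 3.5):
    assert abs(lindley_pdf(y, 1.5) - lindley_pdf_mixture(y, 1.5)) < 1e-12
```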

Some modifications of the LD have been proposed by researchers to further increase its flexibility, especially in the failure rate behavior. These modifications introduce new parameter(s), which may be shape, scale, or location parameter(s). In general, a scale parameter stretches or shrinks the distribution, while a location parameter shifts its starting point. Dolati et al. (2009) [4] introduced a generalized Lindley distribution (GLD), Shanker et al. (2013) [5] obtained a two-parameter Lindley distribution (TwPLD), Abouammoh et al. (2015) [6] proposed a new generalized Lindley distribution (NGLD), and Monsef (2016) [7] introduced a Lindley distribution with location parameter (LwLD). Ekhosuehi et al. (2018) [8] obtained a new generalized two-parameter Lindley distribution (NGTwPLD). Tharshan and Wijekoon (2020) [9] proposed a location-based generalized Akash distribution (LGAD). Recently, Ramos et al. (2020) [10] introduced a two-parameter distribution with increasing and bathtub hazard rate (TwPD). Note that the GLD, TwPLD, NGLD, LwLD, NGTwPLD, LGAD, and TwPD are two-component mixture models with two or three parameters. Table 1 summarizes these distributions’ mixing proportions, mixing components, failure rates, and parameters. Further, in all the distributions given in Table 1, the mixing proportions are defined by incorporating the scale parameter of the mixing components. This may limit the flexibility to fit the scale parameter of the mixing components and the shape parameter(s) of the latent variable distribution separately for a data set.

Without incorporating the scale parameter into the mixing proportion, Shanker et al. (2013) [11] proposed the Quasi Lindley distribution (QLD) with the density function:

${f}_{Y}\left(y\right)=\frac{\theta \left(\alpha +y\theta \right)}{1+\alpha }{\text{e}}^{-\theta y};y>0,\theta >0,\alpha >-1,$ (2)

where $\alpha$ is the shape parameter introduced from the latent variable distribution and $\theta$ is the scale parameter introduced from the mixing components. Equation (2) presents a two-component mixture of an exponential ( $\theta$ ) and a gamma ( $2,\theta$ ) distribution with the mixing proportion $p=\frac{\alpha }{\alpha +1}$. It has an increasing failure rate, and its skewness ( ${\gamma }_{1Q}$ ), kurtosis ( ${\gamma }_{2Q}$ ), and Fano factor ( ${\gamma }_{3Q}$ ) functions are:

$\frac{2\left({\alpha }^{3}+6{\alpha }^{2}+6\alpha +2\right)}{{\left({\alpha }^{2}+4\alpha +2\right)}^{3/2}}$, $\frac{3\left(3{\alpha }^{4}+24{\alpha }^{3}+44{\alpha }^{2}+32\alpha +8\right)}{{\left({\alpha }^{2}+4\alpha +2\right)}^{2}}$, and $\frac{{\alpha }^{2}+4\alpha +2}{\theta \left(\alpha +2\right)\left(\alpha +1\right)}$,

respectively. Then, it is clear that it has more flexibility to cover tail-heaviness and dispersion than the distributions mentioned in Table 1.

Table 1. Mixing proportions, mixing components, failure rate, and parameters of some notable existing Lindley family distributions.

Tharshan and Wijekoon (2020) [12] carried out a comparison study by introducing a new five-parameter generalized Lindley distribution (FPGLD). Using simulated and real-world data sets, they showed that the QLD can perform better than some other existing Lindley family distributions at higher SK, EK, and FF values. The FPGLD was introduced to ease this comparison. The density function of FPGLD ( $\theta ,\beta ,\alpha ,\delta ,\eta$ ) is given by:

$\begin{array}{l}{f}_{Y}\left(y\right)=\frac{\theta }{\delta \alpha +\eta }\left(\delta \alpha +\eta \theta \left(y-\beta \right)\right){\text{e}}^{-\theta \left(y-\beta \right)};\\ y>\beta \ge 0,\theta >0,\delta \alpha >-\eta ,\delta \alpha >-\eta \theta \left(y-\beta \right)\end{array}$ (3)

where $\alpha ,\delta$, and $\eta$ are the shape parameters introduced from the latent variable distribution, and $\theta$ and $\beta$ are the scale and location parameters, respectively, introduced from the mixing components. Equation (3) presents a two-component mixture of an exponential ( $\theta ,\beta$ ) and a gamma ( $2,\theta ,\beta$ ) distribution with the mixing proportion $p=\frac{\delta \alpha }{\delta \alpha +\eta }$. Although the QLD performs better than the other distributions when all three measures (skewness, excess kurtosis, and Fano factor) are high, its flexibility is limited over the full ranges of these measures, since the shape parameter of the mixing component gamma ( $2,\theta$ ) is fixed in the QLD. That is, $\sqrt{2}<{\gamma }_{1Q}<2$ and $6<{\gamma }_{2Q}<9$.
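These bounds are easy to confirm numerically: the QLD skewness function equals $\sqrt{2}$ at the boundary $\alpha =0$ and approaches 2 as $\alpha$ grows. A short Python sketch (evaluation points are illustrative only):

```python
import math

def qld_skewness(a):
    """Skewness gamma_1Q of the QLD as a function of alpha."""
    num = 2 * (a**3 + 6 * a**2 + 6 * a + 2)
    den = (a**2 + 4 * a + 2) ** 1.5
    return num / den

assert abs(qld_skewness(0.0) - math.sqrt(2)) < 1e-12   # lower bound sqrt(2)
assert abs(qld_skewness(1e6) - 2.0) < 1e-3             # approaches 2 as alpha grows
assert math.sqrt(2) < qld_skewness(1.0) < 2.0          # interior values stay in between
```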

In this context, we modify the QLD by adding a shape parameter, so that the shape of the mixing components is no longer fixed. The modified distribution will be called the modified Quasi Lindley distribution (MQLD). The new distribution is a two-component mixture of an exponential and a gamma distribution. Since FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) has the same mixing components as the MQLD and accommodates several existing and new sub-models of Lindley family distributions through the parameters of its mixing proportion, we define the mixing proportion p for the MQLD via a comparison study among FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) and its sub-models. Here, FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) denotes the FPGLD with its location parameter set to $\beta =0$. This comparison study helps define a mixing proportion for the MQLD that provides a better fit without requiring additional shape parameter(s) in the new distribution.

The paper is outlined as follows: in Section 2, we introduce the MQLD with its density and distribution functions. In Section 3, we present the statistical properties of the MQLD, including the moments, moment generating function, and quantile function. In Section 4, we derive the reliability properties of the MQLD. The size-biased form of the MQLD is discussed in Section 5. Section 6 covers the estimation of the unknown parameters of the MQLD. Finally, a simulation study is performed to verify the asymptotic properties of the estimation methods, and simulated and real-world data sets are used to illustrate the distribution's applicability over some other existing Lindley family distributions.

2. Formulation of the New Distribution

In this section, we introduce a finite mixture of two non-identical distributions called modified Quasi Lindley distribution with its probability density function (pdf) and cumulative distribution function (cdf).

2.1. Defining the Mixing Proportion p

For the comparison study, 50 random samples of size $n=150$ were simulated from FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) with various skewness (SK), excess kurtosis (EK), and Fano factor (FF) values obtained by setting the parameter values. Then, FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) and its sub-models for the given $\eta$ and $\delta$ values in Table 2 were fitted to the simulated random samples. Table 2 shows the sub-models and FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ), denoted ${D}_{\delta =\delta }^{\eta =\eta }$, and highlights those sub-models that consistently attain the minimum $-2\mathrm{log}L$ value together with ${D}_{\delta =\delta }^{\eta =\eta }$ for all simulated random samples. Based on the minimum number of parameters among the highlighted models, the sub-model denoted ${D}_{\delta ={\alpha }^{2}}^{\eta =1}$ is a simple distribution that performs as well as the others. We therefore utilize the mixing proportion of ${D}_{\delta ={\alpha }^{2}}^{\eta =1}$, $p=\frac{{\alpha }^{3}}{{\alpha }^{3}+1}$, to define the mixing proportion of the MQLD. The detailed study results can be provided upon request.

2.2. Defining the pdf and cdf

Let Y be a non-negative random variable derived as a finite mixture of two non-identical distributions, exponential ( $\theta$ ) and gamma ( $\delta ,\theta$ ), with the mixing proportion $p=\frac{{\alpha }^{3}}{{\alpha }^{3}+1}$ under the Bayesian framework, as follows:

${f}_{Y}\left(y;\theta ,\alpha ,\delta \right)=p{f}_{1}\left(y;\theta \right)+\left(1-p\right){f}_{2}\left(y;\delta ,\theta \right),$

where $\alpha$ and $\delta$ are shape parameters, and $\theta$ is a scale parameter and

${f}_{1}\left(y;\theta \right)=\theta {\text{e}}^{-\theta y}$ and ${f}_{2}\left(y;\delta ,\theta \right)=\frac{{\theta }^{\delta }{y}^{\delta -1}{\text{e}}^{-\theta y}}{\Gamma \left(\delta \right)};y>0,\theta >0,\delta >0$.

Then, the pdf of the MQLD with parameters $\theta ,\alpha$, and $\delta$ is defined as:

${f}_{Y}\left(y;\theta ,\alpha ,\delta \right)=\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right);y>0,\theta >0,{\alpha }^{3}>-1,\delta >0.$ (4)
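As a quick numerical sanity check that Equation (4) defines a proper density, the following Python sketch integrates the pdf with composite Simpson's rule (the parameter values and the truncation point are illustrative only):

```python
import math

def mqld_pdf(y, theta, alpha, delta):
    """MQLD density, Equation (4)."""
    c = theta * math.exp(-theta * y) / ((alpha**3 + 1) * math.gamma(delta))
    return c * (math.gamma(delta) * alpha**3 + (theta * y) ** (delta - 1))

theta, alpha, delta = 1.0, 1.5, 2.0
n, b = 20000, 60.0                     # Simpson panels and truncation point
h = b / n
total = mqld_pdf(0.0, theta, alpha, delta) + mqld_pdf(b, theta, alpha, delta)
for i in range(1, n):
    total += (4 if i % 2 else 2) * mqld_pdf(i * h, theta, alpha, delta)
integral = total * h / 3
assert abs(integral - 1.0) < 1e-6      # the density integrates to one
```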

The first derivative of $\mathrm{log}f\left(y\right)$ with respect to y is given by:

$\frac{\text{d}\mathrm{log}\left(f\left(y\right)\right)}{\text{d}y}=l{f}^{\prime }\left(y\right)=-\theta +\frac{{\theta }^{\delta -1}\left(\delta -1\right){y}^{\delta -2}}{\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}}.$

Table 2. Comparison study results of FPGLD ( $\theta ,\beta =0,\alpha ,\delta ,\eta$ ) and its sub-models.

Then, solving the non-linear equation $l{f}^{\prime }\left(y\right)=0$ with respect to y gives the modes of $f\left(y\right)$, i.e. the roots of $l{f}^{\prime }\left(y\right)=0$. Note that $l{f}^{\prime }\left(y\right)=0$ may have more than one root. Suppose $y={y}_{0}$ is a root of $l{f}^{\prime }\left(y\right)=0$; then ${y}_{0}$ is a mode if $l{f}^{″}\left({y}_{0}\right)<0$ (local maximum), where $l{f}^{″}\left({y}_{0}\right)=\frac{{\text{d}}^{2}\mathrm{log}\left(f\left(y\right)\right)}{\text{d}{y}^{2}}$ evaluated at $y={y}_{0}$.

Figure 1 illustrates some of the possible shapes of the pdf of the MQLD.

The corresponding cdf of MQLD is given by:

${F}_{Y}\left(y;\theta ,\alpha ,\delta \right)=1-\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)};y>0,\theta >0,{\alpha }^{3}>-1,\delta >0,$ (5)

where $\Gamma \left(a,y\right)$ is an incomplete gamma function defined as $\Gamma \left(a,y\right)={\int }_{y}^{\infty }\text{ }\text{ }{x}^{a-1}{\text{e}}^{-x}\text{d}x$.

3. Statistical Properties

In this section, we provide some important statistical properties of the MQLD, such as the rth moments about the origin and about the mean, moment-related measures, the moment generating and characteristic functions, and the quantile function.

3.1. Moments and Related Measures

We may utilize the moments to study the characteristics of a distribution such as horizontal symmetry, dispersion, and tail-heaviness. The following proposition gives the rth moment about the origin:

Proposition 1. The rth moment about the origin of the MQLD is given by:

${{\mu }^{\prime }}_{r}=\frac{1}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right){\theta }^{r}}\left({\alpha }^{3}\Gamma \left(\delta \right)\Gamma \left(r+1\right)+\Gamma \left(r+\delta \right)\right)$ (6)

Figure 1. The probability density of MQLD at different parameter values. (a) and (b): $\delta$ and $\alpha$ are fixed, and $\theta$ values are changed; (c) and (d): $\theta$ and $\delta$ are fixed, and $\alpha$ values are changed; (e) and (f): $\theta$ and $\alpha$ are fixed, and $\delta$ values are changed.

Proof. $\begin{array}{c}{{\mu }^{\prime }}_{r}={\int }_{0}^{\infty }\text{ }{y}^{r}\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\text{d}y\\ =\frac{\theta }{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left({\alpha }^{3}\Gamma \left(\delta \right){\int }_{0}^{\infty }\text{ }\text{ }{y}^{r}{\text{e}}^{-\theta y}\text{d}y+{\theta }^{\delta -1}{\int }_{0}^{\infty }\text{ }\text{ }{y}^{r+\delta -1}{\text{e}}^{-\theta y}\text{d}y\right)\\ =\frac{\theta }{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}\Gamma \left(r+1\right)}{{\theta }^{r+1}}+\frac{\Gamma \left(r+\delta \right)}{{\theta }^{r+1}}\right)\\ =\frac{1}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right){\theta }^{r}}\left({\alpha }^{3}\Gamma \left(\delta \right)\Gamma \left(r+1\right)+\Gamma \left(r+\delta \right)\right)\end{array}$

Substituting $r=1,2,3$ and 4 in Equation (6), the first four moments about the origin are derived as:

${{\mu }^{\prime }}_{1}=\frac{{\alpha }^{3}+\delta }{\left({\alpha }^{3}+1\right)\theta }=\mu ,\text{ }{{\mu }^{\prime }}_{2}=\frac{2{\alpha }^{3}+\delta \left(\delta +1\right)}{\left({\alpha }^{3}+1\right){\theta }^{2}},\text{ }{{\mu }^{\prime }}_{3}=\frac{6{\alpha }^{3}+\delta \left(\delta +1\right)\left(\delta +2\right)}{\left({\alpha }^{3}+1\right){\theta }^{3}},$ and

${{\mu }^{\prime }}_{4}=\frac{24{\alpha }^{3}+\delta \left(\delta +1\right)\left(\delta +2\right)\left(\delta +3\right)}{\left({\alpha }^{3}+1\right){\theta }^{4}}$,

respectively. Then, the rth-order moments about the mean can be obtained by using the relationship between moments about the mean and moments about the origin, i.e.

${\mu }_{r}=E\left[{\left(Y-\mu \right)}^{r}\right]={\sum }_{i=0}^{r}\left(\begin{array}{c}r\\ i\end{array}\right){\left(-1\right)}^{r-i}{{\mu }^{\prime }}_{i}{\mu }^{r-i}$.

Therefore, some rth-order moments about the mean are:

${\mu }_{2}=-{\mu }^{2}+{{\mu }^{\prime }}_{2}=\frac{{\alpha }^{3}\left({\alpha }^{3}+2\right)+\delta \left(1+{\alpha }^{3}\left(\delta -1\right)\right)}{{\left({\alpha }^{3}+1\right)}^{2}{\theta }^{2}},$

$\begin{array}{c}{\mu }_{3}=2{\mu }^{3}-3{{\mu }^{\prime }}_{2}\mu +{{\mu }^{\prime }}_{3}\\ =\frac{2{\alpha }^{3}\left(3+3{\alpha }^{3}+{\alpha }^{6}\right)+\delta \left(2-{\alpha }^{6}-5{\alpha }^{3}+\delta \left(6{\alpha }^{3}+\delta \left(-{\alpha }^{3}+{\alpha }^{6}\right)\right)\right)}{{\left({\alpha }^{3}+1\right)}^{3}{\theta }^{3}},\end{array}$ and

${\mu }_{4}=-3{\mu }^{4}+6{{\mu }^{\prime }}_{2}{\mu }^{2}-4{{\mu }^{\prime }}_{3}\mu +{{\mu }^{\prime }}_{4}=\frac{3{\alpha }^{3}\left(8+16{\alpha }^{3}+12{\alpha }^{6}+3{\alpha }^{9}\right)+A}{{\left({\alpha }^{3}+1\right)}^{4}{\theta }^{4}},$

where $A=\delta \left(6-14{\alpha }^{3}-16{\alpha }^{6}+8{\alpha }^{9}+\delta \left(3+29{\alpha }^{3}+13{\alpha }^{6}+5{\alpha }^{9}+\delta \left(-4{\alpha }^{3}+10{\alpha }^{6}+2{\alpha }^{9}+\delta \left({\alpha }^{3}-{\alpha }^{6}+{\alpha }^{9}\right)\right)\right)\right)$,

respectively. Further, the skewness ( ${\gamma }_{1}$ ), kurtosis ( ${\gamma }_{2}$ ), and index of dispersion/Fano factor ( ${\gamma }_{3}$ ) measures of the MQLD are derived as:

${\gamma }_{1}=\frac{{\mu }_{3}}{{\left({\mu }_{2}\right)}^{3/2}}=\frac{2{\alpha }^{3}\left(3+3{\alpha }^{3}+{\alpha }^{6}\right)+\delta \left(2-{\alpha }^{6}-5{\alpha }^{3}+\delta \left(6{\alpha }^{3}+\delta \left(-{\alpha }^{3}+{\alpha }^{6}\right)\right)\right)}{{\left({\alpha }^{3}\left({\alpha }^{3}+2\right)+\delta \left(1+{\alpha }^{3}\left(\delta -1\right)\right)\right)}^{3/2}},$

${\gamma }_{2}=\frac{{\mu }_{4}}{{\left({\mu }_{2}\right)}^{2}}=\frac{3{\alpha }^{3}\left(8+16{\alpha }^{3}+12{\alpha }^{6}+3{\alpha }^{9}\right)+A}{{\left({\alpha }^{3}\left({\alpha }^{3}+2\right)+\delta \left(1+{\alpha }^{3}\left(\delta -1\right)\right)\right)}^{2}},$ and

${\gamma }_{3}=\frac{{\mu }_{2}}{{{\mu }^{\prime }}_{1}}=\frac{{\alpha }^{3}\left({\alpha }^{3}+2\right)+\delta \left(1+{\alpha }^{3}\left(\delta -1\right)\right)}{\left({\alpha }^{3}+\delta \right)\left({\alpha }^{3}+1\right)\theta },$

respectively. Figure 2 and Figure 3 show various patterns of the skewness, kurtosis, and Fano factor functions of MQLD at different parameter values. The patterns suggest that the MQLD is more flexible than the QLD in terms of covering various ranges of skewness, kurtosis, and Fano factor values.
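Because the MQLD is an explicit two-component mixture, random samples can be drawn by selecting the exponential component with probability $p={\alpha }^{3}/\left({\alpha }^{3}+1\right)$ and the gamma component otherwise. This gives a simple Monte Carlo check of the closed-form mean and variance; the Python sketch below uses illustrative parameter values, and the sample size and tolerances are arbitrary:

```python
import math
import random
import statistics

def mqld_sample(n, theta, alpha, delta, seed=1):
    """Draw n variates from the MQLD via its mixture representation."""
    rng = random.Random(seed)
    p = alpha**3 / (alpha**3 + 1)
    out = []
    for _ in range(n):
        if rng.random() < p:
            out.append(rng.expovariate(theta))              # exponential(theta)
        else:
            out.append(rng.gammavariate(delta, 1 / theta))  # gamma(delta), rate theta
    return out

theta, alpha, delta = 2.0, 1.2, 3.0
a3 = alpha**3
mean_theory = (a3 + delta) / ((a3 + 1) * theta)
var_theory = (a3 * (a3 + 2) + delta * (1 + a3 * (delta - 1))) / ((a3 + 1) ** 2 * theta**2)

ys = mqld_sample(200_000, theta, alpha, delta)
assert abs(statistics.fmean(ys) - mean_theory) < 0.01
assert abs(statistics.pvariance(ys) - var_theory) < 0.05
```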

3.2. Moment Generating and Characteristic Function

Figure 2. The skewness and kurtosis functions of MQLD at different parameter values of $\delta$ and $\alpha$.

Figure 3. The Fano factor function of MQLD at different parameter values of $\delta ,\alpha$ and $\theta$.

The characteristics of a probability distribution are directly associated with its moment generating function (mgf) and characteristic function (cf). The following proposition provides the mgf of the MQLD:

Proposition 2. The mgf say ${M}_{Y}\left(t\right)$ of the MQLD is given as follows:

${M}_{Y}\left(t\right)=\frac{\theta }{\left({\alpha }^{3}+1\right){\left(\theta -t\right)}^{\delta }}\left({\alpha }^{3}{\left(\theta -t\right)}^{\delta -1}+{\theta }^{\delta -1}\right).$ (7)

Proof. $\begin{array}{c}{M}_{Y}\left(t\right)=E\left({\text{e}}^{ty}\right)\\ ={\int }_{0}^{\infty }\text{ }\text{ }{\text{e}}^{ty}\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\text{d}y\\ =\frac{\theta }{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}{\int }_{0}^{\infty }\text{ }\text{ }{\text{e}}^{-y\left(\theta -t\right)}\text{d}y+{\theta }^{\delta -1}{\int }_{0}^{\infty }\text{ }\text{ }{y}^{\delta -1}{\text{e}}^{-y\left(\theta -t\right)}\text{d}y\right)\\ =\frac{\theta }{\left({\alpha }^{3}+1\right){\left(\theta -t\right)}^{\delta }}\left({\alpha }^{3}{\left(\theta -t\right)}^{\delta -1}+{\theta }^{\delta -1}\right).\end{array}$ o

Similarly, the characteristic function, say ${\psi }_{Y}\left(t\right)$, of the MQLD can be derived as follows:

${\psi }_{Y}\left(t\right)=E\left({\text{e}}^{ity}\right)=\frac{\theta }{\left({\alpha }^{3}+1\right){\left(\theta -it\right)}^{\delta }}\left({\alpha }^{3}{\left(\theta -it\right)}^{\delta -1}+{\theta }^{\delta -1}\right).$ (8)

where $i=\sqrt{-1}$ is the imaginary unit.
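Proposition 2 can be verified numerically by comparing Equation (7) with a quadrature estimate of $E\left({\text{e}}^{tY}\right)$ at some $t<\theta$. A Python sketch (parameter values and the Simpson grid are illustrative only):

```python
import math

def mqld_pdf(y, theta, alpha, delta):
    """MQLD density, Equation (4)."""
    c = theta * math.exp(-theta * y) / ((alpha**3 + 1) * math.gamma(delta))
    return c * (math.gamma(delta) * alpha**3 + (theta * y) ** (delta - 1))

def mqld_mgf(t, theta, alpha, delta):
    """Closed-form mgf, Equation (7); valid for t < theta."""
    return theta / ((alpha**3 + 1) * (theta - t) ** delta) * (
        alpha**3 * (theta - t) ** (delta - 1) + theta ** (delta - 1)
    )

theta, alpha, delta, t = 2.0, 1.5, 2.5, 0.5
f = lambda y: math.exp(t * y) * mqld_pdf(y, theta, alpha, delta)
n, b = 20000, 60.0                     # Simpson panels and truncation point
h = b / n
total = f(0.0) + f(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
mgf_numeric = total * h / 3
assert abs(mgf_numeric - mqld_mgf(t, theta, alpha, delta)) < 1e-6
```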

3.3. Quantile Function

We may use the quantile function to estimate quantiles and to simulate random samples from a probability distribution. The quantile function can be derived by solving $F\left({y}_{u}\right)=u$, $0<u<1$. The quantile function of the MQLD is obtained as:

$F\left({y}_{u}\right)=\frac{1}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{u}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{u}\right)\right)=u,$

$⇒\Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{u}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{u}\right)-u\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)=0$ (9)

Since Equation (9) has no closed-form solution, we cannot estimate the quantiles or simulate random variables from the MQLD directly; however, this can be done using numerical methods. Further, by substituting $u=0.25,0.5$, and 0.75 in Equation (9), the first three quartiles can be derived by solving the following equations, respectively:

$\begin{array}{l}\Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{0.25}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{0.25}\right)-0.25\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)=0,\\ \Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{0.50}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{0.50}\right)-0.50\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)=0,\end{array}$

and

$\Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{0.75}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{0.75}\right)-0.75\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)=0.$
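In practice these equations are solved numerically. The Python sketch below finds the median of the MQLD by bisection on Equation (9); the incomplete gamma function is evaluated with a simple power series, which is adequate only for moderate values of $\theta y$, and all parameter values are illustrative:

```python
import math

def lower_inc_gamma(a, x):
    """Lower incomplete gamma via its power series (x > 0, moderate x)."""
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > 1e-14 * abs(total):
        n += 1
        term *= x / (a + n)
        total += term
    return math.exp(a * math.log(x) - x) * total

def mqld_cdf(y, theta, alpha, delta):
    """MQLD cdf, Equation (5), using Gamma(d, x) = Gamma(d) - gamma(d, x)."""
    g = math.gamma(delta)
    upper = g - lower_inc_gamma(delta, theta * y)
    return 1 - (g * alpha**3 * math.exp(-theta * y) + upper) / ((alpha**3 + 1) * g)

def mqld_quantile(u, theta, alpha, delta, lo=1e-9, hi=200.0):
    """Solve F(y_u) = u (Equation (9)) by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mqld_cdf(mid, theta, alpha, delta) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta, alpha, delta = 1.0, 1.5, 2.0
med = mqld_quantile(0.5, theta, alpha, delta)
assert abs(mqld_cdf(med, theta, alpha, delta) - 0.5) < 1e-9
```

Any bracketing root-finder works here; bisection is used only because it needs no derivative information.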

3.4. Distribution of Order Statistics

Linear combinations of order statistics can be used to estimate the unknown parameters of a distribution. Let ${Y}_{1},{Y}_{2},\cdots ,{Y}_{n}$ be n independent random variables from the MQLD and ${Y}_{1:n}\le {Y}_{2:n}\le \cdots \le {Y}_{n:n}$ be the corresponding order statistics. Then, the pdf and cdf of ${Y}_{k:n}$ are given as:

${f}_{{Y}_{k:n}}\left(y\right)=\frac{1}{B\left(k,n-k+1\right)}{f}_{Y}\left(y\right){\left({F}_{Y}\left(y\right)\right)}^{k-1}{\left(1-{F}_{Y}\left(y\right)\right)}^{n-k},$ (10)

and

${F}_{{Y}_{k:n}}\left(y\right)=\underset{j=k}{\overset{n}{\sum }}\left(\begin{array}{c}n\\ j\end{array}\right){\left({F}_{Y}\left(y\right)\right)}^{j}{\left(1-{F}_{Y}\left(y\right)\right)}^{n-j};\text{ }k=1,2,\cdots ,n,$ (11)

respectively. By substituting ${f}_{Y}\left(y\right)$ and ${F}_{Y}\left(y\right)$ of MQLD in Equations (10) and (11), the pdf and cdf of ${Y}_{k:n}$ for MQLD are obtained as:

$\begin{array}{l}{f}_{{Y}_{k:n}}\left(y\right)=\frac{1}{B\left(k,n-k+1\right)}\left(\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\\ \text{\hspace{0.17em}}×{\left(1-\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{k-1}{\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{n-k}\right)\\ =\frac{1}{B\left(k,n-k+1\right)}\left(\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\\ \text{\hspace{0.17em}}×\underset{i=0}{\overset{k-1}{\sum }}{\left(-1\right)}^{i}\left(\begin{array}{c}k-1\\ i\end{array}\right){\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{i}\frac{1}{{\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)\right)}^{n-k}}\\ \text{\hspace{0.17em}}×\underset{j=0}{\overset{n-k}{\sum }}\left(\begin{array}{c}n-k\\ j\end{array}\right){\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\right)}^{j}{\left(\Gamma \left(\delta ,\theta y\right)\right)}^{n-k-j}\right)\end{array}$

$\begin{array}{l}=\frac{\theta {\text{e}}^{-\theta y}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)}{B\left(k,n-k+1\right){\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)\right)}^{n-k+1}}\underset{i=0}{\overset{k-1}{\sum }}\text{ }\text{ }\underset{j=0}{\overset{n-k}{\sum }}\text{ }\text{ }\underset{m=0}{\overset{i}{\sum }}\left(\begin{array}{c}k-1\\ i\end{array}\right)\left(\begin{array}{c}n-k\\ j\end{array}\right)\left(\begin{array}{c}i\\ m\end{array}\right)\\ \text{\hspace{0.17em}}×{\left(-1\right)}^{i}\frac{{\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\right)}^{m+j}{\left(\Gamma \left(\delta ,\theta y\right)\right)}^{n+i-m-k-j}}{{\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)\right)}^{i}}\end{array}$

and

$\begin{array}{l}{F}_{{Y}_{k:n}}\left(y\right)=\underset{j=k}{\overset{n}{\sum }}\left(\begin{array}{c}n\\ j\end{array}\right){\left(1-\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{j}\\ \text{\hspace{0.17em}}×{\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{n-j}\\ =\underset{j=k}{\overset{n}{\sum }}\left(\begin{array}{c}n\\ j\end{array}\right)\underset{i=0}{\overset{j}{\sum }}\left(\begin{array}{c}j\\ i\end{array}\right){\left(-1\right)}^{i}{\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right)}^{i}\\ \text{\hspace{0.17em}}×\frac{1}{{\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)\right)}^{n-j}}\underset{m=0}{\overset{n-j}{\sum }}\left(\begin{array}{c}n-j\\ m\end{array}\right){\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\right)}^{m}{\left(\Gamma \left(\delta ,\theta y\right)\right)}^{n-j-m}\\ =\underset{j=k}{\overset{n}{\sum }}\text{ }\text{ }\underset{i=0}{\overset{j}{\sum }}\text{ }\text{ }\underset{m=0}{\overset{n-j}{\sum }}\text{ }\text{ }\underset{p=0}{\overset{i}{\sum }}\left(\begin{array}{c}n\\ j\end{array}\right)\left(\begin{array}{c}j\\ i\end{array}\right)\left(\begin{array}{c}n-j\\ m\end{array}\right)\left(\begin{array}{c}i\\ p\end{array}\right)\text{\hspace{0.17em}}×{\left(-1\right)}^{i}\frac{{\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\right)}^{m+p}{\left(\Gamma \left(\delta ,\theta y\right)\right)}^{n+i-p-m-j}}{{\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)\right)}^{n+i-j}},\end{array}$

respectively.

4. Reliability, Inequality and Entropy Measures

In this section, we derive and study some important reliability measures of the MQLD, namely the survival function/reliability function $S\left(y\right)$, hazard rate function/failure rate function $h\left(y\right)$, reversed hazard rate function $r\left(y\right)$, cumulative hazard rate function $H\left(y\right)$, and mean residual life function $m\left(y\right)$ ; the inequality measures, namely the Lorenz curve $L\left(F\left(y\right)\right)$ and Bonferroni curve $B\left(F\left(y\right)\right)$ ; and the Rényi entropy measure.

4.1. Survival and Hazard Rate Functions

The survival function and hazard rate function are crucial for specifying a survival distribution. The survival function is the probability of surviving beyond a point y. The survival function of the MQLD is defined as:

$S\left(y\right)=p\left(Y>y\right)=1-F\left(y\right)=\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}$. (12)

Note that, $S\left(0\right)=1$ and ${\mathrm{lim}}_{y\to \infty }S\left(y\right)=0$.

The instantaneous failure rate is described by the hazard rate function (hrf). The hrf of MQLD is given by:

$\begin{array}{c}h\left(y\right)=\underset{\Delta y\to 0}{\mathrm{lim}}\frac{P\left(y<Y\le y+\Delta y|Y>y\right)}{\Delta y}=\frac{f\left(y\right)}{S\left(y\right)}\\ =\frac{\theta {\text{e}}^{-\theta y}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)}{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}.\end{array}$ (13)

Note that, $h\left(0\right)=\frac{\theta {\alpha }^{3}}{{\alpha }^{3}+1}=f\left(0\right)$ and ${\mathrm{lim}}_{y\to \infty }h\left(y\right)=\theta$.

Figure 4 illustrates the possible patterns of the hrf of MQLD at different parameter values. The results indicate that MQLD has the capability to model the monotonic increasing and decreasing, constant, and bathtub failure rate shapes while QLD has only increasing failure rate shape.
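The boundary values noted above can be confirmed numerically. A Python sketch (illustrative parameters; the incomplete gamma series used here is adequate only for moderate $\theta y$) checks $h\left(0\right)=\theta {\alpha }^{3}/\left({\alpha }^{3}+1\right)$ and the approach of $h\left(y\right)$ to $\theta$:

```python
import math

def upper_inc_gamma(a, x):
    """Gamma(a, x) = Gamma(a) - gamma(a, x), with gamma(a, x) from its
    power series (adequate for moderate x only)."""
    if x <= 0:
        return math.gamma(a)
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > 1e-14 * abs(total):
        n += 1
        term *= x / (a + n)
        total += term
    return math.gamma(a) - math.exp(a * math.log(x) - x) * total

def mqld_hazard(y, theta, alpha, delta):
    """MQLD hazard rate, Equation (13)."""
    g = math.gamma(delta)
    num = theta * math.exp(-theta * y) * (g * alpha**3 + (theta * y) ** (delta - 1))
    den = g * alpha**3 * math.exp(-theta * y) + upper_inc_gamma(delta, theta * y)
    return num / den

theta, alpha, delta = 1.0, 1.5, 2.0            # delta > 1 so h(0) is finite
assert abs(mqld_hazard(0.0, theta, alpha, delta)
           - theta * alpha**3 / (alpha**3 + 1)) < 1e-12   # h(0) = f(0)
assert abs(mqld_hazard(30.0, theta, alpha, delta) - theta) < 0.05  # h(y) -> theta
```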

The reversed hazard rate function of MQLD is defined as:

$\begin{array}{c}r\left(y\right)=\underset{\Delta y\to 0}{\mathrm{lim}}\frac{P\left(y-\Delta y<Y\le y|Y\le y\right)}{\Delta y}=\frac{f\left(y\right)}{F\left(y\right)}\\ =\frac{\theta {\text{e}}^{-\theta y}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)-\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)\right)},\end{array}$ (14)

and the corresponding cumulative hazard rate function that represents the total number of failures over an interval of time is defined for MQLD as:

$H\left(y\right)=-\mathrm{log}\left[S\left(y\right)\right]=-\mathrm{log}\left(\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\right).$ (15)

Figure 4. The hazard rate function of MQLD at different parameter values of $\theta$, $\alpha$, and $\delta$.

This is a monotonic increasing function and satisfies $H\left(0\right)=0$ and ${\mathrm{lim}}_{y\to \infty }H\left(y\right)=\infty$.

4.2. Mean Residual Life Function

The mean residual life function represents the expected additional lifetime of a component that has survived up to time $y>0$; it is an important characteristic in reliability studies. The mean residual life function, say $m\left(y\right)$, is defined as: $m\left(y\right)=E\left(Y-y|Y>y\right)=\frac{1}{1-F\left(y\right)}{\int }_{y}^{\infty }\text{ }\text{ }tf\left(t\right)\text{d}t-y$. The following proposition gives $m\left(y\right)$ for the MQLD.

Proposition 3. The mean residual life function of MQLD is given by:

$m\left(y\right)=\frac{{\text{e}}^{-\theta y}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta }\right)+\Gamma \left(\delta ,\theta y\right)\left(\delta -\theta y\right)}{\theta \left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)\right)}.$ (16)

Proof. $m\left(y\right)=\frac{1}{1-F\left(y\right)}{\int }_{y}^{\infty }\text{ }\text{ }tf\left(t\right)\text{d}t-y$ and note that,

$\begin{array}{c}{\int }_{y}^{\infty }\text{ }\text{ }tf\left(t\right)\text{d}t={\int }_{y}^{\infty }\text{ }\text{ }t\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\text{d}t\\ =\frac{\theta }{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}{\int }_{y}^{\infty }\text{ }\text{ }t{\text{e}}^{-\theta t}\text{d}t+{\theta }^{\delta -1}{\int }_{y}^{\infty }\text{ }\text{ }{t}^{\delta }{\text{e}}^{-\theta t}\text{d}t\right)\\ =\frac{1}{\theta \left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\left(1+\theta y\right)+\Gamma \left(\delta +1,\theta y\right)\right)\end{array}$

Therefore,

$\begin{array}{c}m\left(y\right)=\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\left(1+\theta y\right)+\Gamma \left(\delta +1,\theta y\right)}{\theta \left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)\right)}-y\\ =\frac{{\text{e}}^{-\theta y}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta }\right)+\Gamma \left(\delta ,\theta y\right)\left(\delta -\theta y\right)}{\theta \left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)\right)}.\end{array}$ □

Then, Equation (16) satisfies: $m\left(y\right)\ge 0$, $m\left(0\right)=\frac{{\alpha }^{3}+\delta }{\theta \left({\alpha }^{3}+1\right)}=\mu$, and ${\mathrm{lim}}_{y\to \infty }m\left(y\right)=\frac{1}{\theta }$.
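As a numerical sanity check (a sketch, not part of the original derivation; it assumes SciPy, where $\Gamma \left(\delta ,x\right)$ is `gamma(delta) * gammaincc(delta, x)`), Equation (16) can be evaluated directly, and the boundary values $m\left(0\right)=\mu$ and ${\mathrm{lim}}_{y\to \infty }m\left(y\right)=1/\theta$ verified:

```python
import numpy as np
from scipy.special import gamma, gammaincc

def mqld_mrl(y, theta, alpha, delta):
    """Mean residual life m(y) of the MQLD, Equation (16)."""
    a3 = alpha**3
    # Upper incomplete gamma: Gamma(delta, theta*y) = gamma(delta) * gammaincc(delta, theta*y)
    upper = gamma(delta) * gammaincc(delta, theta * y)
    num = np.exp(-theta * y) * (gamma(delta) * a3 + (theta * y)**delta) \
        + upper * (delta - theta * y)
    den = theta * (gamma(delta) * a3 * np.exp(-theta * y) + upper)
    return num / den

theta, alpha, delta = 0.75, 1.5, 3.25
mu = (alpha**3 + delta) / (theta * (alpha**3 + 1))
print(mqld_mrl(1e-9, theta, alpha, delta))  # ~ mu at the origin
print(mqld_mrl(60.0, theta, alpha, delta))  # approaches 1/theta for large y
```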

4.3. Lorenz and Bonferroni Curves

The Lorenz and Bonferroni curves are used to measure income inequality. They are widely used in reliability, insurance, economics, and medicine. The Lorenz curve, say $L\left(F\left(y\right)\right)$, is defined as: $L\left(F\left(y\right)\right)=1-\frac{{\int }_{y}^{\infty }\text{ }\text{ }tf\left(t\right)\text{d}t}{\mu }$, and the Bonferroni curve, say $B\left(F\left(y\right)\right)$, is defined as $B\left(F\left(y\right)\right)=\frac{L\left(F\left(y\right)\right)}{F\left(y\right)}$. By substituting the value of the integral ${\int }_{y}^{\infty }\text{ }\text{ }tf\left(t\right)\text{d}t$ from the previous proposition’s proof, the $L\left(F\left(y\right)\right)$ for MQLD can be obtained as:

$\begin{array}{l}L\left(F\left(y\right)\right)\\ =1-\frac{\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\left(1+\theta y\right)+\Gamma \left(\delta +1,\theta y\right)}{\left({\alpha }^{3}+\delta \right)\Gamma \left(\delta \right)}.\end{array}$ (17)

Then, the $B\left(F\left(y\right)\right)$ for the MQLD is given by:

$\begin{array}{l}B\left(F\left(y\right)\right)\\ =\frac{\left({\alpha }^{3}+1\right)\left(\left({\alpha }^{3}+\delta \right)\Gamma \left(\delta \right)-\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}\left(1+\theta y\right)+\Gamma \left(\delta +1,\theta y\right)\right)\right)}{\left({\alpha }^{3}+\delta \right)\left(\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)-\left(\Gamma \left(\delta \right){\alpha }^{3}{\text{e}}^{-\theta y}+\Gamma \left(\delta ,\theta y\right)\right)\right)}\end{array}$ (18)
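Both curves can also be obtained by direct numerical integration of the MQLD density, which provides an independent check on Equations (17) and (18). A sketch (assuming SciPy; the MQLD cdf is written with the regularized upper incomplete gamma `gammaincc`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

def mqld_pdf(t, theta, alpha, delta):
    a3 = alpha**3
    return theta * np.exp(-theta * t) / ((a3 + 1) * gamma(delta)) \
        * (gamma(delta) * a3 + (theta * t)**(delta - 1))

def mqld_cdf(y, theta, alpha, delta):
    a3 = alpha**3
    return 1 - (a3 * np.exp(-theta * y) + gammaincc(delta, theta * y)) / (a3 + 1)

def lorenz(y, theta, alpha, delta):
    # L(F(y)) = 1 - (1/mu) * int_y^inf t f(t) dt, evaluated by quadrature
    mu = (alpha**3 + delta) / (theta * (alpha**3 + 1))
    tail, _ = quad(lambda t: t * mqld_pdf(t, theta, alpha, delta), y, np.inf)
    return 1 - tail / mu

def bonferroni(y, theta, alpha, delta):
    return lorenz(y, theta, alpha, delta) / mqld_cdf(y, theta, alpha, delta)

pars = (0.75, 1.5, 3.25)
print(lorenz(2.0, *pars), bonferroni(2.0, *pars))
```

Since the Lorenz curve lies below the diagonal, $0\le L\left(F\left(y\right)\right)\le F\left(y\right)$ and hence $0\le B\left(F\left(y\right)\right)\le 1$ for all $y>0$.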

4.4. Renyi Entropy

The entropy measure quantifies the variation of uncertainty of a distribution and is widely used in information theory. The Renyi entropy, say ${H}_{R}\left(\gamma \right)$, is a popular uncertainty measure and an extension of the Shannon entropy [13]. The ${H}_{R}\left(\gamma \right)$ is defined as: ${H}_{R}\left(\gamma \right)=\frac{1}{1-\gamma }\mathrm{ln}{\int }_{0}^{\infty }{\left(f\left(y\right)\right)}^{\gamma }\text{d}y$. The following proposition derives the Renyi entropy for MQLD:

Proposition 4. The Renyi entropy for the MQLD is obtained as:

${H}_{R}\left(\gamma \right)=\frac{1}{1-\gamma }\mathrm{ln}\left(\frac{{\theta }^{\gamma -1}{\alpha }^{3\gamma }}{{\left({\alpha }^{3}+1\right)}^{\gamma }\gamma }\underset{i=0}{\overset{\gamma }{\sum }}\left(\begin{array}{c}\gamma \\ i\end{array}\right)\frac{i\left(\delta -1\right)\Gamma \left(i\left(\delta -1\right)\right)}{{\left(\Gamma \left(\delta \right){\alpha }^{3}\right)}^{i}{\gamma }^{i\left(\delta -1\right)}}\right)$ (19)

Proof. $\begin{array}{c}{H}_{R}\left(\gamma \right)=\frac{1}{1-\gamma }\mathrm{ln}{\int }_{0}^{\infty }{\left(f\left(y\right)\right)}^{\gamma }\text{d}y\\ =\frac{1}{1-\gamma }\mathrm{ln}{\int }_{0}^{\infty }{\left(\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\right)}^{\gamma }\text{d}y\\ =\frac{1}{1-\gamma }\mathrm{ln}{\int }_{0}^{\infty }\frac{{\theta }^{\gamma }}{{\left({\alpha }^{3}+1\right)}^{\gamma }{\left(\Gamma \left(\delta \right)\right)}^{\gamma }}{\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)}^{\gamma }{\text{e}}^{-\theta \gamma y}\text{d}y\\ =\frac{1}{1-\gamma }\mathrm{ln}{\int }_{0}^{\infty }\frac{{\theta }^{\gamma }{\left(\Gamma \left(\delta \right){\alpha }^{3}\right)}^{\gamma }}{{\left({\alpha }^{3}+1\right)}^{\gamma }{\left(\Gamma \left(\delta \right)\right)}^{\gamma }}{\sum }_{i=0}^{\gamma }\left(\begin{array}{c}\gamma \\ i\end{array}\right)\frac{{\left(\theta y\right)}^{i\left(\delta -1\right)}}{{\left(\Gamma \left(\delta \right){\alpha }^{3}\right)}^{i}}{\text{e}}^{-\theta \gamma y}\text{d}y\\ =\frac{1}{1-\gamma }\mathrm{ln}\left(\frac{{\left(\theta {\alpha }^{3}\right)}^{\gamma }}{{\left({\alpha }^{3}+1\right)}^{\gamma }}{\sum }_{i=0}^{\gamma }\left(\begin{array}{c}\gamma \\ i\end{array}\right)\frac{1}{{\left(\Gamma \left(\delta \right){\alpha }^{3}\right)}^{i}}{\int }_{0}^{\infty }{\left(\theta y\right)}^{i\left(\delta -1\right)}{\text{e}}^{-\theta \gamma y}\text{d}y\right)\\ =\frac{1}{1-\gamma }\mathrm{ln}\left(\frac{{\theta }^{\gamma -1}{\alpha }^{3\gamma }}{{\left({\alpha }^{3}+1\right)}^{\gamma }\gamma }{\sum }_{i=0}^{\gamma }\left(\begin{array}{c}\gamma \\ i\end{array}\right)\frac{i\left(\delta -1\right)\Gamma \left(i\left(\delta -1\right)\right)}{{\left(\Gamma \left(\delta \right){\alpha }^{3}\right)}^{i}{\gamma }^{i\left(\delta -1\right)}}\right)\end{array}$ □
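The summand in Equation (19) is written as $i\left(\delta -1\right)\Gamma \left(i\left(\delta -1\right)\right)$, which equals $\Gamma \left(i\left(\delta -1\right)+1\right)$; the sketch below uses the latter form, which is also defined at $i=0$. It cross-checks the closed form against direct quadrature of ${\int }_{0}^{\infty }{\left(f\left(y\right)\right)}^{\gamma }\text{d}y$ for integer orders (an assumption here: the binomial expansion in the proof requires integer $\gamma$; SciPy is assumed):

```python
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.special import gamma

def mqld_pdf(y, theta, alpha, delta):
    a3 = alpha**3
    return theta * np.exp(-theta * y) / ((a3 + 1) * gamma(delta)) \
        * (gamma(delta) * a3 + (theta * y)**(delta - 1))

def renyi_numeric(g, theta, alpha, delta):
    """Renyi entropy by quadrature of the gamma-th power of the density."""
    val, _ = quad(lambda y: mqld_pdf(y, theta, alpha, delta)**g, 0, np.inf)
    return np.log(val) / (1 - g)

def renyi_closed(g, theta, alpha, delta):
    """Equation (19) for integer g, with i(d-1)Gamma(i(d-1)) written as Gamma(i(d-1)+1)."""
    a3 = alpha**3
    s = sum(comb(g, i) * gamma(i * (delta - 1) + 1)
            / ((gamma(delta) * a3)**i * g**(i * (delta - 1))) for i in range(g + 1))
    return np.log(theta**(g - 1) * alpha**(3 * g) / ((a3 + 1)**g * g) * s) / (1 - g)

print(renyi_numeric(2, 0.75, 1.5, 3.25), renyi_closed(2, 0.75, 1.5, 3.25))
```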

5. The Size-Biased of MQLD

The weighted distributions are used to record observations with unequal chances. The applications of weighted distributions in reliability, medical, and ecological sciences were studied by Patil and Rao (1978) [14]. The density function of the weighted random variable ${Y}_{w}$ of MQLD is defined as:

${f}_{{Y}_{w}}\left(y\right)=\frac{w\left(y\right)f\left(y\right)}{E\left(w\left(y\right)\right)};y>0$ (20)

where $E\left(w\left(y\right)\right)={\int }_{0}^{\infty }\text{ }\text{ }w\left(y\right)f\left(y\right)\text{d}y$, with $0<E\left(w\left(y\right)\right)<\infty$.

When $w\left(y\right)={y}^{\gamma },\gamma >0$, the resulting distribution is called size-biased version of MQLD with order $\gamma$, and is defined as:

${f}_{{Y}_{s}}^{\gamma }\left(y\right)=\frac{{y}^{\gamma }f\left(y\right)}{E\left({y}^{\gamma }\right)};y>0,\gamma >0$,

where ${Y}_{s}$ is the respective random variable. The following proposition gives the density function for the size-biased version of MQLD:

Proposition 5. The density function for the $\gamma$th order size-biased form of MQLD is derived as:

${f}_{{Y}_{s}}^{\gamma }\left(y\right)=\frac{{y}^{\gamma }{\theta }^{\gamma +1}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right){\text{e}}^{-\theta y}}{\Gamma \left(\delta \right){\alpha }^{3}\Gamma \left(\gamma +1\right)+\Gamma \left(\gamma +\delta \right)};\text{\hspace{0.17em}}y>0,\gamma >-1,\delta >0,\gamma +\delta >0$ (21)

Proof. ${f}_{{Y}_{s}}^{\gamma }\left(y\right)=\frac{{y}^{\gamma }f\left(y\right)}{E\left({y}^{\gamma }\right)}$.

Note that

$\begin{array}{c}E\left({y}^{\gamma }\right)={\int }_{0}^{\infty }\text{ }\text{ }{y}^{\gamma }f\left(y\right)\text{d}y\\ ={\int }_{0}^{\infty }\text{ }\text{ }{y}^{\gamma }\frac{\theta {\text{e}}^{-\theta y}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right)\text{d}y\\ =\frac{\theta }{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}{\int }_{0}^{\infty }\text{ }\text{ }{y}^{\gamma }{\text{e}}^{-\theta y}\text{d}y+{\theta }^{\delta -1}{\int }_{0}^{\infty }\text{ }\text{ }{y}^{\gamma +\delta -1}{\text{e}}^{-\theta y}\text{d}y\right)\\ =\frac{1}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right){\theta }^{\gamma }}\left(\Gamma \left(\delta \right){\alpha }^{3}\Gamma \left(\gamma +1\right)+\Gamma \left(\gamma +\delta \right)\right).\end{array}$

Therefore, ${f}_{{Y}_{s}}^{\gamma }\left(y\right)=\frac{{y}^{\gamma }{\theta }^{\gamma +1}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right){\text{e}}^{-\theta y}}{\Gamma \left(\delta \right){\alpha }^{3}\Gamma \left(\gamma +1\right)+\Gamma \left(\gamma +\delta \right)}$. □

The length-biased density function can be obtained by substituting $\gamma =1$ in Equation (21) for MQLD and is given as:

${f}_{{Y}_{s}}^{1}\left(y\right)=\frac{y{\theta }^{2}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta y\right)}^{\delta -1}\right){\text{e}}^{-\theta y}}{\Gamma \left(\delta \right)\left({\alpha }^{3}+\delta \right)};\text{\hspace{0.17em}}\text{\hspace{0.17em}}y>0,\delta >0,$ (22)

and corresponding cdf is given as:

$\begin{array}{l}{F}_{{Y}_{s}}^{1}\left(y\right)=\frac{1}{\Gamma \left(\delta \right)\left({\alpha }^{3}+\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}\left(1-{\text{e}}^{-\theta y}\left(1+\theta y\right)\right)+\gamma \left(\delta +1,\theta y\right)\right);\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y>0,\delta >0,\theta >0,\end{array}$ (23)

where $\gamma \left(a,b\right)={\int }_{0}^{b}\text{ }\text{ }{\lambda }^{a-1}{\text{e}}^{-\lambda }\text{d}\lambda$ is the lower incomplete gamma function.

The mean and variance of length-biased MQLD are: ${\mu }_{s}=\frac{2{\alpha }^{3}+\delta \left(\delta +1\right)}{\left({\alpha }^{3}+\delta \right)\theta }$ and ${\sigma }_{s}^{2}=\frac{{\delta }^{2}\left(1+\delta \right)+{\alpha }^{3}\left(2{\alpha }^{3}+\delta \left(4-\delta \left(1-\delta \right)\right)\right)}{{\left({\alpha }^{3}+\delta \right)}^{2}{\theta }^{2}}$, respectively.
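These two moments can be verified against the length-biased density in Equation (22) by quadrature (a sketch assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def lb_pdf(y, theta, alpha, delta):
    """Length-biased MQLD density, Equation (22)."""
    a3 = alpha**3
    return y * theta**2 * (gamma(delta) * a3 + (theta * y)**(delta - 1)) \
        * np.exp(-theta * y) / (gamma(delta) * (a3 + delta))

theta, alpha, delta = 0.75, 1.5, 3.25
a3 = alpha**3
# First two raw moments of the length-biased distribution by quadrature
m1, _ = quad(lambda y: y * lb_pdf(y, theta, alpha, delta), 0, np.inf)
m2, _ = quad(lambda y: y**2 * lb_pdf(y, theta, alpha, delta), 0, np.inf)
# Closed-form mean and variance stated in the text
mean_s = (2 * a3 + delta * (delta + 1)) / ((a3 + delta) * theta)
var_s = (delta**2 * (1 + delta) + a3 * (2 * a3 + delta * (4 - delta * (1 - delta)))) \
    / ((a3 + delta)**2 * theta**2)
print(m1, mean_s)           # quadrature vs closed-form mean
print(m2 - m1**2, var_s)    # quadrature vs closed-form variance
```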

6. Parameter Estimation

This section introduces three estimation methods for the unknown parameters of MQLD: the method of moment estimation, the maximum likelihood estimation method, and the weighted least square estimation method.

6.1. Method of Moment Estimation (MME)

The method of moment estimators of $\theta ,\alpha$, and $\delta$, abbreviated as ${\stackrel{^}{\theta }}_{\text{MME}},{\stackrel{^}{\alpha }}_{\text{MME}}$, and ${\stackrel{^}{\delta }}_{\text{MME}}$ can be derived by equating the raw-moments, say ${{\mu }^{\prime }}_{r}$, to the sample moments, say $\frac{{\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i}^{r}}{n},r=1,2,3$. Then, we need to solve the following system of equations:

$n\left({\alpha }^{3}+\delta \right)-\left({\alpha }^{3}+1\right)\theta {\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i}=0,\text{ }n\left(2{\alpha }^{3}+\delta \left(\delta +1\right)\right)-\left({\alpha }^{3}+1\right){\theta }^{2}{\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i}^{2}=0,$ and

$n\left(6{\alpha }^{3}+\delta \left(\delta +1\right)\left(\delta +2\right)\right)-\left({\alpha }^{3}+1\right){\theta }^{3}{\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i}^{3}=0$.

Since these simultaneous equations do not have closed-form solutions, numerical methods such as Newton-Raphson can be employed to find their roots.
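A sketch of this numerical solution (assuming SciPy's `root`, which uses a modified Powell hybrid method rather than pure Newton-Raphson; feeding the theoretical moments of a known MQLD should recover the parameters):

```python
import numpy as np
from scipy.optimize import root

def moment_equations(params, m1, m2, m3):
    """The three MME equations (divided by n), in terms of sample raw moments m1, m2, m3."""
    theta, alpha, delta = params
    a3 = alpha**3
    return [(a3 + delta) - (a3 + 1) * theta * m1,
            (2 * a3 + delta * (delta + 1)) - (a3 + 1) * theta**2 * m2,
            (6 * a3 + delta * (delta + 1) * (delta + 2)) - (a3 + 1) * theta**3 * m3]

# Consistency check with exact theoretical moments of MQLD(0.75, 1.5, 3.25)
theta, alpha, delta = 0.75, 1.5, 3.25
a3 = alpha**3
m1 = (a3 + delta) / ((a3 + 1) * theta)
m2 = (2 * a3 + delta * (delta + 1)) / ((a3 + 1) * theta**2)
m3 = (6 * a3 + delta * (delta + 1) * (delta + 2)) / ((a3 + 1) * theta**3)
sol = root(moment_equations, x0=[0.7, 1.4, 3.0], args=(m1, m2, m3))
print(sol.x)  # ~ (0.75, 1.5, 3.25)
```

In practice, `m1`, `m2`, and `m3` are replaced by the sample moments $\frac{1}{n}{\sum }_{i=1}^{n}{y}_{i}^{r}$, $r=1,2,3$.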

6.2. Maximum Likelihood Estimation (MLE)

The MLE method is the most commonly employed due to its desirable asymptotic properties. Let ${y}_{1},{y}_{2},\cdots ,{y}_{n}$ be the observed values from MQLD with the parameters $\theta ,\alpha$, and $\delta$. The likelihood contribution of the ith observation ${y}_{i}$ can be written as:

$L\left(\theta ,\alpha ,\delta |{y}_{i}\right)=\frac{\theta {\text{e}}^{-\theta {y}_{i}}}{\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}\right),$

and the log likelihood function is given by:

$\begin{array}{l}\mathrm{log}\left(L\left(\theta ,\alpha ,\delta |y\right)\right)=l\left(\theta ,\alpha ,\delta |y\right)\\ =n\mathrm{log}\left(\theta \right)-\theta {\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i}+{\sum }_{i=1}^{n}\mathrm{log}\left(\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-n\mathrm{log}\left({\alpha }^{3}+1\right)-n\mathrm{log}\left(\Gamma \left(\delta \right)\right)\end{array}$.

By solving the expressions $\frac{\partial l\left(\theta ,\alpha ,\delta |y\right)}{\partial \theta }=0$, $\frac{\partial l\left(\theta ,\alpha ,\delta |y\right)}{\partial \alpha }=0$, and $\frac{\partial l\left(\theta ,\alpha ,\delta |y\right)}{\partial \delta }=0$, the maximum likelihood estimators of $\theta ,\alpha$, and $\delta$, abbreviated as ${\stackrel{^}{\theta }}_{\text{MLE}},{\stackrel{^}{\alpha }}_{\text{MLE}}$, and ${\stackrel{^}{\delta }}_{\text{MLE}}$ can be obtained. The system of the equations are:

$\frac{n}{\theta }+{\sum }_{i=1}^{n}\frac{\left(\delta -1\right){\theta }^{\delta -2}{y}_{i}^{\delta -1}}{\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}}={\sum }_{i=1}^{n}\text{ }\text{ }{y}_{i},\text{ }{\sum }_{i=1}^{n}\frac{3{\alpha }^{2}\Gamma \left(\delta \right)}{\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}}=\frac{3n{\alpha }^{2}}{{\alpha }^{3}+1},$

and ${\sum }_{i=1}^{n}\frac{\Gamma \left(\delta \right)\psi \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}\mathrm{log}\left(\theta {y}_{i}\right)}{\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}}=n\psi \left(\delta \right)$, where $\psi \left(a\right)=\frac{\partial }{\partial a}\mathrm{log}\Gamma \left(a\right)=\frac{{\Gamma }^{\prime }\left(a\right)}{\Gamma \left(a\right)}$ is the digamma function.
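These score equations must be solved numerically; equivalently, the log-likelihood can be maximized directly. A sketch (assuming SciPy; for test data it uses the fact, obtained by expanding the density, that MQLD is an exponential($\theta$)/gamma($\delta ,\theta$) mixture with weight ${\alpha }^{3}/\left({\alpha }^{3}+1\right)$):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def negloglik(params, y):
    """Negative log-likelihood of the MQLD for data y."""
    theta, alpha, delta = params
    if theta <= 0 or alpha <= 0 or delta <= 0:
        return np.inf
    a3, n = alpha**3, len(y)
    core = np.log(np.exp(gammaln(delta)) * a3 + (theta * y)**(delta - 1))
    return -(n * np.log(theta) - theta * y.sum() + core.sum()
             - n * np.log(a3 + 1) - n * gammaln(delta))

def rmqld(n, theta, alpha, delta, rng):
    """Sample via the mixture form: Exp(theta) w.p. a3/(a3+1), else Gamma(delta, scale 1/theta)."""
    a3 = alpha**3
    expo = rng.exponential(1 / theta, size=n)
    gam = rng.gamma(delta, 1 / theta, size=n)
    return np.where(rng.uniform(size=n) < a3 / (a3 + 1), expo, gam)

rng = np.random.default_rng(1)
y = rmqld(3000, 0.75, 1.5, 3.25, rng)
fit = minimize(negloglik, x0=[0.75, 1.5, 3.25], args=(y,), method='Nelder-Mead')
print(fit.x, fit.fun)  # MLEs and the minimized negative log-likelihood
```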

The asymptotic confidence intervals for the parameters $\theta ,\alpha$, and $\delta$ are derived from asymptotic theory. The estimators are asymptotically trivariate normal with mean $\left(\theta ,\alpha ,\delta \right)$ and covariance matrix given by the inverse of the observed information matrix:

$I\left(\theta ,\alpha ,\delta \right)=\left(\begin{array}{ccc}-\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\theta }^{2}}& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \theta \partial \alpha }& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \theta \partial \delta }\\ -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \alpha \partial \theta }& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\alpha }^{2}}& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \alpha \partial \delta }\\ -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \delta \partial \theta }& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \delta \partial \alpha }& -\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\delta }^{2}}\end{array}\right)$

at $\theta ={\stackrel{^}{\theta }}_{\text{MLE}},\alpha ={\stackrel{^}{\alpha }}_{\text{MLE}}$, and $\delta ={\stackrel{^}{\delta }}_{\text{MLE}}$. That is, $\left({\stackrel{^}{\theta }}_{\text{MLE}},{\stackrel{^}{\alpha }}_{\text{MLE}},{\stackrel{^}{\delta }}_{\text{MLE}}\right)\sim {N}_{3}\left(\left(\theta ,\alpha ,\delta \right),{I}^{-1}\left(\theta ,\alpha ,\delta \right)\right)$. The elements of the observed information matrix are given in Appendix.

Therefore, the $\left(1-a\right)100%$ confidence intervals for the parameters $\theta ,\alpha$, and $\delta$ are given by

${\stackrel{^}{\theta }}_{\text{MLE}}±{z}_{a/2}\sqrt{var\left({\stackrel{^}{\theta }}_{\text{MLE}}\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{\alpha }}_{\text{MLE}}±{z}_{a/2}\sqrt{var\left({\stackrel{^}{\alpha }}_{\text{MLE}}\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{\delta }}_{\text{MLE}}±{z}_{a/2}\sqrt{var\left({\stackrel{^}{\delta }}_{\text{MLE}}\right)},$

where $var\left({\stackrel{^}{\theta }}_{\text{MLE}}\right),var\left({\stackrel{^}{\alpha }}_{\text{MLE}}\right)$, and $var\left({\stackrel{^}{\delta }}_{\text{MLE}}\right)$ are the variances of ${\stackrel{^}{\theta }}_{\text{MLE}},{\stackrel{^}{\alpha }}_{\text{MLE}}$, and ${\stackrel{^}{\delta }}_{\text{MLE}}$, respectively, obtained from the diagonal elements of ${I}^{-1}\left(\theta ,\alpha ,\delta \right)$, and ${z}_{a/2}$ is the standard normal critical value at significance level $a$.
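A sketch of computing these intervals (assuming SciPy; the observed information is approximated by a central-difference Hessian of the negative log-likelihood at the MLEs, and the demo data are simulated from the distribution's exponential/gamma mixture form):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(params, y):
    theta, alpha, delta = params
    if min(params) <= 0:
        return np.inf
    a3, n = alpha**3, len(y)
    core = np.log(np.exp(gammaln(delta)) * a3 + (theta * y)**(delta - 1))
    return -(n * np.log(theta) - theta * y.sum() + core.sum()
             - n * np.log(a3 + 1) - n * gammaln(delta))

def hessian(f, x, eps=1e-4):
    """Central-difference Hessian; equals the observed information when f is the neg. log-likelihood."""
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.zeros(k), np.zeros(k)
            ei[i], ej[j] = eps, eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps**2)
    return H

rng = np.random.default_rng(7)
a3 = 1.5**3
mix = rng.uniform(size=4000) < a3 / (a3 + 1)
y = np.where(mix, rng.exponential(1 / 0.75, size=4000), rng.gamma(3.25, 1 / 0.75, size=4000))
fit = minimize(negloglik, x0=[0.75, 1.5, 3.25], args=(y,), method='Nelder-Mead')
cov = np.linalg.inv(hessian(lambda p: negloglik(p, y), fit.x))  # I^{-1} at the MLEs
se = np.sqrt(np.diag(cov))
z = norm.ppf(0.975)  # 95% intervals, a = 0.05
for name, est, s in zip(['theta', 'alpha', 'delta'], fit.x, se):
    print(f'{name}: {est:.3f} +/- {z * s:.3f}')
```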

6.3. Weighted Least Square Estimation (WLE)

The weighted least square estimators of $\theta ,\alpha$, and $\delta$, abbreviated as ${\stackrel{^}{\theta }}_{\text{WLE}},{\stackrel{^}{\alpha }}_{\text{WLE}}$, and ${\stackrel{^}{\delta }}_{\text{WLE}}$, can be obtained by minimizing:

$W\left(\theta ,\alpha ,\delta \right)={\sum }_{i=1}^{n}\frac{{\left(n+1\right)}^{2}\left(n+2\right)}{i\left(n-i+1\right)}{\left({F}_{{Y}_{i:n}}\left(y\right)-\frac{i}{n+1}\right)}^{2}$ with respect to $\theta ,\alpha ,\delta$,

where ${F}_{{Y}_{i:n}}\left(y\right)$ is the cdf of the order statistic defined in section 3.4. Then, the estimators can be found by solving the non-linear equations:

${\sum }_{i=1}^{n}\frac{{\left(n+1\right)}^{2}\left(n+2\right)}{i\left(n-i+1\right)}\left({F}_{{Y}_{i:n}}\left(y\right)-\frac{i}{n+1}\right){F}_{{Y}_{i:n}}^{D\left(m\right)}\left(y\right)=0;\text{ }m=1,2,3,$

where

$\begin{array}{l}{F}_{{Y}_{i:n}}^{D\left(1\right)}\left(y\right)=\frac{\partial }{\partial \theta }{F}_{{Y}_{i:n}}\left(y\right),\text{ }{F}_{{Y}_{i:n}}^{D\left(2\right)}\left(y\right)=\frac{\partial }{\partial \alpha }{F}_{{Y}_{i:n}}\left(y\right),\text{ }\\ {F}_{{Y}_{i:n}}^{D\left(3\right)}\left(y\right)=\frac{\partial }{\partial \delta }{F}_{{Y}_{i:n}}\left(y\right).\end{array}$
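A sketch of the WLE criterion (assuming SciPy, minimizing $W$ directly rather than solving the derivative equations; as is standard in weighted least squares, ${F}_{{Y}_{i:n}}$ is taken here as the MQLD cdf evaluated at the ith order statistic):

```python
import numpy as np
from scipy.special import gammaincc
from scipy.optimize import minimize

def mqld_cdf(y, theta, alpha, delta):
    a3 = alpha**3
    return 1 - (a3 * np.exp(-theta * y) + gammaincc(delta, theta * y)) / (a3 + 1)

def wls_objective(params, y):
    """W(theta, alpha, delta) over the ordered sample."""
    theta, alpha, delta = params
    if min(params) <= 0:
        return np.inf
    ys = np.sort(y)
    n = len(ys)
    i = np.arange(1, n + 1)
    w = (n + 1)**2 * (n + 2) / (i * (n - i + 1))
    return np.sum(w * (mqld_cdf(ys, theta, alpha, delta) - i / (n + 1))**2)

# Demo data from the MQLD exponential/gamma mixture form
rng = np.random.default_rng(3)
a3 = 1.5**3
mix = rng.uniform(size=500) < a3 / (a3 + 1)
y = np.where(mix, rng.exponential(1 / 0.75, size=500), rng.gamma(3.25, 1 / 0.75, size=500))
fit = minimize(wls_objective, x0=[0.75, 1.5, 3.25], args=(y,), method='Nelder-Mead')
print(fit.x)
```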

7. Simulation Study

In this section, we examine the performance of the MME and MLE methods in estimating the unknown parameters of MQLD with respect to the sample size n. Further, a comparison study is performed among the MQLD, QLD, and LD based on the minimized $-2\mathrm{log}L$ value by using various simulated samples from MQLD. The following algorithm is used to generate random samples from MQLD:

Algorithm

1) Generate ${u}_{i}\sim uniform\left(0,1\right);i=1,2,\cdots$

2) Solve the non-linear equation for ${y}_{u}$ ; $\Gamma \left(\delta \right)\left(1+{\alpha }^{3}\left(1-{\text{e}}^{-\theta {y}_{u}}\right)\right)-\Gamma \left(\delta ,\theta {y}_{u}\right)-u\left({\alpha }^{3}+1\right)\Gamma \left(\delta \right)=0$.
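A sketch of this inversion algorithm (assuming SciPy's `brentq` as the nonlinear solver; the equation in step 2 is the cdf identity $F\left({y}_{u}\right)=u$):

```python
import numpy as np
from scipy.special import gamma, gammaincc
from scipy.optimize import brentq

def rmqld_inverse(n, theta, alpha, delta, seed=None):
    """Draw n MQLD variates by solving step 2's equation for each uniform draw."""
    rng = np.random.default_rng(seed)
    a3, G = alpha**3, gamma(delta)

    def eqn(y, u):
        # Gamma(delta, theta*y) = G * gammaincc(delta, theta*y)
        return G * (1 + a3 * (1 - np.exp(-theta * y))) \
            - G * gammaincc(delta, theta * y) - u * (a3 + 1) * G

    us = rng.uniform(1e-12, 1.0, size=n)
    # eqn is negative near 0 and positive for large y, so the bracket is valid
    return np.array([brentq(eqn, 1e-12, 1e4, args=(u,)) for u in us])

y = rmqld_inverse(4000, 0.75, 1.5, 3.25, seed=11)
mu = (1.5**3 + 3.25) / (0.75 * (1.5**3 + 1))
print(y.mean(), mu)  # sample mean vs theoretical mean
```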

7.1. Performance of MME and MLE Methods

The simulation study is designed to examine the performance of $\left({\stackrel{^}{\theta }}_{\text{MLE}},{\stackrel{^}{\alpha }}_{\text{MLE}},{\stackrel{^}{\delta }}_{\text{MLE}}\right)$ and $\left({\stackrel{^}{\theta }}_{\text{MME}},{\stackrel{^}{\alpha }}_{\text{MME}},{\stackrel{^}{\delta }}_{\text{MME}}\right)$ with respect to the sample size n as follows:

1) Generate 1000 samples of size n.

2) Compute the average biases and mean squared errors of the estimates ${\stackrel{^}{\theta }}_{i},{\stackrel{^}{\alpha }}_{i}$, and ${\stackrel{^}{\delta }}_{i}$ of the parameters $\theta ,\alpha$, and $\delta$ by using the equations:

a) The average biases are:

$\frac{\underset{i=1}{\overset{1000}{\sum }}\left({\stackrel{^}{\theta }}_{i}-\theta \right)}{1000},\text{ }\frac{\underset{i=1}{\overset{1000}{\sum }}\left({\stackrel{^}{\alpha }}_{i}-\alpha \right)}{1000},\text{ }\frac{\underset{i=1}{\overset{1000}{\sum }}\left({\stackrel{^}{\delta }}_{i}-\delta \right)}{1000}.$

b) The average MSEs are:

$\frac{\underset{i=1}{\overset{1000}{\sum }}{\left({\stackrel{^}{\theta }}_{i}-\theta \right)}^{2}}{1000},\text{ }\frac{\underset{i=1}{\overset{1000}{\sum }}{\left({\stackrel{^}{\alpha }}_{i}-\alpha \right)}^{2}}{1000},\text{ }\frac{\underset{i=1}{\overset{1000}{\sum }}{\left({\stackrel{^}{\delta }}_{i}-\delta \right)}^{2}}{1000}.$
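These replicate summaries can be written as a small helper (a hypothetical utility matching the formulas above, where the index runs over the 1000 replicate estimates):

```python
import numpy as np

def bias_mse(estimates, true_value):
    """Average bias and mean squared error of replicate estimates against the true value."""
    est = np.asarray(estimates, dtype=float)
    return est.mean() - true_value, ((est - true_value)**2).mean()

print(bias_mse([1.0, 2.0, 3.0], 2.0))  # bias 0.0, MSE 2/3
```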

Table 3 and Table 4 present the performance of the MME and MLE methods for the parameter combinations $\left(\theta =0.25,\alpha =0.75,\delta =2.25\right)$, which represents the unimodal case, and $\left(\theta =0.75,\alpha =1.5,\delta =3.25\right)$, which represents the monotonically decreasing case, respectively. They summarize the average MMEs, MLEs, biases, and MSEs for different sample sizes, with the corresponding results of the MLE method given in parentheses. We consider sample sizes of $n=60,100,140$, and 180.

From Table 3 and Table 4, we observe that the biases and MSEs decrease as n increases for both methods; hence both methods verify the asymptotic property. However, comparing the MME and MLE methods for the given combinations of parameter values and different sample sizes, it is clear that the MLE method is better than the MME, since its ability to converge to the actual parameter values is stronger, especially for large samples. Among the MLEs of the unknown parameters, $\theta$ and $\delta$ are overestimated and $\alpha$ is underestimated for both combinations of parameters. Further, ${\stackrel{^}{\theta }}_{\text{MLE}}$ has low biases and MSEs while ${\stackrel{^}{\delta }}_{\text{MLE}}$ has high biases and MSEs.

Table 3. Performance of MME and MLE methods for MQLD ( $\theta =0.25,\alpha =0.75,\delta =2.25$ ).

Table 4. Performance of MME and MLE methods for MQLD ( $\theta =0.75,\alpha =1.50,\delta =3.25$ ).

7.2. Comparison Study among MQLD, QLD and LD

This comparison study is performed to show how the MQLD provides a better fit than QLD and LD for various data sets simulated from MQLD. Since the ranges of the skewness and kurtosis of QLD are $\sqrt{2}<{\gamma }_{1Q}<2$ and $6<{\gamma }_{2Q}<9$, respectively, we define three ranges of SK and EK to simulate data sets, R1, R2, and R3, where R1: $SK<\sqrt{2}$ and $EK<3$; R2: $\sqrt{2}<SK<2$ and $3<EK<6$; and R3: $SK>2$ and $EK>6$. This study is designed as follows:

1) Generate 8 random samples of size $n=150$ from MQLD ( $\theta ,\alpha ,\delta$ ) for each of the ranges R1, R2, and R3.

2) Fit the MQLD, QLD, and LD to the 24 generated random samples.

3) Make the comparisons based on minimum $-2\mathrm{log}L$ values.

Here, the estimates of the unknown parameters of the distributions are derived by the MLE method. Tables 5-7 summarize the $-2\mathrm{log}L$ values of MQLD, QLD, and LD for the generated random samples. Based on the minimum $-2\mathrm{log}L$ value, the MQLD performs better than the QLD and LD in all given ranges of SK, EK, and FF.

Table 5. $-2\mathrm{log}L$ values of MQLD, QLD, and LD for the simulated random samples of R1.

Table 6. $-2\mathrm{log}L$ values of MQLD, QLD, and LD for the simulated random samples of R2.

Table 7. $-2\mathrm{log}L$ values of MQLD, QLD, and LD for the simulated random samples of R3.

8. Real-World Applications

In this section, we fit the MQLD to three published real data sets and compare its performance with some existing Lindley family distributions. The $-2\mathrm{log}L$, Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Kolmogorov-Smirnov (K-S) statistic are utilized to compare the performance of the distributions. Based on the minimum values of these statistics, the best-fitting model is chosen. The unknown parameters of the distributions are estimated by using the MLE method. The three real data sets are:
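Given a fitted model's $-2\mathrm{log}L$, the selection criteria are $\text{AIC}=2k-2\mathrm{log}L$ and $\text{BIC}=k\mathrm{log}\left(n\right)-2\mathrm{log}L$ for $k$ parameters and $n$ observations, and the K-S statistic compares the empirical cdf with the fitted cdf. A sketch (assuming SciPy; the demo data are simulated from the MQLD's exponential/gamma mixture form and tested against the true-parameter cdf):

```python
import numpy as np
from scipy import stats
from scipy.special import gammaincc

def mqld_cdf(y, theta, alpha, delta):
    a3 = alpha**3
    return 1 - (a3 * np.exp(-theta * y) + gammaincc(delta, theta * y)) / (a3 + 1)

def aic_bic(neg2loglik, k, n):
    """AIC = 2k - 2logL and BIC = k log(n) - 2logL, from the reported -2logL."""
    return 2 * k + neg2loglik, k * np.log(n) + neg2loglik

rng = np.random.default_rng(5)
theta, alpha, delta = 0.75, 1.5, 3.25
a3 = alpha**3
mix = rng.uniform(size=2000) < a3 / (a3 + 1)
y = np.where(mix, rng.exponential(1 / theta, size=2000), rng.gamma(delta, 1 / theta, size=2000))
ks = stats.kstest(y, lambda t: mqld_cdf(t, theta, alpha, delta))
print(aic_bic(100.0, 3, 50))  # (106.0, 100 + 3*log(50))
print(ks.statistic)           # small when the model fits the data
```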

Data set 1: Fuller et al. (1994) [15] discussed this data set that represents the strength of glass of the aircraft window. The data are:

18.83, 20.80, 21.657, 23.03, 23.23, 24.05, 24.321, 25.50, 25.52, 25.80, 26.69, 26.77, 26.78, 27.05, 27.67, 29.90, 31.11, 33.20, 33.73, 33.76, 33.89, 34.76, 35.75, 35.91, 36.98, 37.08, 37.09, 39.58, 44.045, 45.29, 45.381.

Data set 2: The following data set represents the tree circumferences in Marshall, Minnesota and reported by Shakil et al. (2010) [16].

1.8, 1.8, 1.9, 2.4, 3.1, 3.4, 3.7, 3.7, 3.8, 3.9, 4.0, 4.1, 4.9, 5.1, 5.1, 5.2, 5.3, 5.5, 8.3, 13.7.

Data set 3: The data set was used by Murthy et al. (2004) [17] and represents the failure times (in weeks) of 50 items.

0.013, 0.065, 0.111, 0.111, 0.163, 0.309, 0.426, 0.535, 0.684, 0.747, 0.997, 1.284, 1.304, 1.647, 1.829, 2.336, 2.838, 3.269, 3.977, 3.981, 4.520, 4.789, 4.849, 5.202, 5.291, 5.349, 5.911, 6.018, 6.427, 6.456, 6.572, 7.023, 7.087, 7.291, 7.787, 8.596, 9.388, 10.261, 10.713, 11.658, 13.006, 13.388, 13.842, 17.152, 17.283, 19.418, 23.471, 24.777, 32.795, 48.105.

Some important statistical measures for data sets 1, 2, and 3, are summarized in Table 8.

The empirical histograms of the data sets and the fitted densities of MQLD, QLD, and LD are displayed in Figure 5. One can observe that the fitted density of MQLD gives a closer fit to the empirical distributions of the data sets. Table 9 lists the MLEs, SDs, $-2\mathrm{log}L$ values, AICs, BICs, and K-S statistics with critical values for the models fitted to data sets 1, 2, and 3. It is noted from Table 9 that the MQLD provides the lowest values of $-2\mathrm{log}L$, AIC, and BIC among all fitted models. Then, it is clear from Table 9 and Figure 5 that the MQLD provides a better fit than the QLD and LD.

Table 8. Statistical measures for data sets 1, 2, and 3.

Table 9. MLEs, SDs, AICs, BICs, K-S statistics and its critical values of the fitted models.

Figure 5. Empirical histograms of the data sets with fitted densities of MQLD, QLD, TwPLD, and LD.

9. Conclusion

In this paper, we have introduced a new three-parameter Lindley family distribution, called the modified Quasi Lindley distribution (MQLD). We studied its fundamental structural properties, such as the density, moments and related measures, quantile function, order statistics, failure rate function, mean residual life function, inequality and entropy measures, and size-biased form. The new distribution has very flexible properties for lifetime data: its density function covers various ranges of horizontal symmetry, tail-weights, and dispersion, and its failure rate function can take increasing, decreasing, constant, and bathtub shapes. The unknown model parameters were estimated by the maximum likelihood method, and a simulation study indicates that it offers better performance and accuracy than the method of moment estimation. A simulation study and three real-world applications showed the superiority of the MQLD over the Quasi Lindley distribution, Two-parameter Lindley distribution, and Lindley distribution.

Appendix

The terms ${T}_{1},{T}_{2}$, and ${T}_{3}$ are defined as follows:

${T}_{1}=\Gamma \left(\delta \right){\alpha }^{3}+{\left(\theta {y}_{i}\right)}^{\delta -1}$,

${T}_{2}=\left(\delta -1\right){\theta }^{\delta -2}{y}_{i}^{\delta -1}$, and

${T}_{3}={\alpha }^{3}\Gamma \left(\delta \right)\psi \left(\delta \right)+{\left(\theta {y}_{i}\right)}^{\delta -1}\mathrm{log}\left(\theta {y}_{i}\right)$.

Then, the second order partial derivatives of the log-likelihood function are as follows:

$\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\theta }^{2}}=\frac{-n}{{\theta }^{2}}+{\sum }_{i=1}^{n}\frac{\left(\delta -1\right){y}_{i}^{\delta -1}{\theta }^{\delta -3}\left(\left(\delta -2\right){T}_{1}-{T}_{2}\theta \right)}{{\left({T}_{1}\right)}^{2}},$

$\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \theta \partial \alpha }={\sum }_{i=1}^{n}\frac{-3{T}_{2}{\alpha }^{2}\Gamma \left(\delta \right)}{{\left({T}_{1}\right)}^{2}},$

$\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \theta \partial \delta }={\sum }_{i=1}^{n}\frac{{T}_{1}{\left(\theta {y}_{i}\right)}^{\delta -1}\left(\left(\delta -1\right)\mathrm{log}\left(\theta {y}_{i}\right)+1\right)-{T}_{2}{T}_{3}\theta }{{\left({T}_{1}\right)}^{2}\theta },$

$\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial \alpha \partial \delta }={\sum }_{i=1}^{n}\frac{3{\alpha }^{2}\Gamma \left(\delta \right)\left(\psi \left(\delta \right){T}_{1}-{T}_{3}\right)}{{\left({T}_{1}\right)}^{2}},$

$\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\alpha }^{2}}={\sum }_{i=1}^{n}\frac{3\alpha \Gamma \left(\delta \right)\left(2{T}_{2}-3{\alpha }^{3}\Gamma \left(\delta \right)\right)}{{\left({T}_{1}\right)}^{2}}-\frac{3n\alpha \left(2\left({\alpha }^{3}+1\right)-3{\alpha }^{3}\right)}{{\left({\alpha }^{3}+1\right)}^{2}},$ and

$\begin{array}{l}\frac{{\partial }^{2}l\left(\theta ,\alpha ,\delta |y\right)}{\partial {\delta }^{2}}\\ ={\sum }_{i=1}^{n}\frac{{T}_{2}\left({\alpha }^{3}\Gamma \left(\delta \right)\left({\psi }_{1}\left(\delta \right)+{\left(\psi \left(\delta \right)\right)}^{2}\right)+{\left(\mathrm{log}\left(\theta {y}_{i}\right)\right)}^{2}{\left(\theta {y}_{i}\right)}^{\delta -1}\right)-{\left({T}_{3}\right)}^{2}}{{\left({T}_{1}\right)}^{2}}-n{\psi }_{1}\left(\delta \right),\end{array}$

where ${\psi }_{1}\left(a\right)$ is the trigamma function and it is defined as:

${\psi }_{1}\left(a\right)=\frac{{\text{d}}^{2}\mathrm{log}\left(\Gamma \left(a\right)\right)}{\text{d}{a}^{2}}={\sum }_{k=1}^{\infty }\frac{1}{{\left(a+k\right)}^{2}}.$

Cite this paper: Tharshan, R. , Wijekoon, P. (2021) A Modification of the Quasi Lindley Distribution. Open Journal of Statistics, 11, 369-392. doi: 10.4236/ojs.2021.113022.
References

[1]   Lawless, J.F. (1982) Statistical Models and Methods for Lifetime Data. John Wiley and Sons, New York, USA.

[2]   Lindley, D.V. (1958) Fiducial Distributions and Bayes’ Theorem. Journal of the Royal Statistical Society, Series B, 20, 102-107.
https://www.jstor.org/stable/2983909
https://doi.org/10.1111/j.2517-6161.1958.tb00278.x

[3]   Ghitany, M.E., Atieh, B. and Nadarajah, S. (2008) Lindley Distribution and Its Applications. Mathematics and Computers in Simulation, 78, 493-506.
https://doi.org/10.1016/j.matcom.2007.06.007

[4]   Zakerzadeh, H. and Dolati, A. (2009) Generalized Lindley Distribution. Journal of Mathematical Extension, 3, 13-25.

[5]   Shanker, R., Sharma, S. and Shanker, R. (2013) A Two-Parameter Lindley Distribution for Modeling Waiting and Survival Times Data. Applied Mathematics, 4, 363-368.
https://doi.org/10.4236/am.2013.42056

[6]   Abouammoh, A.M., Alshangiti, A.M. and Ragab, I.E. (2015) A New Generalized Lindley Distribution. Journal of Statistical Computation and Simulation, 85, 3662-3678.
https://doi.org/10.1080/00949655.2014.995101

[7]   Monsef, M.M.E.A. (2016) A New Lindley Distribution with Location Parameter. Communications in Statistics—Theory and Methods, 45, 5204-5219.
https://doi.org/10.1080/03610926.2014.941496

[8]   Ekhosuehi, N., Opone, F. and Odobaire, F. (2018) A New Generalized Two-Parameter Lindley Distribution. Journal of Data Science, 16, 549-566.
https://doi.org/10.6339/JDS.201807_16(3).0006

[9]   Tharshan, R. and Wijekoon, P. (2020) Location Based Generalized Akash Distribution: Properties and Applications. Open Journal of Statistics, 10, 163-187.
https://doi.org/10.4236/ojs.2020.102013

[10]   Ramos, P.L., Louzada, F. and Moala, A.F. (2020) A Two-Parameter Distribution with Increasing and Bathtub Hazard Rate. Journal of Data Science, 18, 813-827.
http://jds.ruc.edu.cn/EN/Y2020/V18/I4/813
https://doi.org/10.6339/JDS.202010_18(4).0014

[11]   Shanker, R. and Mishra, A. (2013) A Quasi Lindley Distribution. African Journal of Mathematics and Computer Science Research, 6, 64-71.

[12]   Tharshan, R. and Wijekoon, P. (2020) A Comparison Study on a New Five-Parameter Generalized Lindley Distribution with Its Sub-Models. Statistics in Transition New Series, 21, 89-117.
https://doi.org/10.21307/stattrans-2020-015

[13]   Shannon, C. and Weaver, W. (1949) The Mathematical Theory of Communication. University of Illinois Press, Chicago.

[14]   Patil, G.P. and Rao, G.R. (1978) Weighted Distributions and Size Biased Sampling with Applications to Wildlife Populations and Human Families. Biometrics, 34, 179-189.
https://doi.org/10.2307/2530008

[15]   Fuller, E.R., Freiman, S.W., Quinn, J.B., Quinn, G. and Carter, W. (1994) Fracture Mechanics Approach to the Design of Glass Aircraft Windows—A Case Study. Proceedings of SPIE—The International Society for Optical Engineering, 2286, 419-430.
https://doi.org/10.1117/12.187363

[16]   Shakil, M., Kibria, B.G. and Singh, J.N. (2010) A New Family of Distributions Based on the Generalized Pearson Differential Equation with Some Applications. Austrian Journal of Statistics, 39, 259-278.
https://doi.org/10.17713/ajs.v39i3.248

[17]   Murthy, D.N.P., Xie, M. and Jiang, R. (2004) Weibull Models. John Wiley & Sons, Hoboken.
