Bayesian Estimation of the Shape Parameters of Mcdonald Generalized Beta-Binomial Distribution

1. Introduction

Data is an essential, multidisciplinary resource in the world today because it informs decision making in virtually every field. Understanding data, and modelling it to extract useful information, is therefore a key step towards sound decisions. Confounding aspects of a dataset, such as the presence of over-dispersion, are preliminary features that should be identified before modelling [1]. With the changes happening in the world today, such as technological advancement and the new industrial era, large volumes of data are generated daily [2], so models of higher dimension are becoming more useful for such data. One characteristic of such data is the diversity of its features, and handling it requires flexible modelling techniques [3]. The Bayesian framework, for example, addresses such problems through its unique ability to incorporate prior information about the parameters: the parameters are allowed to vary, and this variation is quantified using probability distributions [4]. For this reason, Bayesian modelling has been adopted in artificial intelligence, bioinformatics, agriculture, and economics, among many other areas [5].

Data in the form of proportions or counts is no exception to the emerging era of big data, and is often encountered in many scientific and social fields. One common feature of such data is over- or under-dispersion [6]. Over- or under-dispersion can result from variation in the success probability, which a simple binomial model treats as constant but which, in reality, usually is not. Mixture distributions are therefore developed to capture over-dispersion, with the mixing distribution defined on the [0, 1] interval because the success probability must lie in this range.
The McDonald Generalized Beta-Binomial (McGBB) distribution has been shown to be superior in modelling overdispersed binomial data [7]. Maximum likelihood estimation was applied by [7], and [8] introduced estimating equations for the McGBB distribution. Estimation of the McGBB parameters within the Bayesian framework has not previously been addressed. Therefore, in this paper we develop a Bayesian framework using a Markov chain Monte Carlo technique, specifically a Metropolis-Hastings step within the Gibbs sampler, to obtain marginal posterior samples of the McGBB parameters, which in turn support Bayesian inference, in particular estimation of the parameters. The first part of this paper introduces the topic, the second part covers the methodology, the third part presents the results, and the fourth part gives a conclusion. Throughout the paper, different symbols identify different parameters; they are defined as:

α, β and γ: Shape parameters for McGBB distribution, which the paper wishes to estimate using the Bayesian framework.

$\underset{\_}{\theta}$ : A notation that represents the parameter space, which contains the three parameters named (α, β and γ).

${\pi}_{1}\left(\alpha \right)$, ${\pi}_{2}\left(\beta \right)$ and ${\pi}_{3}\left(\gamma \right)$ : Notation for the prior distribution of each shape parameter.

${R}_{\underset{\_}{\theta}}$ : Notation for the Metropolis-Hastings ratio.

${\underset{\_}{\theta}}^{prop}$ : Candidate value drawn from the joint proposal distribution used in the Metropolis-Hastings algorithm.

$\left({\alpha}_{\left(\left[\psi M/2\right]\right)},{\alpha}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$ : Credible region showing upper and lower limits for the shape parameter α.

$\left({\beta}_{\left(\left[\psi M/2\right]\right)},{\beta}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$ : Credible region showing upper and lower limits for the shape parameter β.

$\left({\gamma}_{\left(\left[\psi M/2\right]\right)},{\gamma}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$ : Credible region showing upper and lower limits for the shape parameter γ.

2. Methodology

This section briefly discusses the distribution, the simulation algorithm and the Bayesian method, so that the reader can follow each step in the development of the Bayesian framework.

2.1. McDonald Generalized Beta Binomial Distribution

The McGBB is a mixture distribution used to capture and model over-dispersion in binomial data. It is obtained by mixing the binomial distribution with the McDonald generalized beta distribution of the first kind [7]. A random variable X follows a McGBB distribution with parameters (n, α, β, γ) if its probability mass function is expressed as:

${f}_{\text{McGBB}}\left(x|n,\alpha ,\beta ,\gamma \right)=\left(\begin{array}{c}n\\ x\end{array}\right)\frac{\gamma}{B\left(\alpha ,\beta \right)}\underset{i=0}{\overset{\infty}{{\displaystyle \sum}}}{\left(-1\right)}^{i}\left(\begin{array}{c}\beta -1\\ i\end{array}\right)B\left(x+\alpha \gamma +\gamma i,n-x+1\right)$

For $\alpha >0,\beta >0,\gamma >0$.

where, $B\left(w,z\right)=\frac{\Gamma \left(w\right)\Gamma \left(z\right)}{\Gamma \left(w+z\right)}$ is the beta function.

An alternative form of the probability mass function of X is given as:

${f}_{\text{McGBB}}\left(y;n,\alpha ,\beta ,\gamma \right)=\left(\begin{array}{c}n\\ y\end{array}\right)\frac{1}{B\left(\alpha ,\beta \right)}\underset{j=0}{\overset{n-y}{{\displaystyle \sum}}}{\left(-1\right)}^{j}\left(\begin{array}{c}n-y\\ j\end{array}\right)B\left(\frac{y}{\gamma}+\alpha +\frac{j}{\gamma},\beta \right)$

For $\alpha >0,\beta >0,\gamma >0$ [7].

where α, β and γ are the shape parameters of the distribution and can only assume positive values.
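As a concrete check of the finite-sum form of the pmf, it can be evaluated directly. The sketch below is in Python rather than the R used later in the paper, and the function names are illustrative:

```python
from math import comb, exp, lgamma

def log_beta(w, z):
    # log B(w, z) = log Γ(w) + log Γ(z) - log Γ(w + z)
    return lgamma(w) + lgamma(z) - lgamma(w + z)

def mcgbb_pmf(y, n, alpha, beta, gamma):
    """P(Y = y) for the McGBB, via the finite-sum form of the pmf."""
    log_B_ab = log_beta(alpha, beta)
    total = 0.0
    for j in range(n - y + 1):
        # (-1)^j C(n-y, j) B(y/γ + α + j/γ, β) / B(α, β)
        total += ((-1) ** j * comb(n - y, j)
                  * exp(log_beta(y / gamma + alpha + j / gamma, beta) - log_B_ab))
    return comb(n, y) * total
```

For instance, with γ = 1 and α = β = 1 the mixing distribution is uniform on [0, 1], so the pmf reduces to the discrete uniform 1/(n + 1), which provides a quick sanity check.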

The properties of the distribution have been discussed in detail by [7]; this paper's primary attention is on the Bayesian framework for estimating the parameters of the distribution.

2.2. Simulation of McDonald Generalized Beta-Binomial Variables

Simulation of the McGBB variables was implemented using the following algorithm. The algorithm simulates directly from a McGBB distribution while keeping control of the parameters. The variables obtained are McGBB variables, as opposed to those from the algorithm suggested by [7], which were over-dispersed random binomial variables:

Step 1: Set fixed values of α, β and γ.

Step 2: Generate K random variables from Beta (α, β) (i.e. ${U}_{i}\sim \text{Beta}\left(\alpha ,\beta \right)$ for $i=1,\cdots ,K$).

Step 3: For each ${U}_{i}$, compute ${P}_{i}={U}_{i}^{\frac{1}{\gamma}}$ for $i=1,\cdots ,K$.

Step 4: For each ${P}_{i}$ in Step 3, generate a binomial random variable ${X}_{i}\sim \text{Bin}\left(n,{P}_{i}\right)$; then ${X}_{i}\sim \text{McGBB}\left(n,\alpha ,\beta ,\gamma \right)$ for $i=1,\cdots ,K$.
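Steps 1-4 above can be sketched as follows. This is a minimal Python illustration (the helper name `simulate_mcgbb` is ours, not from the paper):

```python
import random

def simulate_mcgbb(K, n, alpha, beta, gamma, seed=None):
    """Draw K variates from McGBB(n, alpha, beta, gamma) via Steps 1-4."""
    rng = random.Random(seed)
    xs = []
    for _ in range(K):
        u = rng.betavariate(alpha, beta)             # Step 2: U_i ~ Beta(alpha, beta)
        p = u ** (1.0 / gamma)                       # Step 3: P_i = U_i^(1/gamma)
        x = sum(rng.random() < p for _ in range(n))  # Step 4: X_i ~ Bin(n, P_i)
        xs.append(x)
    return xs

# Example: 1000 variates with n = 7 and (alpha, beta, gamma) = (0.5, 0.5, 0.5)
sample = simulate_mcgbb(1000, 7, 0.5, 0.5, 0.5, seed=1)
```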

2.3. Bayesian Method

Prior information is what makes the Bayesian framework a unique method of parameter estimation, as it provides a means through which expert opinion, possibly obtained from previous studies, is incorporated into the study [4]. Bayesian theory requires that the parameters of a distribution be treated as random variables, and the prior distribution is the main way in which such information (beliefs) is quantified [9]. The first step of a Bayesian analysis is to define a probability distribution best suited to model a given dataset. The second step is to choose appropriate prior distributions for the parameters that characterize the selected distribution. The choice of priors is mostly guided by intuition and by existing knowledge about these parameters [10]. This methodology is, however, criticized by classical statisticians as subjective, since reliance on intuitive knowledge often leads to the use of informative priors. To circumvent this limitation, the theory offers non-informative priors [11]. Non-informative prior distributions provide no information, or assign an equal chance to all possible parameter values in the parameter space before the data are observed [12]. Obtaining non-informative priors is an active topic in Bayesian research; a commonly used choice is the flat or diffuse prior, and flat priors were used in this paper.

In the Bayesian framework, inferences are based on the marginal posterior distributions of the parameters. However, a closed form of the joint posterior distribution may not be available, making it challenging to sample from such a distribution. Further, in high-dimensional parameter spaces, the integrations required to obtain the marginal posterior distributions become intractable and complex [13].
To avoid this intractability, the most commonly used methods are Markov chain Monte Carlo (MCMC) methods [14]. With these methods it is possible to obtain samples from the marginal posterior distributions of the parameters without performing the integrations [15]. The Bayesian framework developed in this paper uses MCMC methods, specifically a Metropolis-Hastings step within Gibbs sampling.

In the Bayesian framework, the unknown parameters of the model were assumed to be random variables, so appropriate assumptions about their distributions (prior distributions) were needed. The McGBB distribution has three shape parameters $\underset{\_}{\theta}={\left(\alpha ,\beta ,\gamma \right)}^{\text{T}}$ which we are interested in estimating; the study assumed that $\alpha $, $\beta $ and $\gamma $ jointly had a flat prior, represented as:

$\pi \left(\underset{\_}{\theta}\right)\propto \pi \left(\alpha ,\beta ,\gamma \right)\propto 1$

The joint posterior distribution of $\underset{\_}{\theta}=\left(\alpha ,\beta ,\gamma \right)$ was obtained by multiplying the conditional distribution $f\left(\underset{\_}{y}|\underset{\_}{\theta}\right)$ (essentially the likelihood function) with $\pi \left(\underset{\_}{\theta}\right)$ the joint prior distribution of $\alpha $, $\beta $ and $\gamma $. Let $\underset{\_}{y}={y}_{1},{y}_{2},\cdots ,{y}_{N}$ be a random sample of size N from a McGBB distribution with unknown parameter vector $\underset{\_}{\theta}={\left(\alpha ,\beta ,\gamma \right)}^{\text{T}}$. The conditional distribution of $f\left(\underset{\_}{y}|\underset{\_}{\theta}\right)$ is obtained as:

$\begin{array}{c}f\left(\underset{\_}{y}|\underset{\_}{\theta}\right)=f\left({y}_{1},{y}_{2},\cdots ,{y}_{N}|\alpha ,\beta ,\gamma \right)=\underset{k=1}{\overset{N}{{\displaystyle \prod}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}f\left({y}_{k}|\alpha ,\beta ,\gamma ,n\right)\\ =\underset{k=1}{\overset{N}{{\displaystyle \prod}}}\left(\begin{array}{c}n\\ {y}_{k}\end{array}\right)\times \frac{1}{B\left(\alpha ,\beta \right)}\times \underset{j=0}{\overset{n-{y}_{k}}{{\displaystyle \sum}}}{\left(-1\right)}^{j}\left(\begin{array}{c}n-{y}_{k}\\ j\end{array}\right)\times B\left(\left(\frac{{y}_{k}}{\gamma}+\alpha +\frac{j}{\gamma}\right),\beta \right)\end{array}$

The posterior distribution becomes:

$\begin{array}{l}\pi \left(\underset{\_}{\theta}|\underset{\_}{y}\right)\propto f\left(\underset{\_}{y}|\underset{\_}{\theta}\right)\times \pi \left(\underset{\_}{\theta}\right)\\ \propto \underset{k=1}{\overset{N}{{\displaystyle \prod}}}\left(\begin{array}{c}n\\ {y}_{k}\end{array}\right)\times \frac{1}{B\left(\alpha ,\beta \right)}\times \underset{j=0}{\overset{n-{y}_{k}}{{\displaystyle \sum}}}{\left(-1\right)}^{j}\left(\begin{array}{c}n-{y}_{k}\\ j\end{array}\right)\times B\left(\left(\frac{{y}_{k}}{\gamma}+\alpha +\frac{j}{\gamma}\right),\beta \right)\times 1\end{array}$
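Under the flat prior, the unnormalised log posterior is simply the sum of the log pmf terms, which is all an MCMC sampler needs to evaluate. A minimal Python sketch (illustrative names; the guards against invalid parameter values and underflow of the alternating sum are our additions):

```python
from math import comb, exp, lgamma, log

def log_beta(w, z):
    return lgamma(w) + lgamma(z) - lgamma(w + z)

def log_posterior(theta, data, n):
    """Unnormalised log posterior of (alpha, beta, gamma) under the flat prior."""
    alpha, beta, gamma = theta
    if alpha <= 0 or beta <= 0 or gamma <= 0:
        return float("-inf")  # flat prior is restricted to the positive orthant
    lp = 0.0
    log_B_ab = log_beta(alpha, beta)
    for y in data:
        # inner alternating sum of the pmf for observation y
        s = sum((-1) ** j * comb(n - y, j)
                * exp(log_beta(y / gamma + alpha + j / gamma, beta))
                for j in range(n - y + 1))
        if s <= 0:
            return float("-inf")  # numerical guard for the alternating sum
        lp += log(comb(n, y)) - log_B_ab + log(s)
    return lp
```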

It is evident that sampling from this joint posterior distribution is complicated; thus the study employed MCMC methods, in particular a Metropolis-Hastings step within the Gibbs sampling technique. To implement this algorithm, the full conditional distributions of the parameters were obtained as follows:

${\pi}_{\alpha}\left(\alpha |\beta ,\gamma ,\underset{\_}{y}\right)=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{\pi \left(\beta ,\gamma |\underset{\_}{y}\right)}=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{{\displaystyle \int \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)\text{d}\alpha}}\propto \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)$

${\pi}_{\beta}\left(\beta |\alpha ,\gamma ,\underset{\_}{y}\right)=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{\pi \left(\alpha ,\gamma |\underset{\_}{y}\right)}=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{{\displaystyle \int \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)\text{d}\beta}}\propto \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)$

${\pi}_{\gamma}\left(\gamma |\alpha ,\beta ,\underset{\_}{y}\right)=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{\pi \left(\alpha ,\beta |\underset{\_}{y}\right)}=\frac{\pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)}{{\displaystyle \int \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)\text{d}\gamma}}\propto \pi \left(\alpha ,\beta ,\gamma |\underset{\_}{y}\right)$

Then the Metropolis-within-Gibbs sampling algorithm involved the following steps:

Step 1. Start with $k=1$ and the initial values $\left\{{\underset{\_}{\theta}}^{\left(1\right)}=\left({\alpha}^{\left(1\right)},{\beta}^{\left(1\right)},{\gamma}^{\left(1\right)}\right)\right\}$.

Step 2. Using the proposal distributions for $\underset{\_}{\theta}$, chosen as $\alpha \sim \text{Normal}\left({\alpha}^{\left(k-1\right)},{\sigma}_{\alpha}^{2}\right)$, $\beta \sim \text{Normal}\left({\beta}^{\left(k-1\right)},{\sigma}_{\beta}^{2}\right)$ and $\gamma \sim \text{Exponential}\left(\lambda \right)$, sample a candidate value ${\underset{\_}{\theta}}^{prop}$.

Step 3. Generate $u$ from a Uniform (0, 1) distribution (i.e. $u\sim \text{Unif}\left(0,1\right)$).

Step 4. Calculate the Metropolis-Hastings (MH) ratio at the candidate value ${\underset{\_}{\theta}}^{prop}$ and the previous value ${\underset{\_}{\theta}}^{\left(k-1\right)}$, using block updating:

${R}_{\underset{\_}{\theta}}=\frac{{\pi}_{\underset{\_}{\theta}}\left({\underset{\_}{\theta}}^{prop}|\underset{\_}{y}\right)\times {q}_{\underset{\_}{\theta}}\left({\underset{\_}{\theta}}^{\left(k-1\right)}|{\underset{\_}{\theta}}^{prop}\right)}{{\pi}_{\underset{\_}{\theta}}\left({\underset{\_}{\theta}}^{\left(k-1\right)}|\underset{\_}{y}\right)\times {q}_{\underset{\_}{\theta}}\left({\underset{\_}{\theta}}^{prop}|{\underset{\_}{\theta}}^{\left(k-1\right)}\right)}$

Step 5: If $u\le \mathrm{min}\left(1,{R}_{\underset{\_}{\theta}}\right)$, accept the candidate point, i.e., set ${\underset{\_}{\theta}}^{\left(k\right)}={\underset{\_}{\theta}}^{prop}$; otherwise set ${\underset{\_}{\theta}}^{\left(k\right)}={\underset{\_}{\theta}}^{\left(k-1\right)}$. Repeating for $k=1,2,\cdots ,M$ yields a sample of size M from the joint posterior distribution, $\left\{{\underset{\_}{\theta}}_{j}=\left({\alpha}_{j},{\beta}_{j},{\gamma}_{j}\right),j=1,2,\cdots ,M\right\}$.
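Steps 1-5 can be sketched generically for any log posterior. The block below is a minimal Python illustration, not the paper's R implementation; note that the symmetric normal random-walk terms cancel in the MH ratio, while the asymmetric Exponential(λ) proposal for γ contributes a proposal-density correction:

```python
import math
import random

def mh_within_gibbs(log_post, theta0, M, sigma_a, sigma_b, lam, seed=None):
    """Block Metropolis-Hastings: normal random walks for alpha and beta,
    an independent Exponential(lam) proposal for gamma (Steps 1-5)."""
    rng = random.Random(seed)
    a, b, g = theta0                      # Step 1: initial values
    lp = log_post((a, b, g))
    chain = [(a, b, g)]
    for _ in range(1, M):
        # Step 2: draw a candidate theta^prop
        a_p = rng.gauss(a, sigma_a)
        b_p = rng.gauss(b, sigma_b)
        g_p = rng.expovariate(lam)
        lp_p = log_post((a_p, b_p, g_p))
        # Step 4: log MH ratio; normal terms cancel, and the exponential
        # proposal contributes q(g | g_p) / q(g_p | g) = exp(lam * (g_p - g))
        log_R = lp_p - lp + lam * (g_p - g)
        # Steps 3 and 5: accept with probability min(1, R)
        if rng.random() < math.exp(min(0.0, log_R)):
            a, b, g, lp = a_p, b_p, g_p, lp_p
        chain.append((a, b, g))
    return chain
```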

Let ${B}_{0}$ be the burn-in period of the Markov chains for the parameters. Under the squared error loss function, the Bayesian estimates of the parameters were obtained as the means of the samples generated by the algorithm above, i.e.,

$\stackrel{^}{\alpha}=E\left(\alpha |\underset{\_}{y}\right)=\frac{1}{M-{B}_{0}}\underset{j={B}_{0}+1}{\overset{M}{{\displaystyle \sum}}}{\alpha}_{j}$

$\stackrel{^}{\beta}=E\left(\beta |\underset{\_}{y}\right)=\frac{1}{M-{B}_{0}}\underset{j={B}_{0}+1}{\overset{M}{{\displaystyle \sum}}}{\beta}_{j}$

$\stackrel{^}{\gamma}=E\left(\gamma |\underset{\_}{y}\right)=\frac{1}{M-{B}_{0}}\underset{j={B}_{0}+1}{\overset{M}{{\displaystyle \sum}}}{\gamma}_{j}$

The $100\left(1-\psi \right)\%$ Bayesian Credible Intervals and $100\left(1-\psi \right)\%$ Highest Probability Density (HPD) intervals for $\alpha $, $\beta $ and $\gamma $ were obtained using the algorithm proposed by [16], as packaged in the CODA package in the R language. Let $\left\{\left({\alpha}_{\left(j\right)},{\beta}_{\left(j\right)},{\gamma}_{\left(j\right)}\right),j=1,2,\cdots ,M\right\}$ be an ordered sample corresponding to the MCMC chain $\left\{\left({\alpha}_{j},{\beta}_{j},{\gamma}_{j}\right),j=1,2,\cdots ,M\right\}$ obtained using the algorithm above.

Then the approximate $100\left(1-\psi \right)\%$ Bayesian Credible Intervals for $\alpha $, $\beta $ and $\gamma $ were obtained as: $\left({\alpha}_{\left(\left[\psi M/2\right]\right)},{\alpha}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$, $\left({\beta}_{\left(\left[\psi M/2\right]\right)},{\beta}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$ and $\left({\gamma}_{\left(\left[\psi M/2\right]\right)},{\gamma}_{\left(\left[\left(1-\psi /2\right)M\right]\right)}\right)$ respectively.
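The posterior mean and approximate equal-tail credible limits above can be computed from a stored chain as follows. This is a Python sketch with illustrative names; the order-statistic indices follow the bracketed formulas approximately:

```python
def posterior_summary(chain, burn_in, psi=0.05):
    """Posterior mean and approximate equal-tail 100(1 - psi)% credible
    interval for one parameter's MCMC draws, after discarding the burn-in."""
    kept = sorted(chain[burn_in:])        # ordered sample after burn-in
    m = len(kept)
    mean = sum(kept) / m                  # Bayes estimate under squared error loss
    lo = kept[round(psi * m / 2)]         # lower order statistic, index [psi*M/2]
    hi = kept[min(round((1 - psi / 2) * m), m - 1)]  # upper order statistic
    return mean, (lo, hi)
```

In practice one would call this on the α, β and γ components of the chain separately, e.g. `posterior_summary([c[0] for c in chain], burn_in=10000)`.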

2.4. Real Dataset

The framework was also applied to a real dataset, and point estimates were obtained. The dataset is a secondary dataset, called the alcohol dataset, which has also been used in the literature by [7] and [8] for illustration, and is documented in the fitODBOD package in the R language. It consists of self-reported alcohol consumption frequencies collected in the Netherlands from 399 randomly selected respondents over two independent weeks. When the number of days out of 7 on which a respondent consumes alcohol is treated as a binomial random variable, over-dispersion is evident from the variation between individuals in the need to consume alcohol.

3. Results and Discussion

A small sample of size $k=25$ and a large sample of size $k=1000$ were simulated from the McGBB distribution with the shape parameters fixed at $\left(\alpha ,\beta ,\gamma \right)=\left(0.5,0.5,0.5\right)$, and the Bayesian framework was applied to estimate (essentially to recover) these parameters. The results are presented using trace plots showing the behaviour of the generated samples, histograms showing the distribution or shapes that the parameter estimates assume, autocorrelation plots, trace plots for the means and over-dispersion, and a table of estimates.

Figure 1 shows the different chains generated from the posterior sampling when the sample size was $k=25$, generated by setting the true values at $\left(\alpha ,\beta ,\gamma \right)=\left(0.5,0.5,0.5\right)$. The starting points were set at $\left(0.1,0.1,0.1\right)$ for the parameters $\left(\alpha ,\beta ,\gamma \right)$, and the values of sigma for the proposal distributions were set at $\left(\alpha ,\beta ,\gamma \right)=\left(0.1,0.1,0.5\right)$. The first ten thousand values of the three hundred thousand iterations were discarded as the burn-in period. The blue horizontal line shows the true parameter value, while the black horizontal line shows the parameter estimate, given by the mean value of each chain.

The horizontal lines on the plot for α show a larger difference between the true parameter value and the estimate. For β and γ, however, the horizontal lines are close to each other, indicating that the estimates were relatively close to the true values. The figure also shows higher serial correlation for α and γ than for β. To examine this behaviour, autocorrelation plots are presented in Figure 2.

From the plots, the correlation among the α values is high, almost close to 1, confirming the behaviour of the trace plot for α. The parameters β and γ show decreasing correlation as more iterations are performed. β had the lowest correlation of all the parameters, which is also evident from its trace plot.

Figure 1. Trace plots for the three shape parameters of the McGBB when k = 25.

Figure 2. Autocorrelation plots for α, β and γ when k = 25.

Figure 3 shows the histogram plots for the shape parameters, i.e. the shape of the distribution that each parameter takes over the iterations performed. From the figure, the histograms for α and γ are skewed to the left, with γ the most skewed, while β has a relatively normal curve.

Figure 4 shows the trace plots of the means computed from the parameter estimates in the generated chains, together with a plot for the over-dispersion parameter. The blue horizontal lines show the average mean and over-dispersion parameter across the chains: the mean was close to 0.7 and the over-dispersion close to 0.5. Values of ρ close to one indicate high over-dispersion, while values close to zero indicate low over-dispersion. From the plot it can be seen that over-dispersion was present and was roughly average.
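The paper does not spell out here how the mean and over-dispersion ρ are derived from the shape parameters. One common choice, which we assume purely for illustration, is the mixing mean E[P] and the intraclass correlation ρ = Var(P)/(E[P](1 − E[P])), using the moment formula E[Pᵏ] = B(α + k/γ, β)/B(α, β) for P = U^{1/γ} with U ~ Beta(α, β):

```python
from math import exp, lgamma

def log_beta(w, z):
    return lgamma(w) + lgamma(z) - lgamma(w + z)

def mean_and_rho(alpha, beta, gamma):
    """Mixing mean E[P] and assumed over-dispersion rho = Var(P)/(E[P](1-E[P]))
    for P = U^(1/gamma), U ~ Beta(alpha, beta)."""
    log_B = log_beta(alpha, beta)
    m1 = exp(log_beta(alpha + 1.0 / gamma, beta) - log_B)  # E[P]
    m2 = exp(log_beta(alpha + 2.0 / gamma, beta) - log_B)  # E[P^2]
    rho = (m2 - m1 * m1) / (m1 * (1.0 - m1))
    return m1, rho
```

As a check, with γ = 1 and α = β = 1 (uniform mixing) this gives E[P] = 0.5 and ρ = 1/3, matching the beta-binomial value 1/(α + β + 1).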

Figure 5 shows the different chains generated from the posterior sampling for a sample of size $k=1000$, with true values $\left(\alpha ,\beta ,\gamma \right)=\left(0.5,0.5,0.5\right)$. The starting points were set at $\left(0.1,0.1,0.1\right)$ for the parameters $\left(\alpha ,\beta ,\gamma \right)$, and the values of sigma for the proposal distributions were set at $\left(\alpha ,\beta ,\gamma \right)=\left(0.1,0.01,5\right)$. The black horizontal line shows the true parameter value, while the blue horizontal line shows the parameter estimate, given by the mean value of each chain. From the plots of $\alpha $ and $\gamma $ it can be seen that autocorrelation was high, while it was low for the $\beta $ plot.

From Figure 6, correlation was relatively average for the parameter β, and high for α and γ, consistent with the behaviour of the trace plots. Compared with the autocorrelation plots for sample size k = 25, autocorrelation has increased for sample size k = 1000.

From Figure 7, the shapes for α and γ are skewed, with γ more skewed than α, while β approaches a normal shape. With the increase in sample size, the shapes become flatter than when the sample size was k = 25.

Figure 3. Histogram plots for the parameters when k = 25.

Figure 4. Trace plots for means and over dispersion parameter ρ computed from the estimates generated.

Figure 5. Trace plots when k = 1000.

Figure 6. Correlation plots for α, β and γ.

Figure 7. Histogram plots for α, β and γ.

Figure 8 shows the trace plots of the means computed from the parameter estimates in the generated chains, together with a plot for the over-dispersion parameter. The blue horizontal lines show the average mean and over-dispersion parameter across the chains. When the sample size was increased, over-dispersion increased as well. On average, the mean across the computed estimate combinations was 0.50, and the average over-dispersion was about 0.65 across the three hundred thousand iterations.

From Table 1, the estimates were obtained from the posterior means with the true values of the three parameters set at $\left(\alpha ,\beta ,\gamma \right)=\left(0.5,0.5,0.5\right)$. The first one thousand sampled values were discarded as the burn-in period. The acceptance rates of the sampled candidate values were within the acceptable limits for small samples, but declined somewhat for large samples; acceptable acceptance rates lie between 13% - 50% [5]. The acceptance rate is the percentage of the iterations (here three hundred thousand) whose candidate values are accepted as draws from the posterior distribution. The estimates for β and γ were close to the true parameter values. The estimate of α was not as close to the true value as those of β and γ, but neither was it far from it. The Bayesian framework produced estimates close to the true parameter values for small samples.

Table 2 shows the standard errors of the estimates, which indicate how precise the estimates are. From the table, the parameter α had higher standard errors, reflected in the high variability of its trace plots; β had the lowest standard errors, evident from the low variation displayed in its trace plots, while γ had relatively low standard errors.

From Table 3, all the credible regions for all the parameters contained the true parameter value. Therefore, there is a 95% chance that the true value of each of the parameters $\left(\alpha ,\beta ,\gamma \right)$ lies within its respective computed credible region.

Table 4 shows the credible interval lengths, which were short, especially for the parameter β. This further underscores the precision of the Bayesian framework in estimating the McGBB parameters.

Table 5 shows the highest posterior density intervals for the parameters. The HPD interval is the shortest credible interval among all intervals of the same probability content; any point within it has a higher density than any point outside it [17].

The article also applies the Bayesian framework to a real dataset. Table 6 shows the estimates obtained when the framework was applied to the alcohol dataset. Other estimation methods have been applied to the same dataset by [7] and [8]; while the primary aim of this article is the Bayesian framework itself, readers can compare the estimates for the alcohol dataset with those obtained by [7] and [8].

Table 7 shows the standard errors of the estimates for the real dataset. From the table, the standard errors were high for α, which could be attributed to the high variation portrayed by α.

Figure 8. Trace plots for means and over dispersion parameter.

Table 1. Parameter estimates for *α*, *β* and *γ*.

Table 2. Standard errors of the estimates.

Table 3. Bayesian credible intervals.

Table 4. Lengths of the credible regions.

Table 5. Bayesian HPD interval.

Table 6. Application to a real dataset (Alcohol dataset).

Table 7. Standard Errors for the Estimates of the Real data.

4. Conclusion

This article explores a Bayesian framework for the mixture distribution known as the McDonald generalized beta-binomial distribution. Using this framework, the paper obtains point estimates, credible regions and HPD intervals. The point estimates do not deviate much from the true parameter values, as the standard errors also show. The credible regions are short and all include the true parameter values, which further emphasizes the precision of the Bayesian framework. This paper underscores the value of the Bayesian framework for modelling mixture models or distributions: estimation within the framework is particularly attractive for a mixture distribution that is susceptible to problems of integration, since it overcomes intractable integrations, especially in high-dimensional distributions, by applying a random search procedure to obtain estimates. The McGBB is one of many beta-type generated distributions, and this work can be extended to other distributions of the same class. Moreover, future studies can explore the Bayesian methodology using informative priors, as opposed to the non-informative priors used in this study.

References

[1] Stoffel, M.A., Nakagawa, S. and Schielzeth, H. (2017) Repeatability Estimation and Variance Decomposition by Generalized Linear Mixed-Effects Models. Methods in Ecology and Evolution, 8, 1639-1644. https://doi.org/10.1111/2041-210X.12797

[2] Coveney, P., Dougherty, E. and Highfield, R. (2016) Big Data Need Big Theory Too. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 374, 1-11. http://www.jstor.org/stable/26115982 https://doi.org/10.1098/rsta.2016.0153

[3] Richterich, A. (2018) Big Data: Ethical Debates. In: The Big Data Agenda: Data Ethics and Critical Data Studies, University of Westminster Press, London, 33-52. https://doi.org/10.2307/j.ctv5vddsw

[4] Petzschner, F.H., Glasauer, S. and Stephan, K.E. (2015) A Bayesian Perspective on Magnitude Estimation. Trends in Cognitive Sciences, 19, 285-293. https://doi.org/10.1016/j.tics.2015.03.002

[5] Entezari, R. (2018) Bayesian Computations via MCMC, with Applications to Big Data and Spatial Data. Doctoral Dissertation.

[6] Zhang, C. (2019) Statistical Modeling of Count Data with Over-Dispersion or Zero-Inflation Problems.

[7] Manoj, C., Wijekoon, P. and Yapa, R.D. (2013) The McDonald Generalized Beta-Binomial Distribution: A New Binomial Mixture Distribution and Simulation Based Comparison with Its Nested Distributions in Handling over Dispersion. International Journal of Statistics and Probability, 2, 24. https://doi.org/10.5539/ijsp.v2n2p24

[8] Janiffer, N., Islam, A. and Luke, O. (2014) Estimating Equations for Estimation of McDonald Generalized Beta-Binomial Parameters. Open Journal of Statistics, 4, 702-709. https://doi.org/10.4236/ojs.2014.49065

[9] Dodwell, T.J., Ketelsen, C., Scheichl, R. and Teckentrup, A.L. (2015) A Hierarchical Multilevel Markov Chain Monte Carlo Algorithm with Applications to Uncertainty Quantification in Subsurface Flow. SIAM/ASA Journal on Uncertainty Quantification, 3, 1075-1108. https://doi.org/10.1137/130915005

[10] Lee, M.D. and Vanpaemel, W. (2018) Determining Informative Priors for Cognitive Models. Psychonomic Bulletin & Review, 25, 114-127. https://doi.org/10.3758/s13423-017-1238-3

[11] Sprenger, J. (2018) The Objectivity of Subjective Bayesianism. European Journal for Philosophy of Science, 8, 539-558. https://doi.org/10.1007/s13194-018-0200-1

[12] Lynch, S.M. (2007) Introduction to Applied Bayesian Statistics and Estimation for Social Scientists. Springer Science & Business Media, Berlin. https://doi.org/10.1007/978-0-387-71265-9

[13] Li, J., Nott, D.J., Fan, Y. and Sisson, S.A. (2017) Extending Approximate Bayesian Computation Methods to High Dimensions via a Gaussian Copula Model. Computational Statistics & Data Analysis, 106, 77-89. https://doi.org/10.1016/j.csda.2016.07.005

[14] Martino, S. and Riebler, A. (2019) Integrated Nested Laplace Approximations (INLA). https://doi.org/10.1002/9781118445112.stat08212

[15] Alquier, P., Friel, N., Everitt, R. and Boland, A. (2016) Noisy Monte Carlo: Convergence of Markov Chains with Approximate Transition Kernels. Statistics and Computing, 26, 29-47. https://doi.org/10.1007/s11222-014-9521-x

[16] Le, H., Pham, U., Nguyen, P. and Pham, T.B. (2018) Improvement on Monte Carlo Estimation of HPD Intervals. Communications in Statistics-Simulation and Computation. https://doi.org/10.1080/03610918.2018.1513141

[17] Grzenda, W. (2015) The Advantages of Bayesian Methods over Classical Methods in the Context of Credible Intervals. Information Systems in Management, 4, 53-63.