Bayesian Estimations with Fuzzy Data to Estimation Inverse Rayleigh Scale Parameter
Abstract: In this paper, a Bayesian computational method is used to estimate the scale parameter of the inverse Rayleigh distribution from fuzzy data. With imprecise data, the Bayes estimates cannot be obtained in explicit form. Therefore, we apply Tierney and Kadane’s approximation to compute the Bayes estimates of the scale parameter under the squared error and precautionary loss functions, using the non-informative Jeffreys prior. The resulting estimates of the scale parameter are then compared numerically, through a Monte-Carlo simulation study, in terms of their mean squared error values.

1. Introduction

The Rayleigh distribution (RD) originates from the two-parameter Weibull distribution and is an appropriate model for life testing. By a transformation of random variables, if the random variable X has a Rayleigh distribution, then the random variable $Y=\frac{1}{X}$ has an inverse Rayleigh distribution (IRD) [1]. The IRD was introduced by Trayer (1964) [2]. The distribution of the lifetimes of several types of experimental units can be approximated by the IRD [3], and the IRD plays an important role in many applications, including life testing and reliability studies [4]. A random variable Y is said to have a one-parameter IRD if it has the following probability density function (PDF):

${f}_{Y}\left(y;\lambda \right)=\frac{2\lambda }{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}};\text{\hspace{0.17em}}y>0,\lambda >0$ (1)

and the cumulative distribution function (CDF) is given by:

${F}_{Y}\left(y;\lambda \right)={\text{e}}^{\frac{-\lambda }{{y}^{2}}};\text{\hspace{0.17em}}y>0,\lambda >0$ (2)

where $\lambda$ is the scale parameter.
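For concreteness, the density, the distribution function, and inverse-transform sampling from the IRD can be sketched as follows (a minimal Python illustration; the function names are ours, and the paper's own simulations were written in MATLAB):

```python
import numpy as np

def ird_pdf(y, lam):
    # Equation (1): f(y; lam) = (2*lam / y^3) * exp(-lam / y^2), y > 0
    return 2.0 * lam / y**3 * np.exp(-lam / y**2)

def ird_cdf(y, lam):
    # Equation (2): F(y; lam) = exp(-lam / y^2)
    return np.exp(-lam / y**2)

def ird_sample(lam, size, rng):
    # Inverse transform: solving F(y) = u gives y = sqrt(lam / (-ln u))
    u = rng.uniform(size=size)
    return np.sqrt(lam / (-np.log(u)))
```

Since F is strictly increasing on (0, ∞), plugging uniform draws into its inverse yields exact IRD samples; this is the inverse-transformation method used in the simulation study of Section 5.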

2. Maximum Likelihood Estimators (MLE)

Let $\underset{_}{y}=\left({y}_{1},{y}_{2},\cdots ,{y}_{m}\right)$ be an i.i.d. random sample of size m from the IRD; the complete-data likelihood function is:

$L\left(\lambda ;\underset{_}{y}\right)={2}^{m}{\lambda }^{m}{\prod }_{i=1}^{m}\frac{1}{{y}_{i}^{3}}{\text{e}}^{-\lambda {\sum }_{i=1}^{m}\frac{1}{{y}_{i}^{2}}}$ (3)

Now suppose $\underset{_}{y}$ is not observed precisely. We can then compute its probability using Zadeh’s definition of the probability of a fuzzy event [5], and the observed-data likelihood function becomes:

$L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)=\underset{i=1}{\overset{m}{\prod }}\int {f}_{Y}\left(y;\lambda \right)\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y$

$L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)=\underset{i=1}{\overset{m}{\prod }}\int \frac{2\lambda }{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y$ (4)

where $\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)$ is the Borel-measurable membership function of the fuzzy observation ${\stackrel{˜}{y}}_{i}$.

Now, taking the natural logarithm of the likelihood function, differentiating with respect to $\lambda$, and equating to zero, we get:

$\frac{\partial \mathrm{ln}L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)}{\partial \lambda }=\frac{m}{\lambda }-\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}=0$ (5)

Since the MLE of $\lambda$ is the solution of Equation (5), which has no closed form, we use the modified Newton method to determine the MLE of the parameter $\lambda$,

where, at iteration $\left(h+1\right)$,

${\stackrel{^}{\lambda }}^{\left(h+1\right)}={\stackrel{^}{\lambda }}^{\left(h\right)}-\left(\upsilon \right)\frac{{\frac{\partial \mathrm{ln}L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)}{\partial \lambda }|}_{\lambda ={\stackrel{^}{\lambda }}^{\left(h\right)}}}{{\frac{{\partial }^{2}\mathrm{ln}L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{\left(h\right)}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\upsilon >1$ (6)

and

$\frac{{\partial }^{2}\mathrm{ln}L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)}{\partial {\lambda }^{2}}=-\frac{m}{{\lambda }^{2}}+\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{7}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}-\underset{i=1}{\overset{m}{\sum }}{\left[\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right]}^{2}$ (7)
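The score (5), Hessian (7), and iteration (6) can be sketched numerically as follows. The triangular membership functions and the uniform integration grid are illustrative assumptions of ours (the paper's fuzzy information system, Figure 1, uses its own membership functions), and we default to a plain Newton step υ = 1, whereas the paper's modified scheme takes υ > 1:

```python
import numpy as np

def tri_membership(y, a, b, c):
    # Triangular fuzzy number (a, b, c): zero outside [a, c], peak 1 at b.
    return np.clip(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0, 1.0)

def ratio_terms(lam, fuzzy_obs, grid):
    # For each fuzzy observation, the ratios appearing in (5) and (7):
    # r5 = I5/I3 and r7 = I7/I3, where Ip = integral of y^(-p) e^(-lam/y^2) mu_i(y) dy.
    dy = grid[1] - grid[0]
    r5, r7 = [], []
    for (a, b, c) in fuzzy_obs:
        w = tri_membership(grid, a, b, c) * np.exp(-lam / grid**2)
        i3 = np.sum(w / grid**3) * dy
        r5.append(np.sum(w / grid**5) * dy / i3)
        r7.append(np.sum(w / grid**7) * dy / i3)
    return np.array(r5), np.array(r7)

def fuzzy_mle(fuzzy_obs, lam0, upsilon=1.0, iters=50):
    # Newton iteration of Equation (6); the paper's modification scales the step.
    m = len(fuzzy_obs)
    grid = np.linspace(1e-3, 30.0, 30000)
    lam = lam0
    for _ in range(iters):
        r5, r7 = ratio_terms(lam, fuzzy_obs, grid)
        score = m / lam - r5.sum()                       # Equation (5)
        hess = -m / lam**2 + r7.sum() - (r5**2).sum()    # Equation (7)
        lam = lam - upsilon * score / hess
    return lam
```

As a sanity check, with very narrow symmetric memberships the fuzzy MLE should be close to the crisp-data MLE $\stackrel{^}{\lambda }=m/\sum {y}_{i}^{-2}$ obtained from Equation (3).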

3. Bayes Estimator

In this section, we describe the Bayesian method for estimating the parameter $\lambda$. In the Bayesian view, the parameter itself is treated as a random variable whose variability is described by a prior distribution.

Assume that the prior distribution of the unknown scale parameter $\lambda$ of the IRD is the non-informative Jeffreys prior $\pi \left(\lambda \right)$, which is given by [2] :

$\pi \left(\lambda \right)\propto \sqrt{I\left(\lambda \right)}$

where $I\left(\lambda \right)=-mE\left[\frac{{\partial }^{2}\mathrm{ln}f\left(y;\lambda \right)}{\partial {\lambda }^{2}}\right]$

$⇒\pi \left(\lambda \right)=a\sqrt{-mE\left[\frac{{\partial }^{2}\mathrm{ln}f\left(y;\lambda \right)}{\partial {\lambda }^{2}}\right]}$, a is a constant,

where

$E\left[\frac{{\partial }^{2}\mathrm{ln}f\left(y;\lambda \right)}{\partial {\lambda }^{2}}\right]=\frac{-1}{{\lambda }^{2}}⇒\pi \left(\lambda \right)=\frac{a\sqrt{m}}{\lambda },\lambda >0$
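The expectation above follows from a short calculation on the log-density of the IRD, sketched here for completeness:

```latex
\ln f(y;\lambda) = \ln(2\lambda) - 3\ln y - \frac{\lambda}{y^{2}}
\;\Longrightarrow\;
\frac{\partial \ln f}{\partial \lambda} = \frac{1}{\lambda} - \frac{1}{y^{2}},
\qquad
\frac{\partial^{2} \ln f}{\partial \lambda^{2}} = -\frac{1}{\lambda^{2}}.
```

Hence $I\left(\lambda \right)=m/{\lambda }^{2}$ and the Jeffreys prior is proportional to $1/\lambda$, as stated.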

Now, the posterior density function of $\lambda$ given the fuzzy data is:

$h\left(\lambda |\stackrel{˜}{\underset{_}{y}}\right)=\frac{\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)}{{\int }_{0}^{\infty }\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\text{d}\lambda }$

$⇒h\left(\lambda |\stackrel{˜}{\underset{_}{y}}\right)=\frac{\frac{a\sqrt{m}}{\lambda }{\prod }_{i=1}^{m}\int \frac{2\lambda }{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }_{0}^{\infty }\frac{a\sqrt{m}}{\lambda }{\prod }_{i=1}^{m}\int \frac{2\lambda }{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y\text{d}\lambda }$ (8)

In this study, we derive Bayes estimators of $\lambda$ under this non-informative prior, based on the squared error and precautionary loss functions, as follows:

3.1. Bayes Estimator Based on Squared Error Loss Function

The Bayes estimator of any function $g\left(\lambda \right)$ of the scale parameter $\lambda$, under the squared error loss function, may be written as

${\stackrel{^}{g}}_{s}\left(\lambda \right)=E\left[g\left(\lambda \right)|\stackrel{˜}{\underset{_}{y}}\right]=\frac{{\int }_{0}^{\infty }g\left(\lambda \right)\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\text{d}\lambda }{{\int }_{0}^{\infty }\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\text{d}\lambda }$ (9)

3.2. Bayes Estimator Based on Precautionary Loss Function

The precautionary loss function was proposed by Norstrom (1996) [6] :

$L\left(\stackrel{^}{\theta },\theta \right)=\frac{{\left(\theta -\stackrel{^}{\theta }\right)}^{2}}{\stackrel{^}{\theta }}$,

where $\stackrel{^}{\theta }$ is an estimate of $\theta$.

The Bayes estimator of $g\left(\lambda \right)$ under the precautionary loss function may be written as

${\stackrel{^}{g}}_{p}\left(\lambda \right)=\sqrt{E\left[{g}^{2}\left(\lambda \right)|\stackrel{˜}{\underset{_}{y}}\right]}=\sqrt{\frac{{\int }_{0}^{\infty }{g}^{2}\left(\lambda \right)\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\text{d}\lambda }{{\int }_{0}^{\infty }\pi \left(\lambda \right)L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\text{d}\lambda }}$ (10)

Note that the Bayes estimators in (9) and (10) cannot be reduced to closed form. Therefore, we use Tierney and Kadane’s approximation to obtain the Bayes estimators of $\lambda$ for the IRD.

4. Tierney and Kadane’s Approximation Form

Tierney and Kadane (1986) [7] proposed an alternative method for evaluating ratios of integrals of the form (9) and (10).

Setting $Q\left(\lambda \right)=\mathrm{ln}\left(\pi \left(\lambda \right)\right)+\mathrm{ln}\left(L\left(\lambda ;\stackrel{˜}{\underset{_}{y}}\right)\right)$

${\stackrel{^}{g}}_{s}\left(\lambda \right)=E\left[g\left(\lambda \right)|\stackrel{˜}{\underset{_}{y}}\right]=\frac{{\int }_{0}^{\infty }g\left(\lambda \right){\text{e}}^{Q\left(\lambda \right)}\text{d}\lambda }{{\int }_{0}^{\infty }{\text{e}}^{Q\left(\lambda \right)}\text{d}\lambda }$ (11)

${\stackrel{^}{g}}_{p}\left(\lambda \right)=\sqrt{E\left[{g}^{2}\left(\lambda \right)|\stackrel{˜}{\underset{_}{y}}\right]}=\sqrt{\frac{{\int }_{0}^{\infty }{g}^{2}\left(\lambda \right){\text{e}}^{Q\left(\lambda \right)}\text{d}\lambda }{{\int }_{0}^{\infty }{\text{e}}^{Q\left(\lambda \right)}\text{d}\lambda }}$ (12)

Now, set

$H\left(\lambda \right)=\frac{Q\left(\lambda \right)}{m}$

${H}_{s}^{*}\left(\lambda \right)=\frac{\mathrm{ln}\left(g\left(\lambda \right)\right)}{m}+H\left(\lambda \right)$ (13)

and

${H}_{p}^{*}\left(\lambda \right)=\frac{\mathrm{ln}\left({g}^{2}\left(\lambda \right)\right)}{m}+H\left(\lambda \right)$ (14)

$⇒{\stackrel{^}{g}}_{s}\left(\lambda \right)=\frac{{\int }_{0}^{\infty }{\text{e}}^{m{H}_{s}^{\ast }\left(\lambda \right)}\text{d}\lambda }{{\int }_{0}^{\infty }{\text{e}}^{mH\left(\lambda \right)}\text{d}\lambda }$ (15)

${\stackrel{^}{g}}_{p}\left(\lambda \right)={\left[\frac{{\int }_{0}^{\infty }{\text{e}}^{m{H}_{p}^{\ast }\left(\lambda \right)}\text{d}\lambda }{{\int }_{0}^{\infty }{\text{e}}^{mH\left(\lambda \right)}\text{d}\lambda }\right]}^{\frac{1}{2}}$ (16)

Now, applying Laplace’s method to the numerator and denominator, Equations (15) and (16) can be approximated as

${\stackrel{^}{g}}_{s}^{T}\left(\lambda \right)=\sqrt{\frac{{\tau }^{*}}{\tau }}\mathrm{exp}\left\{m\left({H}_{s}^{\ast }\left({\stackrel{^}{\lambda }}^{*}\right)-H\left(\stackrel{^}{\lambda }\right)\right)\right\}$ (17)

${\stackrel{^}{g}}_{p}^{T}\left(\lambda \right)=\sqrt{\sqrt{\frac{{\tau }^{*}}{\tau }}\mathrm{exp}\left\{m\left({H}_{p}^{\ast }\left({\stackrel{^}{\lambda }}^{*}\right)-H\left(\stackrel{^}{\lambda }\right)\right)\right\}}$ (18)

where ${\tau }^{*}$ is minus the inverse of the second derivative of ${H}_{s}^{\ast }\left(\lambda \right)$ or ${H}_{p}^{\ast }\left(\lambda \right)$ at ${\stackrel{^}{\lambda }}^{*}$, depending on which loss function is used; $\tau$ is minus the inverse of the second derivative of $H\left(\lambda \right)$ at $\stackrel{^}{\lambda }$; ${\stackrel{^}{\lambda }}^{*}$ maximizes ${H}_{s}^{\ast }\left(\lambda \right)$ (or ${H}_{p}^{\ast }\left(\lambda \right)$), and $\stackrel{^}{\lambda }$ maximizes $H\left(\lambda \right)$.

Now, the function $H\left(\lambda \right)$ is given by,

$H\left(\lambda \right)=\frac{1}{m}\left[k+\left(m-1\right)\mathrm{ln}\left(\lambda \right)+{\sum }_{i=1}^{m}\mathrm{ln}{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y\right]$ (19)

where,

$k=\mathrm{ln}\left(a\right)+m\mathrm{ln}\left(2\right)+\frac{1}{2}\mathrm{ln}\left(m\right)$ (20)

and the $\stackrel{^}{\lambda }$ that maximizes $H\left(\lambda \right)$ can be obtained by solving the following equation,

$\frac{\partial H\left(\lambda \right)}{\partial \lambda }=\frac{1}{m}\left[\frac{m-1}{\lambda }-{\sum }_{i=1}^{m}\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right]=0$ (21)

It is clear that there is no explicit solution to Equation (21). Therefore, the modified Newton method is applied to solve it:

${\stackrel{^}{\lambda }}^{\left(h+1\right)}={\stackrel{^}{\lambda }}^{\left(h\right)}-\left(\upsilon \right)\frac{{\frac{\partial H\left(\lambda \right)}{\partial \lambda }|}_{\lambda ={\stackrel{^}{\lambda }}^{\left(h\right)}}}{{\frac{{\partial }^{2}H\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{\left(h\right)}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\upsilon >1$ (22)

where

$\frac{{\partial }^{2}H\left(\lambda \right)}{\partial {\lambda }^{2}}=\frac{1}{m}\left[\frac{-\left(m-1\right)}{{\lambda }^{2}}+\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{7}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}-\underset{i=1}{\overset{m}{\sum }}{\left(\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right)}^{2}\right]$ (23)

then,

$\tau =-{\left[{\frac{{\partial }^{2}H\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda =\stackrel{^}{\lambda }}\right]}^{-1}$

Now, we follow the same argument with $g\left(\lambda \right)=\lambda$.

4.1. Tierney and Kadane’s Approximation of λ Based on Squared Error Loss Function (TKS)

Setting $g\left(\lambda \right)=\lambda$, Equation (13) becomes

${H}_{s}^{\ast }\left(\lambda \right)=\frac{\mathrm{ln}\left(\lambda \right)}{m}+H\left(\lambda \right)$

${H}_{s}^{\ast }\left(\lambda \right)=\frac{1}{m}\left[k+\left(m\right)\mathrm{ln}\left(\lambda \right)+{\sum }_{i=1}^{m}\mathrm{ln}{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y\right]$ (24)

where k is a constant as in (20).

Now, the ${\stackrel{^}{\lambda }}^{*}$ that maximizes ${H}_{s}^{\ast }\left(\lambda \right)$ in (24) can be obtained by iterating

${\stackrel{^}{\lambda }}^{\ast }{}^{\left(h+1\right)}={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}-\left(\upsilon \right)\frac{{\frac{\partial {H}_{s}^{\ast }\left(\lambda \right)}{\partial \lambda }|}_{\lambda ={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}}}{{\frac{{\partial }^{2}{H}_{s}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\upsilon >1$ (25)

where

$\frac{\partial {H}_{s}^{\ast }\left(\lambda \right)}{\partial \lambda }=\frac{1}{m}\left[\frac{m}{\lambda }-\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right]$

$\frac{{\partial }^{2}{H}_{s}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}=\frac{1}{m}\left[\frac{-m}{{\lambda }^{2}}+\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{7}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}-\underset{i=1}{\overset{m}{\sum }}{\left(\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right)}^{2}\right]$

and

${\tau }^{*}=-{\left[{\frac{{\partial }^{2}{H}_{s}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{*}}\right]}^{-1}$ (26)

Now, the Bayes estimate of $\lambda$ of the IRD based on the squared error loss function, denoted by ${\stackrel{^}{\lambda }}_{s}^{TK}$, can be obtained from Equation (17), where the H and ${H}_{s}^{\ast }$ quantities are evaluated at $\stackrel{^}{\lambda }$ and ${\stackrel{^}{\lambda }}^{*}$, respectively.
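As an illustration, the TKS estimate can be computed numerically by maximising H and H_s* on a grid of λ values. The triangular membership functions and grid integration below are our own assumptions, not the paper's exact fuzzy information system; note also that the constant k of Equation (20) appears in both H and H_s* and therefore cancels in Equation (17), so it is dropped:

```python
import numpy as np

def log_int3(lam, fuzzy_obs, grid):
    # Sum over i of ln( integral of y^(-3) e^(-lam/y^2) mu_i(y) dy ),
    # with hypothetical triangular memberships (a, b, c).
    dy = grid[1] - grid[0]
    total = 0.0
    for (a, b, c) in fuzzy_obs:
        mu = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0.0, 1.0)
        total += np.log(np.sum(mu * np.exp(-lam / grid**2) / grid**3) * dy)
    return total

def tk_squared_error(fuzzy_obs, lams):
    # TKS estimate from Equation (17): maximise H (Eq. (19)) and H_s* (Eq. (24))
    # over the uniform grid `lams`, whose interior must contain both maximisers.
    m = len(fuzzy_obs)
    grid = np.linspace(1e-3, 30.0, 30000)
    logints = np.array([log_int3(l, fuzzy_obs, grid) for l in lams])
    H = ((m - 1) * np.log(lams) + logints) / m     # Equation (19), k dropped
    Hs = (m * np.log(lams) + logints) / m          # Equation (24), k dropped
    h = lams[1] - lams[0]

    def maximise(vals):
        i = int(np.argmax(vals))
        # tau (resp. tau*) is minus the inverse numerical second derivative
        d2 = (vals[i + 1] - 2 * vals[i] + vals[i - 1]) / h**2
        return vals[i], -1.0 / d2

    H_max, tau = maximise(H)
    Hs_max, tau_s = maximise(Hs)
    return np.sqrt(tau_s / tau) * np.exp(m * (Hs_max - H_max))   # Equation (17)
```

In the crisp-data limit (very narrow memberships) the posterior under the Jeffreys prior is Gamma with mean $m/\sum {y}_{i}^{-2}$, which this approximation should reproduce closely.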

4.2. Bayes Estimate of λ Based on Precautionary Loss Function (TKP)

Setting $g\left(\lambda \right)=\lambda$, Equation (14) becomes

${H}_{p}^{*}\left(\lambda \right)=\frac{\mathrm{ln}\left({\lambda }^{2}\right)}{m}+H\left(\lambda \right)$

${H}_{p}^{\ast }\left(\lambda \right)=\frac{1}{m}\left[k+\left(m+1\right)\mathrm{ln}\left(\lambda \right)+{\sum }_{i=1}^{m}\mathrm{ln}{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y\right]$ (27)

where k is a constant as in (20).

Now, the ${\stackrel{^}{\lambda }}^{*}$ that maximizes ${H}_{p}^{\ast }\left(\lambda \right)$ in (27) is the root of

$\frac{\partial {H}_{p}^{*}\left(\lambda \right)}{\partial \lambda }=\frac{1}{m}\left[\frac{m+1}{\lambda }-\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right]$

and can be obtained iteratively as in

${\stackrel{^}{\lambda }}^{\ast }{}^{\left(h+1\right)}={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}-\left(\upsilon \right)\frac{{\frac{\partial {H}_{p}^{\ast }\left(\lambda \right)}{\partial \lambda }|}_{\lambda ={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}}}{{\frac{{\partial }^{2}{H}_{p}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{\ast }{}^{\left(h\right)}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\upsilon >1$ (28)

where

$\frac{{\partial }^{2}{H}_{p}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}=\frac{1}{m}\left[\frac{-\left(m+1\right)}{{\lambda }^{2}}+\underset{i=1}{\overset{m}{\sum }}\frac{{\int }^{\text{​}}\frac{1}{{y}^{7}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}-\underset{i=1}{\overset{m}{\sum }}{\left(\frac{{\int }^{\text{​}}\frac{1}{{y}^{5}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}{{\int }^{\text{​}}\frac{1}{{y}^{3}}{\text{e}}^{\frac{-\lambda }{{y}^{2}}}\mu {f}_{{\stackrel{˜}{y}}_{i}}\left(y\right)\text{d}y}\right)}^{2}\right]$

and

${\tau }^{*}=-{\left[{\frac{{\partial }^{2}{H}_{p}^{\ast }\left(\lambda \right)}{\partial {\lambda }^{2}}|}_{\lambda ={\stackrel{^}{\lambda }}^{*}}\right]}^{-1}$ (29)

Now, the Bayes estimate of $\lambda$ of the IRD based on the precautionary loss function, denoted by ${\stackrel{^}{\lambda }}_{p}^{TK}$, can be obtained from Equation (18), where the H and ${H}_{p}^{\ast }$ quantities are evaluated at $\stackrel{^}{\lambda }$ and ${\stackrel{^}{\lambda }}^{*}$, respectively.
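The TKP estimate admits the same numerical treatment as the squared-error case: the only changes are the (m+1) ln λ term of Equation (27) and the outer square root of Equation (18). As before, the triangular memberships and grid integration are illustrative assumptions of ours, and the constant k cancels and is dropped:

```python
import numpy as np

def tk_precautionary(fuzzy_obs, lams):
    # TKP estimate from Equation (18): maximise H (Eq. (19)) and H_p* (Eq. (27))
    # over the uniform grid `lams`, whose interior must contain both maximisers.
    m = len(fuzzy_obs)
    grid = np.linspace(1e-3, 30.0, 30000)
    dy = grid[1] - grid[0]

    def log_int3(lam):
        # Sum over i of ln( integral of y^(-3) e^(-lam/y^2) mu_i(y) dy ),
        # with hypothetical triangular memberships (a, b, c).
        total = 0.0
        for (a, b, c) in fuzzy_obs:
            mu = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0.0, 1.0)
            total += np.log(np.sum(mu * np.exp(-lam / grid**2) / grid**3) * dy)
        return total

    logints = np.array([log_int3(l) for l in lams])
    H = ((m - 1) * np.log(lams) + logints) / m     # Equation (19), k dropped
    Hp = ((m + 1) * np.log(lams) + logints) / m    # Equation (27), k dropped
    h = lams[1] - lams[0]

    def maximise(vals):
        i = int(np.argmax(vals))
        d2 = (vals[i + 1] - 2 * vals[i] + vals[i - 1]) / h**2
        return vals[i], -1.0 / d2                   # (max value, tau)

    H_max, tau = maximise(H)
    Hp_max, tau_p = maximise(Hp)
    inner = np.sqrt(tau_p / tau) * np.exp(m * (Hp_max - H_max))
    return np.sqrt(inner)                           # Equation (18)
```

In the crisp-data limit the exact value of ${\stackrel{^}{\lambda }}_{p}^{TK}$ is $\sqrt{E\left[{\lambda }^{2}\right]}=\sqrt{m\left(m+1\right)}/\sum {y}_{i}^{-2}$ under the Jeffreys prior, which the approximation should match closely.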

5. Simulation Study

To illustrate and compare the methods described above, a Monte-Carlo simulation study was performed. We generated i.i.d. random samples $\underset{_}{y}$ from the IRD by the inverse-transformation method, with sizes n = 10, 30 and 90 to cover small, medium and large data sets, and with scale parameter values λ = 0.3, 0.5, 1, 1.5, 2. Each observation of $\underset{_}{y}$ was then fuzzified using an appropriately selected membership function among the four membership functions of the fuzzy information system shown in Figure 1.

The simulation program was written in MATLAB (R2010b). The results of the Monte-Carlo simulation are summarized in Table 1.

The initial values required for the modified Newton-Raphson method were chosen to be the symmetrical rank regression estimators. The comparisons between the parameter estimates were based on MSE values, where [8] :

Figure 1. Imprecision information system.

Table 1. MSE values for estimates of the scale parameter ( $\lambda$ ) of IRD with different cases.

n: sample size; ${\stackrel{^}{\lambda }}_{\text{MNR}}$: maximum likelihood estimate of λ by modified Newton-Raphson; ${\stackrel{^}{\lambda }}_{\text{TKS}}$: Bayes estimate of λ of the IRD based on the squared error loss function; ${\stackrel{^}{\lambda }}_{\text{TKP}}$: Bayes estimate of λ of the IRD based on the precautionary loss function.

$\text{MSE}\left(\stackrel{^}{\lambda }\right)=\frac{{\sum }_{j=1}^{L}{\left({\stackrel{^}{\lambda }}_{j}-\lambda \right)}^{2}}{L}$ (30)

${\stackrel{^}{\lambda }}_{j}$: the estimate of $\lambda$ at the jth run;

L: the number of sample replications, chosen to be 500.
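A stripped-down version of this design, with the crisp-data MLE $\stackrel{^}{\lambda }=n/\sum {y}_{i}^{-2}$ standing in for the fuzzy-data estimators (the paper's own simulations, run in MATLAB, fuzzify each observation first), can be sketched as:

```python
import numpy as np

def simulate_mse(lam, n, L, rng):
    # Monte-Carlo estimate of Equation (30) for one (lam, n) configuration.
    errs = np.empty(L)
    for j in range(L):
        u = rng.uniform(size=n)
        y = np.sqrt(lam / (-np.log(u)))       # inverse-transform IRD sample
        lam_hat = n / np.sum(1.0 / y**2)      # crisp MLE stand-in estimator
        errs[j] = (lam_hat - lam)**2
    return errs.mean()
```

Running this over λ ∈ {0.3, 0.5, 1, 1.5, 2} and n ∈ {10, 30, 90} with L = 500 reproduces the layout of Table 1; larger n should yield smaller MSE.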

6. Conclusions and Recommendations

The most important conclusions of Monte-Carlo simulation results are:

The Tierney and Kadane approximation based on the squared error loss function (TKS) performed best among the compared estimates for all sample sizes and all cases except $\lambda =0.3$ and $\lambda =0.5$, where the Bayes estimate based on the precautionary loss function (TKP) performed best.

Based on this, we recommend,

1) Using the TKS estimate to compute estimates of the scale parameter of the IRD for all sample sizes in the cases $\lambda =1$, $\lambda =1.5$ and $\lambda =2$.

2) Using the TKP estimate to compute estimates of the scale parameter of the IRD for all sample sizes in the cases $\lambda =0.3$ and $\lambda =0.5$.

3) For further study, we suggest that similar work can be done using other informative priors for the parameter of the IRD, and that the parameter can be estimated by other methods.

4) This research can be applied to real data to demonstrate the importance of this distribution in practice.

Cite this paper: AL-Sultany, S. (2019) Bayesian Estimations with Fuzzy Data to Estimation Inverse Rayleigh Scale Parameter. Open Journal of Applied Sciences, 9, 673-681. doi: 10.4236/ojapps.2019.98054.
References

[1]   Rao, G.S. and Mbwambo, S. (2019) Exponentiated Inverse Rayleigh Distribution and an Application to Coating Weights of Iron Sheets Data. Journal of Probability and Statistics, 2019, Article ID: 7519429.
https://doi.org/10.1155/2019/7519429

[3]   Rasheed, H.A., Ismail, S.Z. and Jabir, A.G. (2015) A Comparison of the Classical Estimators with the Bayes Estimators of One Parameter Inverse Rayleigh Distribution. International Journal of Advanced Research, 3, 738-749.

[4]   Rasheed, H.A. and Aref, R.K.H. (2016) Bayesian Approach in Estimation of Scale Parameter of Inverse Rayleigh Distribution. Mathematics and Statistics Journal, 2, 8-13.

[5]   Khoolenjani, N.B. and Shahsanaei, F. (2016) Estimating the Parameter of Exponential Distribution under Type-II Censoring from Fuzzy Data. Journal of Statistical Theory and Applications, 15, 181-195.
https://doi.org/10.2991/jsta.2016.15.2.8

[6]   Norstrom, J.G. (1996) The Use of Precautionary Loss Function in Risk Analysis. IEEE Transactions on Reliability, 45, 400-403.
https://doi.org/10.1109/24.536992

[7]   Tierney, L. and Kadane, J.B. (1986) Accurate Approximations for Posterior Moments and Marginal Densities. Journal of the American Statistical Association, 81, 82-86.
https://doi.org/10.1080/01621459.1986.10478240

[8]   Pak, A., Parham, G.A. and Saraj, M. (2013) Inference for the Weibull Distribution Based on Fuzzy Data. Revista Colombiana de Estadistica, 36, 339-358.
