Maximum Entropy Empirical Likelihood Methods Based on Laplace Transforms for Nonnegative Continuous Distribution with Actuarial Applications
Author(s) Andrew Luong
ABSTRACT
Maximum entropy empirical likelihood (MEEL) methods, also known as exponentially tilted empirical likelihood methods, using constraints from model Laplace transforms (LT) are introduced in this paper. An estimate of the overall loss of efficiency based on the Fourier cosine series expansion of the density function is proposed to quantify the loss of efficiency when using MEEL methods. Penalty function methods are suggested for numerical implementation of the MEEL methods. The methods can easily be adapted to estimate continuous distributions with support on the real line encountered in finance by using constraints based on the model moment generating function instead of the LT.

1. Introduction

1.1. New Distributions Created Using Laplace Transforms

Nonnegative continuous parametric families of distributions are useful for modeling loss data or lifetime data in actuarial sciences. Many of these families do not have closed form densities. The densities can only be expressed by means of infinite series representations, but their corresponding Laplace transforms (LT) have closed form expressions and are relatively simple to handle. An illustration is given below.

Example

Hougaard [1] introduced the positive tempered stable (PTS) distribution; the PTS distribution is obtained by tilting the positive stable (PS) distribution. The random variable $X\ge 0$ follows a positive stable law if its Laplace transform is given by

${\phi }_{\beta }\left(s\right)=E\left({\text{e}}^{-sX}\right)={\int }_{0}^{\infty }{\text{e}}^{-sx}{f}_{\beta }\left(x\right)\text{d}x={\text{e}}^{-\frac{\delta }{\alpha }{s}^{\alpha }},\text{\hspace{0.17em}}0<\alpha <1,\text{\hspace{0.17em}}\delta >0,\text{\hspace{0.17em}}\beta ={\left(\delta ,\alpha \right)}^{\prime }.$

The density function ${f}_{\beta }\left(x\right)$ has no closed form but can be represented as an infinite series.

Now if we create a new distribution using the Esscher transform technique, the corresponding new density can be expressed using ${f}_{\beta }\left(x\right)$ and is given by

$\frac{{\text{e}}^{-\theta x}{f}_{\beta }\left(x\right)}{{\phi }_{\beta }\left(\theta \right)}$ ,

and its LT is

$L\left(s\right)=\frac{{\phi }_{\beta }\left(s+\theta \right)}{{\phi }_{\beta }\left(\theta \right)}$ .

This operation adds an extra tilting parameter $\theta$ to the parameter vector $\beta$ of the original distribution and a new distribution is created. This new distribution is the positive tempered stable (PTS) distribution with Laplace transform given by

$\begin{array}{l}L\left(s\right)=E\left({\text{e}}^{-sX}\right)=\mathrm{exp}\left(-\frac{\delta }{\alpha }\left[{\left(\theta +s\right)}^{\alpha }-{\theta }^{\alpha }\right]\right),\text{\hspace{0.17em}}\\ s>0,\text{\hspace{0.17em}}\theta >0,\text{\hspace{0.17em}}0<\alpha <1,\text{\hspace{0.17em}}\delta >0.\end{array}$ (1)

The first four cumulants are given by Hougaard [1] with

$\begin{array}{l}{c}_{1}=\delta {\theta }^{\alpha -1},\text{\hspace{0.17em}}{c}_{2}=\delta \left(1-\alpha \right){\theta }^{\alpha -2},\text{\hspace{0.17em}}{c}_{3}=\delta \left(1-\alpha \right)\left(2-\alpha \right){\theta }^{\alpha -3},\\ {c}_{4}=\delta \left(1-\alpha \right)\left(2-\alpha \right)\left(3-\alpha \right){\theta }^{\alpha -4}.\end{array}$
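Since the cumulant generating function of the PTS law is $K\left(s\right)=\mathrm{log}L\left(s\right)$ with $L\left(s\right)$ given by Expression (1), and ${c}_{n}={\left(-1\right)}^{n}{K}^{\left(n\right)}\left(0\right)$, these closed forms can be checked numerically from the LT alone. A minimal Python sketch, with illustrative parameter values not taken from the paper:

```python
# Cumulants of the PTS law from Expression (1): K(s) = log L(s) and
# c_n = (-1)^n K^{(n)}(0). Parameter values are illustrative.
delta, alpha, theta = 1.5, 0.5, 2.0

def K(s):
    return -(delta / alpha) * ((theta + s) ** alpha - theta ** alpha)

# Central finite differences for the first two derivatives at 0.
h = 1e-3
c1_num = -(K(h) - K(-h)) / (2 * h)
c2_num = (K(h) - 2 * K(0.0) + K(-h)) / h ** 2

# Hougaard's closed forms.
c1 = delta * theta ** (alpha - 1)
c2 = delta * (1 - alpha) * theta ** (alpha - 2)

print(c1_num, c1)  # both approximately 1.060660
print(c2_num, c2)  # both approximately 0.265165
```

The same finite-difference device applies to any model specified only through its LT.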

For the limiting case $\alpha \to {0}_{+}$ we have the gamma distribution. In general, the density function has no closed form except for $\alpha =\frac{1}{2}$ . For $\alpha =\frac{1}{2}$ we obtain the inverse Gaussian (IG) distribution with density function given by Hougaard ( [1] , p.392) as

$f\left(x;\theta ,\delta \right)=\frac{\delta }{{x}^{\frac{3}{2}}\sqrt{\text{π}}}\mathrm{exp}\left(2\delta {\theta }^{\frac{1}{2}}\right)\mathrm{exp}\left(-\theta x-\frac{{\delta }^{2}}{x}\right),\text{\hspace{0.17em}}x\ge 0.$

For other parameterisations of the IG distribution, see Panjer and Willmot ( [2] , p.114).

Hougaard [1] gave the name power variance to the PTS distribution and developed moment estimators for it. There are many names given to this distribution. In the financial literature the name PTS is commonly used, see Schoutens ( [3] , p.56), Kuchler and Tappe [4] . In actuarial sciences, it is also called the generalized gamma distribution, see Gerber [5] .

Many new infinitely divisible (ID) distributions can be created by applying operations on the LTs of existing distributions. One of these operations is the power mixture (PM) operator, see Abate and Whitt ( [6] , p.92). It can be summarized as follows. Let $\kappa \left(s\right)$ be the LT of a continuous nonnegative ID random variable $X$ , and let ${X}_{t}$ be the random variable with LT ${\left(\kappa \left(s\right)\right)}^{t},t\ge 0$ ; infinite divisibility of $X$ guarantees that ${\left(\kappa \left(s\right)\right)}^{t}$ is a valid LT. With the introduction of a new random variable $T$ which is also nonnegative, continuous and ID with distribution $H\left(t\right)$ and LT ${\kappa }_{H}\left(s\right)$ , a new nonnegative continuous distribution can then be created with LT

$\eta \left(s\right)=PM\left(\kappa ,H\right)={\int }_{0}^{\infty }{\left(\kappa \left(s\right)\right)}^{t}\text{d}H\left(t\right)={\kappa }_{H}\left(-\mathrm{log}\left(\kappa \left(s\right)\right)\right).$ (2)

The new distribution is said to be created using the power mixture (PM) operator, and the random variable $T$ is called the mixing random variable.

The new distribution obtained will have more parameters than the distribution of $X$ . For other methods for creating new distributions, such as compounding methods, see Klugman et al. ( [7] , p.141-143). For other ID nonnegative distributions with closed form LT’s, see Section (1.2) of Luong [8] . ID nonnegative distributions also appear in risk theory as they arise naturally from Lévy processes often used to model aggregate claim processes, see Gerber [9] , Dufresne and Gerber [10] for examples. We often work with ID distributions and, for completeness, a definition is given below.
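Expression (2) can be made concrete with a small numerical sketch. The choices below are illustrative and not from the paper: $\kappa \left(s\right)=1/\left(1+s\right)$ , the LT of a standard exponential (which is ID), and $T\sim \text{Gamma}\left(r,1\right)$ with LT ${\kappa }_{H}\left(u\right)={\left(1+u\right)}^{-r}$ ; the closed form ${\kappa }_{H}\left(-\mathrm{log}\kappa \left(s\right)\right)$ is checked against a direct quadrature of the mixture integral:

```python
import math

# Power mixture sketch for Expression (2) with illustrative choices:
# kappa(s) = 1/(1+s) (LT of a standard exponential, which is ID) and
# T ~ Gamma(r, 1) with LT kappa_H(u) = (1+u)^(-r).
r = 2.0
kappa = lambda s: 1.0 / (1.0 + s)
kappa_H = lambda u: (1.0 + u) ** (-r)

def eta(s):
    # Right-hand side of Expression (2).
    return kappa_H(-math.log(kappa(s)))

def eta_by_mixture(s, nstep=100_000, t_max=40.0):
    # Left-hand side: trapezoidal approximation of the integral of
    # kappa(s)^t against the Gamma(r, 1) density.
    dt = t_max / nstep
    total = 0.0
    for i in range(nstep + 1):
        t = i * dt
        dens = t ** (r - 1) * math.exp(-t) / math.gamma(r)
        w = 0.5 if i in (0, nstep) else 1.0
        total += w * kappa(s) ** t * dens
    return total * dt

s = 0.7
approx = eta_by_mixture(s)
print(eta(s), approx)  # both approximately 0.4268
```

Here $\eta \left(s\right)={\left(1+\mathrm{log}\left(1+s\right)\right)}^{-r}$ , so the agreement of the two columns verifies the identity behind the PM operator for this pair of LTs.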

Definition (Lévy-Khintchine Representation). A characteristic function (CF) $\omega \left(v\right)$ of a random variable $X$ is infinitely divisible if and only if it can be represented as

$\omega \left(v\right)=\mathrm{exp}\left(ibv+\underset{-\infty }{\overset{\infty }{\int }}\left({\text{e}}^{ixv}-1-\frac{ivx}{1+{x}^{2}}\right)\left(\frac{1+{x}^{2}}{{x}^{2}}\right)\text{d}G\left(x\right)\right),$

where $G\left(x\right)$ is a bounded and non-decreasing function with $G\left(-\infty \right)=0,$ see Rao [11] . An equivalent expression known as the canonical Lévy-Khintchine representation is also used in the literature, see Sato [12] . A similar representation using the LT instead of the CF for nonnegative distributions can be found in Feller ( [13] , p.450).

1.2. Quasi-Likelihood Estimation

For statistical inferences, we assume that we have a random sample with n observations, ${X}_{1},\cdots ,{X}_{n}$ . These observations are independent and identically distributed as $X$ which has a model distribution with closed form LT ${L}_{\beta }\left(s\right),$ where $\beta ={\left({\beta }_{1},\cdots ,{\beta }_{p}\right)}^{\prime }$ is the vector of parameters. The true parameter vector is denoted by ${\beta }_{0}$ . The density function ${f}_{\beta }\left(x\right)$ has no closed form, which makes likelihood estimation difficult to implement. Consequently, we would like to estimate $\beta$ based on ${L}_{\beta }\left(s\right)$ . Quasi-likelihood (QL) estimation, among other methods which do not rely on the true density, can be considered. A brief review of QL estimation is given below.

Godambe and Thompson [14] developed estimating equations theory and extended quasi-likelihood theory in their seminal paper. They also proposed estimators using quadratic estimating equations which can be viewed as based on quasi-score functions obtained by projecting the vector of score functions, denoted by

$\frac{\partial \mathrm{log}{f}_{\beta }\left(x\right)}{\partial \beta }$

on the linear space spanned by the basis

$\begin{array}{l}{B}_{Q}=\left\{{g}_{1}\left(x\right)={h}_{1}\left(x\right)-E\left(x\right),{g}_{2}\left(x\right)={h}_{2}\left(x\right)-E\left({x}^{2}\right)\right\},\text{\hspace{0.17em}}\\ {h}_{1}\left(x\right)=x,\text{\hspace{0.17em}}{h}_{2}\left(x\right)={x}^{2}.\end{array}$

Note that $E\left(x\right),E\left({x}^{2}\right)$ can be obtained from the model Laplace transform denoted by ${L}_{\beta }\left(s\right)$ . See Chan and Ghosh [15] for a geometric point of view of estimating functions. Godambe and Thompson [14] obtained the most efficient estimators based on quasi-score functions obtained by projecting the true score functions on the linear space spanned by ${B}_{Q}$ . They are the most efficient quadratic estimating function (EQEF) estimators within the class of quadratic estimating functions introduced by Crowder [16] . The class includes Gaussian quasi-likelihood estimating equations. Consequently, the EQEF estimators are more efficient than normal quasi-likelihood (NQL) estimators in general.
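Indeed, $E\left(x\right)=-{{L}^{\prime }}_{\beta }\left(0\right)$ and $E\left({x}^{2}\right)={{L}^{″}}_{\beta }\left(0\right)$ , so both moments used in ${B}_{Q}$ can be read off the LT by numerical differentiation when no closed form is available. A short sketch with an illustrative gamma model, chosen because its moments are known exactly:

```python
# Moments from the model LT: E(X) = -L'(0) and E(X^2) = L''(0),
# approximated by central finite differences. Illustrative Gamma(k, theta)
# model: L(s) = (1 + s/theta)^(-k), E(X) = k/theta, E(X^2) = k(k+1)/theta^2.
k, theta = 3.0, 2.0
L = lambda s: (1.0 + s / theta) ** (-k)

h = 1e-4
EX = -(L(h) - L(-h)) / (2 * h)
EX2 = (L(h) - 2 * L(0.0) + L(-h)) / h ** 2

print(EX, k / theta)                  # both approximately 1.5
print(EX2, k * (k + 1) / theta ** 2)  # both approximately 3.0
```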

The EQEF estimators are simple to obtain since the basis ${B}_{Q}$ has only two elements. The fact that they are based on the best approximations of the score functions allows them to outperform moment estimators in many circumstances.

For example, we can consider a parametric model with 3 parameters, which leads to solving the moment equations

$\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}\left({x}_{i}-E\left(x\right)\right)=0,\text{\hspace{0.17em}}\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}\left({x}_{i}^{2}-E\left({x}^{2}\right)\right)=0,\text{\hspace{0.17em}}\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}\left({x}_{i}^{3}-E\left({x}^{3}\right)\right)=0.$

It is easy to see that the quasi-score functions of moment methods belong to the linear space spanned by the basis

${B}_{mom}=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{x}^{3}-E\left({x}^{3}\right)\right\}$ .

Even though ${B}_{mom}$ includes all the elements of ${B}_{Q}$ , the EQEF methods can outperform the method of moments because the quasi-score functions of the method of moments are not the best approximations based on ${B}_{mom}$ .

Therefore, in this paper we shall emphasize quasi-score functions constructed as best linear combinations of elements of a basis, and propose some bases that can provide better efficiencies than the basis formed by linear and quadratic polynomials. The basis should only make use of the model LT. Note that moment estimators based on selected points of the model LT have been discussed in the literature, see Read ( [17] , p.151-153). The methods appear to be useful for fields of application which make extensive use of the LT of the distribution, such as actuarial sciences or engineering.

We shall take a unified approach in which both QL methods and MEEL methods are related to the notion of projecting the true score functions on a linear space spanned by a finite basis. Within this framework, MEEL estimators are shown to be asymptotically first-order equivalent to QL estimators using the same basis. For the first and higher order properties of empirical likelihood estimators, see Newey and Smith [18] , Smith [19] .

MEEL methods make use of information from the parametric model via constraints, and there is a one-to-one correspondence between the constraints, the moment conditions and the elements of the basis. Although the general theory of MEEL and QL methods is well established, the question of which bases should be chosen to achieve good efficiencies appears to be a relevant one for applications. There is also a need to quantify the loss of efficiency; consequently, in this paper we propose a measure of the loss of efficiency to evaluate whether MEEL methods are appropriate for analyzing a data set from a specific field of applications.

We hope that the answers will give ideas on how to choose moment conditions or constraints for MEEL estimators. They will also give ideas on how to construct semi-parametric bounds as defined by Chamberlain ( [20] , p.311), which can approximate the parametric bound given by the inverse of the Fisher information matrix. We emphasize MEEL methods but offer a unified view of both MEEL and QL methods as they are related. Numerical implementations of the MEEL methods are also discussed to facilitate practical use of these methods for applications in actuarial sciences. We shall discuss the quasi-score functions of the QL and QD methods below.

Let ${l}_{i}={l}_{i}\left(\beta \right)=\frac{\partial }{\partial {\beta }_{i}}\mathrm{log}f\left(x;\beta \right),i=1,\cdots ,p$ be the score functions and note that $E\left({l}_{i}\right)=0$ in general. If we try to approximate ${l}_{i}\left(\beta \right)$ using a quasi-score function formed by linear combinations of the functions ${h}_{1}\left(x\right),\cdots ,{h}_{k}\left(x\right)$ , this leads to considering quasi-score functions of the form

${l}_{i}^{a}={a}_{i0}+{\sum }_{j=1}^{k}{a}_{ij}{h}_{j}\left(x\right),i=1,\cdots ,p.$

We shall also impose the condition of unbiasedness of estimating function by requiring $E\left({l}_{i}^{a}\right)=0,i=1,\cdots ,p$ . With these restrictions, it is equivalent to consider

${l}_{i}^{a}={\sum }_{j=1}^{k}{a}_{ij}\left({h}_{j}\left(x\right)-{E}_{\beta }\left({h}_{j}\left(x\right)\right)\right),i=1,\cdots ,p.$

Using vector notations,

${l}_{i}^{a}={{a}^{\prime }}_{i}g,$

$g\left(x\right)={\left({h}_{1}\left(x\right)-{E}_{\beta }\left({h}_{1}\left(x\right)\right),\cdots ,{h}_{k}\left(x\right)-{E}_{\beta }\left({h}_{k}\left(x\right)\right)\right)}^{\prime }$

and define

$h\left(x\right)={\left({h}_{1}\left(x\right),\cdots ,{h}_{k}\left(x\right)\right)}^{\prime }.$

For the best approximation obtained by projecting ${l}_{i}\left(\beta \right)$ on the linear space spanned by the basis $B=\left\{{h}_{1}\left(x\right)-{E}_{\beta }\left({h}_{1}\left(x\right)\right),\cdots ,{h}_{k}\left(x\right)-{E}_{\beta }\left({h}_{k}\left(x\right)\right)\right\}$ , we look for, for each $i$ , the vector of coefficients ${a}_{i}^{*}$ which minimizes

$E{\left({l}_{i}-{{a}^{\prime }}_{i}g\right)}^{2}$ ,

where $E\left(\cdot \right)={E}_{\beta }\left(\cdot \right)$ , the expectation being taken under ${f}_{\beta }\left(x\right)$ .

Using results of the proof of Theorem 4.1 given by Luong and Doray ( [21] , p.150), the optimum vector is ${a}_{i}^{*}$ with

${a}_{i}^{*}{}^{\prime }={E}_{\beta }\left(\frac{\partial {g}^{\prime }}{\partial {\beta }_{i}}\right){\Sigma }^{-1},\text{\hspace{0.17em}}i=1,\cdots ,p$ (3)

$\Sigma$ is the covariance matrix of $h\left(x\right)$ under ${f}_{\beta }\left(x\right)$ . We also use the notation $\Sigma =\Sigma \left(\beta \right)$ if an emphasis on the dependence on $\beta$ is needed. It is easy to see that the best approximation is given by

${l}_{i}^{*}\left(\beta \right)={a}_{i}^{*}{}^{\prime }\left(\beta \right)g\left(\beta \right),\text{\hspace{0.17em}}i=1,\cdots ,p.$

Note that the elements of ${a}_{i}^{*}$ need to be spelled out explicitly, which means that the covariance matrix $\Sigma$ needs to be known or estimated in order to apply quasi-likelihood estimation. MEEL estimation does not require this and yet produces asymptotically equivalent estimators. This is one of the main advantages of MEEL estimation over QL estimation.
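Expression (3) can be illustrated with a deliberately simple case, not from the paper: an exponential model with rate $\theta$ and the quadratic basis $h\left(x\right)={\left(x,{x}^{2}\right)}^{\prime }$ , for which $E\left({X}^{k}\right)=k!/{\theta }^{k}$ , so $\Sigma$ and ${E}_{\beta }\left(\partial {g}^{\prime }/\partial \theta \right)$ are available in closed form:

```python
import math

# Expression (3) for an illustrative exponential model with rate theta and
# quadratic basis h(x) = (x, x^2); all moments are E(X^k) = k!/theta^k.
theta = 2.0
m = [math.factorial(k) / theta ** k for k in range(5)]  # m[k] = E(X^k)

# Covariance matrix of h(X) under the model.
Sigma = [[m[2] - m[1] ** 2, m[3] - m[1] * m[2]],
         [m[3] - m[1] * m[2], m[4] - m[2] ** 2]]
# E(dg'/dtheta) = -dE(h')/dtheta with E(h') = (1/theta, 2/theta^2).
v = [1.0 / theta ** 2, 4.0 / theta ** 3]

# a_i*' = E(dg'/dtheta) Sigma^{-1}, via the explicit 2x2 inverse.
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
a_star = [(v[0] * Sigma[1][1] - v[1] * Sigma[0][1]) / det,
          (v[1] * Sigma[0][0] - v[0] * Sigma[1][0]) / det]

print(a_star)  # [1.0, 0.0]
```

Here ${a}^{*}=\left(1,0\right)$ : the optimal quasi-score reduces to $x-E\left(x\right)$ , whose root is the maximum likelihood estimator $1/\stackrel{¯}{x}$ , so nothing is lost by projecting in this simple case.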

Quadratic distance (QD) estimation as given by Luong and Thompson [22] can be viewed as a form of quasi-likelihood estimation. Numerically, it might be easier to implement QD methods than QL methods defined using estimating functions, as QD estimation minimizes an objective function rather than solving for roots of the QL estimating equations. QD estimation is briefly discussed below.

Let ${u}_{n}$ be a vector defined based on observations,

${{u}^{\prime }}_{n}=\left({\int }_{0}^{\infty }{h}_{1}\text{d}{F}_{n},\cdots ,{\int }_{0}^{\infty }{h}_{k}\text{d}{F}_{n}\right).$

Its model counterpart is given by

${{u}^{\prime }}_{\beta }=\left({\int }_{0}^{\infty }{h}_{1}\text{d}{F}_{\beta },\cdots ,{\int }_{0}^{\infty }{h}_{k}\text{d}{F}_{\beta }\right)$

where ${F}_{n}$ is the sample distribution function and its model counterpart is denoted by ${F}_{\beta }$ .

The QD estimators $\stackrel{^}{{\beta }_{Q}}$ are obtained by minimizing the quadratic form defined as

$Q\left(\beta \right)={\left({u}_{n}-{u}_{\beta }\right)}^{\prime }{\Sigma }^{-1}\left({u}_{n}-{u}_{\beta }\right).$

In practice, $\Sigma$ is replaced by a consistent estimate and we minimize

$Q\left(\beta \right)={\left({u}_{n}-{u}_{\beta }\right)}^{\prime }{\stackrel{^}{\Sigma }}^{-1}\left({u}_{n}-{u}_{\beta }\right),$ (4)

$\stackrel{^}{\Sigma }$ is a consistent estimate of $\Sigma \left(\beta \right)$ under the true vector of parameters ${\beta }_{0}$ . One can see that this procedure is equivalent to using quasi-score functions obtained by projecting the true score functions on the linear space spanned by $B$ , since minimizing Expression (4) leads to solving for $\beta$ the system of equations

${\sum }_{j=1}^{n}{E}_{\beta }\left(\frac{\partial {g}^{\prime }}{\partial {\beta }_{i}}\right){\stackrel{^}{\Sigma }}^{-1}g\left({x}_{j},\beta \right)=0,\text{\hspace{0.17em}}i=1,\cdots ,p$ ,

${E}_{\beta }\left(\frac{\partial {g}^{\prime }}{\partial {\beta }_{i}}\right)=-\frac{\partial {E}_{\beta }\left({h}^{\prime }\right)}{\partial {\beta }_{i}},\text{\hspace{0.17em}}i=1,\cdots ,p.$

Observe that the vector ${\stackrel{^}{a}}_{i}^{*}{}^{\prime }={E}_{\beta }\left(\frac{\partial {g}^{\prime }}{\partial {\beta }_{i}}\right){\stackrel{^}{\Sigma }}^{-1}$ is asymptotically equivalent to ${a}_{i}^{*}{}^{\prime }$ .

From results in Luong and Thompson ( [22] , p.245), the asymptotic distribution of $\stackrel{^}{{\beta }_{Q}}$ is given as

$\sqrt{n}\left(\stackrel{^}{{\beta }_{Q}}-{\beta }_{0}\right)\stackrel{L}{\to }N\left(0,V\right).$

$V={\left({S}^{\prime }{\Sigma }^{-1}S\right)}^{-1}$ (5)

The matrix ${S}^{\prime }$ can be expressed as

${S}^{\prime }=\left[\begin{array}{ccc}\frac{\partial {E}_{\beta }\left({h}_{1}\right)}{\partial {\beta }_{1}}& \dots & \frac{\partial {E}_{\beta }\left({h}_{k}\right)}{\partial {\beta }_{1}}\\ ⋮& \ddots & ⋮\\ \frac{\partial {E}_{\beta }\left({h}_{1}\right)}{\partial {\beta }_{p}}& \dots & \frac{\partial {E}_{\beta }\left({h}_{k}\right)}{\partial {\beta }_{p}}\end{array}\right]$ .

The elements of the matrix $V$ are evaluated under $\beta ={\beta }_{0}$ ; ${S}^{\prime }$ is the transpose of $S$ .
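The QD procedure can be sketched end to end for an illustrative exponential model with $h\left(x\right)={\left(x,{x}^{2}\right)}^{\prime }$ : simulate data, form ${u}_{n}$ and $\stackrel{^}{\Sigma }$ , and minimize Expression (4) over a grid. All numerical choices below (sample size, seed, grid) are for illustration only:

```python
import random

# QD estimation sketch for an illustrative exponential model with rate
# theta0, using the basis h(x) = (x, x^2).
random.seed(12345)
theta0, n = 2.0, 5000
xs = [random.expovariate(theta0) for _ in range(n)]

# u_n; the model counterpart is u_beta = (1/theta, 2/theta^2).
u1 = sum(xs) / n
u2 = sum(x * x for x in xs) / n

# Consistent estimate of Sigma: the sample covariance matrix of (x, x^2).
c11 = sum((x - u1) ** 2 for x in xs) / n
c12 = sum((x - u1) * (x * x - u2) for x in xs) / n
c22 = sum((x * x - u2) ** 2 for x in xs) / n
det = c11 * c22 - c12 * c12

def Q(t):
    # Expression (4) with the estimated weight matrix, via the 2x2 inverse.
    d1, d2 = u1 - 1.0 / t, u2 - 2.0 / t ** 2
    return (c22 * d1 * d1 - 2.0 * c12 * d1 * d2 + c11 * d2 * d2) / det

# Crude but adequate one-dimensional minimization over a grid.
theta_hat = min((0.5 + 0.001 * i for i in range(3500)), key=Q)
print(theta_hat)  # close to theta0 = 2.0
```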

Following Morton ( [23] , p.228), the matrix

${I}_{B}\left(\beta \right)={S}^{\prime }\left(\beta \right){\Sigma }^{-1}\left(\beta \right)S\left(\beta \right)$ (6)

can be defined as the information matrix of the vector of optimum quasi-score functions and it is related to the semiparametric bounds using the moment conditions as given by Chamberlain ( [20] , p.311). The moment conditions can be identified with the elements of the basis, and so can the constraints used for MEEL methods.

Although QL and MEEL methods generate asymptotically equivalent estimators, there are reasons to prefer MEEL methods over quasi-likelihood methods.

With MEEL methods, we have the following main advantages:

1) With QL methods, the matrix $\Sigma$ , which in general depends on $\beta$ , must be specified explicitly, which might restrict the elements that can be included in the basis: only elements whose covariances have a relatively simple form can be included, otherwise $\Sigma$ will be complicated. MEEL methods do not require $\Sigma$ .

2) If $\Sigma$ is replaced by a consistent estimate $\stackrel{^}{\Sigma }$ under ${\beta }_{0}$ , the estimate is often not accurate enough, especially when the sample size n is not large. $\stackrel{^}{\Sigma }$ then tends to be nearly singular even with only a few elements in the basis, which creates numerical instability when applying QD or quasi-likelihood methods.

3) Goodness of fit test statistics with limiting chi-square distributions for testing the model can be constructed in a unified way with MEEL methods. This feature is not shared by QL methods.

Within the class of empirical likelihood methods, the MEEL methods are numerically more stable than the original empirical likelihood (EL) methods which were first introduced by Owen [24] . For asymptotic properties of the empirical likelihood methods, see Qin and Lawless [25] , Schennach [26] , Imbens et al. [27] . Also, see the monograph by Owen [28] , the book by Mittelhammer et al. ( [29] , p.281-325) and the book by Anatolyev and Gospodinov ( [30] , p.45-61). It is also worthwhile to note that the MEEL methods are less simulation oriented than indirect inference methods as proposed by Garcia et al. [31] . Numerical implementations using penalty function methods are relatively simple and will be discussed in Section 4. We hope that by presenting the methods in detail without too many technicalities, we will encourage people to use these methods in practice. There are many fields besides actuarial sciences where the LT of a distribution is widely used. With some modifications, such as using constraints from the model moment generating function instead of the model LT, the methods can be applied to estimate distributions with support on the real line which are often used in finance, see these distributions in Fang and Oosterlee [42] .

The paper is organized as follows. The choice of bases for generating constraints for the MEEL methods is examined in Section 2. Two families of bases using the LT are presented in this section; they appear to be useful for actuarial applications. In Section 3, we review asymptotic properties of MEEL methods. An estimate of overall relative efficiency using the Fourier cosine series expansion is proposed to quantify the loss of overall efficiency when MEEL methods are used. In Section 4 we examine numerical issues, and penalty function methods are advocated to locate the global minimizer which gives the MEEL estimators. Simulations are discussed in Section 5. The simulation study from the positive tempered stable distribution shows that the MEEL estimators are much more efficient than the moment estimators originally proposed in the seminal paper of Hougaard [1] . Depending on the field of application, often the full parameter space is not needed and can be restricted to a subspace by subjecting the parameters to inequality bounds; on such restricted spaces, the MEEL estimators have the potential to attain high efficiency compared to maximum likelihood (ML) by using a reasonable number of elements in the basis. Actuarial applications are discussed in Section 6.

2. Choice of Bases

Using results in Section 1.2, we consider the basis B which can be used for a nonnegative continuous distribution or a nonnegative distribution with a discontinuity at the origin, i.e., with probability mass assigned to it. The basis B will have the form

$B=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{\text{e}}^{-\tau x}-{L}_{\beta }\left(\tau \right),\cdots ,{\text{e}}^{-m\tau x}-{L}_{\beta }\left(m\tau \right)\right\}.$ (7)

We observe that the number of elements in the basis is $k=m+2$ and the elements can be obtained using the LT of the model; the basis is therefore suitable for estimating a parametric continuous distribution whose density has no closed form expression.

The number of elements in basis B is finite. It is formed based on the completeness property of the following basis with an infinite number of elements,

$\left\{{\text{e}}^{-\tau x}-{L}_{\beta }\left(\tau \right),{\text{e}}^{-2\tau x}-{L}_{\beta }\left(2\tau \right),\cdots \right\}.$ (8)

This infinite basis can be traced back to the work of Zakian and Littlewood [32] who show that a density function can be expressed as an infinite series using elements of the infinite basis given by Expression (8) and develop methods to recover the density function using selected points of its LT. This might explain the potential of high efficiencies of the MEEL estimators constructed using only a finite number of elements of B on some restricted parameter spaces.

The following example will clarify the notion of a restricted parameter space. Suppose we have a model with two parameters given by ${\theta }_{1}\ge 0$ , ${\theta }_{2}\ge 0$ . On a restricted parameter space, the parameters are subject to stricter inequality bounds, for example $0<a\le {\theta }_{1}\le b$ and $0<c\le {\theta }_{2}\le d$ , where $a,b,c,d$ are finite positive real numbers.

Therefore, in practice we might want to fix $m=10$ and $\tau =0.01$ , i.e., let

$B=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{\text{e}}^{-0.01x}-{L}_{\beta }\left(0.01\right),\cdots ,{\text{e}}^{-0.1x}-{L}_{\beta }\left(0.1\right)\right\}.$ (9)

The basis B as indicated above often gives a good balance between numerical simplicity and efficiencies of the estimators.
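As a sanity check on basis (9), the sample averages of its LT elements evaluated at the true parameters should be close to zero. The sketch below uses an illustrative gamma model (not from the paper), whose LT is available in closed form:

```python
import math, random

# Empirical check of the LT elements of basis (9): under the true
# parameters the sample averages should be near zero. Illustrative
# Gamma(2, 1) model with LT (1 + s)^(-2).
random.seed(7)
k_shape, rate = 2.0, 1.0
L = lambda s: (1.0 + s / rate) ** (-k_shape)

n = 20000
xs = [random.gammavariate(k_shape, 1.0 / rate) for _ in range(n)]

tau = 0.01
gbars = [sum(math.exp(-j * tau * x) for x in xs) / n - L(j * tau)
         for j in range(1, 11)]
print([round(g, 4) for g in gbars])  # every entry near 0.0
```

Under a wrong parameter value these averages would not vanish, which is what the MEEL constraints exploit.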

If the model distribution has no discontinuity at all, then the following basis with negative power moment elements can be considered, and we shall see that negative power moments can be recovered using the LT. Using the result given in Lemma 1 of Brockwell and Brown ( [33] , p.630), the following infinite basis with negative power moment elements is again complete in general if $\tau$ belongs to some interval with $\tau >0$ :

$\left\{{x}^{-\tau }-E\left({x}^{-\tau }\right),{x}^{-2\tau }-E\left({x}^{-2\tau }\right),\cdots \right\}.$

Therefore, the following finite basis

$\begin{array}{l}C=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{x}^{-\tau }-E\left({x}^{-\tau }\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}^{-2\tau }-E\left({x}^{-2\tau }\right),\cdots ,{x}^{-m\tau }-E\left({x}^{-m\tau }\right)\right\}\end{array}$ (10)

can also be considered.

The elements of a basis should respect the regularity conditions of Assumption 1 in Section 3.2 for the estimators to be consistent and have an asymptotic normal distribution. The following example will illustrate this point. In practice, for example, if $E\left({x}^{-1}\right)$ exists but lower negative power moments do not, we might want to choose C to be

$\begin{array}{l}C=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{x}^{-\tau }-E\left({x}^{-\tau }\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}^{-2\tau }-E\left({x}^{-2\tau }\right),\cdots ,{x}^{-m\tau +h}-E\left({x}^{-m\tau +h}\right)\right\},\end{array}$ (11)

$m=5,\tau =0.1.$

The last element is special as it involves h, which can be set equal to some small positive value, for example $h=0.01$ , so that regularity condition 3) of Assumption 1 is met. Obviously, if $E\left({x}^{-2}\right)$ exists then we can let $h=0$ .

Now we shall state a proposition which relates negative power moments of a distribution to its LT. The results given by the following proposition are more general than results given by Cressie et al. [34] who only give results for negative integer moments. The general results can be traced back to Theorem 2.1 given by Brockwell and Brown ( [35] , p.215) but it can be difficult to find this reference, so we reproduce the results below.

Proposition

Suppose that $X$ is a nonnegative continuous random variable with density function and Laplace transform given respectively by $f\left(x\right)$ and $L\left(s\right)$ . Then if $E\left({x}^{-u}\right)$ exists, it is given by $E\left({x}^{-u}\right)=\frac{1}{\Gamma \left(u\right)}{\int }_{0}^{\infty }{s}^{u-1}L\left(s\right)\text{d}s,u>0$ , where $\Gamma \left(u\right)$ is the commonly used gamma function, assuming the integral exists.

Proof.

Observe that ${\int }_{0}^{\infty }{s}^{u-1}L\left(s\right)\text{d}s={\int }_{0}^{\infty }\left({\int }_{0}^{\infty }{s}^{u-1}{\text{e}}^{-sx}\text{d}s\right)f\left(x\right)\text{d}x$ by switching the order of integration, and note that the inner integral can be expressed as

${\int }_{0}^{\infty }{s}^{u-1}{\text{e}}^{-sx}\text{d}s={x}^{-u}\Gamma \left(u\right),\text{\hspace{0.17em}}u>0,$

using properties of a gamma distribution. The integral ${\int }_{0}^{\infty }{s}^{u-1}L\left(s\right)\text{d}s,u>0$ , if it exists, can be evaluated numerically; most computer packages provide built-in functions for such integrals. For the positive stable distribution or the gamma distribution, negative power moments have closed form expressions, see Luong and Doray ( [21] , p.149). The bases B and C only provide guidelines to form a good basis based on the LT. We can also combine or select elements from these bases to form new bases.
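The Proposition can be verified numerically for an illustrative gamma model, for which negative power moments are known in closed form: if $X\sim \text{Gamma}\left(k,1\right)$ then $L\left(s\right)={\left(1+s\right)}^{-k}$ and $E\left({x}^{-u}\right)=\Gamma \left(k-u\right)/\Gamma \left(k\right)$ for $0<u<k$ . The substitution $s={\text{e}}^{y}$ removes the integrable singularity at the origin before applying a simple quadrature rule:

```python
import math

# Numerical check of the Proposition for X ~ Gamma(k, 1):
# L(s) = (1+s)^(-k) and E(X^{-u}) = Gamma(k-u)/Gamma(k), 0 < u < k.
k, u = 2.0, 0.5
L = lambda s: (1.0 + s) ** (-k)

# (1/Gamma(u)) * integral_0^inf s^{u-1} L(s) ds, with s = e^y so that the
# integrand is smooth and decays at both ends; trapezoidal rule.
a, b, nstep = -30.0, 30.0, 6000
dy = (b - a) / nstep
total = 0.0
for i in range(nstep + 1):
    y = a + i * dy
    w = 0.5 if i in (0, nstep) else 1.0
    total += w * math.exp(u * y) * L(math.exp(y))
total *= dy / math.gamma(u)

exact = math.gamma(k - u) / math.gamma(k)
print(total, exact)  # both approximately 0.886227
```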

3. MEEL Methods

3.1. Two-Stage Distance Methods

MEEL methods as discussed in Chapter 13 of Mittelhammer et al. ( [29] , p.313-326) belong to the class of empirical likelihood methods. The name MEEL is given by these authors, and MEEL methods are based on the Kullback-Leibler distance which belongs to the class of distances for discrete distributions introduced by Cressie and Read [36] . MEEL methods are also called exponentially tilted empirical likelihood methods in the literature, see Imbens [37] . The methods are asymptotically as efficient as other EL methods. The main reason which leads us to emphasize MEEL methods over EL methods is the numerical stability of MEEL methods, see Schennach [26] for this property. We shall discuss how to implement MEEL methods with the specifications of constraints extracted from the LT of the original model. These constraints are associated with moment conditions or elements of a finite basis. MEEL methods can also be viewed conceptually as two-stage distance methods based on the Kullback-Leibler distance for discrete distributions. The first stage consists of choosing the best proxy discrete model to replace the original parametric model and the second stage consists of using the best discrete proxy model to estimate the parameters of the original model.

Assume that we have a random sample as in Section 1.2. The vector $\beta$ has p components, i.e.,

$\beta ={\left({\beta }_{1},\cdots ,{\beta }_{p}\right)}^{\prime }.$

We are in the situation where the density ${f}_{\beta }\left(x\right)$ has no closed form expression but using the LT, we can extract k moments of the original parametric model, ${\left({E}_{\beta }\left({h}_{1}\left(x\right)\right),\cdots ,{E}_{\beta }\left({h}_{k}\left(x\right)\right)\right)}^{\prime }$ assuming $k>p$ .

Clearly, the sample distribution function corresponds to a discrete distribution which assigns the mass ${p}_{in}=\frac{1}{n}$ to the realized point of observation $i$ , $i=1,\cdots ,n$ , so that ${\sum }_{i=1}^{n}{p}_{in}=1$ . Now, instead of using the original model for inferences, we shall consider proxy discrete models with mass function ${\pi }_{i}\left(\beta \right)$ assigning mass to the realized points of the observations. Let

${p}_{n}={\left({p}_{1n},\cdots ,{p}_{nn}\right)}^{\prime }={\left(\frac{1}{n},\cdots ,\frac{1}{n}\right)}^{\prime },$

$\pi =\pi \left(\beta \right)={\left({\pi }_{1},\cdots ,{\pi }_{n}\right)}^{\prime }.$

The Kullback-Leibler distance between the two discrete distributions ${p}_{n}$ and $\pi$ is defined by the following measure of discrepancy,

$KL\left(\pi ,{p}_{n}\right)={\sum }_{i=1}^{n}{\pi }_{i}\left(\mathrm{ln}{\pi }_{i}-\mathrm{ln}\left(\frac{1}{n}\right)\right).$
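A direct implementation of this discrepancy (with small illustrative weight vectors) shows that it vanishes when $\pi ={p}_{n}$ and is positive otherwise:

```python
import math

# Kullback-Leibler discrepancy between two discrete weight vectors;
# the weight vectors below are illustrative (n = 4).
def KL(pi, p):
    return sum(pi_i * (math.log(pi_i) - math.log(p_i))
               for pi_i, p_i in zip(pi, p) if pi_i > 0)

n = 4
uniform = [1.0 / n] * n
tilted = [0.4, 0.3, 0.2, 0.1]

print(KL(uniform, uniform))  # 0.0: the discrepancy vanishes at pi = p_n
print(KL(tilted, uniform))   # positive, approximately 0.1064
```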

Besides satisfying the basic requirements ${\sum }_{i=1}^{n}{\pi }_{i}=1,{\pi }_{i}\ge 0$ , we also require the proxy model to satisfy the same moment conditions as the original parametric model, i.e.,

${E}_{\pi }\left({h}_{j}\left(x\right)\right)={E}_{\beta }\left({h}_{j}\left(x\right)\right),j=1,\cdots ,k,$

${E}_{\pi }\left({h}_{j}\left(x\right)\right)={\sum }_{i=1}^{n}{\pi }_{i}{h}_{j}\left({x}_{i}\right).$

Parametric estimation will be carried out in two stages. The first stage is to choose the best proxy model by minimizing $KL\left(\pi ,{p}_{n}\right)$ , which is equivalent to maximizing the entropy measure subject to the above constraints. This leads to maximizing $-{\sum }_{i=1}^{n}{\pi }_{i}\mathrm{ln}{\pi }_{i}$ , or equivalently minimizing

${\sum }_{i=1}^{n}{\pi }_{i}\mathrm{ln}{\pi }_{i}$ (12)

subject to the constraints given by

${\sum }_{i=1}^{n}{\pi }_{i}=1,$ (13)

${\sum }_{i=1}^{n}{\pi }_{i}\left({g}_{j}\left({x}_{i};\beta \right)\right)=0,j=1,\cdots ,k$ (14)

with

${g}_{j}\left({x}_{i};\beta \right)={h}_{j}\left({x}_{i}\right)-{E}_{\beta }\left({h}_{j}\left(x\right)\right),j=1,\cdots ,k.$ (15)

Mittelhammer et al. ( [29] , p.321) have shown that the Lagrangian of the optimization problem is

$L\left(\pi ,\lambda ,\mu \right)=\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{\pi }_{i}\mathrm{ln}{\pi }_{i}+\underset{j=1}{\overset{k}{\sum }}\text{ }\text{ }{\lambda }_{j}\left(\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{\pi }_{i}{g}_{j}\left({x}_{i};\beta \right)\right)+\mu \left(\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{\pi }_{i}-1\right)$

where ${\lambda }_{j},j=1,\cdots ,k$ and $\mu$ are Lagrange multipliers. Taking partial derivatives with respect to ${\pi }_{i}$ leads to the system of equations

$\frac{\partial L}{\partial {\pi }_{i}}=\mathrm{ln}{\pi }_{i}+1+{\sum }_{j=1}^{k}{\lambda }_{j}\left(\text{ }{g}_{j}\left({x}_{i};\beta \right)\right)+\mu =0,\text{\hspace{0.17em}}i=1,\cdots ,n.$ (16)

The solutions of the equation yield the best discrete proxy model with mass function given by

${\pi }_{i}^{*}=\frac{\mathrm{exp}\left(-{\sum }_{j=1}^{k}\text{ }{\lambda }_{j}\left(\beta \right)\left(\text{ }{g}_{j}\left({x}_{i};\beta \right)\right)\right)}{{\sum }_{i=1}^{n}\text{ }\mathrm{exp}\left(-{\sum }_{j=1}^{k}\text{ }{\lambda }_{j}\left(\beta \right)\left({g}_{j}\left({x}_{i};\beta \right)\right)\right)},i=1,\cdots ,n$ (17)

which is Expression (13.2.6) given by Mittelhammer et al. ( [29] , p.321). Note that ${\lambda }_{j}={\lambda }_{j}\left(\beta \right)$ and the ${\lambda }_{j}$ 's are defined implicitly by Expression (16).

Note that since the ${\pi }_{i}^{*}$ 's are defined implicitly, they depend on $\beta$ but not on the Lagrange multiplier $\mu$: it is easy to see that we already have ${\sum }_{i=1}^{n}{\pi }_{i}^{*}=1$ and ${\pi }_{i}^{*}\ge 0,i=1,\cdots ,n.$
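For given multipliers and moment functions, the tilted weights of Expression (17) are straightforward to compute. The sketch below uses hypothetical moment functions $g_1, g_2$ with illustrative values of $E(x)$ and $E(x^2)$, and shows that $\lambda = 0$ returns the empirical weights $1/n$:

```python
import math

def meel_weights(lambdas, g):
    """Exponentially tilted weights of Expression (17):
    pi_i* proportional to exp(-sum_j lambda_j g_j(x_i; beta)).
    g[i][j] holds g_j(x_i; beta); a sketch for given multipliers."""
    w = [math.exp(-sum(l * gij for l, gij in zip(lambdas, gi))) for gi in g]
    total = sum(w)
    return [wi / total for wi in w]

# Toy example: n = 5 observations, k = 2 moment functions
# g_1(x; beta) = x - E(x), g_2(x; beta) = x**2 - E(x**2),
# with hypothetical values E(x) = 2 and E(x**2) = 5.
x = [1.0, 1.5, 2.0, 2.5, 4.0]
g = [[xi - 2.0, xi**2 - 5.0] for xi in x]

pi = meel_weights([0.05, 0.01], g)
print(pi)                            # normalized weights
print(sum(pi))                       # 1 by construction
print(meel_weights([0.0, 0.0], g))   # lambda = 0 gives uniform 1/n
```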

Let ${\pi }^{*}={\left({\pi }_{1}^{*},\cdots ,{\pi }_{n}^{*}\right)}^{\prime }$ and $\lambda ={\left({\lambda }_{1},\cdots ,{\lambda }_{k}\right)}^{\prime }$.

The second stage is to use the KL distance for parametric inferences. At this stage, we minimize with respect to $\beta$ the expression

${\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\beta \right)\mathrm{ln}{\pi }_{i}^{*}\left(\beta \right)$ (18)

to obtain the MEEL estimators $\stackrel{^}{\beta }$ .

The numerical procedures to implement MEEL methods appear complicated since the ${\pi }_{i}^{*}$ 's are defined implicitly. They are simplified by using penalty function methods, discussed in Section 4. With this approach, it suffices to perform unconstrained minimization with respect to the $k+p$ variables ${\lambda }_{j},j=1,\cdots ,k$ and ${\beta }_{i},i=1,\cdots ,p$ with $k>p$, using a suitably defined objective function. The relationships between the vectors $\lambda$ and $\beta$, given by

${\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left({g}_{j}\left({x}_{i};\beta \right)\right)=0,j=1,\cdots ,k$ (19)

will be used to build the penalty function part of the new objective function.

Imbens ( [37] , p.501-502) also advocated a specific version of the penalty function approach to obtain the MEEL estimators. Chong and Zak ( [38] , p.564-571) give details on how to construct penalty functions to handle optimization under equality and inequality constraints, and their book is a good reference for penalty methods. We choose to follow more closely the penalty methods of nonlinear optimization used in the literature as given by Chong and Zak ( [38] ), and we suggest a strategy to identify the global minimizer, which is the vector of the estimators.

The identification of the global minimizer which gives the estimates is an important issue in nonlinear estimation, as most algorithms only return local minimizers and are therefore vulnerable to the starting points used to initialize them; see Davidson and MacKinnon ( [39] , p.232-233) for a strategy using different starting points. Andrews [40] proposes to use the criterion functions of goodness-of-fit test statistics to limit the search for the global minimizer to a suitably restricted parameter space; this can be handled easily with penalty function methods with inequality constraints. For performing a global random search based on simulated annealing, see Robert and Casella ( [41] , p.140-146).

3.2. Asymptotic Properties

3.2.1. Asymptotic Covariance

The regularity conditions for the MEEL estimators $\stackrel{^}{\beta }$ to be consistent and to follow an asymptotic normal distribution have been given by Assumption 1 in Schennach ( [26] , p.645) who also provides proofs for consistency and asymptotic normality of the MEEL estimators. The regularity conditions are reproduced below. Also, see Expressions (13.2.10), (13.2.11) of the book by Mittelhammer et al. ( [29] , p.323).

Assumption

Assume that:

1) The true parameter vector ${\beta }_{0}$ is an interior point of the parameter space $\Theta$, which is assumed to be compact.

2) ${\beta }_{0}$ is the unique vector which satisfies ${E}_{{\beta }_{0}}\left(g\left(x,{\beta }_{0}\right)\right)=0$ .

3) $g\left(x,\beta \right)$ is differentiable with respect to $\beta$ and ${E}_{{\beta }_{0}}\left({\mathrm{sup}}_{\beta }{|{g}_{i}|}^{2+h}\right)<\infty$ for some $h>0,i=1,\cdots ,k$ .

4) The derivatives $\frac{\partial {g}_{i}}{\partial {\beta }_{j}},i=1,\cdots ,k$ of $g\left(x,\beta \right)$ also satisfy the local boundedness condition ${E}_{{\beta }_{0}}\left({\mathrm{sup}}_{\beta }{|\frac{\partial {g}_{i}}{\partial {\beta }_{j}}|}^{2+\delta }\right)<\infty$ for some $\delta >0$ when $\beta$ is restricted to some neighborhood of ${\beta }_{0}$.

5) The covariance matrix $\Sigma$ of $g\left(x,\beta \right)$ has rank $k$ .

Under Assumption 1, the MEEL estimators given by the vector $\stackrel{^}{\beta }$ are consistent and have a multinormal asymptotic distribution: $\stackrel{^}{\beta }\stackrel{p}{\to }{\beta }_{0}$, where ${\beta }_{0}$ is the vector of the true parameters, and $\sqrt{n}\left(\stackrel{^}{\beta }-{\beta }_{0}\right)\stackrel{L}{\to }N\left(0,\Omega \right),$

$\Omega ={\left[E\left[{\frac{\partial g\left(x,\beta \right)}{\partial \beta }|}_{\beta ={\beta }_{0}}\right]{\left({E\left[g\left(x,\beta \right)g{\left(x,\beta \right)}^{\prime }\right]|}_{\beta ={\beta }_{0}}\right)}^{-1}E\left[{\frac{\partial g\left(x,\beta \right)}{\partial {\beta }^{\prime }}|}_{\beta ={\beta }_{0}}\right]\right]}^{-1},$ (20)

$g\left(x,\beta \right)={\left({g}_{1}\left(x;\beta \right),\cdots ,{g}_{k}\left(x;\beta \right)\right)}^{\prime },\text{\hspace{0.17em}}\Sigma \left({\beta }_{0}\right)={E\left[g\left(x,\beta \right)g{\left(x,\beta \right)}^{\prime }\right]|}_{\beta ={\beta }_{0}}.$

An estimator $\stackrel{^}{\Omega }$ for $\Omega$ can be defined as

$\begin{array}{l}\stackrel{^}{\Omega }=\left[\left[{\sum }_{i=1}^{n}{\stackrel{^}{\pi }}_{i}{\frac{\partial g\left({x}_{i},\beta \right)}{\partial \beta }|}_{\beta =\stackrel{^}{\beta }}\right]{\left({\sum }_{i=1}^{n}{\stackrel{^}{\pi }}_{i}{g\left({x}_{i},\beta \right)g{\left({x}_{i},\beta \right)}^{\prime }|}_{\beta =\stackrel{^}{\beta }}\right)}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\left[{\sum }_{i=1}^{n}{\stackrel{^}{\pi }}_{i}{\frac{\partial g\left({x}_{i},\beta \right)}{\partial {\beta }^{\prime }}|}_{\beta =\stackrel{^}{\beta }}\right]\right]}^{-1},\end{array}$

${\stackrel{^}{\pi }}_{i}={\pi }_{i}^{*}\left(\stackrel{^}{\beta }\right),i=1,\cdots ,n.$ If we let ${\stackrel{^}{\pi }}_{i}=\frac{1}{n},i=1,\cdots ,n$, we obtain another consistent estimator for $\Omega$.

Note that $\Omega$ is identical to Expression (5) which shows the asymptotic equivalence between optimum quasi-likelihood estimation and MEEL estimation. Both methods do not need full specifications of the model but only require moment conditions of the true model.

3.2.2. Goodness-of-Fit Test Statistics

The use of the KL distance also allows the construction of a goodness-of-fit test statistic which follows an asymptotic chi-square distribution. Since the validity of the original model is reduced to the validity of the moment conditions, we might want to test the null hypothesis ${H}_{0}:E\left({h}_{j}\left(x\right)\right)={E}_{\beta }\left({h}_{j}\left(x\right)\right),j=1,\cdots ,k$, where the expectations on the right-hand side are under the parametric model.

The following test statistic is asymptotically chi-square with $r=k-p$ degrees of freedom, i.e.,

$\begin{array}{l}2nKL\left({\pi }^{*}\left(\stackrel{^}{\beta }\right),{p}_{n}\right)\\ =2n\left({\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\stackrel{^}{\beta }\right)\left[\mathrm{ln}{\pi }_{i}^{*}\left(\stackrel{^}{\beta }\right)-\mathrm{ln}\left(\frac{1}{n}\right)\right]\right)\stackrel{L}{\to }{\chi }^{2}\left(k-p\right).\end{array}$ (21)
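A minimal sketch of the statistic in Expression (21), using hypothetical fitted weights and the tabulated critical value ${\chi }_{0.95}^{2}\left(2\right)=5.991$ for $k-p=2$ degrees of freedom:

```python
import math

def gof_statistic(pi_star):
    """2n * KL(pi*, p_n): the goodness-of-fit statistic of Expression (21)."""
    n = len(pi_star)
    kl = sum(p * (math.log(p) - math.log(1.0 / n)) for p in pi_star if p > 0)
    return 2.0 * n * kl

# Illustration with hypothetical fitted weights for n = 4 and the
# standard 0.95 chi-square critical value for 2 degrees of freedom.
pi_hat = [0.26, 0.24, 0.27, 0.23]
stat = gof_statistic(pi_hat)
print(stat)                       # small: weights are close to 1/n
print(stat <= 5.991)              # moment conditions not rejected
print(gof_statistic([0.25] * 4))  # exactly 0 at the empirical weights
```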

3.3. An Estimate for the Overall Relative Efficiency

It is clear that only under special circumstances are MEEL methods as efficient as ML methods, due to the use of a finite basis. This can only happen when the true score functions belong to the linear space spanned by the finite basis. Therefore, it appears useful to be able to quantify the loss of efficiency of MEEL methods, even though the model density has no closed form expression, in order to check whether MEEL methods are appropriate for a specific field of applications. Fourier series expansions can be used to approximate the density function and are introduced below.

The density function can be expanded using a Fourier cosine series on the range $0<x<b$, see Expressions (7-11) given by Fang and Oosterlee ( [42] , p.6) and Powers ( [43] , p.62), i.e.,

${f}_{\beta }\left(x\right)~{f}_{\beta }^{a}\left(x\right),\text{\hspace{0.17em}}{f}_{\beta }^{a}\left(x\right)={F}_{0}\left(\beta \right)+{\sum }_{j=1}^{\infty }{F}_{j}\left(\beta \right)\mathrm{cos}\left(\frac{j\text{π}}{b}x\right).$

The coefficients ${F}_{j}\left(\beta \right),j=0,1,2,\cdots$ are Fourier coefficients,

${F}_{0}\left(\beta \right)=\frac{1}{b}{\int }_{0}^{b}{f}_{\beta }\left(x\right)\text{d}x,$

${F}_{j}\left(\beta \right)=\frac{2}{b}{\int }_{0}^{b}\mathrm{cos}\left(\frac{j\text{π}}{b}x\right){f}_{\beta }\left(x\right)\text{d}x,\text{\hspace{0.17em}}j=1,2,\cdots .$

Regularity conditions for uniform convergence of Fourier series are also given by Powers ( [43] , p.72-73). The derivatives of these coefficients with respect to ${\beta }_{l},l=1,\cdots ,p$ are given by

${{F}^{\prime }}_{0,{\beta }_{l}}\left(\beta \right)=\frac{1}{b}{\int }_{0}^{b}\frac{\partial {f}_{\beta }\left(x\right)}{\partial {\beta }_{l}}\text{d}x,$

${{F}^{\prime }}_{j,{\beta }_{l}}\left(\beta \right)=\frac{2}{b}\frac{\partial }{\partial {\beta }_{l}}\left({\int }_{0}^{b}\mathrm{cos}\left(\frac{j\text{π}}{b}x\right){f}_{\beta }\left(x\right)\text{d}x\right),\text{\hspace{0.17em}}l=1,\cdots ,p,\text{\hspace{0.17em}}j=1,2,\cdots$

If $b$ is chosen sufficiently large, we have the following approximations of the coefficients using either the characteristic function (CF) or LT,

${F}_{j}\left(\beta \right)\approx {\stackrel{¯}{F}}_{j}\left(\beta \right)=\frac{2}{b}{\int }_{0}^{\infty }\mathrm{cos}\left(\frac{j\text{π}}{b}x\right){f}_{\beta }\left(x\right)\text{d}x=\frac{2}{b}Re\left({L}_{\beta }\left(-i\frac{j\text{π}}{b}\right)\right),j=1,2,\cdots$

and ${\stackrel{¯}{F}}_{0}\left(\beta \right)=\frac{1}{b}$ . Similarly,

$\begin{array}{l}{{F}^{\prime }}_{j,{\beta }_{l}}\left(\beta \right)\approx {\stackrel{¯}{{F}^{\prime }}}_{j,{\beta }_{l}}=\frac{2}{b}\frac{\partial }{\partial {\beta }_{l}}Re\left({L}_{\beta }\left(-i\frac{j\text{π}}{b}\right)\right),\\ \text{\hspace{0.17em}}{\stackrel{¯}{{F}^{\prime }}}_{0,{\beta }_{l}}\left(\beta \right)=0,\text{\hspace{0.17em}}l=1,\cdots ,p,\text{\hspace{0.17em}}j=1,2,\cdots ,M,\end{array}$

$Re\left(\cdot \right)$ denotes the real part of the complex number inside the parentheses, and most computer packages can handle complex number computations. In practice, we can only use a finite cosine series expansion with M terms. The formulas for the coefficients given by Fang and Oosterlee ( [42] , p.6) make use of the characteristic function, but they can easily be converted to expressions using the LT. Using these truncated series leads to approximating the score functions by

$\frac{\partial \mathrm{log}{\stackrel{¯}{f}}^{a}\left(x\right)}{\partial {\beta }_{l}}=\frac{\frac{\partial {\stackrel{¯}{f}}^{a}\left(x\right)}{\partial {\beta }_{l}}}{{\stackrel{¯}{f}}^{a}\left(x\right)},\text{\hspace{0.17em}}l=1,\cdots ,p$

with

${\stackrel{¯}{f}}^{a}\left(x\right)=\frac{1}{b}+{\sum }_{j=1}^{M}{\stackrel{¯}{F}}_{j}\left(\beta \right)\mathrm{cos}\left(\frac{j\text{π}}{b}x\right)$ ,

$\frac{\partial {\stackrel{¯}{f}}^{a}\left(x\right)}{\partial {\beta }_{l}}={\sum }_{j=1}^{M}{{\stackrel{¯}{F}}^{\prime }}_{j,{\beta }_{l}}\mathrm{cos}\left(\frac{j\text{π}}{b}x\right),l=1,\cdots ,p.$

Therefore, if $\beta$ is estimated by $\stackrel{^}{\beta }$ , the Fisher information matrix $I\left(\stackrel{^}{\beta }\right)$ can be estimated by $\stackrel{^}{I}\left(\stackrel{^}{\beta }\right)$ using the original sample or simulated samples from the distribution with $\beta =\stackrel{^}{\beta }$ . If the original sample is used,

$\stackrel{^}{I}\left(\stackrel{^}{\beta }\right)=\frac{1}{n}{\sum }_{i=1}^{n}{\left(\frac{\partial \mathrm{ln}{\stackrel{¯}{f}}^{a}\left({x}_{i}\right)}{\partial \beta }\right){\left(\frac{\partial \mathrm{ln}{\stackrel{¯}{f}}^{a}\left({x}_{i}\right)}{\partial \beta }\right)}^{\prime }|}_{\beta =\stackrel{^}{\beta }}.$
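The approximation scheme above can be checked on a model where both the LT and the density have closed forms. The sketch below (our construction, not from the paper) uses the exponential distribution with rate 1, whose LT is $L(s)=1/(1+s)$, so the truncated cosine series can be compared with the exact density ${\text{e}}^{-x}$:

```python
import math

# COS-type density approximation from a Laplace transform, sketched for
# the exponential distribution with rate 1: L(s) = 1/(1 + s) and
# f(x) = exp(-x) are both known, so the series can be checked.
b, M = 20.0, 500            # truncation range and number of terms

def L(s):
    return 1.0 / (1.0 + s)  # model LT (closed form), accepts complex s

def f_approx(x):
    # F0_bar = 1/b and F_j_bar = (2/b) Re L(-i j pi / b), as in the text
    total = 1.0 / b
    for j in range(1, M + 1):
        w = j * math.pi / b
        total += (2.0 / b) * L(-1j * w).real * math.cos(w * x)
    return total

x = 1.0
print(f_approx(x))          # close to exp(-1) = 0.36788...
print(math.exp(-x))
```

The same coefficients, differentiated with respect to the parameters, give the approximate score functions used in $\stackrel{^}{I}\left(\stackrel{^}{\beta }\right)$.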

The estimated overall relative efficiency can be defined based on Expression (20) as

$ARE\left(\stackrel{^}{\beta }\right)=\frac{\mathrm{det}\left(\stackrel{^}{U}\left(\stackrel{^}{\beta }\right)\right)}{\mathrm{det}\left(\stackrel{^}{I}\left(\stackrel{^}{\beta }\right)\right)},$ (22)

$\stackrel{^}{U}\left(\stackrel{^}{\beta }\right)={S}^{\prime }\left(\stackrel{^}{\beta }\right){\stackrel{^}{\Sigma }}^{-1}S\left(\stackrel{^}{\beta }\right)$, where det(.) is the determinant of the matrix inside the parentheses; see Expression (3.7) given by Bhapkar ( [44] , p.471) for overall relative efficiency using determinants of matrices. Instead of determinants, the traces of the matrices can also be used, which leads to an alternative measure of overall relative efficiency. Fang and Oosterlee ( [42] ) show that finite cosine Fourier series converge at an exponential rate, which suggests, using the examples given in their paper, that with M ≥ 500 the approximation should be quite accurate if the model density is continuous. The value of M can be increased for more accuracy if needed.

For the value of $b$, we can let $b=\stackrel{¯}{X}+Ls,\text{\hspace{0.17em}}10\le L\le 15,$ where $\stackrel{¯}{X}$ and $s$ are respectively the sample mean and sample standard deviation. Note that $ARE\left(\stackrel{^}{\beta }\right)$, despite its simplicity, can give an idea of whether MEEL methods are appropriate for the data set and the parametric model being considered.

4. Numerical Implementations

We shall use penalty function approaches to convert the problem of minimization with constraints into a problem of minimization without constraints by introducing a suitably defined surrogate objective function. Penalty function techniques are well described in Chong and Zak ( [38] , p.560-567); they can handle both equality and inequality constraints. The new objective function can be minimized using a direct search based on Nelder-Mead simplex methods, for example. The simplex methods are derivative free and converge to local optimizers, see Chong and Zak ( [38] , p.274-278). The package R has built-in functions to perform the simplex algorithm with constraints.

For illustration, we start with a simple example and extend it to the problem for finding MEEL estimators.

Suppose that we wish to minimize a function $f\left({x}_{1},{x}_{2}\right)$ of two variables ${x}_{1}$ and ${x}_{2}$ subject to a constraint $c=c\left({x}_{1},{x}_{2}\right)=0$. The numerical solutions of this problem can be found by minimizing the unconstrained objective function $f\left({x}_{1},{x}_{2}\right)+\frac{K}{2}\left({\left[c\left({x}_{1},{x}_{2}\right)\right]}^{2}\right),K\to \infty .$ In practice, setting a very large value for $K$ gives solutions with numerical accuracy. The penalty function, which makes use of the square function, is the second component of the objective function. The minimization procedures can give exact solutions with the use of a more complicated nondifferentiable penalty function, see Chong and Zak ( [38] , p.570-571).

For the MEEL minimization problem, the ${\lambda }_{j},j=1,\cdots ,k$ depend on $\beta$ and the ${\pi }_{i}^{*}$ are given by

${\pi }_{i}^{*}\left(\lambda ,\beta \right)=\frac{\mathrm{exp}\left(-{\sum }_{j=1}^{k}\text{ }{\lambda }_{j}{g}_{j}\left({x}_{i};\beta \right)\right)}{{\sum }_{i=1}^{n}\mathrm{exp}\left(-{\sum }_{j=1}^{k}\text{ }{\lambda }_{j}{g}_{j}\left({x}_{i};\beta \right)\right)},i=1,\cdots ,n.$

The vectors $\lambda$ and $\beta$ are related by the equality constraints given by

${c}_{1}={\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{1}\left({x}_{i},\beta \right)\right]=0,\cdots ,{c}_{k}={\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{k}\left({x}_{i},\beta \right)\right]=0.$ (23)

Therefore, we can perform unconstrained minimization using the following objective function with respect to ${\lambda }_{1},\cdots ,{\lambda }_{k}$ and ${\beta }_{1},\cdots ,{\beta }_{p}$ ,

$\begin{array}{l}{\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\mathrm{ln}{\pi }_{i}^{*}\left(\lambda ,\beta \right)+\frac{K}{2}\left[{\left({\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{1}\left({x}_{i},\beta \right)\right]\right)}^{2}+\cdots \\ +{\left({\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{k}\left({x}_{i},\beta \right)\right]\right)}^{2}\right].\end{array}$ (24)

The penalty constant K is a large positive value, for example K = 500000. If the absolute value function is used to construct the penalty function, then we can only use direct search algorithms, which are derivative free.
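A sketch of the surrogate objective of Expression (24) for a hypothetical exponential working model with moment functions ${h}_{1}\left(x\right)=x$ and ${h}_{2}\left(x\right)={x}^{2}$, so that ${E}_{\beta }\left({h}_{1}\right)=1/\beta$ and ${E}_{\beta }\left({h}_{2}\right)=2/{\beta }^{2}$. At $\lambda =0$ the weights are uniform, the entropy term reduces to $-\mathrm{ln}n$ and only the penalty remains, which gives a simple check:

```python
import math

K = 5.0e5  # penalty constant

def objective(lambdas, beta, x):
    """Surrogate objective of Expression (24) for the toy exponential
    working model; minimizing over (lambda_1, lambda_2, beta) is then
    an unconstrained problem (a sketch)."""
    g = [[xi - 1.0 / beta, xi**2 - 2.0 / beta**2] for xi in x]
    w = [math.exp(-sum(l * gij for l, gij in zip(lambdas, gi))) for gi in g]
    tot = sum(w)
    pi = [wi / tot for wi in w]
    entropy_term = sum(p * math.log(p) for p in pi)
    constraints = [sum(p * gi[j] for p, gi in zip(pi, g)) for j in range(2)]
    return entropy_term + 0.5 * K * sum(c * c for c in constraints)

x = [0.3, 0.9, 1.1, 1.6, 2.4]
n = len(x)
# At lambda = 0 the weights are uniform, so the first term is -log(n)
# and only the penalty part remains.
print(objective([0.0, 0.0], 1.0, x))
print(-math.log(n))
```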

It is worth noting that these algorithms only find a local minimizer each time, so some strategies are needed to identify the global minimizer. The following procedures can be used:

1) We might need a starting vector close to the estimators to initialize the algorithm; this is important when working with real data. For example, we might want to consider starting the algorithm with the simple but consistent estimators ${\stackrel{^}{\beta }}_{s}$ obtained by minimizing ${\sum }_{j=1}^{k}{\left({\sum }_{i=1}^{n}{g}_{j}\left({x}_{i};\beta \right)\right)}^{2}$.

If the number of parameters is not large, a global random search can be performed. Simulated annealing (SA) and particle swarm optimization (PSO) are commonly used global random search techniques, see Chong and Zak ( [38] , p.279-285), and can supplement a local search algorithm. This problem is less severe for local search algorithms using simulated data, since the true vector ${\beta }_{0}$ is known. The simple estimators can be viewed as quasi-likelihood estimators which can make use of a larger basis than the one used to generate the MEEL estimators, since there are fewer numerical difficulties in computing them; in particular, there is no need to estimate ${\Sigma }^{-1}$. However, these quasi score functions are no longer orthogonal projections on the larger basis used. Based on Remark 2.4.3 given by Luong and Thompson ( [22] , p.245), which gives the asymptotic covariance matrix of ${\stackrel{^}{\beta }}_{S}$, the overall relative efficiency can be defined as

$ARE\left({\stackrel{^}{\beta }}_{S}\right)=\frac{\mathrm{det}\left({\stackrel{^}{I}}^{-1}\left({\stackrel{^}{\beta }}_{S}\right)\right)}{\mathrm{det}\left({V}_{S}\left({\stackrel{^}{\beta }}_{S}\right)\right)},$

${V}_{S}\left({\stackrel{^}{\beta }}_{S}\right)={\left({S}^{\prime }S\right)}^{-1}{S}^{\prime }\Sigma S{\left({S}^{\prime }S\right)}^{-1}$ ,

evaluated at $\beta =\stackrel{^}{{\beta }_{S}}$ .

2) For finding the global minimizer, Andrews ( [40] , p.919-921) has suggested using the criterion function of a goodness-of-fit test statistic to identify good starting vectors, by requiring that a good starting vector ${\beta }^{\left(0\right)}$ satisfy the inequality

$2nKL\left({\pi }^{*}\left({\lambda }^{\left(0\right)},{\beta }^{\left(0\right)}\right),{p}_{n}\right)\le {\chi }_{0.95}^{2}\left(k-p\right),$ (25)

${\chi }_{0.95}^{2}\left(k-p\right)$ is the 0.95 percentile of the chi-square distribution with $k-p$ degrees of freedom, $k>p$. We might want to minimize not only under the equality constraints given by Expression (23) but also under the inequality constraint

$2nKL\left({\pi }^{*}\left(\lambda ,\beta \right),{p}_{n}\right)\le {\chi }_{0.95}^{2}\left(k-p\right).$

With penalty function methods, we can define a penalty function to handle the inequality constraint as

$\frac{H}{2}{\left({c}^{+}\right)}^{2}$ , ${c}^{+}=\mathrm{max}\left(2nKL\left({\pi }^{*}\left(\lambda ,\beta \right),{p}_{n}\right)-{\chi }_{0.95}^{2}\left(k-p\right),0\right)$ ,

H is again a penalty constant.

This leads to find the global minimizer of a new objective function given by

$\begin{array}{l}{\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\mathrm{ln}{\pi }_{i}^{*}\left(\lambda ,\beta \right)+\frac{K}{2}\left[{\left({\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{1}\left({x}_{i},\beta \right)\right]\right)}^{2}+\cdots \\ +{\left({\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\left[{g}_{k}\left({x}_{i},\beta \right)\right]\right)}^{2}\right]+\frac{H}{2}{\left({c}^{+}\right)}^{2}.\end{array}$ (26)

We might also want to repeat the procedures with different starting vectors and identify the global minimizer as the value of the vector which yields the overall smallest value of

${\sum }_{i=1}^{n}{\pi }_{i}^{*}\left(\lambda ,\beta \right)\mathrm{ln}{\pi }_{i}^{*}\left(\lambda ,\beta \right).$ (27)

5. Simulations

5.1. Simulations from the PTS Distribution

The representation of a new distribution created by performing operation on LT of the original distribution often suggests how to simulate from the new distribution if we can simulate from the original distribution. For example, to simulate from the tilted density ${f}^{t}\left(x\right)$ obtained by applying the Esscher operation on $f\left(x\right)$ , it suffices to simulate from the original density $f\left(x\right)$ . Since we have

${f}^{t}\left(x\right)=\frac{{\text{e}}^{-\theta x}f\left(x\right)}{\kappa \left(\theta \right)}$ , $\kappa \left(s\right)$ is the LT of the density $f\left(x\right)$ , we have the following inequality ${f}^{t}\left(x\right)\le cf\left(x\right),c=\frac{1}{\kappa \left(\theta \right)}.$

Therefore, if we know how to simulate an observation from the density $f\left(x\right)$, we can apply the acceptance and rejection method to obtain simulated observations from ${f}^{t}\left(x\right)$; see Robert and Casella ( [41] , p.51-57) for example. The constant $\frac{1}{c}=\kappa \left(\theta \right)$ is the acceptance probability, which is useful for planning the sample size obtainable from the simulations. Note that this probability decreases as $\theta$ increases, making it difficult to obtain a large sample from ${f}^{t}\left(x\right)$ for large values of $\theta$.
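The scheme can be sketched for a case where the tilted law is known (our example, not from the paper): tilting the exponential(1) density by ${\text{e}}^{-\theta x}$ gives the exponential($1+\theta$) density, with $\kappa \left(\theta \right)=1/\left(1+\theta \right)$, so both the acceptance rate and the output can be checked:

```python
import math, random

# Acceptance-rejection for an Esscher-tilted density
# f_t(x) = exp(-theta*x) f(x) / kappa(theta), using proposals from f.
# With f the exponential(1) density, kappa(theta) = 1/(1 + theta) and
# the tilted law is exponential(1 + theta).
random.seed(12345)
theta = 1.0

def draw_tilted(n_proposals):
    out = []
    for _ in range(n_proposals):
        x = random.expovariate(1.0)                  # proposal from f
        if random.random() <= math.exp(-theta * x):  # accept w.p. e^{-theta x}
            out.append(x)
    return out

sample = draw_tilted(40000)
rate = len(sample) / 40000.0
print(rate)                        # near kappa(theta) = 1/(1 + theta) = 0.5
print(sum(sample) / len(sample))   # near mean 1/(1 + theta) = 0.5
```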

The acceptance and rejection method gives a simple way to simulate observations from the positive tempered stable (PTS) distribution, as it is easy to simulate from the positive stable distribution, see Devroye ( [45] , p.350). Consequently, this simple algorithm for simulating from the PTS distribution allows us to test the performance of the MEEL estimators versus the moment estimators originally proposed by Hougaard ( [1] , p.392) for the PTS distribution. The moment estimators were proposed because it is difficult to obtain the density function of the PTS distribution, which prevents the use of likelihood estimation.

5.2. A Limited Simulation Study

In this section, we illustrate the implementation of the inference techniques by comparing the MEEL estimators with the moment estimators for the PTS family using simulated samples. The PTS distribution was introduced by Hougaard [1] with Laplace transform given by Expression (6) as

${L}_{\beta }\left(s\right)=\mathrm{exp}\left[-\frac{\delta }{\alpha }\left({\left(\theta +s\right)}^{\alpha }-{\theta }^{\alpha }\right)\right],\text{\hspace{0.17em}}\delta ,\theta >0,\text{\hspace{0.17em}}0<\alpha <1,\text{\hspace{0.17em}}\beta ={\left(\delta ,\alpha ,\theta \right)}^{\prime }.$

Hougaard ( [1] , p.392) suggested the following moment methods to estimate the parameters. Let ${c}_{1},{c}_{2},{c}_{3}$ be respectively the first, second and third empirical cumulants, i.e., ${c}_{1}=\stackrel{¯}{X}$ is the sample mean and

${c}_{j}=\frac{{\sum }_{i=1}^{n}{\left({X}_{i}-\stackrel{¯}{X}\right)}^{j}}{n},j=2,3.$

Define $R=\frac{{c}_{2}^{2}}{{c}_{3}{c}_{1}}$ if ${c}_{2}>0$, and define $R=0$ if ${c}_{2}=0$. The moment estimators obtained by matching cumulants for the parameters $\alpha ,\theta$ and $\delta$ are given respectively by

$\stackrel{˜}{\alpha }=2-\frac{1}{1-R},\text{\hspace{0.17em}}\stackrel{˜}{\theta }=\frac{\left(1-\stackrel{˜}{\alpha }\right){c}_{1}}{{c}_{2}},\text{\hspace{0.17em}}\stackrel{˜}{\delta }={c}_{1}{\left(\stackrel{˜}{\theta }\right)}^{1-\stackrel{˜}{\alpha }}.$
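As a consistency check of the cumulant-matching formulas, the sketch below applies them to the exact cumulants implied by the PTS Laplace transform (our derivation: ${c}_{1}=\delta {\theta }^{\alpha -1}$, ${c}_{2}=\delta \left(1-\alpha \right){\theta }^{\alpha -2}$, ${c}_{3}=\delta \left(1-\alpha \right)\left(2-\alpha \right){\theta }^{\alpha -3}$) and recovers the chosen parameter vector:

```python
# Hougaard-type cumulant-matching estimators for the PTS parameters,
# applied to exact population cumulants as an algebraic consistency
# check (a sketch; in practice c1, c2, c3 are empirical cumulants).
def pts_moment_estimators(c1, c2, c3):
    R = c2**2 / (c3 * c1)
    alpha = 2.0 - 1.0 / (1.0 - R)
    theta = (1.0 - alpha) * c1 / c2
    delta = c1 * theta**(1.0 - alpha)
    return delta, alpha, theta

delta0, alpha0, theta0 = 2.0, 0.5, 1.5
c1 = delta0 * theta0**(alpha0 - 1.0)
c2 = delta0 * (1.0 - alpha0) * theta0**(alpha0 - 2.0)
c3 = delta0 * (1.0 - alpha0) * (2.0 - alpha0) * theta0**(alpha0 - 3.0)
print(pts_moment_estimators(c1, c2, c3))  # recovers (delta0, alpha0, theta0)
```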

We compare the performance of the moment estimators with that of the MEEL estimators using the base

$B=\left\{x-E\left(x\right),{x}^{2}-E\left({x}^{2}\right),{\text{e}}^{-\tau x}-{L}_{\beta }\left(\tau \right),\cdots ,{\text{e}}^{-m\tau x}-{L}_{\beta }\left(m\tau \right)\right\}$ ,

$m=5,\tau =0.01.$

We only have access to a laptop computer, so the study is limited. The sample size used is approximately n = 5000 and we draw M = 100 samples in our simulation. The focus is on the following ranges for the parameters: we fix $\theta =1$, $0.2<\alpha <0.8$, $1\le \delta \le 10$. Overall, the MEEL estimators are much more efficient than the moment estimators for the range of parameters considered. The moment estimators do not seem to perform well either for the parameter values selected outside this range. The overall relative efficiency is defined as

$ARE=\frac{MSE\left(\stackrel{^}{\delta }\right)+MSE\left(\stackrel{^}{\alpha }\right)+MSE\left(\stackrel{^}{\theta }\right)}{MSE\left(\stackrel{˜}{\delta }\right)+MSE\left(\stackrel{˜}{\alpha }\right)+MSE\left(\stackrel{˜}{\theta }\right)}.$

The mean square errors (MSE) are estimated using simulated samples. The mean square error of an estimator $\stackrel{^}{\pi }$ for ${\pi }_{0}$ is defined as

$MSE\left(\stackrel{^}{\pi }\right)=E{\left(\stackrel{^}{\pi }-{\pi }_{0}\right)}^{2}.$

The simulation study is not extensive and more should be done, but it does suggest the potential of the MEEL methods.

Some results are summarized in Table 1 to keep the paper within a reasonable length; they give an idea of the gains from using the MEEL method instead of moment methods.

Based on the theory, the MEEL estimators cannot be as efficient as the ML estimators over the entire parameter space since only a finite number of elements in the base is used. However, the theory suggests that the methods might still have high efficiencies on subspaces where the parameters are subject to inequality bounds. The estimate of overall relative efficiency given by Expression (22) might give some idea of whether the methods are recommended. The following considerations might be useful to assess whether the use of MEEL methods is appropriate for a parametric model and data sets from a specific field of applications:

Table 1. Asymptotic relative efficiencies comparisons between MEEL estimators and moment (MM) estimators.

$\text{ARE}\left(\text{MEEL}vs\text{MM}\right)=\frac{\text{MSE}\left(\stackrel{^}{\theta }\right)+\text{MSE}\left(\stackrel{^}{\delta }\right)+\text{MSE}\left(\stackrel{^}{\alpha }\right)}{\text{MSE}\left(\stackrel{˜}{\theta }\right)+\text{MSE}\left(\stackrel{˜}{\delta }\right)+\text{MSE}\left(\stackrel{˜}{\alpha }\right)}.$ Legend: Tabulated values are estimates of ARE (MEEL vs MM) based on simulated samples from the chosen parameters δ, θ with α = 0.6.

1) Define a restricted space based on the field of applications; also obtain ${\stackrel{^}{\beta }}_{S}$ and use the estimate of overall relative efficiency to evaluate the loss of efficiency of MEEL methods in a neighborhood of ${\stackrel{^}{\beta }}_{S}$, which in general should be nested inside the restricted space of interest.

2) For efficiency of MEEL methods, we shall try to include as many elements in the finite base as possible, subject to numerical limitations, and try different values of $\tau$, which controls the spacing of the functions in the basis, to see whether there is any improvement in efficiency in a neighborhood of ${\stackrel{^}{\beta }}_{S}$.

6. Actuarial Applications

Pricing of insurance contracts is one of the main objectives in actuarial sciences. A contract defines a random loss function $g\left(x\right)$, where $X$ is the individual loss random variable for one unit of time, often assumed to be nonnegative and to follow a parametric model with distribution function ${F}_{\beta }\left(x\right)$ and LT ${L}_{\beta }\left(s\right)$. The pure premium is the following expectation under the true vector ${\beta }_{0}$, i.e.,

$P=P\left({\beta }_{0}\right)={E}_{{\beta }_{0}}\left(g\left(x\right)\right).$

P must be estimated using data; therefore, ${\beta }_{0}$ needs to be estimated first, and subsequently analytical methods or simulation methods can be used to approximate the premium. If MEEL methods are used, parametric families with closed form LTs can be validated by means of goodness-of-fit tests.

For insurance, the stop loss premium is defined as $P={E}_{{\beta }_{0}}\left\{{\left(X-d\right)}_{+}\right\}$ , ${\left(X-d\right)}_{+}=\mathrm{max}\left(X-d,0\right)$ . The stop loss premium can be expressed by means of distribution functions instead of expectations; see Expression (8) given by Luong ( [8] , p.543) for analytical methods to evaluate the stop loss premium.

If sampling from the distribution is possible, then the pricing of the contracts can also be approximated using simulations based on an estimate of ${\beta }_{0}$; this involves drawing samples based on the estimated parameters. For example, it is not difficult to simulate from a compound Poisson distribution despite its complicated density function, which can only be expressed as a series. Clearly, once the parameters of the compound Poisson distribution are estimated, pricing of insurance contracts can be done via simulations.
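A simulation-based pricing sketch for a hypothetical exponential(1) loss model, where the stop loss premium has the closed form ${E}\left\{{\left(X-d\right)}_{+}\right\}={\text{e}}^{-d}$ against which the Monte Carlo estimate can be checked (our example, not from the paper):

```python
import math, random

# Monte Carlo stop-loss premium E[(X - d)_+] for exponential(1) losses;
# the closed form e^{-d} is available here for checking the estimate.
random.seed(2017)
d, n = 1.0, 100000
losses = [random.expovariate(1.0) for _ in range(n)]
premium_mc = sum(max(x - d, 0.0) for x in losses) / n
premium_exact = math.exp(-d)
print(premium_mc)      # Monte Carlo estimate
print(premium_exact)   # 0.36787...
```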

7. Conclusion

We conclude that MEEL methods appear to be useful for inferences; they have been an active field of research in econometrics for the last twenty years, yet they do not seem to have received much attention in actuarial sciences. When the methods are oriented toward actuarial applications, and since the LT is widely used in actuarial sciences, it is natural to consider extracting moment conditions from the LT. It is shown that MEEL estimation is equivalent to QL estimation based on the best quasi score functions obtained by projecting the true score functions on the linear space spanned by a basis specified by the moment conditions. Based on these considerations, two families of bases are proposed in this paper to generate MEEL methods with the objective of achieving high efficiencies for actuarial applications. In general, the MEEL methods using these bases are more efficient than QL methods based on quadratic estimating functions and methods of moments. With finite bases, the MEEL methods can in general attain near full efficiency only on restricted parameter spaces. MEEL methods can still be very attractive if, depending on the field of applications, we essentially work with these restricted spaces; it is then important to measure the loss of efficiency to verify the appropriateness of the methods for the field of applications. The methods can easily be adapted for estimation of continuous distributions with support on the real line encountered in finance by using constraints extracted from the model moment generating function instead of the LT.

Acknowledgements

The helpful and constructive comments of a referee, which led to an improvement in the presentation of the paper, and the support of the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged.

Cite this paper
Luong, A. (2017) Maximum Entropy Empirical Likelihood Methods Based on Laplace Transforms for Nonnegative Continuous Distribution with Actuarial Applications. Open Journal of Statistics, 7, 459-482. doi: 10.4236/ojs.2017.73033.
References
[1]   Hougaard, P. (1986) Survival Models for Heterogeneous Populations Derived from Stable Distributions. Biometrika, 73, 387-396.
https://doi.org/10.1093/biomet/73.2.387

[2]   Panjer, H. and Willmot, G.E. (1992) Insurance Risk Models. Society of Actuaries, Chicago.

[3]   Schoutens, W. (2003) Lévy Processes in Finance: Pricing Financial Derivatives. Wiley, New York.
https://doi.org/10.1002/0470870230

[4]   Kuchler, U. and Tappe, S. (2013) Tempered Stable Distributions and Processes. Stochastic Processes and Their Applications, 123, 4256-4293.

[5]   Gerber, H.U. (1992) From the Generalized Gamma to the Generalized Negative Binomial Distribution. Insurance, Mathematics and Economics, 10, 303-309.

[6]   Abate, J. and Whitt, W. (1996) An Operational Calculus via Laplace Transforms. Advances in Applied Probability, 28, 75-113.
https://doi.org/10.1017/S0001867800027294

[7]   Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss Models: From Data to Decisions. Wiley, New York.

[8]   Luong, A. (2016) Cramer-Von Mises Distance Estimation for Some Positive Infinitely Divisible Distributions with Actuarial Applications. Scandinavian Actuarial Journal, 2016, 530-549.
https://doi.org/10.1080/03461238.2014.977817

[9]   Gerber, H.U. (1992) On the Probability of Ruin for Infinitely Divisible Claim Amount Distributions. Insurance: Mathematics and Economics, 11, 163-166.

[10]   Dufresne, F. and Gerber, H.U. (1993) The Probability of Ruin for the Inverse Gaussian and Related Processes. Insurance: Mathematics and Economics, 12, 9-22.

[11]   Rao, M.M. (1984) Probability Theory with Applications. Academic Press, New York.

[12]   Sato, K.I. (1999) Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.

[13]   Feller, W. (1971) An Introduction to Probability Theory and Its Applications. Vol. 2, Wiley, New York.

[14]   Godambe, V.P. and Thompson, M.E. (1989) An Extension of Quasi-Likelihood Estimation. Journal of Statistical Planning and Inference, 22, 137-152.

[15]   Chan, S. and Ghosh, M. (1998) Orthogonal Projection and the Geometry of Estimating Functions. Journal of Statistical Planning and Inference, 67, 227-245.

[16]   Crowder, M. (1987) On Linear and Quadratic Estimating Functions. Biometrika, 74, 591-597.
https://doi.org/10.1093/biomet/74.3.591

[17]   Read, R.R. (1981) Representation of Certain Covariance Matrices with Application to Asymptotic Efficiency. Journal of the American Statistical Association, 76, 148-154.
https://doi.org/10.1080/01621459.1981.10477621

[18]   Newey, W.K. and Smith, R.J. (2004) Higher Order Properties of GMM and Generalized Empirical Likelihood Estimators. Econometrica, 72, 219-255.
https://doi.org/10.1111/j.1468-0262.2004.00482.x

[19]   Smith, R.J. (2007) Efficient Information Theoretical Inference for Conditional Moment Restrictions. Journal of Econometrics, 138, 430-460.

[20]   Chamberlain, G. (1987) Asymptotic Efficiency in Estimation with Conditional Moment Restrictions. Journal of Econometrics, 34, 305-334.

[21]   Luong, A. and Doray, L.G. (2009) Inference for the Stable Laws Based on a Special Quadratic Distance. Statistical Methodology, 6, 147-156.

[22]   Luong, A. and Thompson, M.E. (1987) Minimum Distance Methods Based on Quadratic Distance for Transforms. Canadian Journal of Statistics, 15, 239-251.
https://doi.org/10.2307/3314914

[23]   Morton, R. (1981) Efficiency of Estimating Equations and the Use of Pivots. Biometrika, 68, 227-233.
https://doi.org/10.1093/biomet/68.1.227

[24]   Owen, A.B. (1988) Empirical Likelihood Ratio Confidence Intervals for a Single Functional. Biometrika, 75, 237-249.
https://doi.org/10.1093/biomet/75.2.237

[25]   Qin, J. and Lawless, J. (1994) Empirical Likelihood and General Estimating Equations. Annals of Statistics, 22, 300-325.
https://doi.org/10.1214/aos/1176325370

[26]   Schennach, S. (2007) Point Estimation with Exponentially Tilted Empirical Likelihood. Annals of Statistics, 35, 634-672.
https://doi.org/10.1214/009053606000001208

[27]   Imbens, G.W., Spady, R.H. and Johnson, P. (1998) Information Theoretic Approaches to Inference in Moment Condition Models. Econometrica, 66, 333-357.
https://doi.org/10.2307/2998561

[28]   Owen, A.B. (2001) Empirical Likelihood. Chapman and Hall, New York.
https://doi.org/10.1201/9781420036152

[29]   Mittelhammer, R.C., Judge, G.G. and Miller, D.J. (2000) Econometric Foundations. Cambridge University Press, Cambridge.

[30]   Anatolyev, S. and Gospodinov, N. (2011) Methods for Estimation and Inference in Modern Econometrics. CRC Press, New York.

[31]   Garcia, R., Renault, E. and Veredas, D. (2011) Estimation of Stable Laws by Indirect Inference. Journal of Econometrics, 161, 325-337.

[32]   Zakian, V. and Littlewood, R.K. (1973) Numerical Inversion of Laplace Transforms by Weighted Least-Squares Approximations. The Computer Journal, 16, 66-68.
https://doi.org/10.1093/comjnl/16.1.66

[33]   Brockwell, P.J. and Brown, B.M. (1981) High-Efficiency Estimation for the Positive Stable Laws. Journal of the American Statistical Association, 76, 626-631.
https://doi.org/10.1080/01621459.1981.10477695

[34]   Cressie, N., Davis, A.S., Folks, J.L. and Policello II, G.E. (1981) The Moment Generating Function and Negative Integer Moments. The American Statistician, 35, 148-150.

[35]   Brockwell, P.J. and Brown, B.M. (1978) Expansions for the Positive Stable Laws. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 45, 213-224.
https://doi.org/10.1007/BF00535303

[36]   Cressie, N. and Read, T. (1984) Multinomial Goodness-of-Fit Tests. Journal of the Royal Statistical Society, Series B, 46, 440-464.

[37]   Imbens, G.W. (2002) Generalized Method of Moments and Empirical Likelihood. Journal of Business and Economic Statistics, 20, 493-506.
https://doi.org/10.1198/073500102288618630

[38]   Chong, E.K.P. and Zak, S.H. (2013) An Introduction to Optimization. 4th Edition, Wiley, New York.

[39]   Davidson, R. and MacKinnon, J.G. (2004) Econometric Theory and Methods. Oxford University Press, Oxford.

[40]   Andrews, D.W.K. (1997) A Stopping Rule for the Computation of the Generalized Method of Moment Estimators. Econometrica, 65, 913-931.
https://doi.org/10.2307/2171944

[41]   Robert, C.P. and Casella, G. (2010) Introducing Monte Carlo Methods with R. Springer, New York.
https://doi.org/10.1007/978-1-4419-1576-4

[42]   Fang, F. and Oosterlee, C.W. (2009) A Novel Pricing Method for European Options Based on Fourier Cosine Series Expansions. SIAM Journal on Scientific Computing, 31, 826-848.
https://doi.org/10.1137/080718061

[43]   Powers, D.L. (2010) Boundary Value Problems and Partial Differential Equations. Academic Press, New York.

[44]   Bhapkar, V.P. (1972) On a Measure of Efficiency in an Estimating Equation. Sankhya, Series A, 34, 467-472.

[45]   Devroye, L. (1993) A Triptych of Discrete Distributions Related to the Stable Laws. Statistics and Probability Letters, 18, 349-351.
