Simulated Minimum Hellinger Distance Estimation for Some Continuous Financial and Actuarial Models
ABSTRACT
Minimum Hellinger distance (MHD) estimation is extended to a simulated version in which the model density function is replaced by a density estimate based on a random sample drawn from the model distribution. The method does not require a closed-form expression for the density function, making it suitable for models whose densities lack a closed-form expression and for which likelihood methods might be difficult to implement. Even though only consistency is shown in this paper and the asymptotic distribution remains an open question, our simulation study suggests that the methods have the potential to generate simulated minimum Hellinger distance (SMHD) estimators with high efficiencies. The method can be used as an alternative to methods based on moments, methods based on empirical characteristic functions, or the expectation-maximization (EM) algorithm.

1. Introduction

In actuarial science and finance, we often encounter the problem of fitting distributions whose densities have no closed-form expression. These distributions are often infinitely divisible and arise as the distributions of the regularly spaced increments of Lévy processes. Besides infinitely divisible distributions, mixture distributions created using a mixing mechanism also provide examples of continuous densities without a closed-form expression. These types of distributions are often encountered in actuarial science. A few examples will be provided as illustrations subsequently.

Likelihood methods might be difficult to implement in such cases, due to the lack of a closed-form expression for the density function. To handle such a situation, we can consider the following approaches:

1) Expectation-maximization (EM) algorithm. The EM algorithm can be used only under special conditions, as it requires some conditional distributions, and these conditional distributions might be difficult to obtain; see McNeil, Frey and Embrechts (pages 81-85) or McLachlan and Krishnan.

2) Method of moments. Even though the model density has no closed form, if the model moments can be expressed in closed form, then the method of moments can be used. Its main drawback is that the resulting estimators might be neither efficient nor robust for models with three or more parameters, as the estimators then depend on a polynomial of degree three or higher, making the method very sensitive to contaminated data; see Küchler and Tappe for method of moments estimation.

3) The k-L procedure. Even if the density has no closed form, if the model characteristic function has a closed-form expression, then we can select points and match the real and imaginary parts of the empirical characteristic function with their model counterparts at the chosen points. This is the k-L procedure proposed by Feuerverger and McDunnough (pages 22-24).

4) Indirect inference. These methods are based on simulations and require two steps. First, a proxy model is chosen and used to obtain estimators, which are biased. Second, the bias is removed using simulations. See Garcia, Renault and Veredas for this method. The proxy model from which the estimators are obtained affects the efficiencies of the estimators, and for some models it is difficult to know which proxy model will generate estimators with high efficiencies.

When implementing these methods for distributions without closed-form densities, there are drawbacks which motivate us in this paper to extend the minimum Hellinger distance methods originally proposed by Beran to a simulated version (version S), which consists in replacing the model density ${f}_{\theta }\left(x\right)$ by a density estimate ${f}_{\theta }^{S}\left(x\right)$ based on a random sample drawn from ${f}_{\theta }\left(x\right)$ and minimizing

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\theta }^{S}\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x$ (1)

to obtain the simulated minimum Hellinger distance (SMHD) estimators, where ${f}_{n}\left(x\right)$ is an empirical density estimate based on the observed data with the property ${f}_{n}\left(x\right)\stackrel{p}{\to }{f}_{{\theta }_{0}}\left(x\right)$ where ${\theta }_{0}$ is the true vector of parameters. This consistency property will imply ${\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{{\theta }_{0}}\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x\stackrel{p}{\to }0$ as $n\to \infty$ ; see section 3 (page 224) of Tamura and Boos  .
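For concreteness, Equation (1) can be approximated numerically from the two samples alone. The following is a minimal Python sketch (the paper's own computations rely on R); the triangular kernel, the bandwidths and the integration grid are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def kde_triangular(sample, grid, h):
    """Kernel density estimate with the triangular kernel w(u) = 1 - |u| on [-1, 1]."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.maximum(1.0 - np.abs(u), 0.0).mean(axis=1) / h

def hellinger_objective(data, sim, grid, h_data, h_sim):
    """Q_n of Equation (1): integral of (sqrt(f_n) - sqrt(f_theta^S))^2,
    approximated by the trapezoid rule on a finite grid."""
    fn = kde_triangular(data, grid, h_data)
    fs = kde_triangular(sim, grid, h_sim)
    y = (np.sqrt(fn) - np.sqrt(fs)) ** 2
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))
```

As shown in Equation (11) below, the value always lies in $\left[0,2\right]$: it is near 0 when the simulated sample comes from a distribution close to that of the data, and near 2 when the two distributions barely overlap.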

Clearly, the new method proposed here avoids the arbitrariness in the choice of points inherent in the k-L procedure based on characteristic functions. Unlike indirect inference, the proposed method does not need a proxy model. Furthermore, the estimators obtained using the proposed method might be more robust and efficient than method of moments estimators. Finally, unlike the EM algorithm, the proposed method does not require conditional distributions, which can be difficult to obtain.

The proposed method, which combines simulation with Hellinger distance, adds to the set of statistical techniques that can be useful for financial and actuarial data, many of which do not receive much attention in the actuarial literature. SMHD methods depend on being able to draw samples from the parametric family, which is possible in general. Consequently, SMHD methods also add to the relatively new literature on simulated inference; see the comments by Davidson and MacKinnon (page 393).

The new method is built on the classical version (version D) of Hellinger distance as proposed by Beran  which consists in minimizing

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\theta }\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x$ (2)

to obtain the minimum Hellinger distance (MHD) estimators. The MHD estimators are known to have good robustness properties, with a breakdown point greater than 0. They are also consistent, in general under less stringent conditions than those required for maximum likelihood (ML) estimators. However, more restrictions are placed on the underlying parametric family for the MHD estimators to attain full efficiency, such as assuming that ${f}_{\theta }\left(x\right)$ has compact support. Despite this drawback, simulation studies often show that the methods perform well across many models. For a literature review of Hellinger distance (HD) methods, see chapters 3 and 10 of the book by Basu, Shioya and Park. From the literature, it can be seen that HD methods still do not receive proper attention in finance and especially in actuarial science.

In this paper, we introduce a simulated version of HD methods and show that the SMHD estimators are consistent. The question of asymptotic normality, however, remains open; further work should generate results on the asymptotic distribution of the SMHD estimators, to be presented in a subsequent paper. The methods are presented here with fewer technicalities, and we relate them to traditional likelihood methods. In doing so, we wish to encourage practitioners to use these methods in their applied work. In the next paragraphs, we consider a few examples of the types of distributions without closed-form expressions often encountered in finance and actuarial science, where the new simulated method can be particularly useful.

Example 1

We present here the class of normal mean-variance mixture distributions where the random variable $X$ can be represented using equality in distribution as

$X{=}^{d}\theta +\mu W+\sigma \sqrt{W}Z$ , (3)

where

1) $\theta$ , $\mu$ and $\sigma$ are parameters with $-\infty <\theta <\infty$ , $-\infty <\mu <\infty$ , and $\sigma >0$ ;

2) $W$ is a nonnegative random variable with an infinitely divisible (ID) distribution;

3) $Z$ follows a standard normal distribution $N\left(0,1\right)$ and is independent of $W$ .

The generalized hyperbolic, variance-gamma, and normal-inverse Gaussian distributions belong to this class; see McNeil, Frey and Embrechts (pages 77-79). By conditioning on $W$ first, the moment generating function (mgf) of $X$ can be obtained as ${M}_{X}\left(s\right)={\text{e}}^{\theta s}{M}_{W}\left(\mu s+\frac{1}{2}{\sigma }^{2}{s}^{2}\right)$ , where ${M}_{X}\left(s\right)$ and ${M}_{W}\left(s\right)$ denote the moment generating functions of $X$ and $W$ , respectively. Distributions of the increments observed at regular intervals of a subordinated Brownian motion process belong to this class. The density function of $X$ depends on the density function of $W$ ; consequently, the density function of $X$ might not have a closed-form expression in general. Closely related to the variance-gamma distribution is the generalized normal-Laplace (GNL) distribution, introduced by Reed and given in the next example.
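Sampling from the representation (3) is straightforward once $W$ can be drawn, which is one reason SMHD estimation suits this class. In the Python sketch below, $W$ is taken to be gamma distributed purely for illustration (this choice gives a variance-gamma type law); the parameter values in the check are arbitrary.

```python
import numpy as np

def sample_nmv(theta, mu, sigma, n, rng, shape=2.0):
    """Draw from X = theta + mu*W + sigma*sqrt(W)*Z of Equation (3),
    with W ~ Gamma(shape, 1) as an illustrative nonnegative ID mixing
    distribution and Z ~ N(0, 1) independent of W."""
    w = rng.gamma(shape, 1.0, size=n)
    z = rng.standard_normal(n)
    return theta + mu * w + sigma * np.sqrt(w) * z
```

Conditioning on $W$ gives $E\left(X\right)=\theta +\mu E\left(W\right)$ and $V\left(X\right)={\mu }^{2}V\left(W\right)+{\sigma }^{2}E\left(W\right)$ , which a large simulated sample reproduces.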

Example 2

A random variable $X$ follows a GNL distribution if it can be represented as

$X{=}^{d}\rho \mu +\sigma \sqrt{\rho }Z+\frac{1}{\alpha }{G}_{1}-\frac{1}{\beta }{G}_{2}$ , (4)

where

1) the parameters are $\mu$ , $\sigma$ , $\rho$ , $\alpha$ and $\beta$ , with $-\infty <\mu <\infty$ , $\sigma >0$ , $\rho >0$ , $\alpha >0$ , and $\beta >0$ ;

2) the random variables ${G}_{1}$ and ${G}_{2}$ are independent and follow a common gamma distribution with density function $g\left(x;\rho \right)=\frac{1}{\Gamma \left(\rho \right)}{x}^{\rho -1}{\text{e}}^{-x},x>0,\rho >0$ ;

3) $Z$ follows a standard normal distribution, $N\left(0,1\right)$ , with $Z$ being independent of ${G}_{1}$ and ${G}_{2}$ .

The distribution is infinitely divisible and can display asymmetry and fatter tails than the normal distribution. It is symmetric if $\alpha =\beta$ . The vector of parameters is $\theta ={\left(\mu ,\sigma ,\rho ,\alpha ,\beta \right)}^{\prime }$ and the mgf of $X$ , obtained from the representation given by Equation (4), is

${M}_{X}\left(s\right)={\text{e}}^{\rho \left(\mu s+\frac{1}{2}{\sigma }^{2}{s}^{2}\right)}{\left(\frac{\alpha }{\alpha -s}\right)}^{\rho }{\left(\frac{\beta }{\beta +s}\right)}^{\rho }$ . (5)

From the cumulant generating function, the mean and variance are given respectively by

$E\left(X\right)=\rho \left(\mu +\frac{1}{\alpha }-\frac{1}{\beta }\right)$ (6)

and

$V\left(X\right)=\rho \left({\sigma }^{2}+\frac{1}{{\alpha }^{2}}+\frac{1}{{\beta }^{2}}\right)$ . (7)

Higher cumulants are

${\kappa }_{r}=\rho \left(r-1\right)!\left(\frac{1}{{\alpha }^{r}}+{\left(-1\right)}^{r}\frac{1}{{\beta }^{r}}\right)$ for $r>2$ . (8)

Due to the lack of a closed-form expression for the density function, Reed (page 477) proposed estimating the parameters by the method of moments, matching the empirical cumulants with the model cumulants, and applied the method to stock data. In the particular four-parameter case where $\alpha =\beta$ , moment estimators can be obtained explicitly; for the general five-parameter case, the moment equations must be solved numerically. The moment estimators are discussed in more detail in Section 3, where we compare their efficiencies with those of the SMHD estimators based on simulated samples.
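The representation (4) makes the GNL distribution easy to simulate even though its density has no closed form, which is exactly the setting where SMHD estimation applies. The following Python sketch draws GNL samples and encodes the moments (6) and (7); the particular parameter values used in the check are arbitrary.

```python
import numpy as np

def sample_gnl(mu, sigma, rho, alpha, beta, n, rng):
    """Draw from Equation (4): X = rho*mu + sigma*sqrt(rho)*Z + G1/alpha - G2/beta,
    with G1, G2 iid Gamma(rho, 1) and Z ~ N(0, 1), all independent."""
    z = rng.standard_normal(n)
    g1 = rng.gamma(rho, 1.0, size=n)
    g2 = rng.gamma(rho, 1.0, size=n)
    return rho * mu + sigma * np.sqrt(rho) * z + g1 / alpha - g2 / beta

def gnl_mean(mu, rho, alpha, beta):
    """E(X) of Equation (6)."""
    return rho * (mu + 1.0 / alpha - 1.0 / beta)

def gnl_var(sigma, rho, alpha, beta):
    """V(X) of Equation (7)."""
    return rho * (sigma ** 2 + 1.0 / alpha ** 2 + 1.0 / beta ** 2)
```

A large simulated sample has empirical mean and variance close to the values implied by Equations (6) and (7).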

For more on Lévy processes and infinitely divisible distributions used in finance, see chapter 6 of the book by Schoutens  (pages 73-83). For nonnegative infinitely divisible distributions used in actuarial science, see Dufresne and Gerber  , and Luong  . For mixtures of distributions without closed-form density functions, for which the proposed estimators can also be used, see Klugman, Panjer and Willmot  (pages 62-65). We shall consider HD estimation in all those cases.

Assume that we have a random sample of observations ${X}_{1},\cdots ,{X}_{n}$ , independent and identically distributed as the continuous random variable $X$ with model density ${f}_{\theta }\left(x\right)$ . The vector of parameters is denoted by $\theta ={\left({\theta }_{1},\cdots ,{\theta }_{m}\right)}^{\prime }$ . In his seminal paper, Beran proposed to estimate $\theta$ by the minimum Hellinger distance estimator $\stackrel{^}{\theta }$ , which minimizes, with respect to $\theta$ , the Hellinger distance between the parametric family ${f}_{\theta }$ and a consistent empirical density estimate ${f}_{n}$ with the property ${f}_{n}\left(x\right)\stackrel{p}{\to }{f}_{{\theta }_{0}}\left(x\right)$ pointwise. This leads to minimizing the objective function

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\theta }\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x$ . (9)

Beran also noted that, intuitively, the methods are robust as the data are smoothed by a kernel density estimator ${f}_{n}$ , and hence the effects of outliers are mitigated. It has been confirmed in various models that the asymptotic breakdown points of the estimators are around $\frac{1}{2}$ , whereas the sample mean is well known to have a breakdown point of 0. See Hogg, McKean and Craig (pages 594-595), and Maronna, Martin and Yohai (page 58) for the notions of finite-sample and asymptotic breakdown points as measures of robustness of estimators, and see Lindsay for discussions of the robustness and efficiency of MHD estimators. We also note that, since

${Q}_{n}\left(\theta \right)=2-2{\int }_{-\infty }^{\infty }{\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}{\left[{f}_{\theta }\left(x\right)\right]}^{\frac{1}{2}}\text{d}x$ (10)

and, using the Cauchy-Schwarz inequality, ${\int }_{-\infty }^{\infty }{\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}{\left[{f}_{\theta }\left(x\right)\right]}^{\frac{1}{2}}\text{d}x\le 1$ , we find

$0\le {Q}_{n}\left(\theta \right)\le 2$ . (11)

Moreover, since ${\int }_{-\infty }^{\infty }{\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}{\left[{f}_{\theta }\left(x\right)\right]}^{\frac{1}{2}}\text{d}x=1$ if and only if ${f}_{n}\left(x\right)={f}_{\theta }\left(x\right)$ almost everywhere, it follows that ${Q}_{n}\left(\theta \right)=0$ if and only if ${f}_{n}\left(x\right)={f}_{\theta }\left(x\right)$ almost everywhere.

The objective function is stable and bounded. This might explain why, intuitively, minimizing such an objective function yields estimators that are also stable and therefore robust in some sense.

Kernel density estimators are often used to define ${f}_{n}\left(x\right)$ . One of the simplest kernel density estimators is the rectangular kernel density estimator which generalizes the usual histogram estimator. In general, kernel density estimators have the form

${f}_{n}\left(x\right)=\frac{1}{n{h}_{n}}{\sum }_{i=1}^{n}\omega \left(\frac{x-{x}_{i}}{{h}_{n}}\right)$ , (12)

where

a) ${h}_{n}$ is the bandwidth with the property that ${h}_{n}\to 0$ and $n{h}_{n}\to \infty$ as $n\to \infty$ ;

b) $\omega \left(x\right)$ is a density function.

The property specified by a) guarantees the consistency of ${f}_{n}\left(x\right)$ ; see Corollary 6.4.1 given by Lehmann  (pages 406-408). Subsequently, we implicitly assume that density estimates used with the SMHD method meet the requirements specified by a) and b).
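Equation (12) translates directly into code. The Python sketch below uses the rectangular kernel and a bandwidth proportional to ${n}^{-1/5}$ , an illustrative choice satisfying condition a) since ${h}_{n}\to 0$ and $n{h}_{n}\to \infty$ .

```python
import numpy as np

def w_rect(u):
    """Rectangular kernel: w(u) = 1/2 on (-1, 1), a density as required by b)."""
    return np.where(np.abs(u) < 1.0, 0.5, 0.0)

def kernel_density(sample, x, h, w):
    """f_n(x) = (1/(n*h)) * sum_i w((x - x_i)/h), Equation (12)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - sample[None, :]) / h
    return w(u).mean(axis=1) / h
```

With, say, $n=5000$ standard normal observations and ${h}_{n}={n}^{-1/5}$ , the estimate at 0 is close to the true value $1/\sqrt{2\text{π}}\approx 0.399$ .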

For the rectangular kernel density estimator, the following symmetric density around 0 is chosen: $\omega \left(x\right)=\frac{1}{2}$ for $-1<x<1$ and $\omega \left(x\right)=0$ otherwise, so the kernel $\omega \left(x\right)$ has a compact support. The density estimate at $x$ is then the average of rectangles located within ${h}_{n}$ units of $x$ . For other kernels and their implementation in the package R, see chapter 10 of the book by Rizzo (pages 281-318). For Hellinger distance estimation, it is preferable to use a symmetric, twice-differentiable kernel with compact support, in order to meet the regularity conditions of Theorem 4 of Beran (pages 450-451); also see the discussions by Basu, Shioya and Park (pages 78-83). In this paper, we only need univariate kernel density estimates, but multivariate kernel density estimates can be defined similarly; see Toma and Scott.

If ${f}_{\theta }\left(x\right)$ has no closed-form expression but random samples can be drawn from the distribution with density ${f}_{\theta }\left(x\right)$ , we can clearly use the same type of kernel density estimator used to define ${f}_{n}\left(x\right)$ . In other words, in order to estimate ${f}_{\theta }\left(x\right)$ , we define ${f}_{\theta }^{S}\left(x\right)$ as the kernel density estimator based on a random sample of size $U=\tau n$ drawn from ${f}_{\theta }\left(x\right)$ . Note that $U\to \infty$ as $n\to \infty$ , and $\tau$ needs to be reasonably large so that there is little loss of efficiency due to simulations; we recommend $\tau \ge 10$ .

Consequently, for the simulated version, we shall minimize the objective function given by

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\theta }^{S}\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x$ (13)

to obtain the SMHD estimators.

For terminology, we shall call the classical version, which is deterministic in terms of ${f}_{\theta }\left(x\right)$ , version D, and the simulated version, version S. Since ${Q}_{n}\left(\theta \right)$ , as given by Equation (13), is not differentiable, a direct simplex search method, which is derivative-free, is recommended. R already has a built-in function for performing the Nelder-Mead simplex method, a derivative-free method for minimizing a function, as well as a built-in function for computing density estimates using various kernels. These features facilitate the implementation of SMHD methods in applied work by practitioners. Furthermore, because the densities ${f}_{n}\left(x\right)$ and ${f}_{\theta }^{S}\left(x\right)$ based on a rectangular or triangular kernel are positive only on some finite interval and zero elsewhere, the integration needed to evaluate Equation (13) is easy to handle: a trapezoid quadrature method suffices to find the SMHD estimators. Note that for the simulated version, we still have

$0\le {Q}_{n}\left(\theta \right)\le 2$ . (14)

As data are also smoothed, intuitively, these features will again make the simulated version robust.
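Putting the pieces together (kernel density estimates for both samples, a fixed seed, Nelder-Mead minimization and trapezoid quadrature), the SMHD procedure can be sketched in Python as follows. This is only an illustration under stated assumptions: the triangular kernel, the Silverman-type bandwidths and the grid are arbitrary choices, `scipy.optimize.minimize` plays the role of R's built-in simplex routine, and a normal location-scale model is used because it lets us check the machinery against a known answer.

```python
import numpy as np
from scipy.optimize import minimize

def kde_tri(sample, grid, h):
    """Triangular-kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.maximum(1.0 - np.abs(u), 0.0).mean(axis=1) / h

def trapezoid(y, x):
    """Trapezoid quadrature, as suggested in the text."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def smhd_fit(data, simulate, theta0, tau=10, seed=42):
    """Minimize Q_n(theta) of Equation (13) by the Nelder-Mead simplex method,
    re-using the same seed for every theta so that Q_n behaves like a
    non-random function of theta."""
    n = len(data)
    grid = np.linspace(data.min() - 3.0, data.max() + 3.0, 300)
    h_n = 1.06 * data.std() * n ** (-0.2)            # Silverman-type bandwidth
    fn_sqrt = np.sqrt(kde_tri(data, grid, h_n))
    def q(theta):
        rng = np.random.default_rng(seed)            # same seed across thetas
        sim = simulate(theta, tau * n, rng)
        h_u = 1.06 * sim.std() * (tau * n) ** (-0.2)
        fs_sqrt = np.sqrt(kde_tri(sim, grid, h_u))
        return trapezoid((fn_sqrt - fs_sqrt) ** 2, grid)
    return minimize(q, theta0, method="Nelder-Mead").x

def sim_normal(theta, size, rng):
    """Normal location-scale model, used only to illustrate the procedure."""
    return theta[0] + abs(theta[1]) * rng.standard_normal(size)
```

On data simulated from $N\left(2,{1.5}^{2}\right)$ , starting the simplex search from the sample mean and standard deviation, the fitted location and scale land close to the true values.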

The paper is organized as follows. In Section 2, we examine the asymptotic properties of MHD estimators: Section 2.1 briefly reviews the asymptotic properties of the classical MHD estimators, and Section 2.2 establishes the consistency of SMHD estimators; also in Section 2.2, an estimator for the Fisher information matrix based on SMHD estimators is proposed. In Section 3, we use a limited simulation study based on the GNL distribution to compare the efficiencies of the SMHD estimators with those of the method of moments estimators. Despite being limited, the study suggests that the SMHD estimators are more efficient than the method of moments estimators, pointing to the potential of SMHD methods to generate estimators with good efficiency and further justifying their use in actuarial science and finance.

2. Asymptotic Properties

2.1. Asymptotic Properties of the Classical MHD Estimators

MHD estimators can be seen to be consistent in general for version D and version S. In fact, the conditions are even less restrictive than the conditions for maximum likelihood estimators to be consistent. Since we aim for applications, we only consider asymptotic properties under the strict parametric model, i.e., assuming the observations come from the parametric density family ${f}_{\theta }\left(x\right)$ , where $\theta \in \Omega$ , and the parameter space $\Omega$ is assumed to be compact.

Let

$‖{\left({f}_{1}\right)}^{\frac{1}{2}}-{\left({f}_{2}\right)}^{\frac{1}{2}}‖={\left[{\int }_{-\infty }^{\infty }{\left({\left[{f}_{1}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{2}\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x\right]}^{\frac{1}{2}}$ , (15)

where ${f}_{1}\left(x\right)$ and ${f}_{2}\left(x\right)$ are density functions. Note that $‖\cdot ‖$ is the ${L}^{2}$ norm applied to the square roots of the densities and satisfies the triangle inequality.

Tamura and Boos (page 224) have noted that if ${f}_{n}\left(x\right)\stackrel{p}{\to }{f}_{{\theta }_{0}}\left(x\right)$ , then $‖{\left({f}_{n}\right)}^{\frac{1}{2}}-{\left({f}_{{\theta }_{0}}\right)}^{\frac{1}{2}}‖\stackrel{p}{\to }0$ ; together with the requirement that $‖{\left({f}_{n}\right)}^{\frac{1}{2}}-{\left({f}_{\theta }\right)}^{\frac{1}{2}}‖>0$ in probability for $\theta \ne {\theta }_{0}$ , this is sufficient for the MHD estimator given by the vector $\stackrel{^}{\theta }$ obtained by minimizing Equation (10) to be consistent, i.e., $\stackrel{^}{\theta }\stackrel{p}{\to }{\theta }_{0}$ , assuming the parameter space $\Omega$ is compact; see Theorem 3.1 of Tamura and Boos (page 224). Compared with the regularity conditions for ML estimators given in Theorem 2.5 of Newey and McFadden (page 2131), the regularity conditions for MHD estimation do not require $E\left({\mathrm{sup}}_{\theta }|\mathrm{log}{f}_{\theta }\left(x\right)|\right)<\infty$ as in likelihood estimation, so MHD estimators are consistent in general under fewer restrictions than ML estimators.

However, for the MHD estimators to be as efficient as ML estimators, more stringent conditions are required for asymptotic normality. They are found in Theorem 4 of Beran (pages 450-451), which is summarized in Theorem 1 below, focusing on the strict parametric model. Beran (pages 450-451) allows the bandwidth of the kernel to be randomly chosen with ${h}_{n}={c}_{n}{s}_{n}$ , where ${c}_{n}$ is a sequence of constants and ${s}_{n}$ is a sequence of random variables with ${s}_{n}\stackrel{p}{\to }s$ . It also requires a compact support $K$ for both $\frac{\partial \mathrm{log}{f}_{\theta }\left(x\right)}{\partial \theta }$ and ${f}_{\theta }\left(x\right)$ . Despite these restrictions, empirical studies often show that the estimators have high efficiencies in many models even when the compact-support condition on the parametric family is not met. The regularity conditions of Beran's Theorem 4, restricted to the strict parametric model, are stated in Theorem 1 below. We also require the vector of true parameters ${\theta }_{0}$ to be in $\Omega$ , where $\Omega$ is compact. Theorem 1 can be viewed as a corollary of Theorem 4 of Beran, where the proofs are given.

Theorem 1

Suppose

1) The kernel density $\omega \left(x\right)$ is symmetric about 0 and has a compact support.

2) The function $\omega \left(x\right)$ is twice differentiable and its second derivative is bounded on the compact support.

3) $\frac{\partial \mathrm{log}{f}_{\theta }\left(x\right)}{\partial \theta }$ and ${f}_{\theta }\left(x\right)$ have a compact support $K$ and ${f}_{\theta }\left(x\right)>0$ on $K$ .

4) ${f}_{\theta }\left(x\right)$ is twice absolutely continuous with its second derivative with respect to $x$ being bounded.

5) ${\text{lim}}_{n\to \infty }{n}^{\frac{1}{2}}{c}_{n}=\infty$ , ${\text{lim}}_{n\to \infty }{n}^{\frac{1}{2}}{c}_{n}^{2}=0$ , and ${\text{lim}}_{n\to \infty }{c}_{n}=0$ .

6) There exists a positive constant $s$ which might depend on ${f}_{{\theta }_{0}}\left(x\right)$ such that $\sqrt{n}\left({s}_{n}-s\right)$ is bounded in probability.

Then $\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)\stackrel{L}{\to }N\left(0,I{\left({\theta }_{0}\right)}^{-1}\right)$ where $I\left({\theta }_{0}\right)$ is the Fisher information matrix with elements given by

$E\left(\frac{\partial \mathrm{log}{f}_{\theta }\left(x\right)}{\partial {\theta }_{j}}\frac{\partial \mathrm{log}{f}_{\theta }\left(x\right)}{\partial {\theta }_{i}}\right)=-E\left(\frac{{\partial }^{2}\mathrm{log}{f}_{\theta }\left(x\right)}{\partial {\theta }_{j}\partial {\theta }_{i}}\right),i=1,\cdots ,m,j=1,\cdots ,m$ (16)

and is assumed to exist.

We give only an outline of the arguments establishing the results of Theorem 1, focusing on the strict parametric model used in applications, with the aim of helping practitioners in applied fields follow more easily the arguments needed to develop the new method subsequently.

Note that, besides the rectangular kernel, the triangular kernel with $\omega \left(x\right)=1-|x|$ for $-1\le x\le 1$ and the Epanechnikov kernel with $\omega \left(x\right)=\frac{3}{4}\left(1-{x}^{2}\right)$ for $-1\le x\le 1$ meet conditions 1 and 2 as required by Theorem 1 and are available in the package R.
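For reference, the three kernels just mentioned can be written out explicitly; the short Python fragment below is purely illustrative.

```python
import numpy as np

def w_rect(u):
    """Rectangular kernel: w(u) = 1/2 for -1 < u < 1."""
    return np.where(np.abs(u) < 1.0, 0.5, 0.0)

def w_tri(u):
    """Triangular kernel: w(u) = 1 - |u| for -1 <= u <= 1."""
    return np.maximum(1.0 - np.abs(u), 0.0)

def w_epan(u):
    """Epanechnikov kernel: w(u) = (3/4)(1 - u^2) for -1 <= u <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u * u), 0.0)
```

Each is a symmetric density with compact support $\left[-1,1\right]$ , so condition 1 of Theorem 1 is met.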

For establishing asymptotic normality results for the estimators as indicated by Theorem 1, we can consider a Taylor expansion of the system of equations

$D\left(\stackrel{^}{\theta }\right)={\frac{\partial {Q}_{n}\left(\theta \right)}{\partial \theta }|}_{\theta =\stackrel{^}{\theta }}=0$ around the true vector of parameters ${\theta }_{0}$ . The system of equations implies

$D\left(\stackrel{^}{\theta }\right)={\int }_{-\infty }^{\infty }\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\stackrel{^}{\theta }}\left(x\right)\right]}^{\frac{1}{2}}\right)\frac{\frac{\partial {f}_{\stackrel{^}{\theta }}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{\stackrel{^}{\theta }}\left(x\right)}}\text{d}x=0$ (17)

with $\frac{\partial {f}_{\stackrel{^}{\theta }}\left(x\right)}{\partial \theta }={\frac{\partial {f}_{\theta }\left(x\right)}{\partial \theta }|}_{\theta =\stackrel{^}{\theta }}$ and ${f}_{\stackrel{^}{\theta }}\left(x\right)={{f}_{\theta }\left(x\right)|}_{\theta =\stackrel{^}{\theta }}$ .

We proceed to perform a Taylor expansion by noting

$D\left({\theta }_{0}\right)={\int }_{-\infty }^{\infty }\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{{\theta }_{0}}\left(x\right)\right]}^{\frac{1}{2}}\right)\frac{\frac{\partial {f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}\text{d}x$ , (18)

$\stackrel{˙}{D}\left({\theta }_{0}\right)={\frac{\partial D\left(\theta \right)}{\partial \theta }|}_{\theta ={\theta }_{0}}=-\frac{1}{2}{\int }_{-\infty }^{\infty }\left(\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\right){\left(\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\right)}^{\prime }{f}_{{\theta }_{0}}\left(x\right)\text{d}x+{o}_{p}\left(1\right)$ ,(19)

assuming $D\left(\theta \right)$ is differentiable with respect to $\theta$ and

${\int }_{-\infty }^{\infty }\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{{\theta }_{0}}\left(x\right)\right]}^{\frac{1}{2}}\right)\frac{\partial {s}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x\stackrel{p}{\to }0$ , with ${s}_{{\theta }_{0}}\left(x\right)=\frac{\frac{\partial {f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}$ , using the compact support assumption for $\left\{{f}_{\theta }\right\}$ . As a result, we can write that

$\stackrel{˙}{D}\left({\theta }_{0}\right)=-\frac{1}{2}I\left({\theta }_{0}\right)+{o}_{p}\left(1\right)$ . (20)

Therefore, with the regularity conditions met, we will have the representation

$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)=-{\left[\stackrel{˙}{D}\left({\theta }_{0}\right)\right]}^{-1}\sqrt{n}D\left({\theta }_{0}\right)+{o}_{p}\left(1\right)$ , (21)

where ${o}_{p}\left(1\right)$ is a remainder term converging to 0 in probability; this can be re-expressed using the following equality, which holds in law,

$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right){=}^{d}2{\left[I\left({\theta }_{0}\right)\right]}^{-1}\sqrt{n}{\int }_{-\infty }^{\infty }\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{{\theta }_{0}}\left(x\right)\right]}^{\frac{1}{2}}\right)\frac{\frac{\partial {f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}\text{d}x$ . (22)

Using the argument given by Beran  (page 451) allows us to establish the equality in probability,

$\begin{array}{l}\sqrt{n}{\int }_{-\infty }^{\infty }\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{{\theta }_{0}}\left(x\right)\right]}^{\frac{1}{2}}\right)\frac{\frac{\partial {f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}\text{d}x\\ =\sqrt{n}{\int }_{-\infty }^{\infty }\frac{1}{2}\frac{\left({f}_{n}\left(x\right)-{f}_{{\theta }_{0}}\left(x\right)\right)}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}\frac{\frac{\partial {f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }}{\sqrt{{f}_{{\theta }_{0}}\left(x\right)}}\text{d}x+{o}_{p}\left(1\right)\end{array}$ . (23)

This can be viewed as a form of the generalized delta method establishing the equality of the left-hand and right-hand sides of Equation (23).

Consequently, Equation (22) can be re-expressed, using the equality in distribution, as

$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right){=}^{d}{\left[I\left({\theta }_{0}\right)\right]}^{-1}\sqrt{n}{\int }_{-\infty }^{\infty }\left({f}_{n}\left(x\right)-{f}_{{\theta }_{0}}\left(x\right)\right)\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x$ . (24)

Note that

${\int }_{-\infty }^{\infty }\left({f}_{n}\left(x\right)-{f}_{{\theta }_{0}}\left(x\right)\right)\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x={\int }_{-\infty }^{\infty }{f}_{n}\left(x\right)\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x$ (25)

as, in general, ${\int }_{-\infty }^{\infty }{f}_{{\theta }_{0}}\left(x\right)\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x=0$ . Furthermore,

$\sqrt{n}{\int }_{-\infty }^{\infty }{f}_{n}\left(x\right)\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}x=\sqrt{n}{\int }_{-\infty }^{\infty }\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left(x\right)}{\partial \theta }\text{d}{F}_{n}\left(x\right)+{o}_{p}\left(1\right)$ , (26)

where ${F}_{n}\left(x\right)$ is the empirical distribution function. This allows the following representation:

$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right){=}^{d}{\left[I\left({\theta }_{0}\right)\right]}^{-1}\frac{1}{\sqrt{n}}{\sum }_{i=1}^{n}\frac{\partial \mathrm{log}{f}_{{\theta }_{0}}\left({x}_{i}\right)}{\partial \theta }$ . (27)

Therefore, $\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)\stackrel{L}{\to }N\left(0,I{\left({\theta }_{0}\right)}^{-1}\right)$ .

For the simulated version, i.e., version S, we can only obtain consistency results, which are given in the next section. As for asymptotic normality, we cannot conclude for the time being whether the conditions of Theorem 7.1 of Newey and McFadden (pages 2185-2186) for asymptotic normality of estimators obtained from a non-smooth objective function can be met. We hope to have more results on this issue in the future and will present them in a subsequent paper. This does not prevent SMHD estimation from being used as an alternative to methods of moments when the primary interest is point estimation.

2.2. Asymptotic Properties of the SMHD Estimators

For version S, we minimize

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{\left({\left[{f}_{n}\left(x\right)\right]}^{\frac{1}{2}}-{\left[{f}_{\theta }^{S}\left(x\right)\right]}^{\frac{1}{2}}\right)}^{2}\text{d}x$ . (28)

We recommend using the same seed across different values of $\theta$ if possible, and a simulated sample size $U=\tau n$ so that $U\to \infty$ at the same rate as $n\to \infty$ . These recommendations conform with other simulated methods of inference, such as the method of simulated moments discussed by Davidson and MacKinnon  (page 284) or the simulated quasi-likelihood found in Smith  (page S68). Using the same seed is not necessary for consistency, but it keeps the value of ${Q}_{n}\left(\theta \right)$ fixed for each $\theta$ across repeated evaluations; otherwise, the values would differ slightly from one evaluation to the next, since simulations are needed to evaluate ${Q}_{n}\left(\theta \right)$ . With the same seed, ${Q}_{n}\left(\theta \right)$ behaves like a non-random function of $\theta$ .
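As an illustration, the evaluation of ${Q}_{n}\left(\theta \right)$ in Equation (28) with a fixed seed can be sketched in Python as follows. The normal location-scale model, the Gaussian kernel, the integration grid, and the rule-of-thumb bandwidth are all illustrative assumptions, standing in for the GNL model and the rectangular kernel used later in the paper.

```python
import numpy as np

def kde(sample, grid, bandwidth):
    """Gaussian kernel density estimate of `sample` evaluated on `grid`."""
    z = (grid[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2 * np.pi))

def Q_n(theta, data, grid, tau=10, seed=12345):
    """Simulated squared Hellinger distance of Equation (28).

    The same seed is reused for every theta (common random numbers),
    so Q_n behaves like a non-random function of theta."""
    rng = np.random.default_rng(seed)              # fixed seed across theta values
    U = tau * len(data)                            # simulated sample size U = tau * n
    mu, sigma = theta
    sim = rng.normal(mu, sigma, size=U)            # toy normal model, a stand-in for the GNL
    h = 1.06 * np.std(data) * len(data) ** (-0.2)  # rule-of-thumb bandwidth from the data
    f_n = kde(data, grid, h)                       # density estimate from the data
    f_s = kde(sim, grid, h)                        # density estimate from the simulated sample
    dx = grid[1] - grid[0]
    return np.sum((np.sqrt(f_n) - np.sqrt(f_s)) ** 2) * dx

rng = np.random.default_rng(0)
data = rng.normal(1.0, 2.0, size=500)
grid = np.linspace(-10.0, 12.0, 400)
q_true = Q_n((1.0, 2.0), data, grid)  # distance near the true parameter...
q_far = Q_n((5.0, 2.0), data, grid)   # ...is smaller than far from it
```

With the seed fixed, repeated evaluations at the same $\theta$ agree exactly, which keeps a numerical optimizer stable.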

Let

$‖{G}_{n}\left(\theta \right)‖={\left({Q}_{n}\left(\theta \right)\right)}^{\frac{1}{2}}$ , (29)

with ${Q}_{n}\left(\theta \right)$ as defined by Equation (28). The following theorem, which is essentially Theorem 3.1 of Pakes and Pollard  (page 1038) with the added assumption of a compact parameter space, can be used to establish the consistency of SMHD estimators. Pakes and Pollard  (page 1038) prove the result using the Euclidean norm, but their proof remains valid with the norm defined by Equation (29) and discussed in Section 2.1. It is implicitly assumed that the parametric family is identifiable, i.e., if ${\theta }_{1}\ne {\theta }_{2}$ , then ${f}_{{\theta }_{1}}\left(x\right)\ne {f}_{{\theta }_{2}}\left(x\right)$ except on a set of measure zero.

Theorem 2

Suppose

1) The parameter space $\Omega$ is compact, and ${\theta }_{0}\in \Omega$ .

2) ${\stackrel{^}{\theta }}^{S}$ minimizes $‖{G}_{n}\left(\theta \right)‖$ or equivalently ${Q}_{n}\left(\theta \right)$ .

3) For each $\delta >0$ , ${\mathrm{sup}}_{‖\theta -{\theta }_{0}‖>\delta }{‖{G}_{n}\left(\theta \right)‖}^{-1}$ is bounded in probability, where $‖\text{ }\cdot \text{ }‖$ denotes the norm being used.

Then ${\stackrel{^}{\theta }}^{S}\stackrel{p}{\to }{\theta }_{0}$ .

Clearly, we have consistency for ${\stackrel{^}{\theta }}^{S}$ as $0\le {Q}_{n}\left(\theta \right)\le 2$ and ${Q}_{n}\left(\theta \right)\stackrel{p}{\to }0$ only at $\theta ={\theta }_{0}$ .
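The bound is immediate from expanding the square: since both ${f}_{n}$ and ${f}_{\theta }^{S}$ integrate to one,

${Q}_{n}\left(\theta \right)={\int }_{-\infty }^{\infty }{f}_{n}\left(x\right)\text{d}x+{\int }_{-\infty }^{\infty }{f}_{\theta }^{S}\left(x\right)\text{d}x-2{\int }_{-\infty }^{\infty }\sqrt{{f}_{n}\left(x\right){f}_{\theta }^{S}\left(x\right)}\text{d}x=2-2{\int }_{-\infty }^{\infty }\sqrt{{f}_{n}\left(x\right){f}_{\theta }^{S}\left(x\right)}\text{d}x\le 2$ ,

with ${Q}_{n}\left(\theta \right)$ approaching zero exactly when the two densities coincide almost everywhere.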

For the time being, we cannot assert that ${\stackrel{^}{\theta }}^{S}$ is asymptotically multivariate normal, as we cannot verify the regularity conditions of Theorem 7.1 given by Newey and McFadden  (pages 2185-2186) for estimators obtained from a non-smooth objective function. For the simulated unweighted minimum chi-square method, Pakes and Pollard  (page 1049) find the asymptotic covariance matrix to be $\left(1+\frac{1}{\tau }\right)V$ , with $V$ being the asymptotic covariance matrix of the estimators obtained without simulations. Since other simulated methods typically yield the same type of asymptotic covariance formula, we recommend choosing $\tau \ge 10$ to minimize the loss of efficiency due to simulations; with $\tau =10$ , the inflation factor is only $1.1$ . The matrix $\left(1+\frac{1}{\tau }\right)V$ , where $V=I{\left({\theta }_{0}\right)}^{-1}$ , can be viewed as a benchmark for the approximate asymptotic covariance matrix of ${\stackrel{^}{\theta }}^{S}$ , should asymptotic normality indeed hold. In the absence of a rigorous proof, we rely on simulations to evaluate the efficiency of ${\stackrel{^}{\theta }}^{S}$ , just as for version D when the support of the distribution is not compact. Further asymptotic results will be presented in a subsequent paper.

Since we have estimates of the densities, it is natural to estimate the Fisher information matrix as well. If the model density has a closed-form expression, the following matrix

$\frac{1}{n}{\sum }_{i=1}^{n}\left(\frac{\frac{\partial {f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{\partial \theta }}{{f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}\right){\left(\frac{\frac{\partial {f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{\partial \theta }}{{f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}\right)}^{\prime }$ (30)

can be used to estimate $I\left({\theta }_{0}\right)$ . If ${f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)$ is not available in closed form, we can use its kernel density estimate instead and, following a method given by Pakes and Pollard  (page 1043), use

$\frac{\Delta {f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{\Delta {\theta }_{j}}=\frac{{f}_{{\stackrel{^}{\theta }}^{S}+{ϵ}_{n}{e}_{j}}\left({x}_{i}\right)-{f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{{ϵ}_{n}}$ , (31)

with ${ϵ}_{n}\to 0$ at the rate ${ϵ}_{n}=o\left({n}^{-\delta }\right)$ , where $\delta \le \frac{1}{2}$ , to estimate $\frac{\partial {f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{\partial {\theta }_{j}},j=1,\cdots ,m$ , assuming ${f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)>0,i=1,\cdots ,n$ . The vector ${e}_{j}$ is a unit vector with 1 in its j-th place and 0 elsewhere. Replacing ${f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)$ and $\frac{\partial {f}_{{\stackrel{^}{\theta }}^{S}}\left({x}_{i}\right)}{\partial {\theta }_{j}}$

by these estimates yields an estimator of the information matrix. Such an estimate is useful, as the information matrix is related to the Cramér-Rao lower bound.
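A minimal Python sketch of Equations (30) and (31) follows. The closed-form normal density here is a hypothetical stand-in for a model density (or its kernel density estimate) evaluated at ${\stackrel{^}{\theta }}^{S}$ , and the function names are illustrative assumptions.

```python
import numpy as np

def fisher_info_estimate(density, theta_hat, xs, eps=1e-5):
    """Estimate I(theta_0) by the average score outer product of Equation (30),
    with forward differences (Equation (31)) replacing analytic derivatives."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    m = len(theta_hat)
    f0 = density(theta_hat, xs)                 # f_{theta_hat}(x_i), assumed positive
    scores = np.empty((len(xs), m))
    for j in range(m):
        e_j = np.zeros(m)
        e_j[j] = 1.0                            # unit vector in the j-th direction
        f_eps = density(theta_hat + eps * e_j, xs)
        scores[:, j] = (f_eps - f0) / (eps * f0)    # (df/dtheta_j) / f
    return scores.T @ scores / len(xs)          # average of the outer products

def normal_density(theta, x):
    """Toy closed-form density standing in for a simulated model density."""
    mu, sigma = theta
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

xs = np.random.default_rng(1).normal(0.0, 1.0, size=20000)
I_hat = fisher_info_estimate(normal_density, [0.0, 1.0], xs)
```

For $N\left(\mu ,{\sigma }^{2}\right)$ the information matrix is $\text{diag}\left(1/{\sigma }^{2},2/{\sigma }^{2}\right)$ , and the estimate above should recover it up to Monte Carlo and finite-difference error.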

3. Limited Simulation Study

In this study, we shall compare the efficiencies of the SMHD estimators with those of the moment estimators for the case $\alpha =\beta$ , i.e., the GNL distribution with only four parameters. Reed  (page 477) has given expressions for the moment estimators in terms of the empirical cumulants ${k}_{r},r=2,\cdots ,6$ , with the sample mean $\stackrel{¯}{X}={k}_{1}$ . They can be obtained from the empirical central moments ${m}_{r}=\frac{1}{n}{\sum }_{i=1}^{n}{\left({X}_{i}-\stackrel{¯}{X}\right)}^{r},r=2,\cdots ,6$ , as they satisfy the same relationships that hold between the model cumulants ${\kappa }_{r}$ and the model central moments. Let ${\mu }_{r}=E{\left(X-\mu \right)}^{r},r\ge 2$ , and $\mu =E\left(X\right)$ . The following relationships can be found in Stuart and Ord  (pages 90-91):

${\kappa }_{1}=\mu$ ,

${\kappa }_{2}={\mu }_{2}$ ,

${\kappa }_{3}={\mu }_{3}$ ,

${\kappa }_{4}={\mu }_{4}-3{\mu }_{2}^{2}$ ,

${\kappa }_{5}={\mu }_{5}-10{\mu }_{3}{\mu }_{2}$ ,

${\kappa }_{6}={\mu }_{6}-15{\mu }_{4}{\mu }_{2}-10{\mu }_{3}^{2}+30{\mu }_{2}^{3}$ . (32)
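The relations in Equation (32) are easy to code and to sanity-check against a distribution whose cumulants are known; for the normal distribution, every cumulant beyond the second vanishes. A small Python sketch (the function name is an illustrative assumption):

```python
def cumulants_from_central_moments(mu1, central):
    """Cumulants kappa_1..kappa_6 from the mean and the central moments
    mu_2..mu_6, via the relations in Equation (32)."""
    mu2, mu3, mu4, mu5, mu6 = central
    k1 = mu1
    k2 = mu2
    k3 = mu3
    k4 = mu4 - 3 * mu2**2
    k5 = mu5 - 10 * mu3 * mu2
    k6 = mu6 - 15 * mu4 * mu2 - 10 * mu3**2 + 30 * mu2**3
    return k1, k2, k3, k4, k5, k6

# Exact central moments of N(m, s^2): mu_2 = s^2, mu_4 = 3 s^4, mu_6 = 15 s^6,
# odd central moments zero; all cumulants beyond the second must come out zero.
s2 = 4.0
ks = cumulants_from_central_moments(1.5, (s2, 0.0, 3 * s2**2, 0.0, 15 * s2**3))
```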

Explicitly, the moment estimators are

$\stackrel{˜}{\alpha }=\stackrel{˜}{\beta }={\left(20\frac{{k}_{4}}{{k}_{6}}\right)}^{\frac{1}{2}}$ , $\stackrel{˜}{\rho }=\frac{100}{3}\frac{{k}_{4}^{3}}{{k}_{6}^{2}}$ , $\stackrel{˜}{\mu }=\frac{{k}_{1}}{\stackrel{˜}{\rho }}$ and ${\stackrel{˜}{\sigma }}^{2}=\frac{{k}_{2}}{\stackrel{˜}{\rho }}-\frac{2}{{\stackrel{˜}{\alpha }}^{2}}$ . (33)
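A direct Python transcription of Equation (33) (the function name is an illustrative assumption); as the next paragraph notes, these estimators can fail for real data, e.g. a negative ${k}_{4}/{k}_{6}$ leaves $\stackrel{˜}{\alpha }$ undefined and ${\stackrel{˜}{\sigma }}^{2}$ can come out negative.

```python
def gnl_moment_estimates(k1, k2, k4, k6):
    """Moment estimators of Equation (33) for the symmetric GNL model
    (alpha = beta), from the empirical cumulants k_1, k_2, k_4, k_6."""
    alpha = (20.0 * k4 / k6) ** 0.5     # undefined (complex) if k4 / k6 < 0
    rho = (100.0 / 3.0) * k4**3 / k6**2
    mu = k1 / rho
    sigma2 = k2 / rho - 2.0 / alpha**2  # can be negative for real data
    return mu, sigma2, alpha, rho
```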

Reed  (page 477) also notes that method of moments (MM) estimators can take on negative values for parameters that must be positive, and it is not easy to impose constraints in method of moments estimation. The use of the EM algorithm also does not appear to be straightforward for the GNL distribution. SMHD estimation can handle constraints by minimizing the objective function given by Equation (28) subject to those constraints.

A limited simulation study for the symmetric GNL distribution with four parameters, with $\mu =0$ , $\sigma =0.008$ , $0.1\le \rho \le 5.0$ , and $30\le \alpha \le 40$ , has been carried out; the relevant results are summarized in Table 1. These parameter ranges conform with the empirical study conducted by Reed  (page 481) using stock data. The simulated sample size for the data is $n=1000$ and the simulated sample size drawn from the model for SMHD estimation is $U=10000$ , hence $\tau =10$ . Obtaining the estimators for $M=50$ samples takes about twenty minutes on a laptop computer; as we only have access to laptop computers, we fix $M=50$ samples for each combination of parameters, which limits the scale of the study.

We noticed that the method of moments estimator for ${\sigma }^{2}$ is often negative; we set it equal to zero whenever this occurs, and the comparisons of efficiencies use this version of the method of moments estimator. The density estimate is based on the built-in density function of R, with a rectangular kernel and the default bandwidth based on the normal distribution. The overall asymptotic relative efficiency (ARE) used for comparisons is

$ARE=\frac{MSE\left({\stackrel{^}{\mu }}^{S}\right)+MSE\left({\stackrel{^}{\sigma }}^{S}\right)+MSE\left({\stackrel{^}{\alpha }}^{S}\right)+MSE\left({\stackrel{^}{\rho }}^{S}\right)}{MSE\left(\stackrel{˜}{\mu }\right)+MSE\left(\stackrel{˜}{\sigma }\right)+MSE\left(\stackrel{˜}{\alpha }\right)+MSE\left(\stackrel{˜}{\rho }\right)}$ , (34)

where $MSE\left(\stackrel{^}{\theta }\right)$ is the usual mean square error of the estimator $\stackrel{^}{\theta }$ , estimated using $M=50$ samples. The values of the estimated AREs for different sets of parameters are displayed in Table 1.
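Equation (34) amounts to a ratio of summed Monte Carlo mean square errors; a minimal Python sketch (the helper names are illustrative assumptions):

```python
def mse(estimates, true_value):
    """Monte Carlo mean square error over M replicate estimates."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

def overall_are(mse_smhd, mse_mm):
    """Overall ARE of Equation (34): summed SMHD mean square errors over
    summed MM mean square errors; values below 1 favor the SMHD estimators."""
    return sum(mse_smhd) / sum(mse_mm)
```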

Despite its limited scope, the study suggests that SMHD estimators perform much better overall than method of moments estimators for the parameter ranges used in finance. The method of moments estimators tend to perform relatively better for small values of $\rho$ , but deteriorate rapidly as $\rho$ grows larger, with $ARE\to 0$ ; this also holds for parameter values we tested outside the ranges indicated above, which are not shown in Table 1. Table 1 is used for illustration and summarizes the key findings of the study. Also, in the ranges considered, the method of moments estimator for $\mu$ tends to perform better than its SMHD counterpart, but the overall efficiency of the MM estimators still falls behind that of the SMHD estimators in general, as shown in Table 1. Clearly, more work needs to be done both numerically and theoretically, but the study shows the potential efficiency of SMHD methods.

Table 1. Asymptotic relative efficiencies to compare SMHD estimators with MM estimators with $\sigma =0.008$ .

Note: Tabulated values are estimates of the asymptotic relative efficiencies of the SMHD estimators versus the MM estimators.

Individual ratios of mean square errors for some sets of parameters

$\theta ={\left(\mu =0,\sigma =0.008,\alpha =30,\rho =0.1\right)}^{\prime }$

$\frac{MSE\left({\stackrel{^}{\mu }}^{S}\right)}{MSE\left(\stackrel{˜}{\mu }\right)}=22.9437$ , $\frac{MSE\left({\stackrel{^}{\sigma }}^{S}\right)}{MSE\left(\stackrel{˜}{\sigma }\right)}=0.8584$ , $\frac{MSE\left({\stackrel{^}{\alpha }}^{S}\right)}{MSE\left(\stackrel{˜}{\alpha }\right)}=0.5918$ , $\frac{MSE\left({\stackrel{^}{\rho }}^{S}\right)}{MSE\left(\stackrel{˜}{\rho }\right)}=0.0222$ , $ARE=0.5915$

$\theta ={\left(\mu =0,\sigma =0.008,\alpha =34,\rho =0.3\right)}^{\prime }$

$\frac{MSE\left({\stackrel{^}{\mu }}^{S}\right)}{MSE\left(\stackrel{˜}{\mu }\right)}=925.3334$ , $\frac{MSE\left({\stackrel{^}{\sigma }}^{S}\right)}{MSE\left(\stackrel{˜}{\sigma }\right)}=0.6064$ , $\frac{MSE\left({\stackrel{^}{\alpha }}^{S}\right)}{MSE\left(\stackrel{˜}{\alpha }\right)}=0.3240$ , $\frac{MSE\left({\stackrel{^}{\rho }}^{S}\right)}{MSE\left(\stackrel{˜}{\rho }\right)}=0.0151$ , $ARE=0.3215$

$\theta ={\left(\mu =0,\sigma =0.008,\alpha =40,\rho =1\right)}^{\prime }$

$\frac{MSE\left({\stackrel{^}{\mu }}^{S}\right)}{MSE\left(\stackrel{˜}{\mu }\right)}=1.3739$ , $\frac{MSE\left({\stackrel{^}{\sigma }}^{S}\right)}{MSE\left(\stackrel{˜}{\sigma }\right)}=0.0503$ , $\frac{MSE\left({\stackrel{^}{\alpha }}^{S}\right)}{MSE\left(\stackrel{˜}{\alpha }\right)}=0.0004$ , $\frac{MSE\left({\stackrel{^}{\rho }}^{S}\right)}{MSE\left(\stackrel{˜}{\rho }\right)}=0.0000$ , $ARE=0.0000$

4. Conclusion

Since SMHD estimators remain consistent under minimal regularity conditions, and despite the lack of results on asymptotic normality, the proposed method appears to be useful for fitting actuarial and financial models based on continuous infinitely divisible distributions arising from Lévy processes, or on continuous mixture distributions constructed using mixing operations, whenever it is not difficult to simulate from these distributions but their density functions have no closed-form expressions. For many models, the proposed method appears to be more efficient than traditional methods such as the method of moments. The proposed method is not difficult to implement, yet simulation-based methods do not seem to receive much attention in finance and actuarial science. They might be considered as additional robust statistical techniques for analyzing empirical data, especially when point estimation is the main interest.

Acknowledgements

The helpful comments of an anonymous referee and the kind support of the OJS staff, which led to an improvement in the presentation of the paper, are gratefully acknowledged.

Cite this paper
Luong, A. and Bilodeau, C. (2017) Simulated Minimum Hellinger Distance Estimation for Some Continuous Financial and Actuarial Models. Open Journal of Statistics, 7, 743-759. doi: 10.4236/ojs.2017.74052.
References
   McNeil, A.J., Frey, R. and Embrechts, P. (2005) Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton.

   McLachlan, G.J. and Krishnan, T. (2008) The EM Algorithm and Extensions. 2nd Edition, Wiley, Hoboken.
https://doi.org/10.1002/9780470191613

   Küchler, U. and Tappe, S. (2008) Bilateral Gamma Distributions and Processes in Financial Mathematics. Stochastic Processes and their Applications, 118, 261-283.
https://doi.org/10.1016/j.spa.2007.04.006

   Küchler, U. and Tappe, S. (2013) Tempered Stable Distributions and Processes. Stochastic Processes and Their Applications, 123, 4256-4293.
https://doi.org/10.1016/j.spa.2013.06.012

   Feuerverger, A. and McDunnough, P. (1981) On the Efficiency of Empirical Characteristic Function Procedures. Journal of the Royal Statistical Society, Series B, 43, 20-27.

   Garcia, R., Renault, E. and Veredas, D. (2011) Estimation of Stable Distributions by Indirect Inference. Journal of Econometrics, 161, 325-337.
https://doi.org/10.1016/j.jeconom.2010.12.007

   Beran, R. (1977) Minimum Hellinger Distance Estimates for Parametric Models. The Annals of Statistics, 5, 445-463.
https://doi.org/10.1214/aos/1176343842

   Tamura, R.N. and Boos, D.D. (1986) Minimum Hellinger Distance Estimation for Multivariate Location and Covariance. Journal of the American Statistical Association, 81, 223-229.
https://doi.org/10.1080/01621459.1986.10478264

   Davidson, R. and MacKinnon, J.G. (2004) Econometric Theory and Methods. Oxford University Press, New York.

   Basu, A., Shioya, H. and Park, C. (2011) Statistical Inference: The Minimum Distance Approach. Chapman and Hall, Boca Raton.

   Reed, W.J. (2007) Brownian-Laplace Motion and Its Use in Financial Modelling. Communications in Statistics—Theory and Methods, 36, 473-484.
https://doi.org/10.1080/03610920601001766

   Schoutens, W. (2003) Lévy Processes in Finance: Pricing Financial Derivatives. Wiley, New York.
https://doi.org/10.1002/0470870230

   Dufresne, F. and Gerber, H.U. (1993) The Probability of Ruin for the Inverse Gaussian and Related Processes. Insurance: Mathematics and Economics, 12, 9-22.
https://doi.org/10.1016/0167-6687(93)90995-2

   Luong, A. (2016) Cramér-Von Mises Distance Estimation for Some Positive Infinitely Divisible Parametric Families with Actuarial Applications. Scandinavian Actuarial Journal, 2016, 530-549.
https://doi.org/10.1080/03461238.2014.977817

   Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss Models: From Data to Decisions. 4th Edition, Wiley, Hoboken.

   Hogg, R.V., McKean, J.W. and Craig, A.T. (2013) Introduction to Mathematical Statistics. 7th Edition, Pearson, Boston.

   Maronna, R.A., Martin, R.D. and Yohai, V.J. (2006) Robust Statistics: Theory and Methods. Wiley, Chichester.
https://doi.org/10.1002/0470010940

   Lindsay, B.G. (1994) Efficiency versus Robustness: The Case for Minimum Hellinger Distance and Related Methods. The Annals of Statistics, 22, 1081-1114.
https://doi.org/10.1214/aos/1176325512

   Lehmann, E.L. (1999) Elements of Large Sample Theory. Springer, New York.
https://doi.org/10.1007/b98855

   Rizzo, M.L. (2008) Statistical Computing with R. Chapman and Hall, Boca Raton.

   Toma, A. (2008) Minimum Hellinger Distance Estimators from the Johnson System. Journal of Statistical Planning and Inference, 138, 803-816.
https://doi.org/10.1016/j.jspi.2007.05.033

   Scott, D.W. (2014) Multivariate Density Estimation: Theory, Practice and Visualization. 2nd Edition, Wiley, Hoboken.

   Newey, W.K. and McFadden, D. (1994) Large Sample Estimation and Hypothesis Testing. In: Engle, R.F. and McFadden, D.L., Eds., Handbook of Econometrics, Volume 4, North Holland, Amsterdam, 2111-2245.

   Smith Jr, A.A. (1993) Estimating Nonlinear Time-Series Models Using Simulated Vector Autoregressions. Journal of Applied Econometrics, 8, S63-S84.
https://doi.org/10.1002/jae.3950080506

   Pakes, A. and Pollard, D. (1989) Simulation and the Asymptotics of Optimization Estimators. Econometrica, 57, 1027-1057.
https://doi.org/10.2307/1913622

   Stuart, A. and Ord, K. (1994) Kendall’s Advanced Theory of Statistics, Volume 1: Distribution Theory. 6th Edition, Edward Arnold, London.
