Option Pricing and Hedging for Discrete Time Regime-Switching Models


1. Introduction

In complete, frictionless capital markets with no transaction costs and where the underlying securities follow geometric Brownian motions, the Black-Scholes framework [1] provides an elegant and tractable solution for pricing and hedging derivative securities, typically vanilla calls and puts. Unfortunately, actual financial markets are far more complex, and empirical testing of the Black-Scholes model has highlighted its many shortcomings. Indeed, it is well documented [2] [3] [4] that the observed properties of financial time series are not consistent with its underlying assumptions. Time-varying volatility, the presence of higher-order moments and serial correlation are now well-established stylized facts of asset returns. Furthermore, [5] [6] [7] and [8] demonstrate that unrealistic assumptions about continuous-time hedging can lead to large hedging errors.

Over the past decade, several studies have proposed discrete time hedging models based on different objective functions (see for example [9] , [10] and [11] ). The idea of dynamic hedging, as detailed in [12] and [13] , is to find a self-financing optimal investment strategy that replicates a terminal payoff. In this paper, we build on the previous work of [14] [15] [16] [17] and [18] to derive a discrete time hedging strategy minimizing the expected risk of future contingent liabilities when asset returns follow a regime-switching random walk. Our hedging methodology is therefore robust to serially-correlated and non-Gaussian returns. Previous attempts to incorporate conditional returns in option pricing include GARCH models (see [19] for a complete review), stochastic volatility models [20] [21] [22] and jump models ( [23] and [24] to cite a few). These approaches have generally been successful at reproducing market prices; however, none of them offers an effective, let alone optimal, hedging strategy.

Regime-switching models, popularized by [25] and [26] , have many characteristics that lend themselves nicely to financial time series modeling. They allow for time-varying conditional moments, a feature that is consistent with the observed non-Gaussian properties of financial returns while being easy to interpret. Regime-switching models have previously been used by [27] to capture interest rate dynamics and by [28] and [29] to model volatility. However, very few papers have attempted to apply regime-switching models to returns for option pricing and hedging. The aim of this paper is to demonstrate how to implement optimal hedging strategies and obtain derivatives prices under this class of models. We review the EM algorithm [30] , which provides an efficient estimation procedure, and propose a novel goodness-of-fit test for selecting the optimal number of regimes based on the work of [31] .

To empirically assess the relevance of our approach, we compare the performance of various hedging methodologies in the context of harvesting the negative market volatility risk premium. The implied volatility of equity index options is known to include a significant negative time-varying premium [32] . In economic terms, there is an insurance cost to holding assets that co-vary positively with volatility (such as options), because the latter tends to increase in unfavorable market states [33] [34] . In their seminal paper, [35] initiate short S&P 500 option positions and hedge the underlying risk until expiration. They empirically show that gains from such a strategy are proportional to the volatility risk premium. However, they derive daily optimal hedging ratios from the classical Black-Scholes approach adjusted for a GARCH conditional volatility. Even though their conclusion appears robust to such mis-specification, the impact on the strategy is unclear. In fact, investors should optimally account for time-varying volatility and discrete-time re-balancing when deriving hedging ratios. The proposed approach achieves this optimality under a quadratic (thus symmetric) risk criterion for regime-switching models.

We first present empirical pricing results confirming the systematic downward bias in model prices relative to market prices. Secondly, we apply the proposed volatility-timing strategy. Intuitively, we expect the regime-switching methodology to under(over)-hedge relative to the Black-Scholes setting in low (high) volatility states. This is confirmed empirically, as the proposed protocol decreases the annualized volatility of the strategy from 3.47% to 2.58% and increases the annualized expected return from 1.37% to 1.97% over the classical Black-Scholes delta-hedging approach.

The rest of the paper is structured as follows. Section 2 introduces the model by describing the filtering procedure to predict regimes, the estimation procedure and the goodness-of-fit test, together with an illustration for the S&P 500 index daily returns. The implementation of the estimation is presented in Appendix A and the implementation of the goodness-of-fit test is presented in Appendix B. Then, in Section 3, we state the optimal dynamic discrete time hedging model for regime-switching processes and finally, in Section 4, we implement the proposed algorithm for European vanilla options written on the S&P 500 in the context of harvesting the volatility risk premium.

2. Regime-Switching Geometric Random Walk Models

Regime-switching models are quite intuitive. At period $t$ , given that the previous regime ${\tau}_{t-1}$ has value $i$ , the regime ${\tau}_{t}=j$ is chosen with probability ${Q}_{ij}$ , and given ${\tau}_{t}=j$ , the periodic return ${R}_{t}$ has cumulative distribution ${F}_{j}$ . Henceforth, ${F}_{j}$ is assumed Gaussian with mean ${\mu}_{j}$ and standard deviation ${\sigma}_{j}$ . Following [25] , the non-observable regimes ${\left({\tau}_{t}\right)}_{t\ge 1}$ , with values in $\left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ , are modeled by a Markov chain with transition matrix $Q$ , and stationary distribution $\nu $ . A (d-dimensional) discrete time process ${\left({R}_{t}\right)}_{t\ge 1}$ forms a regime-switching random walk if, given the regimes ${\tau}_{1}={i}_{1},\cdots ,{\tau}_{n}={i}_{n}$ , the random vectors ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{n}$ are independent, with respective densities ${f}_{{i}_{1}}\mathrm{,}\cdots \mathrm{,}{f}_{{i}_{n}}$ (associated to cumulative distributions ${F}_{{i}_{1}}\mathrm{,}\cdots \mathrm{,}{F}_{{i}_{n}}$ ). As a result, the stationary distribution of ${R}_{t}$ is a mixture, with density $f\left(x\right)={\displaystyle {\sum}_{i=1}^{l}}{\nu}_{i}{f}_{i}\left(x\right)$ .

The regime-switching geometric random walk model ${\left({S}_{t}\right)}_{t\ge 1}$ is then a process such that the associated (d-dimensional) log-returns process ${R}_{t}=\mathrm{log}\left({S}_{t}/{S}_{t-1}\right)$ forms a regime-switching random walk. In general, ${\left({S}_{t}\right)}_{t\ge 1}$ is not a Markov process, unless the regimes are serially independent. However, the process ${\left({S}_{t}\mathrm{,}{\tau}_{t}\right)}_{t\ge 0}$ is Markovian. These models are particular cases of Hidden Markov Models (HMM).
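To make these dynamics concrete, the model can be simulated in a few lines. The sketch below uses an illustrative two-regime specification: the transition matrix `Q`, the means `mu` and the volatilities `sigma` are made-up values, not estimates from this paper.

```python
import numpy as np

# Minimal sketch: simulate a univariate regime-switching geometric random walk.
# Q, mu and sigma are illustrative two-regime values, not fitted parameters.
rng = np.random.default_rng(0)

Q = np.array([[0.98, 0.02],
              [0.05, 0.95]])          # transition probabilities Q_ij
mu = np.array([0.0005, -0.001])       # per-period mean of each Gaussian regime
sigma = np.array([0.007, 0.02])       # per-period volatility of each regime

def simulate(n, s0=100.0, tau0=0):
    """Draw regimes from the Markov chain, then R_t ~ N(mu_j, sigma_j) given tau_t = j."""
    tau = np.empty(n, dtype=int)
    r = np.empty(n)
    prev = tau0
    for t in range(n):
        prev = rng.choice(len(Q), p=Q[prev])   # tau_t drawn given tau_{t-1}
        tau[t] = prev
        r[t] = rng.normal(mu[prev], sigma[prev])
    s = s0 * np.exp(np.cumsum(r))              # S_t = S_{t-1} exp(R_t)
    return s, r, tau

s, r, tau = simulate(1000)
```

Note that, given the regime path, the returns are independent Gaussians, so all serial dependence in `r` comes from the persistence of the chain.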

The law of financial time series can be modeled adequately by a regime-switching model with Gaussian densities ${f}_{1}\mathrm{,}\cdots \mathrm{,}{f}_{l}$ , provided the number of regimes is large enough. Indeed, the serial dependence in regimes propagates to returns and captures the observed autocorrelation in financial time series. Also, the conditional distribution is time-varying, leading to conditional volatility, as well as conditional asymmetry and kurtosis. Finally, the model nests the Black-Scholes framework when the number of regimes is forced to one. Indeed, in such a case, the log-returns are independent and identically distributed with Gaussian density.

We now state results of practical interest pertaining to regime-switching models, including a regime prediction algorithm, an estimation algorithm and a goodness-of-fit test to determine the optimal number of regimes.

2.1. Regime Prediction

Since the regimes are not observable, we have to find a way to predict them. This will be of utmost importance for pricing and hedging derivatives.

In many applications, one has to find the most likely regime at period $t$ , given the values of the returns ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t}$ . This is known as a filtering problem. More precisely, one needs to compute ${\eta}_{t}\left(i\right)=P\left({\tau}_{t}=i|{R}_{1}={x}_{1},\cdots ,{R}_{t}={x}_{t}\right)$ . It is remarkable that for the present model, one can compute exactly this conditional distribution, given a starting distribution ${\eta}_{0}$ .

2.1.1. Filtering Algorithm

・ Choose an a priori probability distribution ${\eta}_{0}$ for the regimes. Equivalently, one can choose a non-negative vector ${q}_{0}$ and set ${\eta}_{0}\left(i\right)={q}_{0}\left(i\right)/{Z}_{0}$ , where ${Z}_{0}={\displaystyle {\sum}_{j=1}^{l}}\text{\hspace{0.05em}}{q}_{0}\left(j\right)$ . The choice of ${q}_{0}$ or ${\eta}_{0}$ is not critical, since its impact on predictions decays in time and it has virtually no impact on the terminal regime probabilities for any reasonable time series length. For simplicity, we assume a uniform distribution, that is ${q}_{0}\equiv 1$ .

・ For any $t\ge 1$ , once ${R}_{t}={x}_{t}$ is observed, compute, for every $i=1,\cdots ,l$ ,

${q}_{t}\left(i\right)={f}_{i}\left({x}_{t}\right){\displaystyle \underset{j=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{q}_{t-1}\left(j\right){Q}_{ji},$ (1)

and

${\eta}_{t}\left(i\right)=\frac{{q}_{t}\left(i\right)}{{Z}_{t}},$ (2)

where ${Z}_{t}={\displaystyle {\sum}_{j=1}^{l}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{q}_{t}\left(j\right)$ .

Having computed the conditional probabilities, ${\eta}_{t}\left(i\right)$ , one can finally estimate ${\tau}_{t}$ by

${\stackrel{^}{\tau}}_{t}=\mathrm{arg}\underset{i}{\mathrm{max}}{\eta}_{t}\left(i\right),$ (3)

that is as the most probable regime.
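The recursion (1)-(3) can be sketched as follows. We apply the normalized form of Remark 2.1 at each step, which yields the same ${\eta}_{t}$ while avoiding numerical underflow; the two-regime parameters are again illustrative, not fitted values.

```python
import numpy as np

# Sketch of the filtering recursion (1)-(3) for Gaussian regime densities.
# Q, mu and sigma are illustrative two-regime values, not fitted parameters.
Q = np.array([[0.98, 0.02],
              [0.05, 0.95]])
mu = np.array([0.0005, -0.001])
sigma = np.array([0.007, 0.02])

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def filter_regimes(returns, q0=None):
    """Return eta_t(i) = P(tau_t = i | R_1, ..., R_t) for each t."""
    l = len(mu)
    q = np.ones(l) if q0 is None else np.asarray(q0, float)  # uniform prior q_0
    q = q / q.sum()
    etas = []
    for x in returns:
        q = gauss_pdf(x, mu, sigma) * (Q.T @ q)  # Eq. (1): f_i(x_t) sum_j q_{t-1}(j) Q_ji
        q = q / q.sum()                          # Eq. (2): normalize; renormalizing each
        etas.append(q.copy())                    # step gives the same eta_t (Remark 2.1)
    return np.array(etas)

eta = filter_regimes([0.001, -0.03, -0.025, 0.002])
tau_hat = eta.argmax(axis=1)                     # Eq. (3): most probable regime
```

With these parameters, a small positive return is attributed to the low-volatility regime, while the two large negative returns push the filter into the high-volatility regime, where it stays by persistence.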

Remark 2.1. Note that since ${q}_{t}\left(i\right)=E\left[\mathbb{I}\left({\tau}_{t}=i\right){\displaystyle {\prod}_{k=1}^{t}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{f}_{{\tau}_{k}}\left({x}_{k}\right)\right]$ , ${Z}_{t}$ is the joint density of $\left({R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t}\right)$ at $\left({x}_{1}\mathrm{,}\cdots \mathrm{,}{x}_{t}\right)$ . One can also write (1) only in terms of ${\eta}_{t-1}$ viz.

${\eta}_{t}\left(i\right)=\frac{{f}_{i}\left({x}_{t}\right)}{{Z}_{t|t-1}}{\displaystyle \underset{j=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t-1}\left(j\right){Q}_{ji},$ (4)

where ${Z}_{t|t-1}={\displaystyle {\sum}_{i=1}^{l}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{f}_{i}\left({x}_{t}\right){\displaystyle {\sum}_{j=1}^{l}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t-1}\left(j\right){Q}_{ji}$ is the conditional density of ${R}_{t}$ at ${x}_{t}$ , given ${R}_{t-1}={x}_{t-1},\cdots ,{R}_{1}={x}_{1}$ .

2.1.2. Conditional Distribution and Stationary Law

From Remark 2.1, the joint density of ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t}$ , ${f}_{\mathrm{1:}t}$ , can be expressed as ${Z}_{t}$ . Also, for any $t\ge 2$ , the conditional density of ${R}_{t}$ given ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t-1}$ , ${f}_{t\mathrm{:1}}$ , can be expressed as a mixture:

${f}_{t:1}\left({x}_{t}|{x}_{1},\cdots ,{x}_{t-1}\right)={Z}_{t|t-1}={\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{f}_{i}\left({x}_{t}\right){\displaystyle \underset{j=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t-1}\left(j\right){Q}_{ji}={\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{f}_{i}\left({x}_{t}\right){W}_{t-1}\left(i\right)$ (5)

where

${W}_{t-1}\left(i\right)={\displaystyle \underset{j=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t-1}\left(j\right){Q}_{ji},\text{\hspace{1em}}i\in \left\{1,\cdots ,l\right\}.$ (6)

Note that ${W}_{t-1}\left(i\right)=P\left({\tau}_{t}=i|{R}_{t-1}={x}_{t-1},\cdots ,{R}_{1}={x}_{1}\right)$ , for all $t>1$ . We can show that the conditional law of ${R}_{t+k}$ given ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t}$ has density

${f}_{t+k:k}\left(x\right)={\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{f}_{i}\left(x\right){\displaystyle \underset{j=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t}\left(j\right){\left({Q}^{k}\right)}_{ji},$ (7)

which is also a mixture with densities ${f}_{i}$ and weights ${\displaystyle {\sum}_{j=1}^{l}}{\eta}_{t}\left(j\right){\left({Q}^{k}\right)}_{ji}$ , for $i\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ .

If the Markov chain $\tau $ with transition matrix $Q$ is ergodic, then the conditional law of ${R}_{t+k}$ given ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{t}$ , converges as $k\to \infty $ to the stationary distribution $f\left(x\right)={\displaystyle {\sum}_{i=1}^{l}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\nu}_{i}{f}_{i}\left(x\right)$ . That is, for long term predictions the behaviour of ${R}_{t+k}$ becomes independent of its past.
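The $k$-step-ahead mixture weights of (7), and their convergence to the stationary distribution, can be sketched directly; `Q` and the filtered vector `eta_t` below are illustrative values.

```python
import numpy as np

# Sketch of the k-step-ahead mixture weights in Eq. (7): the weight of
# density f_i is sum_j eta_t(j) (Q^k)_ji. Q and eta_t are illustrative.
Q = np.array([[0.98, 0.02],
              [0.05, 0.95]])
eta_t = np.array([0.9, 0.1])          # current filtered regime probabilities

def predictive_weights(eta, Q, k):
    # W(i) = sum_j eta(j) (Q^k)_{ji}
    return np.linalg.matrix_power(Q, k).T @ eta

w1 = predictive_weights(eta_t, Q, 1)       # one-step-ahead weights, Eq. (6)
w_inf = predictive_weights(eta_t, Q, 500)  # converges to the stationary law nu
```

For this ergodic `Q`, the stationary distribution is $\nu = (5/7, 2/7)$, and `w_inf` is numerically indistinguishable from it, illustrating that long-term predictions forget the past.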

2.2. Estimation of Parameters

As proposed in [25] , the EM algorithm is a quite efficient estimation procedure for incomplete data sets. Under regime-switching models, observations are partial since $\tau $ is unobservable. The algorithm proceeds iteratively to converge to the maximum likelihood estimates of the parameters [30] . Its implementation for regime-switching random walks is described in detail in Appendix A. The number of regimes must be fixed a priori, an issue that will be dealt with next.

As shown in [36] and [37] , when the densities are Gaussian with means ${\left\{{\mu}_{i}\right\}}_{i=1}^{l}$ and covariance matrices ${\left\{{A}_{i}\right\}}_{i=1}^{l}$ , the EM estimators ${Q}_{t}$ of $Q$ and ${\theta}_{t}$ of $\theta =\left({\mu}_{1},{\sigma}_{1},\cdots ,{\mu}_{l},{\sigma}_{l}\right)$ are ${t}^{1/2}$ -consistent. In other words, ${t}^{1/2}\left({Q}_{t}-Q\right)$ and ${t}^{1/2}\left({\theta}_{t}-\theta \right)$ are asymptotically Gaussian. Furthermore, they are regular in the sense that parametric bootstrap can be applied. For more details, see [37] .

2.3. Goodness-of-Fit Test

Having estimated the parameters for a fixed number of regimes, one must next test the adequacy of the fitted model in order to select the optimal number of regimes. This is generally done by using a test based on likelihoods. However, as expressed in [25] , hypothesis testing using maximum likelihood methods can be problematic due to singularities and unidentifiable parameters. Furthermore, [36] show that goodness-of-fit tests based on likelihood ratios are not recommended for regime-switching models.

Building on [38] , [39] proposed to evaluate the conditional distribution function at the observed data points. Intuitively, for univariate time series, the resulting pseudo-observations should be approximately uniformly distributed under the null hypothesis. To overcome the problem of unknown parameters, [39] suggest estimating the parameters from the first half of the time series and using the second half to perform the goodness-of-fit test, as if the parameters were known. This can hardly be justified theoretically, since the limiting distribution of the test statistics usually depends on the unknown parameters. Furthermore, the approach is less powerful since only half the data set is used for testing.

Building on [39] , a solution based on Khmaladze’s transform was proposed by [40] in the univariate case and extended to the multivariate case in [41] . However, its implementation can be quite difficult and computationally expensive. We opt for a simpler approach based on parametric bootstrapping, which was shown to work for a large class of dynamic models, including regime-switching random walks. Its implementation is detailed in Appendix B.

2.3.1. Choosing the Optimal Number of Regimes

The goodness-of-fit test methodology described in Appendix B produces a P-value from a Cramér-von Mises type statistic, for a given number of regimes, $l$ . As suggested in [17] , it makes sense to choose the optimal number of regimes, ${l}^{\mathrm{*}}$ , as the first $l$ for which the P-value is larger than 5%. An illustration of the proposed methodology is given in Section 2.4.
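The parametric bootstrap behind this P-value can be sketched in the simplest, one-regime (i.i.d. Gaussian) case, where the pseudo-observations are just $\Phi\left(\left(x_t-\mu\right)/\sigma\right)$; the full regime-switching case replaces these with the conditional distribution functions of Appendix B. All choices below (sample size, number of bootstrap samples) are illustrative.

```python
import numpy as np

# Sketch of a parametric-bootstrap P-value for a Cramer-von Mises statistic,
# shown for the one-regime i.i.d. Gaussian null. Illustrative only.
rng = np.random.default_rng(1)

def std_norm_cdf(z):
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    from math import erf, sqrt
    return np.array([(1 + erf(v / sqrt(2))) / 2 for v in np.atleast_1d(z)])

def cvm_stat(u):
    """Cramer-von Mises statistic of pseudo-observations u against uniformity."""
    u = np.sort(u)
    n = len(u)
    return 1 / (12 * n) + np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2)

def bootstrap_pvalue(x, N=200):
    mu, s = x.mean(), x.std(ddof=1)                # fit the null model
    t_obs = cvm_stat(std_norm_cdf((x - mu) / s))
    t_boot = []
    for _ in range(N):                             # simulate under the fitted null,
        y = rng.normal(mu, s, size=len(x))         # refit, and recompute the statistic
        t_boot.append(cvm_stat(std_norm_cdf((y - y.mean()) / y.std(ddof=1))))
    return np.mean(np.array(t_boot) >= t_obs)      # P-value = exceedance fraction

p = bootstrap_pvalue(rng.normal(0.0, 1.0, size=300))
```

In the selection rule above, this P-value would be computed for $l = 1, 2, \cdots$ and the first $l$ exceeding 5% retained.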

2.4. Application to S&P 500 Daily Returns

To illustrate the model, we fit the close-to-close log-returns of the daily price series of the S&P 500 index from January 2^{nd} 1985 to January 31^{st} 2012 (6831 observations). As illustrated in Figure 1, the time series includes persistent periods of low and high volatility. For example, the 1992-1996 and 2003-2006 periods are periods of low volatility, since the variability is much smaller than elsewhere, while the 2008-2010 period is a good example of high volatility, since the variability in the returns is large. The latter coincides with the last financial crisis. The series is thus a natural candidate for regime-switching models.

We performed the proposed goodness-of-fit test (see Appendix B) for Gaussian densities. The results are displayed in Table 1. The first 3 P-values are 0, so according to the criterion of Section 2.3.1, the number of regimes must be greater than 3. For 4 regimes, the P-value is 26.2%; since 4 is thus the smallest number of regimes for which the P-value is larger than 5%, we select a four-regime model.

The estimated parameters are presented in Table 2, where the mean and standard deviation of each Gaussian regime density ${f}_{i}$ are respectively denoted by ${\mu}_{i}$ and ${\sigma}_{i}$ . Note that they are presented as annualized percentage values.

Figure 1. Log-returns of the S&P 500 Index (01/02/1985-12/31/2012).

Table 1. P-values (in percentage) for the proposed goodness-of-fit test using $N=1000$ bootstrap samples on the S&P 500 daily returns (01/02/1985-12/31/2012).

Table 2. Parameter estimates for the four-regime model on the S&P 500 daily returns (01/02/1985-12/31/2012). $\mu $ and $\sigma $ are presented as annualized percentages.

The table further contains the estimated long-term, or stationary, regime probabilities ${\nu}_{i}$ , together with the terminal conditional regime probabilities, ${\eta}_{n}\left(i\right)$ , and the estimated transition matrix, Q.

In Table 2, regimes are ordered by increasing standard deviation for convenience. Firstly, we observe that volatility tends to be inversely related to the expected premium. Indeed, the latter goes from −52.75% for the high-risk regime to 17.93% for the low-risk regime. This observation is consistent with previous findings on the contemporaneous asymmetry between risk and returns [42] . Regimes 1 and 2 are associated with bull markets, which are characterized by strong positive premiums and low risk. Regime 3 can be interpreted as an intermediate state. It displays less directionality, with an expected return of roughly 2%, and medium risk, with a standard deviation close to the long-run, or unconditional, value. Finally, regime 4 is associated with bear markets, as highlighted by its strong negative premium of −52.75% and very high risk of 46.59%. Even though its steep premium could suggest the presence of crashes, such an interpretation can be misleading. Indeed, the persistence of regime 4 is quite high, as highlighted by the value of 0.96 for ${Q}_{44}$ . Intuitively, given we are in a stressed market environment today, it will remain stressed for the forthcoming days with high probability. However, in the long run, such a state is rare. Indeed, we expect its realization only roughly 7% of the time, as given by the stationary probability ${\nu}_{4}$ . In contrast, markets are expected to be in a bull market state (regime 1 or regime 2) roughly 53% of the time. Such findings are consistent with common perceptions about financial markets. Indeed, periods of high uncertainty and abrupt decreases in levels are usually very concentrated in time (but can last for days), while recoveries are slower processes that extend over several years.

Figure 2 displays the filtered most probable regimes (see Section 2.1 for the filtering procedure) for the whole time series. The regimes are depicted by different shades of grey, ranging from dark for the high risk regime to white for the low risk regime. The 1987 crisis is correctly captured by the high volatility state, together with the early 2000’s bubble burst and the recent financial meltdown (2008-2009).

3. Optimal Discrete Time Hedging

Denote the price process by $S$ , that is, ${S}_{t}$ is the value of $d$ underlying assets

Figure 2. Most probable regimes for the four-regime model fitted on the S&P 500 index from 01/02/1985 to 12/31/2012, together with the cumulative performance of the S&P 500 index. Darker areas represent higher volatility states.

at period $t$ and let $\mathbb{F}=\left\{{\mathcal{F}}_{t},t=0,\cdots ,n\right\}$ be a filtration to which $S$ is adapted. Further assume $S$ is square integrable. Set ${\Delta}_{t}={\beta}_{t}{S}_{t}-{\beta}_{t-1}{S}_{t-1}$ , where the discounting factors ${\beta}_{t}$ are predictable (that is, ${\beta}_{t}$ is ${\mathcal{F}}_{t-1}$ -measurable for $t=1,\cdots ,n$ ). We are interested in the optimal initial investment amount ${V}_{0}$ and the optimal predictable investment strategy $\stackrel{\to}{\varphi}={\left({\varphi}_{t}\right)}_{t=1}^{n}$ that minimize the expected quadratic hedging error for a given payoff, $C$ , at time $n$ (for example a call option). Formally, the problem is stated as

$\underset{\left\{{V}_{0}\mathrm{,}\stackrel{\to}{\varphi}\right\}}{min}E\left[{\left\{G\left({V}_{0}\mathrm{,}\stackrel{\to}{\varphi}\right)\right\}}^{2}\right]\mathrm{,}$ (8)

where $G\left({V}_{0},\stackrel{\to}{\varphi}\right)={\beta}_{n}\left(C-{V}_{n}\right)$ , and ${V}_{t}$ is the current value of the replicating portfolio at time $t$ . In other words, it is the current value of the optimal predictable investment strategy, $\stackrel{\to}{\varphi}$ .

To solve (8), set ${P}_{n+1}=1$ , and define, for $t=n,\cdots ,1$ ,

${A}_{t}=E\left({\Delta}_{t}{\Delta}_{t}^{{\rm T}}{P}_{t+1}|{\mathcal{F}}_{t-1}\right)\mathrm{,}$

${b}_{t}={A}_{t}^{-1}E\left({\Delta}_{t}{P}_{t+1}|{\mathcal{F}}_{t-1}\right)\mathrm{,}$

${\alpha}_{t}={A}_{t}^{-1}E\left({\beta}_{n}C{\Delta}_{t}{P}_{t+1}|{\mathcal{F}}_{t-1}\right)\mathrm{,}$

${P}_{t}={\displaystyle \underset{j=t}{\overset{n}{\prod}}}\left(1-{b}_{j}^{{\rm T}}{\Delta}_{j}\right)\mathrm{.}$

We can now state Theorem 1 of [18] , which is an extension of a result from [16] .

Theorem 1. Suppose that $E\left({P}_{t}|{\mathcal{F}}_{t-1}\right)\ne 0$ P-a.s., for $t=1,\cdots ,n$ ; this condition is always satisfied for regime-switching models. Then, the solution $\left({V}_{0}\mathrm{,}\stackrel{\to}{\varphi}\right)$ of the minimization problem (8) is ${V}_{0}=E\left({\beta}_{n}C{P}_{1}\right)/E\left({P}_{1}\right)$ , and

${\varphi}_{t}={\alpha}_{t}-{\beta}_{t-1}{V}_{t-1}{b}_{t},\text{\hspace{1em}}t=1,\cdots ,n.$ (9)

Remark 3.1. Note that ${V}_{0}$ is chosen such that the expected hedging error $G$ is zero. [18] also showed that ${C}_{t}$ , defined by

${\beta}_{t}{C}_{t}=\frac{E\left({\beta}_{n}C{P}_{t+1}|{\mathcal{F}}_{t}\right)}{E\left({P}_{t+1}|{\mathcal{F}}_{t}\right)}\mathrm{,}\text{\hspace{1em}}t=\mathrm{0,}\cdots \mathrm{,}n\mathrm{,}$ (10)

is the optimal investment at period $t$ so that the value of the portfolio at period $n$ is as close as possible to $C$ in terms of mean square hedging error $G$ . As such, ${C}_{t}$ can be interpreted as the option price at period $t$ . By increasing the number of hedging periods, ${C}_{t}$ tends to a price under a risk neutral measure [43] . For example, when there is only one regime and the density is Gaussian, ${C}_{t}$ tends to the usual Black-Scholes price, as the number of hedging periods tends to infinity.

The proposed optimal hedging implementation appears in Appendix C.
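As a sanity check on the recursion above, the one-period case ($n=1$, so ${P}_{2}=1$) can be evaluated by plain Monte Carlo for a single Gaussian regime and a call payoff. All parameters below are illustrative, and we take ${\beta}_{t}\equiv 1$ (zero interest rate).

```python
import numpy as np

# Monte Carlo sketch of the one-period (n = 1) quadratic hedge for a single
# Gaussian regime and a call payoff. Parameters are illustrative; beta_t = 1.
rng = np.random.default_rng(2)

s0, strike, mu, sig, beta = 100.0, 100.0, 0.0002, 0.01, 1.0
s1 = s0 * np.exp(rng.normal(mu, sig, size=200_000))   # one-period terminal prices
delta = beta * s1 - s0                                # Delta_1 = beta_1 S_1 - beta_0 S_0
payoff = np.maximum(s1 - strike, 0.0)                 # C = (S_1 - K)^+

# With P_2 = 1: A_1 = E[Delta^2], b_1 = E[Delta]/A_1, alpha_1 = E[beta C Delta]/A_1
A1 = np.mean(delta ** 2)
b1 = np.mean(delta) / A1
alpha1 = np.mean(beta * payoff * delta) / A1

P1 = 1 - b1 * delta                                   # P_1 = 1 - b_1 Delta_1
V0 = np.mean(beta * payoff * P1) / np.mean(P1)        # V_0 = E[beta C P_1]/E[P_1]
phi1 = alpha1 - beta * V0 * b1                        # Eq. (9) with t = 1
```

For this at-the-money call, `phi1` lands close to the Black-Scholes delta of roughly one half, and `V0` close to the one-period Black-Scholes price, as expected from Remark 3.1.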

3.1. Global Hedging

In practice, a hedging error, characterized by ${V}_{t}-{C}_{t}$ , will emerge. In other words, the replicating portfolio at period $t$ will not be worth the optimal investment ${C}_{t}$ . Under the Black-Scholes setting, such an error is unaccounted for, since derivatives can be replicated perfectly. In contrast, under the proposed optimal hedging protocol, the exposures ${\varphi}_{t}$ depend linearly on the replicating portfolio value ${V}_{t-1}$ (see Equation (9)), which in turn depends on the past strategy path.

Under extreme scenarios, the replication of a call option might lead to optimal exposures $\varphi $ greater than one share. Intuitively, this feature is optimal with respect to closing the gap between $V$ and $C$ .

3.2. Implementation Issues

There are two main problems related to the implementation of the hedging strategy: ${C}_{t}$ and ${\alpha}_{t}$ defined by expressions (21) and (22) must be approximated and regimes must be predicted.

We discretize ${C}_{t}$ and ${\alpha}_{t}$ with a grid G constituted of 10^{3} equidistant points representing the underlying values s, marginally covering at least 3 standard deviations under the respective highest-volatility regimes. To solve the recursions given by (21) and (22), we interpolate or extrapolate linearly the simulated outcomes on G. 10^{4} simulations are generated according to a stratified Monte Carlo sampling procedure. The details for the interpolations are presented in Appendix C2.

Next, we need to predict ${\tau}_{1}$ based on $\left({R}_{1}\mathrm{,}{\tau}_{0}\right)$ , and so on. The predicted regime $\stackrel{^}{\tau}$ is the one having the largest probability given the information on prices up to time $t$ , that is the most probable regime given by (3). See Section 2.1 for more details on regime prediction.

Then, according to (15), the optimal weights ${\varphi}_{t}$ for period $\left[t-\mathrm{1,}t\right)$ are approximated by

${\stackrel{^}{\varphi}}_{t}={\alpha}_{t}\left({S}_{t-1},{\stackrel{^}{\tau}}_{t-1}\right)-{V}_{t-1}{D}^{-1}\left({S}_{t-1}\right){\rho}_{t+1}\left({\stackrel{^}{\tau}}_{t-1}\right),\text{\hspace{1em}}t=1,\cdots ,n,$ (11)

where ${V}_{0}$ is approximated by ${C}_{0}\left({S}_{0}\mathrm{,}{\stackrel{^}{\tau}}_{0}\right)$ . In particular, the initial number of shares held ${\varphi}_{1}$ is approximated by

${\stackrel{^}{\varphi}}_{1}={\alpha}_{1}\left({S}_{0},{\stackrel{^}{\tau}}_{0}\right)-{V}_{0}{D}^{-1}\left({S}_{0}\right){\rho}_{2}\left({\stackrel{^}{\tau}}_{0}\right),$ (12)

while the remaining monies, ${V}_{0}-{\stackrel{^}{\varphi}}_{1}^{{\rm T}}{S}_{0}$ , are invested in the riskless asset. Next, as ${S}_{1}$ is observed, one first computes the actual portfolio value ${V}_{1}$ , then predicts the current regime ${\tau}_{1}$ and finally approximates the optimal new weights ${\varphi}_{2}$ . This process is iterated at each rebalancing period.

4. Out-of-Sample Vanilla Pricing and Hedging

To exhibit the proposed hedging protocol, we systematically sell vanilla options on the S&P 500 and hedge them until expiration. We then assess the impact of model specification on the delta-hedging strategy by examining the statistical properties of the harvested realized variance risk premium [35] [32] . All hedging portfolios are re-balanced on a daily basis, as is often assumed in the volatility timing literature (see for example [44] ).

The market price of an option is defined as the last (i.e. at 4:15 PM EST) midpoint between the bid and the ask. The price of the underlying is its listed close value. For simplicity, we neglect issues related to time-varying discount rates by assuming constant continuously compounded daily rates. Risk-free rates, $r$ , are linearly interpolated for a given maturity, $n$ , from the zero-coupon U.S. yield curve.

4.1. The Underlying Asset

We make the reasonable assumption that the spot S&P 500 is investable and tradable at a minimal cost. The forward rate is retrieved for the maturities of interest directly from the option data at hand, as proposed by [8] . From put-call parity, the option-implied forward value ${F}_{n}$ at $n$ is

${F}_{n}=\left(\stackrel{\u02dc}{C}\left(\stackrel{\u02dc}{K}\mathrm{,}n\right)-\stackrel{\u02dc}{P}\left(\stackrel{\u02dc}{K}\mathrm{,}n\right)\right){\text{e}}^{{r}_{n}n}+\stackrel{\u02dc}{K}\mathrm{,}$

where $\stackrel{\u02dc}{C}\left(K\mathrm{,}n\right)$ and $\stackrel{\u02dc}{P}\left(K\mathrm{,}n\right)$ are respectively the call and put market values expiring at $n$ with strike $K$ , and $\stackrel{\u02dc}{K}$ is the at-the-money strike value minimizing $\left|\stackrel{\u02dc}{C}\left(\stackrel{\u02dc}{K}\mathrm{,}n\right)-\stackrel{\u02dc}{P}\left(\stackrel{\u02dc}{K}\mathrm{,}n\right)\right|$ over all strikes offered by the exchange. We use at-the-money options because they are the most liquid and are thus less likely to provide cash-and-carry type arbitrage opportunities. We then compute the daily forward rate as ${f}_{n}=\frac{1}{n}\mathrm{log}\left({F}_{n}/{S}_{0}\right)$ and the associated daily discounting factor $\beta ={\text{e}}^{-{f}_{n}}$ , which reflects the current risk-free return on capital net of the implied continuous dividend yield.
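The forward extraction above amounts to a few arithmetic steps. The sketch below uses made-up quotes, not actual S&P 500 data.

```python
import numpy as np

# Sketch of the put-call-parity implied forward, daily forward rate and
# discount factor. The quotes and rate below are illustrative values.
s0, n, r_n = 100.0, 21, 0.0001            # spot, days to expiry, daily risk-free rate
k_atm, call, put = 100.0, 2.10, 1.95      # ATM strike and mid quotes (made up)

F_n = (call - put) * np.exp(r_n * n) + k_atm   # implied forward F_n
f_n = np.log(F_n / s0) / n                     # daily forward rate f_n
beta = np.exp(-f_n)                            # daily discounting factor
```

With these inputs the implied forward sits just above spot, so `f_n` is a small positive daily rate and `beta` is just below one.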

4.2. Option Data Set

Exchange-traded options on the S&P 500 are European, heavily traded and have a high number of listed strikes and maturities.

Assuming 21 trading days per month, we select all dates from December 31^{st} 1999 to January 31^{st} 2012 that have at least one strike listed with 21 trading days to maturity. The period from January 2^{nd} 1985 to December 30^{th} 1999 is used as a burn-in to fit the various models. This gives a total of 165 inception dates from which we initiate the hedging protocols. Since options are listed monthly, this specification maximizes the sample size while avoiding overlaps.

Furthermore, under general continuous time stochastic volatility models, [35] demonstrate that the expected hedged short option gain is a first-order homogeneous function of the sensitivity of the option value with respect to volatility, commonly called vega. We thus build a data set maximizing the exposure to the volatility premium by maximizing the vega.

It is well known that vega is maximized for at-the-money options. However, the vega difference with in-the-money and out-of-the-money options tends to decrease as volatility increases. This can easily be verified under the Black-Scholes framework. Thus, it makes sense to condition our selection in such a way that more strikes are included in the data set when expected volatility is high. With the at-the-money B&S implied volatility (IV) as a proxy for market expectations, we select all strikes that are within one IV deviation from the current underlying value,

$K\in \left[{S}_{0}exp\left(\left({f}_{n}-\frac{{\text{IV}}_{n}^{2}}{2}\right)n-{\text{IV}}_{n}\sqrt{\frac{n}{252}}\right)\mathrm{,}{S}_{0}exp\left(\left({f}_{n}-\frac{{\text{IV}}_{n}^{2}}{2}\right)n+{\text{IV}}_{n}\sqrt{\frac{n}{252}}\right)\right],$

where IV is annualized. This essentially allows us to filter out deeply in-the-money and out-of-the-money options, for which the expected volatility premium is low. Overall, we selected a total of 3189 strikes (6378 options including calls and puts). All selected options had significant open interest and non-zero bids at inception.
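The strike band can be sketched as follows. The inputs are illustrative, and we assume the variance-drift term $\text{IV}^2/2$ in the band is expressed per trading day (i.e. $\text{IV}^2/(2\cdot 252)$ for an annualized IV), to be dimensionally consistent with the daily rate ${f}_{n}$; that unit convention is our reading, not stated explicitly in the text.

```python
import numpy as np

# Sketch of the one-IV-deviation strike band. Inputs are illustrative, and the
# per-day scaling of the IV^2/2 drift term is an assumption on units.
s0, n = 100.0, 21          # spot and trading days to maturity
f_n = 7e-5                 # daily forward rate
iv = 0.20                  # annualized at-the-money implied volatility

drift = (f_n - iv ** 2 / (2 * 252)) * n       # cumulative log-drift over n days
width = iv * np.sqrt(n / 252)                 # one IV deviation over n days
lower, upper = s0 * np.exp(drift - width), s0 * np.exp(drift + width)
```

For a 21-day maturity and a 20% IV, the band spans roughly 94 to 106, so deep in-the-money and out-of-the-money strikes are excluded.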

4.3. Back-Testing

We apply the Gaussian regime-switching (OH-HMM) hedging methodology with a number of regimes ranging from 1 to 4. The case with 1 regime corresponds to the Black-Scholes (OH-B&S) model and will serve as a benchmark, together with the Gaussian mixture (OH-GM) model. We are especially interested in the case with 4 regimes (OH-HMM4), since it was previously (see Section 2.4) identified as the optimal number of regimes for the S&P 500.

For each inception date, we estimate HMM parameters on the available S & P 500 log-returns from January 2^{nd} 1985. The estimation sample window thus grows from one inception date to the next. The stability of the parameters is illustrated in Figure 3 for the 4-regime model.

The top graph shows the volatility of each regime versus the B & S at-the-money implied volatility, while the bottom graph shows the expected return under each regime versus the cumulative performance of the S & P 500. The three lower volatility regimes have quite stable parameters. The higher volatility regime parameters stabilize after 2003 and are later shocked by the 2008-2009 financial meltdown. Such behaviour is not surprising as the number of high volatility observations is low. Thus, stressed market observations entering the estimation window have a large impact on high volatility regime parameters.

From [45] , for a given moneyness (that is, the strike value divided by the underlying value), the value of an option is homogeneous of degree one with respect to the underlying value. Thus, for each inception date, we normalize the option prices, the strike values and the underlying path to an initial S & P 500 value of 100. Results can thus be aggregated through time and interpreted as a percentage of the S & P 500. Note that for each inception date, the hedging protocols are applied out-of-sample for the next 21 days, before re-estimating the parameters.

To ensure comparability, the OH-GM assumes the stationary distribution of the OH-HMM, while the OH-B & S volatility matches the stationary volatility of both the OH-HMM and OH-GM. Thus, OH-GM4 is a mixture of four Gaussian laws, corresponding to the stationary distribution of OH-HMM4. Both the OH-GM and OH-B & S optimal hedging exposures are derived from an algorithm similar to the one presented in Section 3. Optimal hedging under unconditional

Figure 3. Fitted regime volatility (top figure) and regime expected return (bottom figure) from December 31^{st} 1999 to January 31^{st} 2012 for the four-regime HMM, both presented as annualized percentages.

distributions is presented in [46] . All strategies minimize the expected quadratic hedging error under their respective null hypotheses, namely that returns follow a regime-switching model (OH-HMM), a Gaussian mixture model (OH-GM) or a Gaussian model (OH-B & S).

OH-B & S is different from the classical Black-Scholes delta hedging. Indeed, the terminology only reflects the fact that we hedge and price under the Black-Scholes framework hypothesis, namely that assets follow geometric Brownian motions. Even though the OH-B & S prices converge to the usual Black-Scholes prices as the number of hedging periods tends to infinity, the discrete time hedging strategies will not necessarily be the same. The classical Black-Scholes delta-hedging methodology (B & S) is thus also considered. As with OH-B & S, the B & S volatility is calibrated to the stationary volatility of OH-HMM and OH-GM.

4.4. Empirical Results―Pricing

4.4.1. One-Period Pricing Result

In Table 3, we present normalized model prices for the 21 days at-the-money puts on 01/19/2012. This day was characterized by a low-volatility regime for all HMM models.

The prices are increasing with the volatility of the regime, the lowest volatility

Table 3. 21 trading days to expiration at-the-money put prices on 01/19/2012 (for OH-HMM4 the most probable regime is highlighted in gray). Confidence intervals are computed from a sample of 100 pricings, assuming normality.

Table 4. 21 trading days to expiration at-the-money put prices on 10/23/2008 (for OH-HMM4 the most probable regime is highlighted in gray). Confidence intervals are computed from a sample of 100 pricings, assuming normality.

regime having the lowest price. The normalized market price for that day is 1.97%. For the four-regime model, the option price ranges from 1.43% to 4.64%. However, since on that day the most probable regime is 1, the most probable OH-HMM4 option price is 1.43%. The market is overvalued with respect to this model, but undervalued with respect to B & S, OH-B & S and OH-GM4.

We present the same results for a highly volatile day during the 2008-2009 financial crisis. From Table 4, on 10/23/2008, the at-the-money put is priced at 6.88% by the market. On this day, the HMM is in its highest volatility state. Consequently, we estimate the option price for the four-regime HMM to be 4.16%, considerably lower than the market price. In this case, the market is overvalued with respect to all models.

Finally, to illustrate the behaviour of prices across strikes and maturities, we present Black-Scholes implied volatility surfaces on a low-volatility day in Figure 4. We first price the options using the different models and then solve for the Black-Scholes volatilities that return the computed prices.

Across strikes, the market displays a strong smirk effect, which suggests pronounced implied skewness and kurtosis in the S & P 500 log-return distribution. Since OH-B & S is in line with Black-Scholes assumptions, we do not expect it to price any skewness or excess kurtosis, which is confirmed by its flat surface. Both the OH-HMM and the OH-GM, in contrast, overvalue out-of-the-money options with respect to Black-Scholes. Clearly, the HMM captures more kurtosis than the Gaussian mixture, as evidenced by its very pronounced smile. However, neither of them succeeds at matching the negative implied skewness. Indeed,

Figure 4. Out-of-the-money implied Black-Scholes volatility surfaces for 21, 40, 50, 64, 103 and 113 trading days to expiration and a 10% moneyness range on 01/19/2012.

out-of-the-money puts are considerably more expensive on the market than out-of-the-money calls. Even though both the OH-GM and OH-HMM can theoretically accommodate negatively skewed distributions, their smirk effects are almost unnoticeable, suggesting the market prices in more skewness than the models.

Since volatility is low, longer maturity options are priced at a higher implied volatility in the market to account for the mean-reverting nature of volatility. The OH-HMM captures this term-structure effect, while, unsurprisingly, both the OH-B & S and OH-GM fail to do so. Indeed, only the HMM allows for time-varying distributions.

4.4.2. Back-Test Pricing Results

In Table 5, we present the pricing results for 21 trading days to expiration puts for the whole period, from December 31^{st} 1999 to January 31^{st} 2012. Letting ${C}_{0}$ denote the model price and ${V}_{0}$ the market price, we consider the root-mean-squared pricing error:

$\sqrt{\stackrel{^}{E}\left[{\left({C}_{0}-{V}_{0}\right)}^{2}\right]}$

where $\stackrel{^}{E}$ denotes the sample mean. Similarly, we let the pricing bias be:

$\stackrel{^}{E}\left[{C}_{0}-{V}_{0}\right]$
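These two metrics can be computed directly from paired model and market prices; a minimal sketch, assuming NumPy (the function name is ours, not the paper's):

```python
import numpy as np

def pricing_metrics(model_prices, market_prices):
    """Root-mean-squared pricing error and pricing bias, both as sample means."""
    diff = np.asarray(model_prices, dtype=float) - np.asarray(market_prices, dtype=float)
    rmse = float(np.sqrt(np.mean(diff ** 2)))   # sqrt(E-hat[(C0 - V0)^2])
    bias = float(np.mean(diff))                 # E-hat[C0 - V0]
    return rmse, bias
```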

The pricing bias is negative, ranging from −0.41 to −0.20, which suggests

Table 5. Root-mean-squared pricing error and pricing bias for 21 trading days to expiration puts for all selected strikes from December 31^{st} 1999 to January 31^{st} 2012.

options are indeed overpriced in the market with respect to historical observations. In other words, under its respective null hypothesis, each model is expected to generate a profit from the volatility premium strategy. Overall, pricing results are very similar for all static models, namely B & S, OH-B & S and OH-GM, with a bias of −0.41% and a pricing error of roughly 1.13%. Interestingly, the smaller bias of the OH-HMM4 suggests the other models overvalue the premium.

4.5. Empirical Results―Hedging

We now turn to the terminal value of the replicating portfolio ${V}_{n}$ minus the liability $C$ at expiration, ${V}_{n}-C$ , for all inception dates from December 31^{st} 1999 to January 31^{st} 2012. Once discounted, this quantity can be interpreted as the excess profits & losses (PNL) of the strategy, in percent of the initial S & P 500 value.

First, the annualized root-mean-squared hedging error, $\sqrt{12\stackrel{^}{E}\left[{\left({\beta}_{n}{V}_{n}-{\beta}_{n}C\right)}^{2}\right]}$ , is presented in Table 6 for put options. This realized risk is the empirical counterpart of the quantity we minimize (see program (8)) and, as such, is the most relevant metric for comparing the different models. Results for call options are similar and can be found in Tables 8-10 of Appendix D.
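The realized risk above can be estimated from the monthly discounted hedging errors; a sketch, assuming NumPy (the function name is ours):

```python
import numpy as np

def annualized_hedging_rmse(discounted_errors):
    """sqrt(12 * mean(e^2)) for monthly discounted hedging errors
    e = beta_n * V_n - beta_n * C; the factor 12 annualizes monthly errors."""
    e = np.asarray(discounted_errors, dtype=float)
    return float(np.sqrt(12.0 * np.mean(e ** 2)))
```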

Surprisingly, OH-GM and OH-B & S perform worse than B & S. In fact, the classical delta-hedging strategy performs relatively well, with a RMSE of roughly 3.48 versus 3.82 for the worst performer, namely the four-regime Gaussian mixture. Furthermore, the performance of both OH-GM and OH-HMM increases with the number of regimes (results not shown). Overall, the OH-HMM performs best, decreasing the RMSE from 3.48 for the B & S benchmark to 2.64.

Next, we present the profits & losses statistics for all models in Table 7. Both OH-B & S and OH-GM perform worse than B & S under all PNL metrics. The OH-HMM4 yields greater profits, increasing the annualized expected excess PNL from 1.37% for the classical delta-hedging to 1.96%. Furthermore, all higher moments are more desirable under the OH-HMM4 protocol. Indeed, the skewness goes from −2.16 to −0.64 and the kurtosis from 13.35 to 7.69. Moreover, the extreme negative PNL are better controlled, with a 99% value-at-risk of 2.57% versus 4.11% and a maximum drawdown of 4.63% versus 6.52%.

Finally, we present two common performance metrics, the Sharpe and Omega ratios. The former should be used with caution since its underlying normality

Table 6. Root-mean-squared hedging error for the different hedging strategies applied to 21 trading days to expiration puts for all selected strikes.

Table 7. Descriptive statistics and various performance metrics of the excess PNL,
${\beta}_{n}{V}_{n}-{\beta}_{n}C$ , for the four-regime hedging strategy applied to 21 trading days to expiration puts for all selected strikes from December 31^{st} 1999 to January 31^{st} 2012. The mean and volatility statistics are annualized. Since all returns are already discounted, the Sharpe and Omega ratios are computed by assuming a zero threshold. The value-at-risk is estimated as an empirical quantile.

assumption is clearly violated for the resulting profits & losses. The Sharpe ratio doubles over the delta-hedging protocol, while the Omega ratio increases from 0.67 to 0.89. Overall, the OH-HMM4 outperforms the other specifications, including Black-Scholes, both in terms of expected return and risk.

5. Conclusions

In this paper, we propose a discretized version of the continuous time regime-switching model used by [25] , and demonstrate how to implement an optimal hedging strategy when the underlying asset returns are modeled as regime-switching random walks.

We first present estimation and filtering procedures for regime-switching models. Building on the work of [39] [40] , and [31] , we then propose a novel goodness-of-fit test for univariate and multivariate Markovian regime-switching models based on Rosenblatt’s transform. This test is particularly useful for determining the optimal number of regimes.

To illustrate the proposed methodology, we model the daily return series of the S & P 500. The resulting dynamics is consistent with commonly assumed characteristics of financial markets, such as the contemporaneous asymmetry between volatility and expected returns.

Furthermore, we state the regime-switching specialization of the optimal hedging algorithm [18] , which generates optimal discrete-time (in our case, daily) exposures minimizing a symmetric quadratic criterion for the hedging error risk. The algorithm also produces prices, for which we confirmed a downward bias with respect to market prices. This finding is consistent with the presence of a volatility risk premium and motivated the implementation of a systematic hedged short option strategy [35] . We compare our hedging results to the classical delta-hedging protocol and to optimal hedging protocols in which the underlying asset returns are modelled by either a Gaussian or a Gaussian mixture distribution. Gaussian regime-switching models generate lower hedging errors than static models. Interestingly, they also yield greater profits and more favourable PNL distribution characteristics, such as less kurtosis and less negative skewness. Overall, the Sharpe ratio of the regime-switching hedging methodology doubles relative to the classical delta-hedging framework, and the Omega ratio increases by roughly 0.2.

The hedging algorithm could easily be extended to American options, and further adapted to conditional volatility models, such as GARCH models. One limitation of our work is the weak dependence induced by the regimes. It would be preferable to have serial dependence involving also the past returns. This will be done in future work.

Acknowledgements

This research was supported by the Montreal Institute of Structured Finance and Derivatives, the Natural Sciences and Engineering Research Council of Canada, the Institut de Finance Mathématique de Montréal, and Desjardins Global Asset Management.

Appendix

Appendix A. Estimation of Regime-Switching Models

The EM algorithm for estimating parameters consists of two steps, expectation and maximization:

(E-Step) Compute the conditional probabilities.

${\lambda}_{t}\left(i\right)=P\left({\tau}_{t}=i|{R}_{1},\cdots ,{R}_{n}\right)\text{\hspace{1em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.05em}}\text{\hspace{1em}}{\Lambda}_{t}\left(i,j\right)=P\left({\tau}_{t}=i,{\tau}_{t+1}=j|{R}_{1},\cdots ,{R}_{n}\right),$

for all $1\le t\le n$ and $i\mathrm{,}j\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ .

(M-Step) Estimate the new parameters.

First, a rough estimate of the parameters must be provided. Then, the two-step procedure is repeated until a stopping criterion is met. The E-Step is described next for arbitrary densities, whilst the M-Step is stated only for Gaussian densities. For more details on the EM algorithm, see, for example, [36] .

A1. Conditional Distribution of the Regimes (E-Step)

First, define, for all $i\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ ,

${\stackrel{\xaf}{\eta}}_{n}\left(i\right)=1/l\mathrm{,}$

${\stackrel{\xaf}{\eta}}_{t}\left(i\right)=\frac{{\displaystyle \underset{\beta =1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{\xaf}{\eta}}_{t+1}\left(\beta \right){Q}_{i\beta}{f}_{\beta}\left({R}_{t+1}\right)}{{\displaystyle \underset{\alpha =1}{\overset{l}{\sum}}}{\displaystyle \underset{\beta =1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{\xaf}{\eta}}_{t+1}\left(\beta \right){Q}_{\alpha \beta}{f}_{\beta}\left({R}_{t+1}\right)},\text{\hspace{1em}}t=1,\cdots ,n-1.$

Then, for all $i\mathrm{,}j\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ , one can check that

${\lambda}_{t}\left(i\right)=\frac{{\eta}_{t}\left(i\right){\stackrel{\xaf}{\eta}}_{t}\left(i\right)}{{\displaystyle \underset{\alpha =1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{t}\left(\alpha \right){\stackrel{\xaf}{\eta}}_{t}\left(\alpha \right)},\text{\hspace{1em}}t=1,\cdots ,n,$ (13)

${\Lambda}_{t}\left(i,j\right)=\frac{{Q}_{ij}{\eta}_{t}\left(i\right){\stackrel{\xaf}{\eta}}_{t+1}\left(j\right){f}_{j}\left({R}_{t+1}\right)}{{\displaystyle \underset{\alpha =1}{\overset{l}{\sum}}}{\displaystyle \underset{\beta =1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{\alpha \beta}{\eta}_{t}\left(\alpha \right){\stackrel{\xaf}{\eta}}_{t+1}\left(\beta \right){f}_{\beta}\left({R}_{t+1}\right)},\text{\hspace{1em}}t=1,\cdots ,n-1,$ (14)

${\Lambda}_{n}\left(i,j\right)={\lambda}_{n}\left(i\right){Q}_{ij}.$

We can now verify that (13) and (14) are consistent. Indeed, for all $1\le t\le n-1$ ,

${\displaystyle \underset{j=1}{\overset{l}{\sum}}}{\Lambda}_{t}\left(i,j\right)=\frac{{\eta}_{t}\left(i\right)\left({\displaystyle \underset{j=1}{\overset{l}{\sum}}}{Q}_{ij}{\stackrel{\xaf}{\eta}}_{t+1}\left(j\right){f}_{j}\left({R}_{t+1}\right)\right)}{{\displaystyle \underset{\alpha =1}{\overset{l}{\sum}}}{\eta}_{t}\left(\alpha \right)\left({\displaystyle \underset{\beta =1}{\overset{l}{\sum}}}{Q}_{\alpha \beta}{\stackrel{\xaf}{\eta}}_{t+1}\left(\beta \right){f}_{\beta}\left({R}_{t+1}\right)\right)}={\lambda}_{t}\left(i\right),$

using the definition of ${\stackrel{\xaf}{\eta}}_{t}$ . Also, ${\displaystyle {\sum}_{j=1}^{l}}{\Lambda}_{n}\left(i,j\right)={\displaystyle {\sum}_{j=1}^{l}}{\lambda}_{n}\left(i\right){Q}_{ij}={\lambda}_{n}\left(i\right)$ . Similarly, for all $1\le t\le n-1$ ,

${\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\Lambda}_{t}\left(i,j\right)={\lambda}_{t+1}\left(j\right).$
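The E-Step recursions can be implemented directly; the sketch below (NumPy assumed; function and argument names are ours) uses the standard HMM forward filter for the filtered probabilities ${\eta}_{t}$ together with the backward pass for ${\stackrel{\xaf}{\eta}}_{t}$ and Equations (13)-(14):

```python
import numpy as np

def e_step(f, Q, nu):
    """E-step for an l-regime HMM following (13)-(14).

    f  : (n, l) array with f[t, i] = f_i(R_{t+1}) for 0-based t (f[0] uses R_1)
    Q  : (l, l) transition matrix; nu : (l,) initial regime distribution.
    The forward recursion for eta_t is the standard HMM filter (assumed here);
    the backward pass matches the eta-bar recursion of the appendix.
    """
    n, l = f.shape
    eta = np.empty((n, l))                 # eta_t(i) = P(tau_t = i | R_1..R_t)
    eta[0] = nu * f[0]
    eta[0] /= eta[0].sum()
    for t in range(1, n):
        eta[t] = (eta[t - 1] @ Q) * f[t]
        eta[t] /= eta[t].sum()

    eta_bar = np.empty((n, l))             # backward weights, eta_bar_n(i) = 1/l
    eta_bar[-1] = 1.0 / l
    for t in range(n - 2, -1, -1):
        num = Q @ (eta_bar[t + 1] * f[t + 1])   # sum_beta Q_{i,beta} eta_bar_{t+1}(beta) f_beta
        eta_bar[t] = num / num.sum()

    lam = eta * eta_bar                    # equation (13), then normalize over regimes
    lam /= lam.sum(axis=1, keepdims=True)

    Lam = np.empty((n, l, l))
    for t in range(n - 1):
        M = Q * np.outer(eta[t], eta_bar[t + 1] * f[t + 1])  # equation (14), unnormalized
        Lam[t] = M / M.sum()
    Lam[-1] = lam[-1][:, None] * Q         # Lambda_n(i,j) = lambda_n(i) Q_{ij}
    return lam, Lam
```

The consistency identities verified above, ${\sum}_{j}{\Lambda}_{t}\left(i,j\right)={\lambda}_{t}\left(i\right)$ and ${\sum}_{i}{\Lambda}_{t}\left(i,j\right)={\lambda}_{t+1}\left(j\right)$ , provide a useful sanity check on any implementation.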

A2. Estimation for Gaussian Regime-Switching Models (M-Step)

For the estimation procedure, we assume the densities ${f}_{1}\mathrm{,}\cdots \mathrm{,}{f}_{l}$ are Gaussian with means ${\left({\mu}_{i}\right)}_{i=1}^{l}$ , and covariance matrices ${\left({A}_{i}\right)}_{i=1}^{l}$ .

The M-step consists of updating the parameters ${\left({\nu}_{i}\right)}_{i=1}^{l}$ , ${\left({\mu}_{i}\right)}_{i=1}^{l}$ , ${\left({A}_{i}\right)}_{i=1}^{l}$ and $Q$ according to

${{\nu}^{\prime}}_{i}={\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right)/n,$

${{\mu}^{\prime}}_{i}={\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}{w}_{t}\left(i\right),$

${{A}^{\prime}}_{i}={\displaystyle \underset{t=1}{\overset{n}{\sum}}}\left({x}_{t}-{{\mu}^{\prime}}_{i}\right){\left({x}_{t}-{{\mu}^{\prime}}_{i}\right)}^{{\rm T}}{w}_{t}\left(i\right),$

${{Q}^{\prime}}_{ij}={\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\Lambda}_{t}\left(i,j\right)/{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right)=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\Lambda}_{t}\left(i,j\right)/{{\nu}^{\prime}}_{i},$

for all $i\mathrm{,}j\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ and where ${w}_{t}\left(i\right)={\lambda}_{t}\left(i\right)/{\displaystyle {\sum}_{l=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{l}\left(i\right)$ .
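The M-step updates above translate directly into code; a sketch (NumPy assumed, names ours) taking the E-step output $\lambda $ and $\Lambda $ :

```python
import numpy as np

def m_step(x, lam, Lam):
    """Gaussian M-step: update nu', mu', A' and Q' from the E-step output.

    x : (n, d) observations; lam : (n, l) with lam[t, i] = lambda_t(i);
    Lam : (n, l, l) with Lam[t, i, j] = Lambda_t(i, j).
    """
    n, d = x.shape
    l = lam.shape[1]
    nu = lam.mean(axis=0)                           # nu'_i = sum_t lambda_t(i) / n
    w = lam / lam.sum(axis=0, keepdims=True)        # w_t(i) = lambda_t(i) / sum_s lambda_s(i)
    mu = w.T @ x                                    # mu'_i = sum_t x_t w_t(i), shape (l, d)
    A = np.empty((l, d, d))
    for i in range(l):
        c = x - mu[i]
        A[i] = (c * w[:, i:i + 1]).T @ c            # A'_i = sum_t (x_t-mu'_i)(x_t-mu'_i)^T w_t(i)
    Q = Lam.sum(axis=0) / lam.sum(axis=0)[:, None]  # Q'_ij = sum_t Lambda_t(i,j) / sum_t lambda_t(i)
    return nu, mu, A, Q
```

Note that the fitted stationary mean ${\sum}_{i}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}$ equals the sample mean, as shown in Appendix A3, which makes a convenient unit test.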

Note that ${\nu}^{\prime}$ is not the stationary distribution for ${Q}^{\prime}$ since for any $j\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ ,

${\displaystyle \underset{i=1}{\overset{l}{\sum}}}{{\nu}^{\prime}}_{i}{{Q}^{\prime}}_{ij}=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\Lambda}_{t}\left(i,j\right)=\frac{1}{n}{\displaystyle \underset{t=2}{\overset{n+1}{\sum}}}{\lambda}_{t}\left(j\right)={{\nu}^{\prime}}_{j}+\frac{{\lambda}_{n+1}\left(j\right)-{\lambda}_{1}\left(j\right)}{n}\ne {{\nu}^{\prime}}_{j}.$

However,

$\underset{1\le j\le l}{max}\left|{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{Q}^{\prime}}_{ij}-{{\nu}^{\prime}}_{j}\right|\le 1/n\mathrm{.}$

Hence, when n is large, ${\nu}^{\prime}$ is close to the stationary distribution of ${Q}^{\prime}$ . In practice, we estimate the stationary distribution from ${Q}^{\prime}$ , rather than ${\nu}^{\prime}$ , for consistency.
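The stationary distribution of ${Q}^{\prime}$ can be obtained by solving $\pi Q=\pi $ subject to ${\sum}_{i}{\pi}_{i}=1$ ; a sketch (NumPy assumed, name ours):

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution of transition matrix Q, i.e. the normalized
    solution of pi @ Q = pi, found as a least-squares solution of the
    augmented linear system [(Q^T - I); 1^T] pi = [0; 1]."""
    l = Q.shape[0]
    A = np.vstack([Q.T - np.eye(l), np.ones(l)])
    b = np.zeros(l + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```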

A3. Fitting of Sample Moments

Interestingly, the first two sample moments are preserved in the Gaussian case. In other words, the first two fitted stationary moments coincide with the sample mean and covariance matrix, defined as

$\stackrel{\xaf}{x}=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}\text{\hspace{1em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.05em}}\text{\hspace{1em}}\stackrel{\xaf}{S}=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\left({x}_{t}-\stackrel{\xaf}{x}\right){\left({x}_{t}-\stackrel{\xaf}{x}\right)}^{{\rm T}}.$

Indeed, from the M-step definitions, the stationary mean is

${\mu}^{\prime}\equiv {\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}={\displaystyle \underset{i=1}{\overset{l}{\sum}}}\left(\frac{1}{n}{\displaystyle \underset{l=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{l}\left(i\right)\right)\left({\displaystyle \underset{t=1}{\overset{n}{\sum}}}\frac{{x}_{t}{\lambda}_{t}\left(i\right)}{{\displaystyle \underset{l=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{l}\left(i\right)}\right)=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}{\lambda}_{t}\left(i\right)=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}=\stackrel{\xaf}{x}.$

Similarly,

$\begin{array}{c}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{A}^{\prime}}_{i}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right)\left({x}_{t}-{{\mu}^{\prime}}_{i}\right){\left({x}_{t}-{{\mu}^{\prime}}_{i}\right)}^{{\rm T}}\\ =\frac{1}{n}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right){x}_{t}{x}_{t}^{{\rm T}}+\frac{1}{n}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right){{\mu}^{\prime}}_{i}{\left({{\mu}^{\prime}}_{i}\right)}^{{\rm T}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{n}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right){x}_{t}{\left({{\mu}^{\prime}}_{i}\right)}^{{\rm T}}-\frac{1}{n}{\displaystyle \underset{i=1}{\overset{l}{\sum}}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{t}\left(i\right){{\mu}^{\prime}}_{i}{x}_{t}^{{\rm T}}\\ =\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}{x}_{t}^{{\rm T}}-{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}{\left({{\mu}^{\prime}}_{i}\right)}^{{\rm T}}.\end{array}$

Finally, from the previous result, the stationary covariance matrix is given by

$\begin{array}{c}{A}^{\prime}\equiv {\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{A}^{\prime}}_{i}+{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}{\left({\mu}_{i}\right)}^{\prime}{}^{{\rm T}}-{\mu}^{\prime}{\left({\mu}^{\prime}\right)}^{{\rm T}}\\ =\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{t}{x}_{t}^{{\rm T}}-{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}{\left({{\mu}^{\prime}}_{i}\right)}^{{\rm T}}+{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{{\nu}^{\prime}}_{i}{{\mu}^{\prime}}_{i}{\left({\mu}_{i}\right)}^{\prime}{}^{{\rm T}}-\stackrel{\xaf}{x}\text{\hspace{0.05em}}{\stackrel{\xaf}{x}}^{{\rm T}}\\ =\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\left({x}_{t}-\stackrel{\xaf}{x}\right){\left({x}_{t}-\stackrel{\xaf}{x}\right)}^{{\rm T}}=\stackrel{\xaf}{S}.\end{array}$

Appendix B. Goodness-of-Fit Test for Regime-Switching Models

In this Appendix, we state the goodness-of-fit test, which can be performed to assess the suitability of a regime-switching model as well as to select the optimal number of regimes, ${l}^{\mathrm{*}}$ . The proposed test, based on the work of [39] , [31] and [37] , uses Rosenblatt’s transform. For conciseness, we detail the implementation for two-dimensional Gaussian regime-switching models, but the approach is easily generalized.

B1. Conditional Distribution Functions and Rosenblatt’s Transform

Let $i\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ be fixed and ${R}_{i}$ be a random vector with density ${f}_{i}$ . For any $q\in \left\{\mathrm{1,}\cdots \mathrm{,}d\right\}$ , denote by ${f}_{i\mathrm{,1:}q}$ the density of $\left({R}_{i}^{\left(1\right)}\mathrm{,}\cdots \mathrm{,}{R}_{i}^{\left(q\right)}\right)$ , and by ${f}_{i\mathrm{,}q}$ the density of ${R}_{i}^{\left(q\right)}$ given $\left({R}_{i}^{\left(1\right)}\mathrm{,}\cdots \mathrm{,}{R}_{i}^{\left(q-1\right)}\right)$ . Further denote by ${F}_{i\mathrm{,}q}$ the distribution function associated with density ${f}_{i\mathrm{,}q}$ . By convention, ${f}_{i\mathrm{,1}}$ denotes the unconditional density of ${R}_{i}^{\left(1\right)}$ . Then, Rosenblatt’s transform

$x\mapsto {T}_{i}\left(x\right)={\left({F}_{i\mathrm{,1}}\left({x}^{\left(1\right)}\right)\mathrm{,}{F}_{i\mathrm{,2}}\left({x}^{\left(1\right)}\mathrm{,}{x}^{\left(2\right)}\right)\mathrm{,}\cdots \mathrm{,}{F}_{i\mathrm{,}d}\left({x}^{\left(1\right)}\mathrm{,}\cdots \mathrm{,}{x}^{\left(d\right)}\right)\right)}^{{\rm T}}$

is such that ${T}_{i}\left({R}_{i}\right)$ is uniformly distributed in ${\left[\mathrm{0,1}\right]}^{d}$ .

For example, if ${f}_{i}$ is the density of a bivariate Gaussian distribution with mean ${\mu}_{i}$ and covariance matrix

${\Sigma}_{i}=\left(\begin{array}{cc}{v}_{i}^{\left(1\right)}& {\rho}_{i}\sqrt{{v}_{i}^{\left(1\right)}{v}_{i}^{\left(2\right)}}\\ {\rho}_{i}\sqrt{{v}_{i}^{\left(1\right)}{v}_{i}^{\left(2\right)}}& {v}_{i}^{\left(2\right)}\end{array}\right),$

${f}_{i\mathrm{,2}}$ is the density of a Gaussian distribution with mean ${\mu}_{i}^{\left(2\right)}+{\beta}_{i}\left({y}_{i}^{\left(1\right)}-{\mu}_{i}^{\left(1\right)}\right)$ and variance ${v}_{i}^{\left(2\right)}\left(1-{\rho}_{i}^{2}\right)$ , with ${\beta}_{i}={\rho}_{i}\sqrt{{v}_{i}^{\left(2\right)}/{v}_{i}^{\left(1\right)}}$ .
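This bivariate Gaussian case can be sketched as follows (Python standard library only; the helper `norm_cdf` and the function name are ours):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rosenblatt_bivariate(x, mu, v1, v2, rho):
    """Rosenblatt transform T_i for a bivariate Gaussian with means mu=(mu1, mu2),
    variances (v1, v2) and correlation rho, following the example above."""
    u1 = norm_cdf((x[0] - mu[0]) / math.sqrt(v1))
    beta = rho * math.sqrt(v2 / v1)                  # beta_i = rho * sqrt(v2 / v1)
    cond_mean = mu[1] + beta * (x[0] - mu[0])        # conditional mean of the 2nd coordinate
    cond_var = v2 * (1.0 - rho ** 2)                 # conditional variance
    u2 = norm_cdf((x[1] - cond_mean) / math.sqrt(cond_var))
    return u1, u2
```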

However, for regime-switching random walk models, past returns must also be included in the conditioning information set. For any ${x}^{\left(1\right)}\mathrm{,}\cdots \mathrm{,}{x}^{\left(d\right)}\in \mathbb{R}$ , the (d-dimensional) Rosenblatt’s transform ${\Psi}_{t}$ corresponding to the density (5) conditional on ${x}_{1}\mathrm{,}\cdots \mathrm{,}{x}_{t-1}\in {\mathbb{R}}^{d}$ is given by

${\Psi}_{t}^{\left(1\right)}\left({x}_{t}^{\left(1\right)}\right)={\Psi}_{t}^{\left(1\right)}\left({x}_{1}\mathrm{,}\cdots \mathrm{,}{x}_{t-1}\mathrm{,}{x}_{t}^{\left(1\right)}\right)={\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{W}_{t-1}\left(i\right){F}_{i\mathrm{,1}}\left({x}_{t}^{(1)}\right)$

and

$\begin{array}{c}{\Psi}_{t}^{\left(q\right)}\left({x}_{t}^{\left(1\right)},\cdots ,{x}_{t}^{\left(q\right)}\right)={\Psi}_{t}^{\left(q\right)}\left({x}_{1},\cdots ,{x}_{t-1},{x}_{t}^{\left(1\right)},\cdots ,{x}_{t}^{\left(q\right)}\right)\\ =\frac{{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{W}_{t-1}\left(i\right){f}_{i,1:q-1}\left({x}_{t}^{\left(1\right)},\cdots ,{x}_{t}^{\left(q-1\right)}\right){F}_{i,q}\left({x}_{t}^{\left(q\right)}\right)}{{\displaystyle \underset{i=1}{\overset{l}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{W}_{t-1}\left(i\right){f}_{i,1:q-1}\left({x}_{t}^{\left(1\right)},\cdots ,{x}_{t}^{\left(q-1\right)}\right)}.\end{array}$

for $q\in \left\{\mathrm{2,}\cdots \mathrm{,}d\right\}$ .

Suppose ${R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{n}$ is a sample of size n of d-dimensional vectors drawn from a joint (continuous) distribution $P$ . Also, let $\mathcal{P}$ be the parametric family of Gaussian regime-switching models with $l$ regimes. Formally, the hypothesis to be tested is

${\mathcal{H}}_{0}\mathrm{:}P\in \mathcal{P}=\left\{{P}_{\theta}\mathrm{;}\theta \in \Theta \right\}\text{\hspace{1em}}\text{vs}\text{\hspace{1em}}{\mathcal{H}}_{1}\mathrm{:}P\notin \mathcal{P}$

Under the null, it follows that

${U}_{1}={\Psi}_{1}\left({R}_{1},\theta \right),{U}_{2}={\Psi}_{2}\left({R}_{1},{R}_{2},\theta \right),\cdots ,{U}_{n}={\Psi}_{n}\left({R}_{1},\cdots ,{R}_{n},\theta \right)$

are independent and uniformly distributed over ${\left[\mathrm{0,1}\right]}^{d}$ , where ${\Psi}_{1}\left(\cdot \mathrm{,}\theta \right)\mathrm{,}\cdots \mathrm{,}{\Psi}_{n}\left(\cdot \mathrm{,}\theta \right)$ are the Rosenblatt’s transforms conditional on the set of parameters $\theta \in \Theta $ .

Since $\theta $ is unknown, it must be estimated by some ${\theta}_{n}$ . Then, the pseudo-observations, ${\stackrel{^}{U}}_{1}={\Psi}_{1}\left({R}_{1},{\theta}_{n}\right),\cdots ,{\stackrel{^}{U}}_{n}={\Psi}_{n}\left({R}_{1},\cdots ,{R}_{n},{\theta}_{n}\right)$ are approximately uniformly distributed over ${\left[\mathrm{0,1}\right]}^{d}$ and approximately independent. We next propose a test statistic based on these pseudo-observations.

B2. Test Statistic

The test statistic builds on the following empirical process:

$\begin{array}{l}{D}_{n}\left(u\right)=\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}{\displaystyle \underset{q=1}{\overset{d}{\prod}}}\mathbb{I}\left({\stackrel{^}{U}}_{t}^{\left(q\right)}\le {u}^{\left(q\right)}\right),\\ u\equiv \left({u}^{\left(1\right)},\cdots ,{u}^{\left(d\right)}\right)\in {\left[0,1\right]}^{d}.\end{array}$

To test ${\mathcal{H}}_{0}$ against ${\mathcal{H}}_{1}$ , we propose a Cramér-von Mises type statistic:

$\begin{array}{c}{S}_{n}\equiv {B}_{n}\left({\stackrel{^}{U}}_{1},\cdots ,{\stackrel{^}{U}}_{n}\right)\\ =n{\displaystyle {\int}_{{\left[0,1\right]}^{d}}}{\left\{{D}_{n}\left(u\right)-{\displaystyle \underset{q=1}{\overset{d}{\prod}}}{u}^{\left(q\right)}\right\}}^{2}\text{d}u\\ =\frac{1}{n}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}{\displaystyle \underset{k=1}{\overset{n}{\sum}}}{\displaystyle \underset{q=1}{\overset{d}{\prod}}}\left\{1-\mathrm{max}\left({\stackrel{^}{U}}_{t}^{\left(q\right)},{\stackrel{^}{U}}_{k}^{\left(q\right)}\right)\right\}-\frac{1}{{2}^{d-1}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}{\displaystyle \underset{q=1}{\overset{d}{\prod}}}\left(1-{\stackrel{^}{U}}_{t}^{\left(q\right)2}\right)+\frac{n}{{3}^{d}}.\end{array}$

Since ${\stackrel{^}{U}}_{i}$ is approximately uniformly distributed on ${\left[\mathrm{0,1}\right]}^{d}$ under the null hypothesis, large values of ${S}_{n}$ should lead to rejection of the null hypothesis. Unfortunately, the limiting distribution of the test statistic depends on the unknown parameter set, $\theta $ . Since it is impossible to construct tables, we use a different methodology, namely the parametric bootstrap, to compute P-values. The validity of the parametric bootstrap approach has been shown for a wide range of assumptions in [31] . These results were recently extended to dynamic models [37] , including regime-switching random walks.
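The closed-form expression for ${S}_{n}$ above avoids numerical integration; a sketch (NumPy assumed, name ours):

```python
import numpy as np

def cramer_von_mises(U):
    """Closed-form Cramer-von Mises statistic S_n from pseudo-observations U, shape (n, d)."""
    n, d = U.shape
    # Double-sum term: (1/n) sum_{t,k} prod_q {1 - max(U_t^(q), U_k^(q))}
    M = np.maximum(U[:, None, :], U[None, :, :])          # (n, n, d) pairwise maxima
    term1 = np.prod(1.0 - M, axis=2).sum() / n
    # Single-sum term: (1/2^(d-1)) sum_t prod_q (1 - U_t^(q)^2)
    term2 = np.prod(1.0 - U ** 2, axis=1).sum() / 2 ** (d - 1)
    return term1 - term2 + n / 3 ** d
```

For $d=1$ and a single observation ${\stackrel{^}{U}}_{1}=0.5$ , the statistic reduces to the familiar one-dimensional Cramér-von Mises value $1/12$ , which makes a simple check.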

B3. Parametric Bootstrap

・ For a given number of regimes, estimate parameters with ${\theta}_{n}$ computed from the EM algorithm applied to $\left({R}_{1}\mathrm{,}\cdots \mathrm{,}{R}_{n}\right)$ .

・ Compute the test statistic,

${S}_{n}={B}_{n}\left({\stackrel{^}{U}}_{1},\cdots ,{\stackrel{^}{U}}_{n}\right),$

from the estimated pseudo-observations $\hat{U}_i = \Psi_i(R_1,\dots,R_i,\theta_n)$, for $i\in\{1,\dots,n\}$.

・ For some large integer $N$ (say 1000), repeat the following steps for every $k\in \left\{\mathrm{1,}\cdots \mathrm{,}N\right\}$ :

- Generate a random sample $\left\{{R}_{1}^{k}\mathrm{,}\cdots \mathrm{,}{R}_{n}^{k}\right\}$ from distribution ${P}_{{\theta}_{n}}$ .

- Compute ${\theta}_{n}^{k}$ by applying the EM algorithm to the simulated sample, $\left\{{R}_{1}^{k}\mathrm{,}\cdots \mathrm{,}{R}_{n}^{k}\right\}$ .

- Let $\hat{U}_i^k = \Psi_i(R_1^k,\dots,R_i^k,\theta_n^k)$ for $i\in\{1,\dots,n\}$, and finally compute

$S_n^k = B_n\left(\hat{U}_1^k,\dots,\hat{U}_n^k\right).$

Then, an approximate P-value for the test based on the Cramér-von Mises statistic ${S}_{n}$ is given by

$\frac{1}{N}\sum_{k=1}^{N}\mathbb{I}\left(S_n^k > S_n\right).$
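The procedure depends on the model only through three operations: estimation, simulation from $P_{\theta_n}$, and evaluation of $B_n$. A generic sketch with these supplied as callables (the names `fit`, `simulate`, and `stat` are ours, standing in for the EM algorithm, sampling from $P_{\theta_n}$, and the Cramér-von Mises statistic):

```python
import numpy as np

def bootstrap_pvalue(stat, fit, simulate, data, N=1000, seed=None):
    """Approximate P-value of a goodness-of-fit statistic by parametric
    bootstrap.  stat(sample, theta) -> scalar; fit(sample) -> theta;
    simulate(theta, n, rng) -> synthetic sample of size n."""
    rng = np.random.default_rng(seed)
    theta = fit(data)                      # step 1: estimate on real data
    s_obs = stat(data, theta)              # step 2: observed S_n
    exceed = 0
    for _ in range(N):                     # step 3: N bootstrap replicates
        boot = simulate(theta, len(data), rng)
        s_k = stat(boot, fit(boot))        # re-estimate on each replicate
        exceed += s_k > s_obs
    return exceed / N                      # (1/N) sum_k I(S_n^k > S_n)
```

Note that re-estimating the parameters on every bootstrap sample, rather than reusing $\theta_n$, is essential for the validity results of [31].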

Appendix C. Optimal Hedging

C1. Optimal Hedging Algorithm

For any $d$-dimensional vector $x$, let $D(x)$ be the diagonal matrix with diagonal elements $x^{(1)},\dots,x^{(d)}$, and let $\exp(x)$ denote the vector with components $\mathrm{e}^{x^{(j)}}$, $j=1,\dots,d$. We assume a constant discount factor, $\mathrm{e}^{-r_n}$, with $r_n$ the daily continuously compounded discount rate corresponding to the maturity, $n$, of $C$. Next, for every $i\in\{1,\dots,l\}$, set

$\begin{array}{l}\kappa(i) = \int\big\{\exp(y-r1)-1\big\}\, f_i(y)\,\mathrm{d}y, \\ B(i) = \int\big\{\exp(y-r1)-1\big\}\big\{\exp(y-r1)-1\big\}^{\mathrm{T}} f_i(y)\,\mathrm{d}y. \end{array}$
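When each regime density $f_i$ is Gaussian, say $Y\sim N(\mu_i,\Sigma_i)$ (a common specification, though the formulas above hold for any $f_i$; the Gaussian assumption here is ours, for illustration), these integrals are available in closed form via lognormal moments, $E[\mathrm{e}^{Y^{(q)}}]=\mathrm{e}^{\mu_q+\Sigma_{qq}/2}$. A sketch:

```python
import numpy as np

def kappa_B_gaussian(mu, Sigma, r):
    """kappa(i) and B(i) for a Gaussian regime Y ~ N(mu, Sigma),
    where mu is (d,) and Sigma is (d, d); r is the one-period rate.
    Uses E[exp(Y_q - r)] = exp(mu_q + Sigma_qq / 2 - r)."""
    m = np.exp(mu + 0.5 * np.diag(Sigma) - r)   # E[exp(Y_q - r)]
    kappa = m - 1.0
    # E[(e^{Y_q-r}-1)(e^{Y_p-r}-1)] = m_q m_p e^{Sigma_qp} - m_q - m_p + 1
    B = np.outer(m, m) * np.exp(Sigma) - m[:, None] - m[None, :] + 1.0
    return kappa, B
```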

According to [18] , if ${\varphi}_{t}$ denotes the number of shares of the $d$ risky assets in the portfolio at the beginning of period $t-1$ , and ${V}_{t}$ is the value of the portfolio at period $t$ , the choice of ${V}_{0}$ and ${\varphi}_{1}\mathrm{,}\cdots \mathrm{,}{\varphi}_{n}$ minimizing the mean square hedging error for a given payoff $C$ at maturity $n$ is ${V}_{0}={C}_{0}\left({S}_{0},{\tau}_{0}\right)$ and

$\varphi_t = \alpha_t\left(S_{t-1},\tau_{t-1}\right) - V_{t-1} D^{-1}\left(S_{t-1}\right)\rho_{t+1}\left(\tau_{t-1}\right).$ (15)

First, let $\rho_{n+1}(i) = \big\{\sum_{j=1}^{l} Q_{ij} B(j)\big\}^{-1}\big\{\sum_{j=1}^{l} Q_{ij}\kappa(j)\big\}$. Then, for all $t=n,\dots,1$ and every $i\in\{1,\dots,l\}$,

$\gamma_t(i) = \sum_{j=1}^{l} Q_{ij}\,\gamma_{t+1}(j)\big\{1-\rho_{t+1}(i)^{\mathrm{T}}\kappa(j)\big\},$ (16)

$\rho_t(i) = \Big\{\sum_{j=1}^{l} Q_{ij}\,\gamma_t(j)B(j)\Big\}^{-1}\Big\{\sum_{j=1}^{l} Q_{ij}\,\gamma_t(j)\kappa(j)\Big\},$ (17)

$\begin{array}{rl} C_{t-1}(s,i) = & \dfrac{\mathrm{e}^{-r_n}}{\gamma_t(i)}\displaystyle\sum_{j=1}^{l} Q_{ij}\,\gamma_{t+1}(j)\int C_t\{D(s)\exp(x),j\} \\ & \times \big\{1-\rho_{t+1}(i)^{\mathrm{T}}\big(\exp(x-r_n 1)-1\big)\big\} f_j(x)\,\mathrm{d}x, \end{array}$ (18)

$\begin{array}{rl} \alpha_t(s,i) = & \mathrm{e}^{-r_n} D^{-1}(s)\Big\{\displaystyle\sum_{j=1}^{l} Q_{ij}\,\gamma_{t+1}(j)B(j)\Big\}^{-1}\displaystyle\sum_{j=1}^{l} Q_{ij}\,\gamma_{t+1}(j) \\ & \times \displaystyle\int C_t\{D(s)\exp(x),j\}\big\{\exp(x-r_n 1)-1\big\} f_j(x)\,\mathrm{d}x. \end{array}$ (19)

Equations (16) and (17) can be evaluated offline, independently of the payoff $C$. This is not the case for (18) and (19), whose integrals (expectations) depend on $C$ and must be computed numerically. This can be achieved with the simulation/interpolation method proposed in [17], which we briefly describe next.
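As an illustration of the offline step, the backward recursions (16)-(17), together with the terminal value $\rho_{n+1}$, can be sketched as follows (Python/NumPy; the array shapes and function name are our choices):

```python
import numpy as np

def offline_recursions(Q, kappa, B, n):
    """Backward recursions (16)-(17) for l regimes and d assets.
    Q: (l, l) transition matrix; kappa: (l, d) with rows kappa(j);
    B: (l, d, d) with slices B(j).  Returns the lists
    [gamma_1, ..., gamma_{n+1}] and [rho_1, ..., rho_{n+1}]."""
    l, d = kappa.shape
    gamma = np.ones(l)                                    # gamma_{n+1} = 1
    # rho_{n+1}(i) = {sum_j Q_ij B(j)}^{-1} {sum_j Q_ij kappa(j)}
    rho = np.stack([np.linalg.solve(np.einsum('j,jab->ab', Q[i], B),
                                    Q[i] @ kappa) for i in range(l)])
    gammas, rhos = [gamma], [rho]
    for t in range(n, 0, -1):
        # (16): gamma_t(i) = sum_j Q_ij gamma_{t+1}(j){1 - rho_{t+1}(i)^T kappa(j)}
        gamma = np.array([np.sum(Q[i] * gammas[-1] * (1.0 - kappa @ rhos[-1][i]))
                          for i in range(l)])
        # (17): rho_t(i) uses gamma_t(j) in both weighted sums
        rho = np.stack([np.linalg.solve(np.einsum('j,jab->ab', Q[i] * gamma, B),
                                        (Q[i] * gamma) @ kappa) for i in range(l)])
        gammas.append(gamma)
        rhos.append(rho)
    return gammas[::-1], rhos[::-1]
```

With a single regime ($l=1$, $d=1$) the recursion collapses to $\rho_t \equiv \kappa/B$ and $\gamma_t = (1-\kappa^2/B)^{\,n+1-t}$, a useful sanity check.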

C2. Monte Carlo and Interpolation

Expressions (18) and (19) are of the form

$g_t(s,i) = \sum_{j=1}^{l} Q_{ij}\int g_{t+1}\{\pi(s,x),j\}\, w_t(x,i,j)\, f_j(x)\,\mathrm{d}x, \qquad t=n-1,\dots,0,$

where $w_0,\dots,w_n$ and $g_n$ are known functions, and $\pi^{(k)}(s,x)=s^{(k)}\mathrm{e}^{x^{(k)}-r}$, $k=1,\dots,d$. The methodology proposed in [47] for American options and in [17] for hedging is to use a Monte Carlo method to approximate $g_t(s,i)$ at all points $s$ of some finite grid $G$. Since the values of $g_{t+1}$ are approximated at these discrete points only, an interpolation method is necessary to evaluate $g_t$.

To estimate ${g}_{t}\left(s\mathrm{,}i\right)$ for every $s\in G$ , one can:

・ Fix $i\in \left\{\mathrm{1,}\cdots \mathrm{,}l\right\}$ .

・ For $k=1,\cdots ,N$ , repeat the following steps:

- For every $j\in\{1,\dots,l\}$, generate $X_{kj}\sim f_j$.

・ For every $s\in G$ , set

$\hat{g}_t(s,i) = \frac{1}{N}\sum_{j=1}^{l}\sum_{k=1}^{N} Q_{ij}\,\hat{g}_{t+1}\{\pi(s,X_{kj}),j\}\, w_t(X_{kj},i,j),$

where $\hat{g}_{t+1}\{\pi(s,X_{kj}),j\}$ is obtained by linear interpolation from the values of $\hat{g}_{t+1}$ on the grid $G$.
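The steps above can be sketched for one risky asset ($d=1$), with the terminal payoff, the samplers, and the weights passed in as callables (our simplification; `draw` and `w` stand in for sampling from $f_j$ and evaluating $w_t$, and `np.interp` provides the linear interpolation):

```python
import numpy as np

def sim_interp(payoff, grid, Q, draw, w, n, N=5000, r=0.0, seed=0):
    """Simulation/interpolation backward induction for g_t(s, i) with
    one risky asset and l regimes.  payoff(s) gives g_n(s, .);
    draw(j, N, rng) samples N log-returns X_kj ~ f_j;
    w(x, i, j) evaluates the weight w_t(x, i, j)."""
    rng = np.random.default_rng(seed)
    l = Q.shape[0]
    g = np.tile(payoff(grid), (l, 1))                     # g_n on the grid
    for t in range(n - 1, -1, -1):
        X = [draw(j, N, rng) for j in range(l)]           # X_kj ~ f_j
        g_new = np.zeros_like(g)
        for i in range(l):
            for j in range(l):
                s_next = grid[None, :] * np.exp(X[j][:, None] - r)  # pi(s, x)
                vals = np.interp(s_next, grid, g[j])      # interpolate g_{t+1}
                g_new[i] += Q[i, j] * np.mean(vals * w(X[j], i, j)[:, None],
                                              axis=0)
        g = g_new
    return g                                              # g_0(s, i), s in G
```

Note that `np.interp` clamps points falling outside the grid to the boundary values, so $G$ must be taken wide enough to cover the simulated prices.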

Appendix D. Call Results

Table 8. Root-mean-squared pricing error and pricing bias for 21 trading days to expiration calls for all selected strikes from December 31st 1999 to January 31st 2012.

Table 9. Root-mean-squared hedging error for the different hedging strategies applied to 21 trading days to expiration calls for all selected strikes from December 31st 1999 to January 31st 2012.

Table 10. Descriptive statistics and various performance metrics of the excess PNL, $\beta_n V_n - \beta_n C$, for the two-regime hedging strategies applied to 21 trading days to expiration calls for all selected strikes from December 31st 1999 to January 31st 2012. The mean and volatility statistics are annualized. Since all returns are already discounted, the Sharpe and Omega ratios are computed by assuming a zero threshold. The value-at-risk is estimated as an empirical quantile.

References

[1] Black, F. and Scholes, M. (1973) The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81, 637-654.

https://doi.org/10.1086/260062

[2] Fama, E.F. (1965) The Behavior of Stock-Market Prices. Journal of Business, 38, 34-105.

https://doi.org/10.1086/294743

[3] Mandelbrot, B. (1963) The Variation of Certain Speculative Prices. Journal of Business, 36, 307-332.

https://doi.org/10.1086/294632

[4] Schwert, G.W. (1989) Why Does Stock Market Volatility Change over Time? Journal of Finance, 44, 1115-1153.

https://doi.org/10.1111/j.1540-6261.1989.tb02647.x

[5] Boyle, P.P. and Emanuel, D. (1980) Discretely Adjusted Option Hedges. Journal of Financial Economics, 8, 259-282.

https://doi.org/10.1016/0304-405X(80)90003-3

[6] Gilster, J.E.J. (1990) The Systematic Risk of Discretely Rebalanced Option Hedges. Journal of Financial and Quantitative Analysis, 25, 507-516.

https://doi.org/10.2307/2331013

[7] Mello, A.S. and Neuhaus, H.J. (1998) A Portfolio Approach to Risk Reduction in Discretely Rebalanced Option Hedges. Management Science, 44, 921-934.

https://doi.org/10.1287/mnsc.44.7.921

[8] Buraschi, A. and Jackwerth, J. (2001) The Price of a Smile: Hedging and Spanning in Option Markets. The Review of Financial Studies, 14, 495-527.

https://doi.org/10.1093/rfs/14.2.495

[9] Owen, M.P. (2002) Utility Based Optimal Hedging in Incomplete Markets. Annals of Applied Probability, 12, 691-709.

https://doi.org/10.1214/aoap/1026915621

[10] Potters, M., Bouchaud, J.P. and Sestovic, D. (2001) Hedged Monte-Carlo: Low Variance Derivative Pricing with Objective Probabilities. Journal of Physics A, 289, 517-525.

https://doi.org/10.1016/S0378-4371(00)00554-9

[11] Pochart, B. and Bouchaud, J.P. (2004) Option Pricing and Hedging with Minimum Local Expected Shortfall. Quantitative Finance, 4, 607-618.

https://doi.org/10.1080/14697680400000042

[12] Cox, J. and Ross, S. (1976) The Valuation of Options for Alternative Stochastic Processes. Journal of Financial Economics, 3, 145-166.

https://doi.org/10.1016/0304-405X(76)90023-4

[13] Harrison, J.M. and Kreps, D.M. (1979) Martingales and Arbitrage in Multiperiod Securities Markets. Journal of Economic Theory, 20, 381-408.

https://doi.org/10.1016/0022-0531(79)90043-7

[14] Follmer, H. and Schweizer, M. (1991) Hedging of Contingent Claims under Incomplete Information. In: Davis, M. and Elliott, R., Eds., Applied Stochastic Analysis, Volume 5 of Stochastics Monographs, Gordon and Breach Science Publishers, 389-414.

[15] Schweizer, M. (1992) Mean-Variance Hedging for General Claims. Annals of Applied Probability, 2, 171-179.

https://doi.org/10.1214/aoap/1177005776

[16] Schweizer, M. (1995) Variance-Optimal Hedging in Discrete Time. Mathematics of Operations Research, 20, 1-32.

https://doi.org/10.1287/moor.20.1.1

[17] Papageorgiou, N., Rémillard, B. and Hocquard, A. (2008) Replicating the Properties of Hedge Fund Returns. Journal of Alternative Investments, 11, 8-38.

https://doi.org/10.3905/jai.2008.712595

[18] Rémillard, B. and Rubenthaler, S. (2013) Optimal Hedging in Discrete Time. Quantitative Finance, 13, 819-825.

https://doi.org/10.1080/14697688.2012.745012

[19] Christoffersen, P. and Jacobs, K. (2004) Which GARCH Model for Option Valuation? Management Science, 50, 1204-1221.

https://doi.org/10.1287/mnsc.1040.0276

[20] Hull, J.C. and White, A. (1987) The Pricing of Options on Assets with Stochastic Volatilities. The Journal of Finance, 42, 281-300.

https://doi.org/10.1111/j.1540-6261.1987.tb02568.x

[21] Wiggins, J.B. (1987) Option Values under Stochastic Volatility: Theory and Empirical Estimates. Journal of Financial Economics, 19, 351-372.

https://doi.org/10.1016/0304-405X(87)90009-2

[22] Heston, S. (1993) A Closed-Form Solution for Options with Stochastic Volatility with Application to Bond and Currency Options. Review of Financial Studies, 6, 327-343.

https://doi.org/10.1093/rfs/6.2.327

[23] Kou, S.G. (2002) A Jump-Diffusion Model for Option Pricing. Management Science, 48, 1086-1101.

https://doi.org/10.1287/mnsc.48.8.1086.166

[24] Kou, S.G. and Wang, H. (2004) Option Pricing under a Double Exponential Jump Diffusion Model. Management Science, 50, 1178-1192.

https://doi.org/10.1287/mnsc.1030.0163

[25] Hamilton, J.D. (1990) Analysis of Time Series Subject to Changes in Regime. Journal of Econometrics, 45, 39-70.

https://doi.org/10.1016/0304-4076(90)90093-9

[26] Kim, C.J., Piger, J. and Startz, R. (2008) Estimation of Markov Regime-Switching Regression Models with Endogenous Switching. Journal of Econometrics, 143, 263-273.

https://doi.org/10.1016/j.jeconom.2007.10.002

[27] Bansal, R. and Zhou, H. (2002) Term Structure of Interest Rates with Regime Shifts. The Journal of Finance, 57, 1997-2043.

https://doi.org/10.1111/0022-1082.00487

[28] So, M.K.P., Lam, K. and Li, W.K. (1998) A Stochastic Volatility Model with Markov Switching. Journal of Business & Economic Statistics, 16, 244-253.

https://doi.org/10.1080/07350015.1998.10524758

[29] Fong, W.M. and See, K.H. (2001) Modelling the Conditional Volatility of Commodity Index Futures as a Regime Switching Process. Journal of Applied Econometrics, 16, 133-163.

https://doi.org/10.1002/jae.590

[30] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39, 1-38.

[31] Genest, C. and Rémillard, B. (2008) Validity of the Parametric Bootstrap for Goodness-of-Fit Testing in Semiparametric Models. Annales de l’Institut Henri Poincaré, 44, 1096-1127.

https://doi.org/10.1214/07-AIHP148

[32] Carr, P. and Wu, L. (2008) Variance Risk Premiums. Review of Financial Studies, 22, 1311-1341.

https://doi.org/10.1093/rfs/hhn038

[33] French, K.R., Schwert, G. and Stambaugh, R.F. (1987) Expected Stock Returns and Volatility. Journal of Financial Economics, 19, 3-29.

https://doi.org/10.1016/0304-405X(87)90026-2

[34] Glosten, L.R., Jagannathan, R. and Runkle, D.E. (1993) On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks. The Journal of Finance, 48, 1779-1801.

https://doi.org/10.1111/j.1540-6261.1993.tb05128.x

[35] Bakshi, G. and Kapadia, N. (2003) Delta-Hedged Gains and the Negative Market Volatility Risk Premium. Review of Financial Studies, 16, 527-566.

https://doi.org/10.1093/rfs/hhg002

[36] Cappé, O., Moulines, E. and Rydén, T. (2005) Inference in Hidden Markov Models. Springer Series in Statistics. Springer, New York.

[37] Rémillard, B. (2011) Validity of the Parametric Bootstrap for Goodness-of-Fit Testing in Dynamic Models. Technical Report. SSRN Working Paper Series No. 1966476. Social Science Research Network, Amsterdam.

[38] Durbin, J. (1973) Weak Convergence of the Sample Distribution Function When Parameters Are Estimated. Annals of Statistics, 1, 279-290.

https://doi.org/10.1214/aos/1176342365

[39] Diebold, F.X., Gunther, T.A. and Tay, A.S. (1998) Evaluating Density Forecasts with Applications to Financial Risk Management. International Economic Review, 39, 863-883.

https://doi.org/10.2307/2527342

[40] Bai, J. (2003) Testing Parametric Conditional Distributions of Dynamic Models. The Review of Economics and Statistics, 85, 531-549.

https://doi.org/10.1162/003465303322369704

[41] Bai, J. and Chen, Z. (2008) Testing Multivariate Distributions in GARCH Models. Journal of Econometrics, 143, 19-36.

https://doi.org/10.1016/j.jeconom.2007.08.012

[42] Bollerslev, T., Litvinova, J. and Tauchen, G. (2006) Leverage and Volatility Feedback Effects in High-Frequency Data. Journal of Financial Econometrics, 4, 353-384.

https://doi.org/10.1093/jjfinec/nbj014

[43] Rémillard, B. and Rubenthaler, S. (2016) Option Pricing and Hedging for Regime-Switching Geometric Brownian Motion Models. SSRN Working Paper Series No. 2599064. Social Science Research Network, Amsterdam.

[44] Fleming, J., Kirby, C. and Ostdiek, B. (2001) The Economic Value of Volatility Timing. The Journal of Finance, 56, 329-352.

https://doi.org/10.1111/0022-1082.00327

[45] Merton, R.C. (1973) An Intertemporal Capital Asset Pricing Model. Econometrica, 41, 867-887.

https://doi.org/10.2307/1913811

[46] Rémillard, B. (2013) Statistical Methods for Financial Engineering. Chapman and Hall/CRC Financial Mathematics Series. Taylor & Francis Group, Abingdon-on-Thames.

[47] Del Moral, P., Rémillard, B. and Rubenthaler, S. (2012) Monte Carlo Approximations of American Options that Preserve Monotonicity and Convexity. In: Carmona, R., Del Moral, P., Hu, P. and Oudjane, N., Eds., Numerical Methods in Finance, Springer, New York, 117-145.

https://doi.org/10.1007/978-3-642-25746-9_4