Volatility is a key ingredient in derivative pricing, portfolio optimization and value-at-risk analysis. Hence, accurate estimation and good modeling of stock price volatility are of central interest in financial applications. The valuation of financial instruments is complicated by two characteristics of the volatility process. First, it is generally acknowledged that the volatility of many financial return series is not constant over time and exhibits prolonged periods of high and low volatility, often referred to as volatility clustering. Second, volatility is not directly observable1. Two classes of models have been developed to capture this time-varying, autocorrelated volatility process: GARCH models and Stochastic Volatility (SV) models. GARCH models define the time-varying variance as a deterministic function of past squared innovations and lagged conditional variances, whereas the variance in the Stochastic Volatility model is modeled as an unobserved component that follows some stochastic process. Stochastic volatility models are also attractive because they are close to the models often used in financial theory to represent the behavior of financial prices. Furthermore, their statistical properties are easy to derive using well-known results on log-normal distributions. Finally, compared with the more popular GARCH models, they capture the main empirical properties often observed in daily series of financial returns (see, for example, Carnero et al.). For surveys of the extensive GARCH literature we refer to Bollerslev et al., Bera and Higgins, and Bollerslev et al., and for stochastic volatility we refer to Taylor, Ghysels et al., Shephard, and Broto and Ruiz. Both models are defined by their first and second moments. The Stochastic Volatility model introduced by Taylor provides an alternative to the GARCH model in accounting for time-varying and persistent volatility as well as for the leptokurtosis in financial return series.
Stochastic volatility models present two main advantages over ARCH models. The first is their solid theoretical background, as they can be interpreted as discretized versions of the stochastic volatility continuous-time models put forward by modern finance theory (see Hull and White). The second is their ability to generalize from univariate to multivariate series, as far as estimation and interpretation are concerned. On the other hand, stochastic volatility models are more difficult to estimate than ARCH models, because it is not easy to derive their exact likelihood function. For this reason, a number of econometric methods have been proposed to solve the estimation problem for stochastic volatility models.
1For a comprehensive review of volatility measures and their properties see Andersen, Bollerslev and Diebold, and for forecasting financial volatility see the survey by Poon and Granger.
The stochastic volatility model defines volatility as a logarithmic first-order autoregressive process. It is an alternative to the GARCH models, which rely on simultaneous modeling of the first and second moments. For certain financial time series, such as stock index returns, which have been shown to display high positive first-order autocorrelations, this constitutes an improvement in terms of efficiency; see Campbell et al. The volatility of daily stock index returns has been estimated with stochastic volatility models, but results have usually relied on extensive pre-modeling of these series, thus avoiding the problem of simultaneous estimation of the mean and variance. Koopman and Hol Uspensky proposed the Stochastic Volatility in Mean (SVM) model, which incorporates volatility as one of the determinants of the mean. This modification makes the model suitable for empirical applications relating the mean and variance of returns. The SVM model can be viewed as the SV counterpart of the ARCH-M model of Engle et al., with the main difference being that the ARCH-M model intends to estimate the relationship between expected returns and expected volatility, whereas the aim of the SVM model is to simultaneously estimate the ex ante relation between returns and volatility and the volatility feedback effect.
Another way of modeling financial time series is to define different states of the world, or regimes, and to allow the dynamic behavior of financial variables to depend on the regime that occurs at any given point in time. That means that certain properties of the time series, such as its mean, variance and/or autocorrelation, are different in different regimes. Regime switching models were first introduced by Goldfeld and Quandt to provide a simple way to model endogenously determined structural breaks or regime shifts in parameters. Hamilton generalizes this setting by allowing the mixing probability to be a time-varying function of the history of the data. To illustrate the importance of stochastic regime switching for financial time series, LeBaron, for example, shows that the autocorrelations of stock returns are related to the level of volatility of these returns. In particular, autocorrelations tend to be larger during periods of low volatility and smaller during periods of high volatility. The periods of low and high volatility can be interpreted as distinct regimes; or, put differently, the level of volatility can be regarded as the regime-determining process. In this setup, the level of volatility is not known with certainty, and what we can do is to make a sensible forecast of this level, and hence of the regimes that will occur in the future, by assigning probabilities to the occurrence of the different regimes.
Markov switching models have been found to provide a flexible framework to handle many features of asset returns. In particular, they allow for nonlinearities arising from persistent jumps in the model parameters and have several appealing features. First, they provide a convenient framework to endogenously identify regime shifts that are commonplace in financial data. Regimes are treated as latent processes which are not observable, but can be inferred from the estimation algorithm using observable data, such as the history of the asset’s returns. Second, as Markov switching models belong to the mixture-of-distributions class of stochastic processes, they are as versatile as mixture models in capturing salient features of financial data such as time-varying volatilities, skewness, and leptokurtosis. A detailed study of the statistical properties of Markov switching models by Timmerman shows that Markov switching models can indeed approximate general classes of density functions with a wide range of conditional moments. Ang and Bekaert show that Markov switching models with state-dependent means and variances can match exceedance correlations better than do standard GARCH models or bivariate jump diffusion processes.
Related to these two models, returns on equity markets have also been found to be characterized by jumps, and these jumps tend to occur at the same time across countries, implying that conditional correlations between international equity returns tend to be higher in periods of high market volatility or following large downside moves. Evidence on jumps is provided by Jorion, Akgiray and Booth, Bates, Bekaert et al., and Asgharian and Bengtsson2. For example, Asgharian and Bengtsson studied the jump spillover between equity indexes using a Bayesian approach and estimated the probabilities that jumps in large countries cause jumps or large returns in other countries. They also found significant evidence of jump spillover, which is particularly large between countries that belong to the same region and have similar industry structures3.
2For evidence on changing conditional correlations see, for instance, Ang and Chen, Longin and Solnik, Karolyi and Stulz, and Chakrabarti and Roll.
3Other studies used copula functions to examine diversification benefits and dependence between American and developed markets, as done by Chollete et al. and Buraschi et al.
4Our results fall in line with those of Kuester et al. Yet their study covers only the NASDAQ, while ours covers more markets and a larger sample period.
In this paper, we extend the existing literature by modeling international equity markets with two volatility models: the log-normal SV model and the two-regime switching model. The log-normal SV model is estimated by quasi-maximum likelihood with the Kalman filter, while the two-regime switching model is estimated by maximum likelihood with the Hamilton filter. The results provide new evidence on the dynamics of risk and return in equity markets and on the possible existence of regimes in these markets. Then, based on the one-day-ahead forecasted conditional volatility from each model, we calculate the one-day Value-at-Risk (VaR), and we backtest the results from each model using unconditional and conditional tests. We find that the VaR estimates are higher for the SV model than those obtained under the regime-switching model for all markets and over all horizons. The exception is the Japanese market, where the stochastic volatility model generates lower VaR values than the regime switching model, a characteristic that reflects the performance of the Japanese market during the sample period, when Japan was hit by a real estate bubble and a banking crisis that made the volatility in that market lower than those observed in other markets. Comparing the VaR measures obtained directly from the two models with those obtained from the unconditional return distribution, the two models provide smaller VaR measures. Finally, considering how Value-at-Risk behaves with the time horizon, VaR measures increase more slowly with horizon under the regime switching model than under the stochastic volatility model4.
The performance of both models is then backtested using conditional and unconditional tests, and we find that the Canadian equity market, represented by the S & P/TSX, performs the worst among all markets, while the DAX seems to be better modeled by the stochastic volatility model than by the regime switching model.
Our results deviate from those obtained in the above-mentioned literature in the following aspects: 1) the sample is longer than previously studied; 2) the previous literature focuses on either a single market or a few (e.g., Kuester et al.), while we cover several; 3) we provide a forecasted one-day-ahead volatility based on each model and consequently use it to calculate Value-at-Risk measures; and 4) we find that the Canadian and Japanese markets appear to have different features than those reported in previous results, whether in terms of the risk measures obtained from the two models or in terms of the suitability of each model when we backtest them.
The paper is organized as follows. Section 2 introduces the two models: the regime switching model and the stochastic volatility model. Section 3 describes the estimation methods. Section 4 describes the available data, presents the stylized facts of the return series and reports the estimation results from the two models. Section 5 provides the Value-at-Risk measures, Section 6 presents the backtesting results, and the final section concludes.
2. Models of Volatility
The empirical regularities of asset returns (volatility clustering, prolonged serial correlation in squared returns, heavy tails and persistence of volatility) suggest that the behavior of financial time series can be captured by a model which recognizes the time-varying nature of return volatility as follows:

yt = μt + σtεt, εt ~ NID(0, 1),

where μt represents the mean and depends on a constant a and regression coefficients on explanatory variables. The explanatory variables may also contain lagged exogenous and dependent variables. The disturbance term εt is IID with zero mean and unit variance, with a normal distribution being the usual assumption.
Following Shephard, models of changing volatility can be usefully partitioned into observation-driven and parameter-driven models, and both can be expressed in a parametric framework in which yt, conditional on the relevant information, follows a distribution with time-varying variance σt². In the first class, the autoregressive conditional heteroskedasticity (ARCH) models introduced by Engle are the most representative example. In the second class, σt² is a function of an unobserved or latent component. The log-normal stochastic volatility model created by Taylor is the simplest and best known example:

yt = εt exp(ht/2),

with yt | ht following a N(0, exp(ht)) distribution and ηt being the Gaussian innovation of the log-volatility process.
Here ht represents the log-volatility, which is unobserved but can be estimated using the observations. One interpretation of the latent ht is that it represents the random and uneven flow into financial markets of new information, which is difficult to model directly. The most popular model, from Taylor, puts

ht+1 = α + βht + ηt,

where εt and ηt are two independent Gaussian white noises, with variances 1 and σ²η, respectively. Due to the Gaussianity of ηt, this model is called a log-normal SV model. Although the assumption of Gaussianity of ηt can seem ad hoc at first sight, Andersen et al. show that the log-volatility process can be well approximated by a Normal distribution.
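As an illustration, the log-normal SV model above can be simulated directly. The following is a minimal sketch (function and parameter names are ours, not the paper's); it makes the two stylized facts visible: persistent log-volatility and leptokurtic returns.

```python
import numpy as np

def simulate_sv(n, alpha, beta, sigma_eta, seed=0):
    """Simulate the log-normal SV model:
       y_t = eps_t * exp(h_t / 2),  h_{t+1} = alpha + beta * h_t + eta_t,
       with eps_t ~ N(0, 1) and eta_t ~ N(0, sigma_eta^2) independent."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = alpha / (1.0 - beta)          # start at the unconditional mean of h_t
    for t in range(n - 1):
        h[t + 1] = alpha + beta * h[t] + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2.0) * rng.standard_normal(n)
    return y, h

y, h = simulate_sv(5000, alpha=-0.02, beta=0.97, sigma_eta=0.15)
```

With β close to one, the simulated log-volatility is strongly autocorrelated and the returns exhibit excess kurtosis, mimicking the clustering seen in daily index returns.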
Another possible interpretation of ht is that it characterizes the regime in which financial markets are operating, in which case it can be described by a discrete-valued variable. The most popular approach to modelling changes in regime is the class of Markov switching models introduced by Hamilton. In that case the model is

yt = σ(st)εt, with σ(st) = σ0(1 − st) + σ1st,

where st is a two-state first-order Markov chain which can take the values 0 and 1 and is independent of εt. The value of st, for all t, depends only on the last value st−1; for i, j = 0, 1:

Pr(st = j | st−1 = i) = pij.
The probabilities pij are called transition probabilities of moving from one state to the other. These transition probabilities are collected in the transition matrix P:

P = [ p00       1 − p00 ]
    [ 1 − p11   p11     ]
which fully describes the Markov chain; by construction, p01 = 1 − p00 and p10 = 1 − p11. A two-state Markov chain can be represented by a simple AR(1) process as follows:

st = (1 − p00) + (p00 + p11 − 1)st−1 + vt,
where vt is a martingale difference innovation, and the volatility equation can be written in the following way:

log σ(st) = log σ0 + (log σ1 − log σ0)st,

so that, substituting the AR(1) representation of st, the log-volatility follows a first-order autoregression driven by a discrete-valued noise,
which implies the same structure as the stochastic volatility model, but with a noise that can take only a finite set of values.
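The regime process above can be sketched in code. This is an illustration with names and parameter values of our choosing; it draws the two-state chain from its transition probabilities and builds the switching returns, whose state frequencies converge to the ergodic probability (1 − p00)/(2 − p00 − p11) implied by the AR(1) representation.

```python
import numpy as np

def simulate_switching(n, mu, sig0, sig1, p00, p11, seed=1):
    """Simulate y_t = mu + sigma(s_t) * eps_t, where s_t is a two-state
       first-order Markov chain with Pr(s_t=0 | s_{t-1}=0) = p00 and
       Pr(s_t=1 | s_{t-1}=1) = p11, independent of eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    s = np.empty(n, dtype=int)
    s[0] = 0
    for t in range(1, n):
        stay = p00 if s[t - 1] == 0 else p11   # probability of remaining in regime
        s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]
    sigma = np.where(s == 0, sig0, sig1)       # sigma(s_t) = sig0*(1-s_t) + sig1*s_t
    y = mu + sigma * rng.standard_normal(n)
    return y, s
```

With persistent regimes (p00, p11 near one), the simulated series shows the long quiet and turbulent spells characteristic of volatility clustering.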
3. Estimation Methods
A variety of estimation procedures have been proposed for stochastic volatility models, including for example the Generalized Method of Moments (GMM) used by Melino and Turnbull, the Quasi Maximum Likelihood (QML) approach followed by Harvey et al. and Ruiz, the Efficient Method of Moments (EMM) applied by Gallant et al., and Markov-Chain Monte Carlo (MCMC) procedures used by Jacquier et al. and Kim et al. In this paper, the parameters of the SV model are estimated by the exact maximum likelihood method using Monte Carlo importance sampling techniques. We refer the reader to Koopman and Hol Uspensky for more explanation. The likelihood function for the SV model can be constructed using simulation methods developed by Shephard and Pitt and Durbin and Koopman. For the SV model we can express the likelihood function as:
L(ψ) = p(y | ψ) = ∫ p(y, h | ψ) dh = ∫ p(y | h, ψ)p(h | ψ) dh,

where y = (y1, ..., yn)′ and h = (h1, ..., hn)′. An efficient way of evaluating such expressions is by using importance sampling; see Ripley, Chapter 5. A simulation device is required to sample from an importance density that is as close as possible to the true density p(h | y, ψ). A convenient choice for the importance density is the conditional Gaussian density, since in this case it is relatively straightforward to sample from it using simulation smoothers such as those developed by de Jong and Shephard and Durbin and Koopman. All models were estimated using programs written in the Ox language of Doornik using SsfPack by Koopman, Shephard and Doornik. The log-normal SV model is estimated by quasi-maximum likelihood with the Kalman filter, and the two-regime switching model is estimated by maximum likelihood with the Hamilton filter. The Ox programs were downloaded from http://personal.vu.nl/s.j.koopman/SJresearch.html.
The log-normal SV model is represented by the equations above, with εt and ηt independent Gaussian white noises whose variances are 1 and σ²η, respectively. The volatility equation is characterized by the constant parameter α, the autoregressive parameter β and the variance of the volatility noise σ²η. The mean is either imposed equal to zero or estimated with the empirical mean of the series. Since the specification of the conditional volatility is an autoregressive process of order one, the stationarity condition is |β| < 1. Moreover, the volatility ση must be strictly positive. In the estimation procedure, a logistic reparameterization of β and a logarithmic reparameterization of ση have been considered in order to satisfy these conditions.
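A minimal sketch of such reparameterizations (the paper's exact functional forms may differ): the optimizer works with unconstrained real parameters, which are mapped back into the admissible region.

```python
import numpy as np

def to_constrained(beta_u, log_sigma_u):
    """Map unconstrained optimizer parameters into the model space:
       |beta| < 1 via a scaled logistic transform, sigma_eta > 0 via exp."""
    beta = 2.0 / (1.0 + np.exp(-beta_u)) - 1.0   # maps R onto (-1, 1)
    sigma_eta = np.exp(log_sigma_u)              # maps R onto (0, inf)
    return beta, sigma_eta
```

Optimizing over (beta_u, log_sigma_u) rather than (β, ση) turns the constrained likelihood maximization into an unconstrained one, which is the usual motivation for these transforms.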
The second model is a particular specification of the regime switching model introduced by Hamilton, in which the distribution of the returns is described by two regimes with the same mean but different variances, and by a constant transition matrix:

yt = μ + σ(st)εt,   σ(st) = σ0(1 − st) + σ1st,
where st is a two-state Markov chain independent of εt, and εt is a Gaussian white noise with unit variance. The parameters of this model are the mean μ, the low and high standard deviations σ0 and σ1, and the transition probabilities p00 and p11 (also called regime persistence probabilities). As for the log-normal SV model, the logarithm and logistic transformations ensure the positiveness of the volatilities and constrain the transition probabilities to values in the (0, 1) interval. Further, for the log-normal SV model the returns are demeaned as yt* = yt − ȳ, where ȳ is the empirical mean; thus, for the log-normal SV model the mean is not estimated but is simply set equal to the empirical mean. For the estimation, the starting values of the parameters are calculated from the time series analyzed. For example, the sample mean is used as an approximation of the mean of the switching regime model, and the empirical variance multiplied by appropriate factors is used for the high and low variances. For the log-normal SV model, by contrast, a range of possible values of the parameters is fixed and a starting value is randomly drawn.
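The Hamilton filter used for the switching model can be sketched as follows. This is an illustrative implementation under the two-regime specification above (names are ours): it recursively predicts the regime probabilities with the transition matrix, weights the Gaussian densities of each regime, and updates by Bayes' rule, accumulating the log-likelihood.

```python
import numpy as np

def hamilton_loglik(y, mu, sig0, sig1, p00, p11):
    """Log-likelihood of y_t = mu + sigma(s_t) * eps_t via the Hamilton filter."""
    P = np.array([[p00, 1.0 - p00],
                  [1.0 - p11, p11]])        # P[i, j] = Pr(s_t = j | s_{t-1} = i)
    pi1 = (1.0 - p00) / (2.0 - p00 - p11)   # ergodic probability of state 1
    prob = np.array([1.0 - pi1, pi1])       # filtered regime probabilities
    sig = np.array([sig0, sig1])
    ll = 0.0
    for yt in y:
        pred = prob @ P                     # one-step-ahead regime probabilities
        dens = np.exp(-0.5 * ((yt - mu) / sig) ** 2) / (np.sqrt(2.0 * np.pi) * sig)
        joint = pred * dens
        lik = joint.sum()                   # density of y_t given the past
        ll += np.log(lik)
        prob = joint / lik                  # Bayes update: filtered probabilities
    return ll
```

Maximizing this function over (μ, σ0, σ1, p00, p11), with the transformations above keeping the parameters admissible, yields the maximum likelihood estimates; the filtered probabilities also deliver the estimated regime path.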
4. Estimation Results
We examine the behavior of the following equity markets: the S & P500 for the USA, FTSE100 for the United Kingdom, CAC40 for France, S & P/TSX for Canada, Nikkei225 for Japan, DAX for Germany, and the Swiss Market for Switzerland. We use a sample from 11/4/1996 to 12/10/2008, resulting in 3158 data points. The price data was obtained from Datastream. Each of the price indices was transformed via first differencing of the log price data to create a series which approximates the continuously compounded percentage return. The stock index prices are not adjusted for dividends, following the studies of French et al. and Poon and Taylor, who found that the inclusion of dividends affected estimation results only marginally. Returns are calculated on a continuously compounded basis and expressed in percentages; they are therefore calculated as rt = 100 × (ln Pt − ln Pt−1), where Pt denotes the stock index level on day t.
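The return construction is a one-liner; as a sketch (function name is ours):

```python
import numpy as np

def pct_log_returns(prices):
    """Continuously compounded percentage returns:
       r_t = 100 * (ln P_t - ln P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(p))
```

Note that a price series of length T yields T − 1 returns, so 3159 index levels would produce the 3158 observations used here.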
The summary statistics are presented in Table 1. We observe that the Swiss Market shows the highest mean returns, followed by the CAC40 and then the DAX. All the indices exhibit similar patterns of volatility as represented by the standard deviation, with the Nikkei225 having the highest variability and the S & P/TSX the lowest. We further observe that the returns are highly autocorrelated at lag 1, with the S & P/TSX maintaining the highest autocorrelation. The high first-order autocorrelation reflects the effects of non-synchronous or thin trading, whereas highly correlated squared returns can be seen as an indication of volatility clustering. The Q(12) and Qs(12) test statistics, which are joint tests of the hypothesis that the first twelve autocorrelation coefficients of the returns and squared returns, respectively, are equal to zero, indicate that this hypothesis has to be rejected at the 1% significance level for all return series and squared return series. A number of empirical studies have found similar results on the distributional characteristics of market returns. Kim and Kon showed similar results for 30 stocks in the DJIA, S & P500, and CRSP indices. Campbell, Lo and MacKinlay concluded that daily US stock indexes show negative skewness and positive excess kurtosis. The autocorrelation of squared returns is also consistent with the presence of time-varying volatility such as GARCH effects. As pointed out by Lamoureux and
Table 1. Summary statistics of daily returns.
The table contains summary statistics for the international equity markets. J.B. is the Jarque-Bera normality test statistic with 2 degrees of freedom; ρk is the sample autocorrelation coefficient at lag k with asymptotic standard error, and Q(k) is the Box-Ljung portmanteau statistic based on the first k autocorrelations of returns. ρsk are the sample autocorrelation coefficients at lag k for squared returns, and Qs(12) is the Box-Ljung portmanteau statistic based on the first 12 autocorrelations of squared returns. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.
5Before estimating the models, we test whether there are indeed regime shifts in the stock markets and whether a stochastic volatility model fits the data well. To do so, we apply Hansen’s  modified likelihood ratio test for regimes and Kobayashi and Shi  tests. Results are available upon request.
Lastrapes and confirmed by Hamilton and Susmel, regime shifts in the volatility process can also induce a spuriously high degree of volatility clustering5.
The estimation results of the two models are reported in Table 2 and Table 3. Table 2 presents the results of estimating the regime switching model in the different markets. For this model, we can judge the persistence of the volatility from the values taken by the transition (or persistence) probabilities p00 and p11; they are all above 0.90, confirming the high persistence of volatility in all markets. The parameter which governs the mean process is also reported in the first column of Table 2 with the corresponding standard errors. The mean parameter is positive and statistically significant for all series, except for the Nikkei225, where it is negative. The Japanese market is the exception, since it went through major structural changes during the sample period in terms of its risk and return characteristics. The estimation results of the log-normal SV model are reported in Table 3. The standard errors are calculated following Ruiz for the log-normal SV model and as the inverse of the information matrix for the switching model. In both cases the z-statistics asymptotically follow an N(0, 1) distribution. All markets show strong persistence, since all the estimated autoregressive coefficients of the volatility equation (β) are higher than 0.90. Also, the volatility estimates are all highly significant and
Table 2. Results of the regime switching model applied to international equity markets.
The table reports the estimation results of the two-regime switching model. A two-regime switching model introduced by Hamilton is applied to the equity markets and estimated by maximum likelihood with the Hamilton filter. In this model the returns have the same mean but different variances across regimes, with a constant transition matrix. The standard errors are calculated as the inverse of the information matrix for the switching model and result in z-statistics asymptotically following an N(0, 1) distribution. μ is the mean value and LogL represents the loglikelihood. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.
Table 3. Results of estimating the log-normal SV model applied to international equity markets.
The table reports the estimation results of the log-normal SV model. The log-normal SV model is applied to the equity markets and estimated by quasi-maximum likelihood with the Kalman filter. The volatility equation is characterized by the constant parameter α (constant), the autoregressive parameter β (AR part) and the variance of the volatility noise (SD). The standard errors are calculated following Ruiz for the log-normal SV model and result in z-statistics asymptotically following an N(0, 1) distribution. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.
quite similar for all markets. In practice, for many financial time series this coefficient is often found to be bigger than 0.90. This near-unity volatility persistence for high-frequency data is consistent with findings from both the SV and the GARCH literature. Among all the markets, the Swiss market, FTSE100, Nikkei225 and DAX show the highest variability in their volatility noise. For example, the standard deviation of the volatility noise in the FTSE100 is 0.1066, while that in the S & P500 is 0.071.
A graphical representation is provided for both models; to save space, we include only the Japanese market. In the case of the log-normal SV model, the estimated volatility obtained directly from the Kalman smoother is not very accurate; thus, a first-order Taylor expansion is considered to compute the conditional mean and estimate the volatility. In the case of the switching model, we present the historical return series, the estimated volatility and the estimated switches between regimes. Figure 1 and Figure 2 present the Japanese market. It can be seen from the graphs how the two models are able to capture major market crises during the sample period, such as the 1997 Asian financial market crisis, the collapse of LTCM in 1998, the tech bubble in 2000 and the September 11 attacks in 2001. All the other graphs are available from the author for inspection and capture those events as well.
The Japanese market is a special case where volatility forecasted from the regime switching model is the highest among all markets, an indication of some structural changes that took place during the sample period. Equity price volatility
Figure 1. Weighted volatility and regime shifts based on the regime switching model for Japan.
Figure 2. Estimated and simulated volatility based on the log-AR stochastic volatility model for Japan.
has trended up since the mid-1990s, and has been particularly high since 2000, as the technology bubble burst, followed by shocks such as the events of September 11, 2001, and the Enron and WorldCom accounting scandals. In the aftermath of the Louvre Accord, the Bank of Japan kept interest rates down to support the value of the dollar and to boost Japan’s domestic economy, stimulating demand for equities. Easy monetary conditions encouraged leveraged investment, aggressive equity financing, and excessive borrowing. Stock market movements were also amplified by portfolio insurance products and by arbitrage activities between the stock and futures markets. Lending based on land and, to a lesser extent, equities as collateral amplified Japan’s financial bubble and the subsequent burst. Further, in February 1999, to abate deflationary pressures, the Bank of Japan adopted the zero interest rate policy. At the same time, a series of deregulations was introduced to improve the efficiency of the financial system, and the government promoted financial consolidation. Mark-to-market accounting was introduced, and several agencies were established by the government to purchase nonperforming loans and shares held by banks. Consequently, the financial system became more volatile6.
6We thank a referee for pointing this out.
5. Value-at-Risk Results
Value-at-Risk (VaR) indicates the maximum potential loss at a given level of confidence (p) for a portfolio of financial assets over a specified time horizon (h). The VaR is the solution to the following problem:

Pr[Δx(h) ≤ −VaR(p, h)] = p,
with x being the value of the portfolio. Different methods have been proposed to calculate the VaR. One of them is the parametric approach, in which the portfolio return distribution is forecast in closed form and the VaR is simply a quantile of this distribution. In the case of non-linearity we can use either Monte Carlo simulation or historical simulation approaches. The advantage of the parametric approach is that the factors can be updated using a general model of changing volatility. Having chosen the asset or portfolio distribution, it is possible to use the forecasted volatility to characterize the future return distribution. Thus, a conditional forecasted volatility measure can be used to calculate the VaR over the next period. In our case, a different approach, applied to both the stochastic volatility and regime switching models, is to devolatilize the observed return series and to revolatilize it with an appropriate forecasted value obtained from a particular model of changing volatility. This approach is considered in several recent works (Barone-Adesi et al.; Hull and White; and Christoffersen) and is also labeled the filtered historical simulation method, used to investigate the nonparametric distribution-based VaR7.
7The historical simulation method discards particular assumptions regarding the return series and calculates the VaR from the immediate past history of the returns series (Dowd). The filtered historical simulation method is designed to improve on the shortcomings of historical simulation by augmenting the model-free estimates with parametric models. For example, Pritsker asserts that the filtered historical simulation method compares favorably with historical simulation, which cannot avoid the many shortcomings of purely model-free estimation approaches. When historical return series include insufficient extreme outcomes, the simulated value at risk may seriously underestimate the actual market risk.
The idea is to consider a portfolio which perfectly replicates the composition of each stock market index. Given the estimated volatility of the stochastic volatility model, the Value-at-Risk of this portfolio can be obtained following the procedure proposed in Barone-Adesi et al. The historical portfolio returns are rescaled by the estimated volatility series to obtain the standardized residuals zt = yt/σ̂t, t = 1, ..., T. This historical simulation can be performed by bootstrapping the standardized returns to obtain the desired number of residuals z(m), m = 1, ..., M, where M can be arbitrarily large. To calculate the next-period return, it is sufficient to multiply the simulated residuals by the forecasted volatility; the VaR for the next day, at the desired confidence level p, is then calculated as the (pM)-th element of these simulated returns sorted in ascending order.
To make the historical simulation consistent with empirical findings, we use the two models: the log-normal SV model and the regime switching model to describe the volatility behavior. Then, past returns are standardized by the estimated volatility to obtain the standardized residuals. We obtained those residuals and our statistical tests confirm that these standardized residuals behave approximately as an iid series which exhibit heavy tails. Then we use the historical simulation to calculate the Value-at-Risk measures. Finally, to adjust them to the current market conditions, the randomly selected standardized residuals are multiplied by the forecasted volatility obtained from the stochastic volatility and regime switching models.
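The filtered historical simulation procedure can be sketched as follows; this is an illustrative implementation of the devolatilize/bootstrap/revolatilize steps (function and argument names are ours), reporting the VaR as a positive loss.

```python
import numpy as np

def fhs_var(returns, sigma_est, sigma_forecast, p=0.01, M=10000, seed=42):
    """One-day VaR by filtered historical simulation:
       1) devolatilize past returns with the model's estimated volatility,
       2) bootstrap the standardized residuals M times,
       3) revolatilize with the one-day-ahead volatility forecast,
       4) report the p-quantile of the simulated returns as a positive loss."""
    z = np.asarray(returns, dtype=float) / np.asarray(sigma_est, dtype=float)
    rng = np.random.default_rng(seed)
    sim = sigma_forecast * rng.choice(z, size=M, replace=True)
    return -np.quantile(sim, p)
```

Here `sigma_est` would be the in-sample volatility path from either the log-normal SV model or the regime switching model, and `sigma_forecast` the corresponding one-day-ahead forecast; only the volatility inputs change between the two models, which is what makes the resulting VaRs directly comparable.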
The VaR measures from the two models are presented together with the results obtained from the unconditional returns in Table 4 and Table 5. An examination of the results reveals that the VaR estimates are, in general, higher for the stochastic volatility model than for the regime-switching model for almost all markets and over all horizons. The exception is Japan, represented by the Nikkei225 index, where in both cases, whether using historical
Table 4. VaR measures obtained by using historical simulation method.
The table reports the Value-at-Risk (VaR) estimates based on the conditional and unconditional distribution of the returns, calculated by the historical simulation method. The VaR is calculated for 5-, 10- and 15-day holding periods with a significance level of 1%. Unconditional distribution measures are based on historical returns, while conditional distribution measures are obtained by weighting the standardized residuals by the forecasted volatility. Values reported are in percentage terms.
Table 5. VaR obtained by delta-normal approximation.
The table reports the VaR estimates based on historical data. The significance level is 1% and the VaR is calculated for 5-, 10- and 15-day time horizons. Unconditional distribution measures are based on historical returns, while conditional distribution measures are obtained by weighting the standardized residuals by the forecasted volatility. Values reported are in percentage terms.
simulation or the delta-normal approximation, the stochastic volatility model generates lower VaR values than those obtained from the regime switching model. Comparing the VaRs calculated directly from the two models with those obtained from the unconditional distribution of returns, we find that the two models generate smaller VaRs. When we consider the impact of the time horizon on the Value-at-Risk measures, we find that VaRs increase with the time horizon; generally, VaRs from the regime switching model increase more slowly with the horizon than those from the SV approach.
6. Backtesting the VaR Results
The Value-at-Risk measure promises that the actual return will be worse than the forecast only p · 100% of the time. Given a time series of past ex-ante VaR forecasts and past ex-post returns, we can define the “hit sequence” of VaR violations as:
It+1 = 1 if Rt+1 < −VaRt+1, and It+1 = 0 if Rt+1 ≥ −VaRt+1.
8For other methods and elements in backtesting VaR models, see Christoffersen and Diebold  , Christoffersen and Pelletier  , McNeil and Frey  , Diebold, Gunther, and Tsay  , and Diebold, Hahn, and Tsay  .
The hit sequence returns a 1 on day t + 1 if the loss on that day is larger than the VaR number predicted in advance for that day. If the VaR is not violated, the hit sequence returns a 0. When backtesting our models, we construct a sequence across T days indicating when the past violations occurred. We implement the following three test statistics derived from Christoffersen  : the unconditional coverage, independence, and conditional coverage tests8. Christoffersen's  idea is to separate out the particular predictions being tested, and then test each prediction separately. The first of these is that the model generates the “correct” frequency of exceedances, which in this context is described as the prediction of correct unconditional coverage. The other prediction is that exceedances are independent of each other. This latter prediction is important insofar as it implies that exceedances should not be clustered over time. To explain the Christoffersen  approach, we briefly describe the three tests.
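The construction of the hit sequence is mechanical; a minimal sketch in Python (function names are our own) is:

```python
def hit_sequence(returns, var_forecasts):
    """Return the sequence of VaR violations: 1 on days where the realized
    return falls below minus the VaR forecast (VaR quoted as a positive
    loss), 0 otherwise."""
    return [1 if r < -v else 0 for r, v in zip(returns, var_forecasts)]
```

For example, with a constant 3% VaR forecast, a day with a 5% loss produces a hit while milder days do not: `hit_sequence([-0.05, 0.01, -0.01], [0.03, 0.03, 0.03])` gives `[1, 0, 0]`.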
6.1. Unconditional Coverage Testing
According to this test, we are interested in whether the fraction of violations obtained from our models, call it π, is significantly different from the promised fraction, p. We call this the unconditional coverage hypothesis. To test it, we write the likelihood of an i.i.d. Bernoulli(π) hit sequence as:
L(π) = (1 − π)^T0 π^T1
where T0 and T1 are the number of 0s and 1s in the sample. π can be estimated as π̂ = T1/T, that is, the observed fraction of violations in the sequence. Plugging the estimate back into the likelihood function gives the optimized likelihood as:
L(π̂) = (1 − T1/T)^T0 (T1/T)^T1
Under the unconditional coverage null hypothesis that π = p, where p is the known VaR coverage rate, we have the likelihood:
L(p) = (1 − p)^T0 p^T1
The unconditional coverage hypothesis can be checked using a likelihood ratio test:
LRuc = −2 ln[L(p)/L(π̂)]
Asymptotically, as T goes to infinity, this test statistic is distributed as a χ2 with one degree of freedom. Substituting in the likelihood functions, we write:
LRuc = −2 ln[(1 − p)^T0 p^T1 / ((1 − π̂)^T0 π̂^T1)]
which follows a χ2 distribution with one degree of freedom. The VaR model is rejected or accepted either by using a specific critical value or by calculating the p-value associated with the test statistic.
6.2. Independence Testing
According to this test, the hit sequence is assumed to be dependent over time and can be described as a so-called first-order Markov sequence with transition probability matrix:
Π1 = [1 − π01, π01; 1 − π11, π11]
These transition probabilities simply mean that conditional on today being a nonviolation (that is, It = 0), the probability of tomorrow being a violation (that is, It+1 = 1) is π01. The probability of tomorrow being a violation given that today is also a violation is π11 = Pr(It+1 = 1 | It = 1). Accordingly, the two probabilities π01 and π11 describe the entire process. The probability of a nonviolation following a nonviolation is 1 − π01, and the probability of a nonviolation following a violation is 1 − π11. If we observe a sample of T observations, then the likelihood function of the first-order Markov process can be written as:
L(Π1) = (1 − π01)^T00 π01^T01 (1 − π11)^T10 π11^T11
where Tij, i, j = 0, 1, is the number of observations with a j following an i. Taking first derivatives with respect to π01 and π11 and setting these derivatives to zero, we can solve for the maximum likelihood estimates:
π̂01 = T01/(T00 + T01),  π̂11 = T11/(T10 + T11)
Using the fact that the probabilities have to sum to one, we have: π00 = 1 − π01 and π10 = 1 − π11, which can be used to determine the matrix of the estimated transition probabilities.
In the case of the hits being independent over time, the probability of a violation tomorrow does not depend on today being a violation or not, and we can write π01 = π11 = π. In this case, we can test the independence hypothesis that π01 = π11 using a likelihood ratio test:
LRind = −2 ln[L(π̂)/L(Π̂1)]
which follows a χ2 distribution with one degree of freedom, where L(π̂) is the likelihood under the alternative hypothesis from the LRuc test.
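The independence test only requires the four transition counts Tij. A minimal sketch in Python (function name is our own; 0 · ln 0 is treated as 0, and at least one violation is assumed so the estimate π̂11 is defined):

```python
import math

def lr_ind(hits):
    """Independence LR statistic: iid null against a first-order Markov
    alternative. Assumes the sample contains at least one violation."""
    T = [[0, 0], [0, 0]]                       # T[i][j]: count of j following i
    for prev, cur in zip(hits, hits[1:]):
        T[prev][cur] += 1
    pi01 = T[0][1] / (T[0][0] + T[0][1])       # MLE of Pr(1 | previous 0)
    pi11 = T[1][1] / (T[1][0] + T[1][1])       # MLE of Pr(1 | previous 1)
    pi = (T[0][1] + T[1][1]) / sum(map(sum, T))  # pooled violation frequency

    def ll(p0, p1):
        # Markov log-likelihood; term() treats 0*log(0) as 0
        def term(n, q):
            return n * math.log(q) if n else 0.0
        return (term(T[0][0], 1 - p0) + term(T[0][1], p0)
                + term(T[1][0], 1 - p1) + term(T[1][1], p1))

    return -2 * (ll(pi, pi) - ll(pi01, pi11))  # asymptotically chi-square(1)
```

A strongly clustered hit sequence (violations arriving in runs) produces a statistic well above the 5% critical value of 3.84, while scattered violations do not.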
Although the LRuc test can reject a model that either overestimates or underestimates the true but unobservable VaR, it cannot examine whether the exceptions are randomly distributed. In a risk management framework, it is important that VaR exceptions be uncorrelated over time, which motivates independence and conditional coverage tests based on the evaluation of interval forecasts. Christoffersen  developed independence and conditional coverage tests that jointly investigate whether the total number of failures equals the expected one and whether the VaR exceptions are independently distributed. In particular, the advantage of Christoffersen’s procedure is that it can reject a model that generates either too many or too few clustered exceptions. Since accurate VaR estimates exhibit the property of correct conditional coverage, the hit sequence series must exhibit both correct unconditional coverage and serial independence.
6.3. Conditional Coverage Testing
Ultimately, we care about simultaneously testing whether the VaR violations are independent and the average number of violations is correct. We can test jointly for independence and correct coverage using the conditional coverage test:
LRcc = −2 ln[L(p)/L(Π̂1)]
which again follows a χ2 distribution, now with two degrees of freedom, and corresponds to testing that π01 = π11 = p. It can be shown that LRcc = LRuc + LRind. The Christoffersen approach enables us to test both the coverage and independence hypotheses at the same time. Moreover, if the model fails the test of both hypotheses combined, this approach enables us to test each hypothesis separately, and so establish where the model failure arises.
The results for the unconditional and conditional coverage tests are reported in Table 6 and Table 7. Table 6 reports the results based on the stochastic volatility model, and Table 7 reports those based on the regime switching model. The symbol * indicates that the test rejects the null hypothesis. We use two significance levels, 5% and 1%. If LRuc is statistically insignificant, it implies that the expected and the actual number of observations falling below the VaR estimates are statistically the same. Conversely, rejection of the null hypothesis indicates that the computed VaR estimates are not sufficiently accurate. According to the LRuc test statistics, at the 5% significance level, VaR models based on both the stochastic volatility and regime switching models perform much the same for all markets, except for the FTSE100, where the LRuc rejects the null hypothesis. According to the LRind and LRcc tests, the VaR models based on the two volatility models again perform in a similar fashion. The performance of both models at the 5% significance level is worst for the S & P/TSX;
Table 6. Unconditional, conditional and independence coverage tests based on log-normal stochastic volatility model.
The table reports the unconditional, conditional and independence coverage tests based on the Log-Normal Stochastic Volatility model. * indicates rejection of the VaR model.
Table 7. Unconditional, conditional and independence coverage tests based on regime switching model.
The table reports the unconditional, conditional and independence coverage tests based on the regime switching model. * indicates rejection of the VaR model.
this is because both tests are rejected and both models fail to provide an accurate prediction of the downside risk at the 5% significance level. Further, the backtesting results indicate that the regime switching model performs poorly for the DAX series according to the LRind test.
7. Conclusion
This paper proposes two models, the log-normal stochastic volatility model and the regime switching model, for calculating Value-at-Risk. The two models were applied to international equity markets and then used to forecast future daily volatility. Based on the forecasted daily volatility, we calculated the Value-at-Risk in each market. We observed that the two models generate smaller VaRs than the unconditional distribution method. Based on each model, we found that the Japanese market displays lower Value-at-Risk under the stochastic volatility model than under the regime switching model. Considering how the VaRs increase with the time horizon, VaRs from the regime switching model generally increase more slowly than those from the stochastic volatility model. Finally, we backtested each model and found that the performance of both models is worst for the S & P/TSX, while the regime switching model does not perform well for the DAX series in some cases. These results have significant implications for risk management, trading, and hedging activities, as well as for the pricing of equity derivatives.
 Andersen, T.G., Bollerslev, T. and Diebold, F.X. (2005) Parametric and Nonparametric Volatility Measurement. In: Hansen, L.P. and Ait-Sahalia, Y., Eds., Handbook of Financial Econometrics, North Holland, Amsterdam, 67-137.
 Ghysels, E., Harvey, A.C. and Renault, E. (1996) Stochastic Volatility. In: Maddala, G.S. and Rao, C.R., Eds., Handbook of Statistics, Vol. 14, Statistical Methods in Finance, North-Holland, Amsterdam, 128-198.
 Shephard, N. (1996) Statistical Aspects of ARCH and Stochastic Volatility. In: Cox, D.R., Hinkley, D.V. and Barndorff-Nielsen, O.E., Eds., Time Series Models in Econometrics, Finance and Other Fields, Monographs on Statistics and Applied Probability, Vol. 65, Chapman and Hall, 1-67.
 Koopman, S. and Hol Uspensky, E. (2002) The Stochastic Volatility in Mean Model: Empirical Evidence from International Stock Markets. Journal of Applied Econometrics, 17, 667-689.
 Bekaert, G., Erb, C., Harvey, C. and Viskanta, T. (1998) Distributional Characteristics of Emerging Market Returns and Asset Allocation. Journal of Portfolio Management, 24, 102-116.
 Karolyi, A. and Stulz, R. (1996) Why Do Markets Move Together? An Investigation of U.S.-Japan Stock Return Movements. Journal of Finance, 51, 951-986.
 Andersen, T.G., Bollerslev, T., Diebold, F.X. and Labys, P. (2001) The Distribution of Realized Exchange Rate Volatility. Journal of the American Statistical Association, 96, 42-55.
 Kim, S., Shephard, N. and Chib, S. (1998) Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models. Review of Economic Studies, 65, 361-393.
 Doornik, J. (1998) Object-Oriented Matrix Programming Using Ox 2.0. Timberlake Consultants Ltd., London.
 Koopman, S., Shephard, N. and Doornik, J. (1999) Statistical Algorithms for Models in State Space Using Ssfpack 2.2. Econometrics Journal, 2, 113-166.
 Lamoureux, C.G. and Lastrapes, W.D. (1990) Persistence in Variance, Structural Change and the GARCH Model. Journal of Business and Economic Statistics, 8, 225-243.
 Hansen, B. (1992) The Likelihood Ratio Test under Nonstandard Conditions: Testing the Markov Switching Model of GNP. Journal of Applied Econometrics, 7, S61-S82.
 Christoffersen, P. and Diebold, F. (2000) How Relevant Is Volatility Forecasting for Financial Risk Management? Review of Economics and Statistics, 82, 12-22.
 Diebold, F.X., Gunther, T. and Tsay, A. (1998) Evaluating Density Forecasts, with Applications to Financial Risk Management. International Economic Review, 39, 863-883.
 Diebold, F.X., Hahn, J. and Tsay, A. (1999) Multivariate Density Forecasts Evaluation and Calibration in Financial Risk Management: High Frequency Returns on Foreign Exchange. Review of Economics and Statistics, 81, 661-673.