Value-at-Risk Based on Time-Varying Risk Tolerance Level


1. Introduction

A class of risk measures, commonly referred to as "tail-related risk measures" in the economic literature, is based on fixing a risk tolerance level ex-ante. Value-at-Risk is a common example of this class. Risk tolerance is the level of risk that an investor is willing to take, but gauging risk appetite accurately can be a tricky task. In practice, the risk tolerance level is generally decided by the judgement or perception of a risk manager, a risk management committee or, in certain cases, an external regulatory body. For this purpose, it has been common practice to follow the recommendations of the Basel Committee on Banking Supervision. At present, the Basel guidelines prescribe 99% and 99.9% confidence levels for Value-at-Risk (VaR) and a 97.5% confidence level for Expected Shortfall (ES) [1] . Most probably, those recommendations are drawn from country-wise experience of analysing large sets of historical data. Alternatively, in certain cases, the risk modeller adopts commonly used percentages, viz. 99%, 95% and 90%. Majumder [1] , however, documented evidence from various developed and emerging equity markets of incidents where a minor change in the risk tolerance level translated into a large difference in VaR. Such instances are not uncommon in financial markets. Similar observations were documented by Degennaro [2] , who constructed examples to establish that non-cooperative choices of the risk tolerance level by two investors resulted in substantial variation in their VaR estimates. Therefore, on many occasions, the risk modeller's preferences on the risk tolerance level can have a large impact on the tail measure. When those preferences are biased, whether from over-concern with a highly volatile period or stress or for any other reason, the bias is transfused into the tail measure.
In this approach, a risk tolerance level decided ex-ante during turbulence may be appropriate for the turbulent period, but the same could be suboptimal for quiet periods. Logically, it is extremely difficult to find a risk tolerance level that is uniformly suitable across scenarios, and this is perhaps one source of model risk in the conventional approach.

As an alternative, the present paper proposes that the risk tolerance level ought not to be pre-assigned but may be determined by the model itself. In this framework, the parameter may vary with the shape of the loss distribution. One way to determine it is the Pickands-Balkema-de Haan theorem, which essentially says that, for a wide class of distributions, losses exceeding a high enough threshold follow the generalised Pareto distribution (GPD) [3] [4] . Using this theorem, it is easy to establish that the extreme right tail of a distribution asymptotically converges to the tail of a GPD. Hence we can always find a region in the extreme right tail of the loss distribution for which an equivalent region from a suitable GPD is available. Therefore, there exists a threshold above which the data show generalised Pareto behaviour. The threshold would be large enough that all events above it are "extreme" in nature, while events in the rest of the distribution are "normal" or "non-extreme". The procedure gives us the opportunity to estimate the tail size and the starting point of the tail simultaneously; in other words, it allows simultaneous estimation of VaR and the risk tolerance level. The rest of the paper is organized as follows: Section 2 describes the model, Section 3 provides empirical findings and Section 4 concludes.

2. The Model

2.1. Behaviour of Losses Exceeding a High Threshold

Suppose ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ are $n$ independent realizations of a random variable $X$ representing the loss, with distribution function ${F}_{X}\left(x\right)$ and a finite or infinite right endpoint ${x}_{0}$. We are interested in investigating the behaviour of this distribution exceeding a high threshold $u$. In the line of Hogg and Klugman [5] , the distribution function ${F}_{{Y}_{1}^{u}}$ of the loss truncated at the point $u$, ${Y}_{1}^{u}$, can be defined as:

${F}_{{Y}_{1}^{u}}\left(x\right)=P\left[{Y}_{1}^{u}\le x\right]=P\left[X\le x|X>u\right]=\{\begin{array}{l}0\text{ if }x\le u\\ \frac{{F}_{X}\left(x\right)-{F}_{X}\left(u\right)}{1-{F}_{X}\left(u\right)}\text{ if }x>u\end{array}$

Based on ${F}_{{Y}_{1}^{u}}$ , we can define the distribution function of the excess over a high threshold u:

${F}_{{Y}^{u}}\left(x\right)=P\left[X-u\le x|X>u\right]=\frac{{F}_{X}\left(x+u\right)-{F}_{X}\left(u\right)}{1-{F}_{X}\left(u\right)}$ (1)

for $0\le x<{x}_{0}-u.$
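The excess distribution in (1) can be estimated empirically. A minimal sketch, where the Student-t loss sample and the threshold value are illustrative assumptions, not inputs from the paper:

```python
import numpy as np

def excess_distribution(losses, u):
    """Empirical CDF of the excess X - u, conditional on X > u (Equation (1))."""
    excesses = np.sort(losses[losses > u] - u)

    def F_excess(x):
        # proportion of excesses not exceeding x
        return np.searchsorted(excesses, x, side="right") / len(excesses)

    return F_excess

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=100_000)   # heavy-tailed illustrative losses
F_exc = excess_distribution(losses, u=2.0)
print(F_exc(0.5), F_exc(2.0))                 # the CDF rises with x
```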

Balkema and de Haan [3] and Pickands [4] showed that, for a large class of distributions, the generalised Pareto distribution (GPD) is the limiting distribution for the distribution of the excess, as the threshold (u) tends to the right endpoint. According to this theorem, we can find a positive measurable function $\sigma \left(u\right)$ such that

$\underset{u\to {x}_{0}}{Lim}\text{}\underset{0\le x<{x}_{0}-u}{Sup}\text{}\left|{F}_{{Y}^{u}}\left(x\right)-{G}_{\xi ,\sigma \left(u\right)}\left(x\right)\right|=0$ (2)

where the distribution function of a two parameter generalised Pareto distribution with the shape parameter ( $\xi $ ), and scale parameter ( $\sigma \left(u\right)$ ) has the following representation:

${G}_{\xi ,\sigma (u)}\left(x\right)=\{\begin{array}{l}1-{\left(1+\xi x/\sigma (u)\right)}^{-1/\xi}\text{ if }\xi \ne 0\\ 1-\mathrm{exp}\left(-x/\sigma \left(u\right)\right)\text{ if }\xi =0\end{array}$

where $\sigma >0$ , $x\ge 0$ when $\xi \ge 0$ and $0\le x\le -\frac{\sigma}{\xi}$ when $\xi <0$ . (2) holds if and only if $F$ belongs to the maximum domain of attraction of the generalised extreme value (GEV) distribution [6] . The equivalent representation of (2) could be in terms of the three-parameter GPD: for $x-u\ge 0$ , the distribution function of the three-parameter GPD, ${G}_{\xi ,u,\sigma}\left(x\right)$ , can be expressed as the limiting distribution function of the excess. ${G}_{\xi ,u,\sigma}\left(x\right)$ , with shape parameter $\xi $ , location parameter $u$ and scale parameter $\sigma $ , has the following representation:

${G}_{\xi ,u,\sigma}\left(x\right)=\{\begin{array}{l}1-{\left(1+\xi \left(x-u\right)/\sigma \right)}^{-1/\xi}\text{ if }\xi \ne 0\\ 1-\mathrm{exp}\left(-\left(x-u\right)/\sigma \right)\text{ if }\xi =0\end{array}$

where $\sigma >0$ , $\left(x-u\right)\ge 0$ when $\xi \ge 0$ and $0\le \left(x-u\right)\le -\frac{\sigma}{\xi}$ when $\xi <0$ .
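Both parameterisations can be evaluated with a few lines of code. A minimal sketch (parameter values are illustrative; $u=0$ recovers the two-parameter form):

```python
import numpy as np

def gpd_cdf(x, xi, u=0.0, sigma=1.0):
    """CDF of the three-parameter GPD of the text; u = 0 gives the two-parameter form."""
    # the CDF is 0 below the location parameter, so clamp the standardised argument at 0
    z = np.maximum((np.asarray(x, dtype=float) - u) / sigma, 0.0)
    if xi == 0.0:
        return 1.0 - np.exp(-z)
    inner = np.maximum(1.0 + xi * z, 0.0)   # 0 beyond the upper endpoint when xi < 0
    return 1.0 - inner ** (-1.0 / xi)

print(gpd_cdf(0.0, xi=0.5))   # 0.0 at the left endpoint of the support
print(gpd_cdf(1.0, xi=0.0))   # exponential case: 1 - exp(-1)
```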

This representation provides a theoretical ground to claim that there exists a threshold above which the data show generalised Pareto behaviour.

2.2. Identifying the Tail Region

Equations (1) and (2) suggest that, for a sufficiently high threshold, we can write:

${F}_{X}\left(x+u\right)\approx {F}_{X}\left(u\right)+{G}_{\xi ,\sigma ,u}\left(x\right)\left(1-{F}_{X}\left(u\right)\right)$

Setting y = x + u

${F}_{X}\left(y\right)\approx {F}_{X}\left(u\right)+{G}_{\xi ,\sigma ,u}\left(y-u\right)\left(1-{F}_{X}\left(u\right)\right)$ (3)

The right hand side of the Equation (3) can be simplified in the form of a distribution function of a GPD:

${F}_{X}\left(y\right)\approx {G}_{\xi ,\tilde{\sigma}}\left(y-\tilde{\mu}\right)$ (4)

where $\tilde{\sigma}=\sigma {\left(1-{F}_{X}\left(u\right)\right)}^{\xi}$ and $\tilde{\mu}=u-\tilde{\sigma}\left({\left(1-{F}_{X}\left(u\right)\right)}^{-\xi}-1\right)/\xi $ .

Hence, if we can fit the GPD to the conditional distribution of the excess above a high threshold, it can also be fitted to the tail of the original distribution above a certain threshold [7] .
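The mapping to $\tilde{\sigma}$ and $\tilde{\mu}$ can be verified numerically: with these definitions, the right-hand sides of (3) and (4) agree above the threshold. A sketch for the $\xi \ne 0$ case, with purely illustrative parameter values:

```python
def gpd(x, xi, loc, scale):
    # GPD CDF with location and scale parameters (xi != 0 case)
    return 1.0 - (1.0 + xi * (x - loc) / scale) ** (-1.0 / xi)

def tail_params(xi, sigma, F_u, u):
    """Map the excess-GPD parameters (xi, sigma) at threshold u to the
    tail parameters (sigma-tilde, mu-tilde) of Equation (4)."""
    bar_F = 1.0 - F_u                                  # survival probability at u
    sigma_t = sigma * bar_F ** xi                      # sigma-tilde
    mu_t = u - sigma_t * (bar_F ** -xi - 1.0) / xi     # mu-tilde
    return sigma_t, mu_t

# illustrative parameters, not estimates from the paper
xi, sigma, u, F_u = 0.3, 1.2, 2.0, 0.95
sigma_t, mu_t = tail_params(xi, sigma, F_u, u)
y = 3.5
lhs = gpd(y, xi, mu_t, sigma_t)                      # Equation (4)
rhs = F_u + gpd(y - u, xi, 0.0, sigma) * (1 - F_u)   # Equation (3)
print(abs(lhs - rhs))                                # agrees to floating-point precision
```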

When $u$ is fixed at $\stackrel{^}{u}$ , let $\stackrel{^}{y}$ be the minimum value of $y$ for which Equation (4) holds. The deviation of ${F}_{X}\left(y\right)$ from ${G}_{\xi ,\tilde{\sigma}}\left(y-\tilde{\mu}\right)$ would therefore be non-zero for $y<\stackrel{^}{y}$ and is expected to be zero for all $y\ge \stackrel{^}{y}$ . We may consider as an indicator the cumulative squared deviation for $y<{y}_{0}$ , $D\left({y}_{0}\right)={\displaystyle \underset{y<{y}_{0}}{\sum}{\left[{F}_{X}\left(y\right)-{G}_{\xi ,\tilde{\sigma}}\left(y-\tilde{\mu}\right)\right]}^{2}}$ , which is useful for identifying $\stackrel{^}{y}$ . By its nature, $D\left({y}_{0}\right)$ is an increasing function of ${y}_{0}$ for ${y}_{0}<\stackrel{^}{y}$ and nearly flat for ${y}_{0}\ge \stackrel{^}{y}$ ; hence the slope of $D\left({y}_{0}\right)$ is positive for ${y}_{0}<\stackrel{^}{y}$ and almost zero for ${y}_{0}\ge \stackrel{^}{y}$ . We can identify the cut-off point $\stackrel{^}{y}$ as the point after which the slope of $D\left({y}_{0}\right)$ becomes statistically insignificant [1] . To test this hypothesis, we have plotted $D\left(y\right)$ versus $y$ for the normal and t distributions (Figure 1). $D\left(y\right)$ is almost flat after a certain cut-off in both cases, which validates our postulate.

Therefore, we can bifurcate the underlying distribution into two parts: $X\ge \stackrel{^}{y}$ is the risky region of the distribution in the sense that this region could be approximated by the tail of an equivalent GPD. All large unforeseen losses would belong to this part. Conversely, $X<\stackrel{^}{y}$ is the region of the distribution which does not cause severe tail risk.
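The cut-off search can be sketched as follows; a fixed slope tolerance stands in for the statistical significance test, and the toy CDFs are purely illustrative:

```python
import numpy as np

def cutoff_from_deviation(y_grid, F_X, G_tail, tol=1e-6):
    """Cumulative squared deviation D(y0) and the first grid point after
    which its slope falls below a small tolerance (the cut-off y-hat)."""
    sq_dev = (F_X(y_grid) - G_tail(y_grid)) ** 2
    D = np.cumsum(sq_dev)            # D(y0): running sum of squared deviations
    slope = np.diff(D)               # per-step increment of D
    flat = np.nonzero(slope < tol)[0]
    y_hat = y_grid[flat[0] + 1] if flat.size else None
    return D, y_hat

# toy CDFs that agree only above y-hat = 2 (purely illustrative)
y = np.linspace(0.0, 5.0, 501)
F = lambda t: np.clip(t / 5.0, 0.0, 1.0)
G = lambda t: np.where(t >= 1.995, t / 5.0, 0.0)
D, y_hat = cutoff_from_deviation(y, F, G)
print(y_hat)   # close to 2.0
```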

2.3. Measuring the Tail Risk

For a small quantile of order $p$ , $p=1-{F}_{X}\left(\stackrel{^}{y}\right)$ , we can write

$p\approx \left(1-{F}_{X}\left(\stackrel{^}{u}\right)\right)\left(1-{G}_{\xi ,\sigma \left(\stackrel{^}{u}\right)}\left(\stackrel{^}{y}-\stackrel{^}{u}\right)\right)$ (5)

VaR represents in probabilistic terms a quantile of the loss distribution function F_{X} [8] . Therefore,

$Va{R}_{p}=\stackrel{^}{y}$ (6)

Equations (5) and (6) lead to interesting inferences: when the distributional form of the underlying distribution ${F}_{X}\left(\cdot \right)$ is known, $p$ and $Va{R}_{p}$ can be estimated simultaneously. Majumder [1] has named the new risk measure non-subjective Value-at-Risk ( $Va{R}^{N-S}$ ).
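Equations (5) and (6) translate directly into a small computation: given the threshold, the fitted GPD parameters and the cut-off $\stackrel{^}{y}$, the tail probability $p$ follows. A sketch (all numerical inputs are illustrative, not estimates from the paper):

```python
def non_subjective_var(y_hat, u_hat, xi, sigma, F_u):
    """Equations (5)-(6): VaR^{N-S} equals y-hat, with the tail probability p
    determined by the model rather than fixed ex-ante (xi != 0 case)."""
    g = 1.0 - (1.0 + xi * (y_hat - u_hat) / sigma) ** (-1.0 / xi)  # GPD CDF at the excess
    p = (1.0 - F_u) * (1.0 - g)                                    # Equation (5)
    return y_hat, p                                                # Equation (6): VaR_p = y-hat

# illustrative inputs, not estimates from the paper
var_ns, p = non_subjective_var(y_hat=3.2, u_hat=2.0, xi=0.25, sigma=0.8, F_u=0.92)
print(var_ns, p)
```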


Figure 1. Plot of D(y) versus y for normal and t distribution. (a) Plot of D(y) based on Normal Distribution (mean: 0, standard deviation: 1.76); (b) Plot of D(y) based on t distribution (Degrees of freedom: 2.18).

2.4. Simulation Study for Threshold Choice

When the form of the underlying loss distribution ${F}_{X}\left(\cdot \right)$ is known, we can develop a procedure for estimating the threshold, $\stackrel{^}{u}$ , by a simulation study. Recall from the preceding section that there exists a sufficiently high threshold $u$ above which the distribution function of the excesses, ${F}_{{Y}^{u}}\left(x\right)$ , can be approximated by the distribution function of a generalised Pareto distribution, ${G}_{\xi ,\sigma \left(u\right)}\left(x\right)$ . Initially, we fix $u$ at some ${u}^{\prime}$ and generate 100 samples, each of size 4000, from the underlying distribution ${F}_{X}$ . If ${u}^{\prime}$ is the true threshold, then the deviation of ${F}_{{Y}^{{u}^{\prime}}}\left(x\right)$ from ${G}_{\xi ,\sigma \left({u}^{\prime}\right)}\left(x\right)$ is expected to be zero for all $x\ge {u}^{\prime}$ in the $j$ th sample, $j=1,2,\cdots ,100$ . We may consider as an indicator the cumulative squared deviation for $x\ge {u}^{\prime}$ , $D{2}_{j}\left({u}^{\prime}\right)={\displaystyle \underset{x\ge {u}^{\prime}}{\sum}{\left[{F}_{{Y}^{{u}^{\prime}}}\left(x\right)-{G}_{\xi ,\sigma \left({u}^{\prime}\right)}\left(x\right)\right]}^{2}}$ , which is useful for identifying the threshold. If ${u}^{\prime}$ is the true threshold, $D{2}_{j}\left({u}^{\prime}\right)$ would be zero for each sample. Based on this indicator, we can form a Mean Squared Error (MSE):

$MSE\left({u}^{\prime}\right)=\frac{1}{100}{\displaystyle \underset{j=1}{\overset{100}{\sum}}\frac{D{2}_{j}\left({u}^{\prime}\right)}{{n}_{j}}}$

where ${n}_{j}$ is the number of observations in the $j$ th sample exceeding ${u}^{\prime}$ . $MSE\left(u\right)$ can be computed for various values of $u$ starting from 0. The best estimate of $u$ , say $\stackrel{^}{u}$ , is the one for which $MSE\left(u\right)$ is minimum.
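The procedure can be sketched in code. For brevity the sketch uses fewer and smaller samples than the 100 samples of size 4000 above, a method-of-moments GPD fit (the paper does not specify its estimator), and an illustrative heavy-tailed loss model; all of these are assumptions:

```python
import numpy as np

def gpd_fit_mom(exc):
    """Method-of-moments GPD fit to the excesses (a stand-in estimator)."""
    m, v = exc.mean(), exc.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

def gpd_cdf_2p(x, xi, sigma):
    """Two-parameter GPD CDF."""
    if abs(xi) < 1e-12:
        return 1.0 - np.exp(-x / sigma)
    return 1.0 - np.maximum(1.0 + xi * x / sigma, 1e-12) ** (-1.0 / xi)

def mse_of_threshold(sample_draw, u, n_samples=20):
    """MSE(u) of Section 2.4: average of D2_j(u)/n_j over simulated samples."""
    total = 0.0
    for _ in range(n_samples):
        x = sample_draw()
        exc = np.sort(x[x > u] - u)
        n_j = len(exc)
        if n_j < 10:                  # too few excesses to fit the tail
            return np.inf
        xi, sigma = gpd_fit_mom(exc)
        emp = np.arange(1, n_j + 1) / n_j                       # empirical excess CDF
        d2 = np.sum((emp - gpd_cdf_2p(exc, xi, sigma)) ** 2)    # D2_j(u)
        total += d2 / n_j
    return total / n_samples

rng = np.random.default_rng(1)
draw = lambda: np.abs(rng.standard_t(df=4, size=2000))   # illustrative loss model
grid = [0.5, 1.0, 1.5, 2.0, 2.5]
mses = [mse_of_threshold(draw, u) for u in grid]
u_best = grid[int(np.argmin(mses))]
print(u_best)
```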

3. Empirical Findings

VaR and VaR^{N-S} based on daily returns on the S & P 500 Composite Index over 30 years, from 18th February, 1985 to 17th February, 2015, computed using five risk models separately for the full sample and a simulated stress scenario, are reported in Table 1. The stress scenario is simulated following Studer [10] and Breuer and Krenn [11] , who employed the Mahalanobis distance as a mathematical tool to choose stress scenarios [1] . Additionally, the conditional EVT framework proposed by McNeil and Frey [6] was adopted to compute VaR^{N-S} for GARCH. For each risk model, in the normal as well as the turbulent period, the equilibrium probability level^{1} in VaR^{N-S} lies between 0.05 and 0.1, and the estimate of VaR^{N-S} between VaR_{0.1} and VaR_{0.05}. Furthermore, as with the conventional model, the estimate of VaR^{N-S} in the stress scenario is greater than the estimate for the full sample, indicating that the new risk measure correctly captures the riskiness of markets. Hence, estimates of VaR^{N-S} are not arbitrary numbers and are acceptable as a risk measure. Interestingly, the standard error of the probability level is low (highest value: 0.024, for the unconditional normal model). This indicates that the additional volatility in VaR due to the introduction of time variation in the probability level would be limited.

4. Conclusion

The recurring criticism of the existing framework of market risk management runs in two leading directions: 1) it is often not possible to find a risk model which accurately predicts the data generating process and 2) input parameters are judgement-based, which makes the risk measure subjective. Precision in predicting the data generating process depends on the skill and expertise of the risk modeller, and so is more of an art than a science. Non-subjectivity in the selection of input parameters, on the other hand, can be achieved. This is possible if the risk tolerance level and the threshold are determined simultaneously by the risk model. Based on this insight, we have improved on the VaR model by allowing time variation in the risk tolerance

Table 1. A comparison between VaR and VaR^{N}^{-S} based on S & P 500 Composite Index.

Note: VaR and VaR^{N-S} are averages based on 50 estimates. Standard errors of the estimates are provided in parentheses. Data Source: Datastream.

level. Our empirical study based on the S & P 500 Composite Index reveals that the tail risk of the loss distribution is well captured by the new risk measure in normal as well as stress scenarios. The significance of the research is twofold: a) it reduces bias by minimising the scope of human intervention in risk measurement, which is of practical as well as social significance, and b) it gauges risk appetite methodically, which is of academic significance. The approach may widen the applicability of tail-related risk models in institutional and regulatory policymaking. At this stage, however, it is not possible to provide a method for backtesting the new VaR model. This might be a topic for future research.

Acknowledgements

The author is grateful to Prof. Romar Correa, former Professor of Economics, University of Mumbai, and Prof. Raghuram Rajan, Katherine Dusak Miller Distinguished Service Professor of Finance at the University of Chicago Booth School of Business and former Governor of the Reserve Bank of India, for their insightful comments and suggestions. He is also thankful to Dr. Chitro Majumdar, Chief Science Officer, RsRL (R-square Risk Lab), for his contribution and inspiration at the initial stage of this study.

NOTES

*The views expressed in this paper are of the author and not of the organization to which he belongs.

^{1}One minus the probability level is the risk tolerance level [9] .

References

[1] Majumder, D. (2016) Proposing Model Based Risk Tolerance Level for Value-at-Risk—Is It a Better Alternative to BASEL’s Rule Based Strategy? Journal of Contemporary Management, 5, 71-85.

[2] Degennaro, R.P. (2008) Value-at-Risk: How Much Can I Lose by This Time Next Year? The Journal of Wealth Management, 11, 92-96.

https://doi.org/10.3905/JWM.2008.11.3.092

[3] Balkema, A.A. and de Haan, L. (1974) Residual Lifetime at Great Age. Annals of Probability, 2, 792-804.

https://doi.org/10.1214/aop/1176996548

[4] Pickands, J. (1975) Statistical Inference Using Extreme Order Statistics. Annals of Statistics, 3, 119-131.

https://doi.org/10.1214/aos/1176343003

[5] Hogg, R.V. and Klugman, S.A. (1984) Loss Distributions. Wiley, New York.

https://doi.org/10.1002/9780470316634

[6] McNeil, A.J. and Frey, R. (2000) Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach. Journal of Empirical Finance, 7, 271-300.

https://doi.org/10.1016/S0927-5398(00)00012-8

[7] Reiss, R.-D. and Thomas, M. (1997) Statistical Analysis of Extreme Values. Birkhauser, Basel.

https://doi.org/10.1007/978-3-0348-6336-0

[8] McNeil, A.J., Frey, R. and Embrechts, P. (2005) Quantitative Risk Management. Princeton University Press, Princeton, NJ.

[9] Yadav, V. (2008) Risk in International Finance. Routledge, Abingdon.

[10] Studer, G. (1999) Market Risk Computation for Nonlinear Portfolios. Journal of Risk, 1, 33-53.

https://doi.org/10.21314/JOR.1999.016

[11] Breuer, T. and Krenn, G. (1999) Stress Testing. Guidelines on Market Risk. Oesterreichische National Bank, Vienna.
http://www.oenb.at/en/img/band5ev40_tcm16-20475.pdf