Journal of Mathematical Finance, Vol. 7, No. 3, August 2017
From Power Curves to Discriminative Power: Measuring Model Performance of LGD Models
Abstract: Measuring the performance of rating systems is a major task for banks. The concept of discrimination, i.e. the discriminative power, is used in credit risk modeling to assess the quality of a risk model concerning the separation of extreme events. For PD models, CAP (Cumulative Accuracy Profile) or ROC (Receiver Operating Characteristic) curves are used to build a quantity called the Accuracy Ratio, which is used to measure the discriminative power. These ideas are well known and broadly used in practice. Although such a measure is also desirable for models of the loss given default (LGD models), it is not documented in the literature. In this note we close this gap. We develop a measure for the discriminative power of LGD models based on Lorenz curves. We study its first properties and introduce some alternatives for its calculation from a practical point of view.

1. Introduction and Literature Review

The Internal Ratings Based Approach allows banks to determine their capital requirements according to internal models for the risk parameters PD (probability of default), EAD (exposure at default) and LGD (loss given default). The underlying rules according to which this shall be done are contained in the Capital Requirements Regulation (CRR, [1]) and in the corresponding Regulatory Technical Standards (RTS). The CRR specifies no requirements with regard to model choice; in principle, all types of models are allowed. This is also true for the new IFRS 9 standard that will be authoritative for the determination of credit impairments from January 2018 on. Accurate estimates of the risk parameters are essential in both cases.

At the beginning of the century, modeling of the risk parameter LGD was carried out in a rather simplified manner. Over the years banks have recognized its significance (equal to that of PD) and advanced models have been developed. Basically, the parameter can be separated into two categories: market LGD and workout LGD. A market LGD is usually calculated from market data, especially from data on defaulted bonds. The calculation of a workout LGD takes into account the bank's internal support of defaulted customers, and LGD is calculated using discounted cash flows over the whole workout period. In both cases, modeling accurate LGD estimators is ambitious for many reasons. One reason is the lack of data, especially for low default portfolios. Another one is the general complexity of modeling LGD. In order to be able to predict losses accurately, banks must differentiate LGD values on the basis of a wide set of transaction characteristics. The most important characteristics are borrower types, collateral types, product types and default scenarios. Another difficulty arises from the interaction of these characteristics over time, which results in an extremely heterogeneous and multidimensional estimation problem. The interaction produces, however, some stylized facts of historically observed LGDs. Perhaps the most important stylized fact is the bimodal (under some circumstances also multimodal) structure of the empirical LGD distribution, which is displayed in Figure 1.

The bimodal structure is a characteristic often observed in LGD data. The peaks at 0% and at 100% arise for two main reasons. Firstly, for defaults that end with a cure event or are fully collateralized, a loss realization of 0% (or nearly 0%) is the baseline case. On the other hand, banks also quite often realize total losses from defaulted engagements. Here, the most prominent explanations are extremely unfavorable liquidations of collateral or long-running legal proceedings. Another explanation is a write-off of the entire outstanding exposure, or a large proportion of it, without starting the workout process. These facts explain the bimodal loss structure very well.

Figure 1. Bimodal shape of the empirical LGD distribution based on approximately 6,000 default observations (source: bank internal loss data).

When modeling LGDs, two main approaches may be distinguished: parametric and non-parametric models. The non-parametric approach contains tree models, models based on neural networks ([2]) and option theoretic models ([3], [4] and [5]). Parametric LGD models are regression based. Besides OLS and logit regression, new models have been developed recently: inflated beta regression ([6]), generalized beta regression ([7]), censored gamma regression ([8]), zero-adjusted gamma regression ([9]) and mixture models ([10] and [11]). In [12] the authors point out some problems that arise in LGD estimation and show how they may be solved. All these models have been developed to accurately take into account the special shape of the empirical LGD distribution. Moreover, many empirical studies comparing different LGD models now exist: [13], [14], [15], [16] and [17].

If banks use internal models for regulatory capital estimation, these models must be compliant with the CRR [1]. Two important requirements concern the usage of historical data for model building (Article 179 (1) CRR) and validation, which must be done at least annually (Article 185 b) CRR). In addition, validation is required to be done both qualitatively and quantitatively. The quantitative part of the validation process is about assessing the predictive power of the model (backtesting), its stability and its discriminative power. Backtesting and stability assessment are usually done by splitting the defaulted portfolio into an in-time and an out-of-time sample. The assessment of discriminative power is more challenging. For PD models the assessment is usually based on the Accuracy Ratio. This measure is derived from ROC (Receiver Operating Characteristic) or CAP (Cumulative Accuracy Profile) curves and is a common tool in the validation process (see [18], [19] and [20]). For that reason an equivalent measure for LGD models is desirable. However, such a performance measure is not documented in the literature. A direct transcription of the concept seems not to be possible. A major reason for this is that, in contrast to PD, LGD is not binary but a continuous parameter that takes values in the interval [0,1].

In each of the above mentioned LGD studies the assessment of model quality relies on statistical criteria that do not properly take into account the model's ability to discriminate between low and high LGD scores. These criteria are: mean absolute error (MAE), relative absolute error (RAE), mean squared error (MSE), root mean squared error (RMSE) and the coefficient of determination (R^2 or the adjusted R^2). They are defined as follows:

MAE = \frac{1}{N} \sum_{i=1}^{N} \left| LGD_i^R - LGD_i^P \right|,

RAE = \frac{\sum_{i=1}^{N} \left| LGD_i^R - LGD_i^P \right|}{\sum_{i=1}^{N} \left| LGD_i^R - E[LGD^R] \right|},

MSE = \frac{1}{N} \sum_{i=1}^{N} \left( LGD_i^R - LGD_i^P \right)^2,

RMSE = \sqrt{MSE},

and

R^2 = 1 - \frac{MSE}{\mathrm{Var}[LGD^R]},

^1 In [15] the authors also use the correlation coefficient between realized and predicted LGDs as a performance criterion.

^2 For a piecewise constant c.d.f. the inverse function is not defined. In this case a generalized inverse can be defined as F^{-1}(p) = \inf \{ x : F(x) \ge p \}.

where N is the sample size, LGD_i^R is the realized (observed) and LGD_i^P the predicted loss quota for an engagement i.^1 These measures are somewhat one-sided and biased, as they are not able to account for the concentrations that are obvious in the empirical LGD distribution. As a matter of fact, they are of limited use in assessing whether an LGD model is able to distinguish between small and large losses.
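To make the criteria concrete, here is a minimal NumPy sketch; the function name `lgd_error_metrics` and the toy vectors are our own illustration, not part of the studies cited above:

```python
import numpy as np

def lgd_error_metrics(lgd_r, lgd_p):
    """Compute MAE, RAE, MSE, RMSE and R^2 for realized vs. predicted LGDs."""
    lgd_r = np.asarray(lgd_r, dtype=float)
    lgd_p = np.asarray(lgd_p, dtype=float)
    err = lgd_r - lgd_p
    mae = np.mean(np.abs(err))
    # RAE benchmarks the model against a trivial mean-only predictor
    rae = np.sum(np.abs(err)) / np.sum(np.abs(lgd_r - lgd_r.mean()))
    mse = np.mean(err ** 2)
    # population variance (1/N), consistent with the MSE convention above
    r2 = 1.0 - mse / np.var(lgd_r)
    return {"MAE": mae, "RAE": rae, "MSE": mse, "RMSE": np.sqrt(mse), "R2": r2}

metrics = lgd_error_metrics([0.0, 0.5, 1.0], [0.1, 0.4, 0.9])
```

For a perfect model all error measures vanish and R^2 = 1; note that none of these numbers reveals whether predicted losses are concentrated in the right part of the portfolio.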

The aim of this paper is to close this gap. We develop a performance measure that is equivalent to the Accuracy Ratio known from PD models. The derivation is based on Lorenz curves and Gini coefficients. As there is a direct relationship between Lorenz curves and CAP curves, the measure may be regarded as a CAP-based measure. The results presented in this paper will enable banks to quantify how well a model is able to predict the concentrations observed in historical data. This in turn will enrich the tools used for model assessment and finally help banks to validate their internal models more accurately.

The remainder of the paper is structured as follows: Section 2 introduces the relevant concepts. Section 3 contains the main ideas of the paper; after defining the new measure, we state its first properties and give some interpretations. Section 4 focuses on alternatives for its calculation, which are important from a practical perspective. Section 5 concludes.

2. Lorenz Curve and Gini Index

The concept of Lorenz curves is well established in macroeconomics. The theory is profound and the idea has central applications in quantifying the growth of an economy and income inequality. The literature covering the topic is rich (see for instance [21] , [22] , [23] or [24] ). Financial applications also exist ( [25] and [26] ). As we want to use the concept in the context of LGD validation, it will be necessary to recall some theoretical basics.

As usual, the random variable L G D [ 0 , 1 ] is understood as a conditional quantity:

LGD = LGD \mid D = 1,

where D is the default indicator. The variable can take discrete values or be continuous. For the moment we assume that LGD is continuous, predicted by an arbitrary but fixed model. Let F(x) = P(LGD \le x) be the cumulative distribution function (c.d.f.). Since F(x) is continuous and monotonically increasing, we can define its inverse or quantile function:^2 for p \in [0,1], F^{-1}(p) = q(p) is the unique number x with F(x) = p. Then it is true that

• F^{-1}(\cdot) is monotonically increasing,

• F^{-1}(F(x)) \le x,

• F(F^{-1}(p)) \ge p.

If F(x) is continuously differentiable, we call the derivative a density function, F'(x) = f(x). Then F(x) = \int_{-\infty}^{x} f(t)\,dt = \int_{0}^{x} f(t)\,dt. The expected value of LGD equals

E[LGD] = \int x f(x)\,dx = \int_0^1 x f(x)\,dx = \int_0^1 [1 - F(x)]\,dx.

The expectation may also be determined using the quantile function q ( p ) :

E[LGD] = \int_0^1 x f(x)\,dx = \int_0^1 F^{-1}(p)\,dp = \int_0^1 q(p)\,dp.
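As a numerical sanity check (our own illustration, not part of the paper), the two representations of the expectation can be compared for the simple choice F(x) = x^2 on [0,1], where f(x) = 2x and F^{-1}(p) = \sqrt{p}, so both integrals equal 2/3:

```python
import numpy as np

# Sanity check of E[LGD] = ∫ x f(x) dx = ∫ F^{-1}(p) dp for the illustrative
# choice F(x) = x^2 on [0,1]: f(x) = 2x, F^{-1}(p) = sqrt(p), E[LGD] = 2/3.
grid = np.linspace(0.0, 1.0, 200_001)
step = grid[1] - grid[0]
mean_via_density = np.sum(grid * 2.0 * grid) * step   # ∫ x f(x) dx
mean_via_quantile = np.sum(np.sqrt(grid)) * step      # ∫ F^{-1}(p) dp
```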

Now, we can define the Lorenz curve for the random variable LGD.

Definition 2.1: Let E[LGD] \neq 0. We define the Lorenz curve in two steps:

1) Determine the p-quantile, i.e. solve the equation p = F(x) = \int_0^x f(t)\,dt.

2) Set L(p) = \frac{1}{E[LGD]} \int_0^x t f(t)\,dt.

An immediate consequence is the following Lemma.

Lemma 2.2: The Lorenz curve L ( p ) can be determined as

L(p) = \frac{1}{E[LGD]} \int_0^p q(t)\,dt = \frac{\int_0^p F^{-1}(t)\,dt}{\int_0^1 F^{-1}(t)\,dt}, (1)

or

L(p) = \frac{p}{E[LGD]} \int_0^{F^{-1}(p)} P\big(LGD > t \mid LGD \le F^{-1}(p)\big)\,dt. (2)

Proof: The first equation follows directly from x = F^{-1}(p). To prove the second equation, we apply integration by parts (\int u v' = uv - \int u' v) with u = t and v' = f(t). Using the definition we obtain

L(p) = \frac{1}{E[LGD]} \left( x F(x) - \int_0^x F(t)\,dt \right) = \frac{F(x)}{E[LGD]} \int_0^x \left( 1 - \frac{F(t)}{F(x)} \right) dt.

This completes the proof. ∎

From the above statements we deduce the following properties of L ( p ) :

• Assuming an ascending ordering of LGDs (increasing ranking), the Lorenz curve L(p) quantifies which proportion of the total loss is assigned to the cumulative proportion p of the population.

• L(p) is contained in the unit square with L(0) = 0 and L(1) = 1. Moreover, L(p) \le p for all p \in [0,1].

• L(p) is monotonically increasing and convex. The first two derivatives of L(p) satisfy

L'(p) = \frac{dL(p)}{dp} = \frac{F^{-1}(p)}{E[LGD]} > 0

and

L''(p) = \frac{d^2 L(p)}{dp^2} = \frac{1}{E[LGD]\, f(F^{-1}(p))} > 0.

In particular, the value L'(1/2) measures the ratio between the median and the expectation.

Remark 2.3: We will call the graph of L ( p ) , i.e. the set of points

\mathcal{L} = \{ (p, L(p)) ; p \in [0,1] \} = \{ (F(x), L(F(x))) ; x \in [0,1] \} \subset [0,1] \times [0,1]

a Power curve.

Next, we need the notion of a Gini index (Gini coefficient). The index is defined in terms of L ( p ) :

Definition 2.4: The Gini index G is defined by the following equation:

G = 2 \int_0^1 \big( p - L(p) \big)\,dp = 1 - 2 \int_0^1 L(p)\,dp. (3)

The definition has a clear geometric interpretation: G is twice the area between the bisection line and the Lorenz curve. The factor 2 is a scaling factor; it ensures that G \in [0,1]. It is worth noting that even for a uniformly distributed random variable G \neq 0: if X \sim U(0,1), then F(x) = x for x \in [0,1] and L(p) = p^2, so G = 1/3.
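The uniform case is easily reproduced numerically; the following lines are our own illustration of definition (3):

```python
import numpy as np

# Numerical check of definition (3) for X ~ U(0,1), where L(p) = p^2.
p = np.linspace(0.0, 1.0, 100_001)
integrand = p - p ** 2                          # p - L(p), zero at both endpoints
gini = 2.0 * np.sum(integrand) * (p[1] - p[0])  # 2 * ∫ (p - L(p)) dp
print(round(gini, 4))                           # 0.3333
```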

The next result relates the Gini indices of two linearly related random variables.

Lemma 2.5: Let X \in [0,1] be a random variable with E[X] \neq 0, c.d.f. F_X(x) and Gini index G = G_X. For a > 0 and b \ge 0 define the new random variable Y = aX + b. Then the Gini index of Y is given by

G_Y = \frac{a E[X]}{a E[X] + b}\, G_X. (4)

Proof: We have F_Y(x) = F_X\big((x-b)/a\big), f_Y(x) = \frac{1}{a} f_X\big((x-b)/a\big) and F_Y^{-1}(p) = a F_X^{-1}(p) + b. Therefore,

L_Y(p) = \frac{a E[X]}{a E[X] + b}\, L_X(p) + \frac{b p}{a E[X] + b}.

This proves the statement. ∎

From the case b = 0 it follows that the index G is invariant under positive scaling.

The next expression provides an important statistical interpretation of the Gini index.

Lemma 2.6: Let X \in [0,1] be a random variable with E[X] \neq 0 and c.d.f. F_X(x) = F(x). Then the Gini index admits the following representation:

G = \frac{2}{E[X]}\, \mathrm{Cov}(X, F(X)). (5)

The Gini index equals a scaled covariance of the underlying variable and its rank.

Proof: Applying integration by parts to the definition, it follows that

G = -1 + 2 \int_0^1 p\, L'(p)\,dp.

The transformation p = F(x) together with L'(p) = F^{-1}(p)/E[X] gives

G = \frac{2}{E[X]} \left( \int_0^1 x F(x) f(x)\,dx - \frac{E[X]}{2} \right).

Now, from

E[F(X)] = \int_0^1 F(x) f(x)\,dx = 1 - \int_0^1 F(x) f(x)\,dx,

it follows that E[F(X)] = 1/2. Thus,

G = \frac{2}{E[X]} \big( E[X F(X)] - E[X]\, E[F(X)] \big),

and the proof is completed. ∎

Remark 2.7: Since

\mathrm{Cov}(X, F(X)) = \frac{1}{2} \int_0^1 F(x) \big(1 - F(x)\big)\,dx, (6)

the Gini index can be expressed as

G = \frac{1}{E[X]} \int_0^1 F(x) \big(1 - F(x)\big)\,dx. (7)
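The covariance representation (5) lends itself to a quick Monte Carlo illustration; the sketch below (our own, with the sample size and seed chosen arbitrarily) approximates F(X) by the empirical ranks and recovers the uniform value G = 1/3:

```python
import numpy as np

# Monte Carlo illustration of representation (5): G = (2 / E[X]) Cov(X, F(X)).
rng = np.random.default_rng(7)
x = rng.uniform(size=200_000)                      # X ~ U(0,1), exact Gini 1/3
ranks = (np.argsort(np.argsort(x)) + 1) / x.size   # empirical c.d.f. at each x_i
gini = 2.0 / x.mean() * np.cov(x, ranks, ddof=0)[0, 1]
```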

In many cases the explicit determination of L(p) or G is tedious. However, closed-form expressions exist for some prominent distributions (e.g. the lognormal, Pareto or Weibull distribution). As a final example we state the expression for the Gini index of the beta distribution. The result is established in [27]. The beta distribution is interesting in this context, since it has recently been proposed for LGD modeling ([7], [11]). Let X \in [0,1] be a beta distributed random variable with parameters \alpha, \beta > 0. The density and c.d.f. of X are given by

f(x) = \frac{1}{B(\alpha, \beta)}\, x^{\alpha - 1} (1 - x)^{\beta - 1},

and

F(x) = \frac{1}{B(\alpha, \beta)} \int_0^x t^{\alpha - 1} (1 - t)^{\beta - 1}\,dt,

where B ( α , β ) is the beta function. Then

G = G_X = \frac{2}{\alpha} \cdot \frac{B(2\alpha, 2\beta)}{B^2(\alpha, \beta)}.
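The closed form is straightforward to evaluate; the helper names below are our own sketch:

```python
from math import gamma

def beta_fn(a, b):
    # Beta function expressed through the Gamma function
    return gamma(a) * gamma(b) / gamma(a + b)

def gini_beta(alpha, beta):
    # Closed-form Gini index of a Beta(alpha, beta) variable, as stated above
    return (2.0 / alpha) * beta_fn(2 * alpha, 2 * beta) / beta_fn(alpha, beta) ** 2
```

For alpha = beta = 1 the beta distribution reduces to the uniform distribution and the formula returns 1/3, in line with the example following Definition 2.4.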

3. The Power Ratio

In this section we apply the ideas of the last section to define a new measure of LGD model performance. The measure may be seen as a counterpart of the Accuracy Ratio well known from PD modeling. Hereby we make use of the following principle: an estimation model is usually developed on the basis of historical data. The historical experience is a vital model component and has a significant influence on model development and calibration. This is also true for LGD models, as risk drivers and correlations are identified from historical loss data. Therefore, known realized losses must serve as a benchmark for an estimation model. This principle is completely in line with the PD model building and validation process.

Let V be the historical loss portfolio used for model building or validation. We assume that V consists of N defaulted borrowers/agreements.^3 At the time of default each borrower i (i = 1, \ldots, N) has an exposure EAD_i. Let E denote the entire portfolio exposure, i.e. E = \sum_{i=1}^{N} EAD_i. At the end of the model building or validation process the bank is able to assign a realized (LGD_i^R) and a predicted loss quota (LGD_i^P) to each borrower i. For that reason the model is completely characterized by the following N vectors:

\{ (i, EAD_i, LGD_i^R, LGD_i^P) ; i = 1, \ldots, N \}.

We define the new performance measure, which we call the Power Ratio (PR), as the ratio of predicted and realized Gini coefficients, which are associated with predicted and realized loss distributions, respectively:

PR = \frac{G(LGD^P)}{G(LGD^R)}, (8)

assuming that G(LGD^R) \neq 0, i.e. E[LGD^R] \neq 0. An ascending ranking of the random variable LGD is also assumed. In general it holds that 0 \le PR \le 1. We have PR = 1 if the model is able to replicate the structure of realized LGDs over the entire spectrum of observations. This tells us that the model predicts the concentrations caused by risk drivers exactly. For a model that fails to do this, a Power Ratio of (nearly) zero results.

An equivalent expression for the Power Ratio that corresponds more closely to the PD setting is

PR = \frac{1 - 2 \int_0^1 L^P(t)\,dt}{1 - 2 \int_0^1 L^R(t)\,dt} = \frac{1 - 2\, AUC^P}{1 - 2\, AUC^R}. (9)

^3 As banks offer a wide range of products, a borrower may have several contracts with a bank. For instance, a customer may hold a mortgage loan together with a current account and a credit card account. Depending on the structure of collateralization, these products will realize different losses. Since an LGD model can be built on different segmentation levels (customer types or product types), we use the two terms as synonyms.

Here, the quantity A U C denotes the area under the Lorenz curve for estimations and realizations, respectively. Since historical LGD realizations must be used as a benchmark model, the Lorenz curve for realized LGDs will be termed “the optimal curve”. The notion of A U C is also commonly used in the context of PD validation.

The Power Ratio as defined above allows a clear geometric interpretation: it is the ratio of two areas, PR = A/B, where A is the area between the bisection line and the Lorenz curve of the model and B is the area between the bisection line and the Lorenz curve of the benchmark model.

As both the numerator and the denominator in the defining equations depend on ordered LGD levels (ascending ranking of LGDs), PR measures how adequately the model discriminates high realized losses from low realized losses. A PR value of 1 is achieved if predicted concentrations exactly cover realized concentrations. However, it must be mentioned that a PR value of 1 is impossible to achieve for a CRR compliant LGD model. This is due to several requirements for IRB models. In accordance with Article 179 (1) a), Article 179 (1) f) and Article 181 (1) of the CRR [1], banks are required to incorporate margins of conservatism in their LGD estimates. These margins cover different issues: economic downturn scenarios, statistical uncertainty and/or data quality. Compliance with these requirements has a direct impact on model performance, as the following argument shows. Let us assume that the defaulted portfolio V is composed of 50% cured borrowers. In case of cure a loss of zero is realized (LGD^{R,C} = 0). The other proportion of V is assumed to consist of terminated agreements with a total loss realization (LGD^{R,T} = 1). By construction, V exhibits a bimodal structure that should be captured by a model. However, to meet the CRR requirements, a conservative (positive) LGD estimate must apply ex ante to the cured proportion of V. Let LGD^{P,C} = a > 0 be the predicted LGD in case of cure. Then a simple calculation shows that PR = \frac{1-a}{1+a} < 1.

This rather simple example shows that regulatory requirements directly impact LGD model performance. This is true for all performance criteria. From practical experience, models with PR values around 0.5 turn out to be sufficiently risk differentiating. Conversely, a PR value of nearly 1 may indicate an overfitted model.
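The cure/termination example above is easy to verify numerically. The following sketch is our own illustration; it computes the empirical Gini from the piecewise linear Lorenz curve (anticipating the construction of Section 4) and reproduces PR = (1 - a)/(1 + a):

```python
import numpy as np

def gini_empirical(lgd):
    """Default-weighted empirical Gini via the piecewise linear Lorenz curve."""
    v = np.sort(np.asarray(lgd, dtype=float))
    lorenz = np.concatenate(([0.0], np.cumsum(v) / v.sum()))
    p = np.arange(v.size + 1) / v.size
    auc = np.sum(np.diff(p) * (lorenz[1:] + lorenz[:-1]) / 2.0)
    return 1.0 - 2.0 * auc

def power_ratio(lgd_r, lgd_p):
    return gini_empirical(lgd_p) / gini_empirical(lgd_r)

# 50% cures (realized LGD 0, conservatively predicted a > 0),
# 50% terminations (realized and predicted LGD 1)
a = 0.2
lgd_r = [0.0] * 50 + [1.0] * 50
lgd_p = [a] * 50 + [1.0] * 50
print(power_ratio(lgd_r, lgd_p), (1 - a) / (1 + a))   # both 0.666...
```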

From Lemma 2.6 we get the following equation for the Power Ratio:

PR = \frac{\mathrm{Cov}(LGD^P, F(x))}{\mathrm{Cov}(LGD^R, \tilde{F}(x))} \cdot \frac{E[LGD^R]}{E[LGD^P]}, (10)

where E[LGD^P] is the model mean, E[LGD^R] is the empirical mean and \tilde{F}(x) denotes the empirical distribution function. If a model is calibrated on the ex post level (E[LGD^P] = E[LGD^R]), which may be plausible for impairment purposes, then PR can be interpreted as a ratio of two covariances.

Another theoretical aspect of PR concerns its sensitivity. Let G(LGD^R) be fixed. Recalling the expression for the Gini index,

G(LGD^P) = \frac{1}{E[LGD^P]} \int_0^1 F(x) \big(1 - F(x)\big)\,dx, (11)

we see that the measure PR is robust to extreme values of the distribution. The function I(x) = F(x)(1 - F(x)) with I(0) = I(1) = 0 attains its maximum at x = F^{-1}(1/2), meaning that PR is most sensitive to changes near the median of the LGD distribution.

4. PR Calculation in Practice

In this section we give guidance on PR calculation in banking practice. From the previous analysis it is clear that PR can be calculated in many different ways. We focus on the two most important alternatives:

• default-weighted PR calculation,

• exposure-weighted PR calculation.

The first alternative is crucial for banks that use IRB models. In accordance with Article 181 (1) a) of the CRR, banks are required to use default-weighted LGD estimates; hence model validation should be compliant with this requirement. Exposure-weighted estimation is important for impairment and economic capital calculation, since it may help to identify risks driven by exposure concentrations.

Let V be the defaulted portfolio consisting of N borrowers. V is characterized by the vectors \{ (i, EAD_i, LGD_i^R, LGD_i^P) ; i = 1, \ldots, N \}. First, let us focus on LGD realizations. We point out that realized LGDs do not necessarily lie in the unit interval, i.e. LGD^R \in [a, b] with [0,1] \subseteq [a, b]. Unfavorable liquidations of collateral may lead to LGD values above 1. Negative LGD values are also possible for specific portfolios; for instance, defaulted leasing contracts may lead to negative LGDs. In this case we have G(LGD^R) > 1.

Let the realized LGDs be ranked, i.e. (LGD_i^R)_{i=1,\ldots,N} \to (LGD_{(i)}^R)_{i=1,\ldots,M} with

LGD_{(1)}^R < \cdots < LGD_{(i)}^R < LGD_{(i+1)}^R < \cdots < LGD_{(M)}^R.

Since different borrowers may realize equal losses (LGD_i^R = LGD_j^R for i \neq j), it holds that 1 \le M \le N. If each LGD class (rank) (i) is weighted equally, then the empirical distribution is given by

\tilde{F}(x) = \frac{1}{M}\, \# \{ i \in \{1, \ldots, M\} : LGD_{(i)}^R \le x \} = \begin{cases} 0, & x < LGD_{(1)}^R, \\ \frac{k}{M}, & LGD_{(k)}^R \le x < LGD_{(k+1)}^R, \\ 1, & LGD_{(M)}^R \le x. \end{cases}

Thus,

L^R(p) = \frac{1}{E[LGD_{(i)}^R]} \sum_{i=1}^{k} LGD_{(i)}^R\, p_{(i)} = \frac{\sum_{i=1}^{k} LGD_{(i)}^R \big( F(x_{(i)}) - F(x_{(i-1)}) \big)}{\sum_{i=1}^{M} LGD_{(i)}^R \big( F(x_{(i)}) - F(x_{(i-1)}) \big)} = \frac{\sum_{i=1}^{k} LGD_{(i)}^R \big( \frac{i}{M} - \frac{i-1}{M} \big)}{\sum_{i=1}^{M} LGD_{(i)}^R \big( \frac{i}{M} - \frac{i-1}{M} \big)}.

Finally, with L^R(0) = (0,0),

L^R(k) = \left\{ \left( \frac{k}{M},\; \frac{\sum_{i=1}^{k} LGD_{(i)}^R}{\sum_{i=1}^{M} LGD_{(i)}^R} \right) ; k = 1, \ldots, M \right\}. (12)

Obviously, since N and M are finite, the Lorenz curve is piecewise linear. For each line segment (k, k+1), the slope \Delta L^R(k) equals

\Delta L^R(k) = \frac{LGD_{(k+1)}^R \cdot M}{\sum_{i=1}^{M} LGD_{(i)}^R}. (13)

In accordance with the findings of the last section, we may write this result as

\Delta L^R(k) = \frac{LGD_{(k+1)}^R}{E[LGD_{(i)}^R]}, \quad k = 0, 1, \ldots, M-1, (14)

where the mean LGD is computed as E[LGD_{(i)}^R] = \frac{1}{M} \sum_{i=1}^{M} LGD_{(i)}^R. Observe that if a fraction of the LGD realizations is negative, so is the Lorenz curve for small k. Hence, \Delta L^R(k) may also be negative.
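The construction of (12) and (14) can be sketched in a few lines; the function name and the toy data (one tied pair, so M = 3 < N = 4) are our own illustration:

```python
import numpy as np

def lorenz_points(lgd):
    """Equation-(12)-style Lorenz points (k/M, cumulative LGD share) over the
    M distinct, equally weighted LGD classes."""
    classes = np.sort(np.unique(np.asarray(lgd, dtype=float)))
    m = classes.size
    p = np.arange(m + 1) / m
    shares = np.concatenate(([0.0], np.cumsum(classes) / classes.sum()))
    return p, shares

lgd_r = [0.1, 0.4, 0.4, 0.9]                # two borrowers tie: M = 3, N = 4
p, shares = lorenz_points(lgd_r)
slopes = np.diff(shares) / np.diff(p)       # piecewise slopes, cf. (13)
mean_class_lgd = np.mean(np.unique(lgd_r))  # E[LGD_(i)] over the classes
```

Each slope equals LGD_{(k+1)} divided by the class mean, in line with (14).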

In the same manner we can construct the Lorenz curve for predicted LGDs. Let us assume a fixed prediction model. This model produces an LGD ranking of the form (LGD_i^P)_{i=1,\ldots,N} \to (LGD_{(i)}^P)_{i=1,\ldots,Q} with 1 \le Q \le N. Hence, with L^P(0) = (0,0),

L^P(m) = \left\{ \left( \frac{m}{Q},\; \frac{\sum_{i=1}^{m} LGD_{(i)}^P}{\sum_{i=1}^{Q} LGD_{(i)}^P} \right) ; m = 1, \ldots, Q \right\}, (15)

and

\Delta L^P(m) = \frac{LGD_{(m+1)}^P}{E[LGD_{(i)}^P]}, \quad m = 0, 1, \ldots, Q-1, (16)

where the mean LGD is computed as E[LGD_{(i)}^P] = \frac{1}{Q} \sum_{i=1}^{Q} LGD_{(i)}^P. Since LGD estimates are non-negative, so is \Delta L^P(m).

We also see that the following results are true:

Proposition 4.1: The following statements hold:

• An LGD estimation model with a single rank (Q = 1, M > 1) possesses no discriminative power (PR = 0).

• Let M = Q > 1. If the LGD estimations are linear transformations of the realized LGDs, LGD^P = a\, LGD^R + b with a > 0 and b \ge 0, then the discriminative power of the model equals

PR = \frac{a\, E[LGD^R]}{a\, E[LGD^R] + b}. (17)

Proof: Part one is trivial. The second part follows from Lemma 2.5. ∎
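The second statement admits a quick numerical check; this is our own sketch, with the empirical Gini again computed from the piecewise linear Lorenz curve:

```python
import numpy as np

def gini_classes(v):
    # Gini over equally weighted LGD classes (piecewise linear Lorenz curve)
    v = np.sort(np.asarray(v, dtype=float))
    lorenz = np.concatenate(([0.0], np.cumsum(v) / v.sum()))
    p = np.arange(v.size + 1) / v.size
    return 1.0 - 2.0 * np.sum(np.diff(p) * (lorenz[1:] + lorenz[:-1]) / 2.0)

a, b = 0.8, 0.1
lgd_r = np.array([0.05, 0.20, 0.45, 0.70, 0.95])      # M = Q = 5 distinct ranks
lgd_p = a * lgd_r + b                                 # linear transformation
pr = gini_classes(lgd_p) / gini_classes(lgd_r)
expected = a * lgd_r.mean() / (a * lgd_r.mean() + b)  # formula (17)
print(round(pr, 6), round(expected, 6))
```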

Next, we show how unequal weighting of LGD classes can be integrated into the PR calculation. The starting point is an ordered sequence of LGD realizations, (LGD_i^R)_{i=1,\ldots,N} \to (LGD_{(i)}^R)_{i=1,\ldots,M} with

LGD_{(1)}^R < \cdots < LGD_{(i)}^R < LGD_{(i+1)}^R < \cdots < LGD_{(M)}^R.

Let n_R(i) = \# \{ j \mid LGD_j^R = LGD_{(i)}^R \} be the number of borrowers contained in LGD class (i). Then

L^R(k) = \left\{ \left( \frac{\sum_{i=1}^{k} n_R(i)}{\sum_{i=1}^{M} n_R(i)},\; \frac{\sum_{i=1}^{k} n_R(i)\, LGD_{(i)}^R}{\sum_{i=1}^{M} n_R(i)\, LGD_{(i)}^R} \right) ; k = 1, \ldots, M \right\}, (18)

and

\Delta L^R(k) = \frac{LGD_{(k+1)}^R}{E[LGD_{(i)}^R]}, \quad k = 0, 1, \ldots, M-1, (19)

where the mean realized LGD is now computed as

E[LGD_{(i)}^R] = \sum_{i=1}^{M} w(i)\, LGD_{(i)}^R, (20)

with w(i) = n_R(i) / \sum_{i=1}^{M} n_R(i) = n_R(i)/N. The weights w(i) can clearly be interpreted as the probabilities of sorting a borrower into LGD class (i).

Analogously, we get for L^P(m)

L^P(m) = \left\{ \left( \frac{\sum_{i=1}^{m} n_P(i)}{\sum_{i=1}^{Q} n_P(i)},\; \frac{\sum_{i=1}^{m} n_P(i)\, LGD_{(i)}^P}{\sum_{i=1}^{Q} n_P(i)\, LGD_{(i)}^P} \right) ; m = 1, \ldots, Q \right\}, (21)

and

\Delta L^P(m) = \frac{LGD_{(m+1)}^P}{E[LGD_{(i)}^P]}, \quad m = 0, 1, \ldots, Q-1, (22)

with n_P(i) = \# \{ j \mid LGD_j^P = LGD_{(i)}^P \}.

It is interesting to compare the two approaches, especially equations (12) and (18). They coincide if the underlying portfolio is either completely or sufficiently heterogeneous. In the first case we have M = N and n_R(i) = 1 for all i. For a sufficiently heterogeneous portfolio we would expect each LGD class to have an equal weight in the sense that M < N, n_R(i) \approx N/M and \lim_{M \to N} n_R(i) = 1. Therefore,

L^R(k) = \left\{ \left( \frac{(N/M)\, k}{(N/M)\, M} = \frac{k}{M},\; \frac{\sum_{i=1}^{k} LGD_{(i)}^R}{\sum_{i=1}^{M} LGD_{(i)}^R} \right) ; k = 1, \ldots, M \right\}. (23)

Finally, we state the expressions for L^R and L^P assuming an exposure-weighted calculation:

L^R(k) = \left\{ \left( \frac{\sum_{i=1}^{k} EAD_{(i)}}{\sum_{i=1}^{M} EAD_{(i)}},\; \frac{\sum_{i=1}^{k} EAD_{(i)}\, LGD_{(i)}^R}{\sum_{i=1}^{M} EAD_{(i)}\, LGD_{(i)}^R} \right) ; k = 1, \ldots, M \right\}, (24)

with E = \sum_{i=1}^{N} EAD_i = \sum_{i=1}^{M} EAD_{(i)} and

L^P(m) = \left\{ \left( \frac{\sum_{i=1}^{m} EAD_{(i)}}{\sum_{i=1}^{Q} EAD_{(i)}},\; \frac{\sum_{i=1}^{m} EAD_{(i)}\, LGD_{(i)}^P}{\sum_{i=1}^{Q} EAD_{(i)}\, LGD_{(i)}^P} \right) ; m = 1, \ldots, Q \right\}, (25)

with E = \sum_{i=1}^{N} EAD_i = \sum_{i=1}^{Q} EAD_{(i)}. In both cases the slope in a line segment (j, j+1) can be interpreted as the ratio of the (j+1)-th LGD class to the exposure-weighted portfolio mean.
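Both weighting schemes fit into one sketch: a Lorenz curve over classes with arbitrary positive weights, where class counts n(i) reproduce (18) and (21) and class exposures EAD_(i) reproduce (24) and (25). The function names are our own:

```python
import numpy as np

def weighted_lorenz(lgd, weights):
    """Lorenz points: x = cumulative weight share, y = cumulative weighted-LGD
    share, after ranking the classes by ascending LGD."""
    lgd = np.asarray(lgd, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(lgd)                     # ascending LGD ranking
    lgd, weights = lgd[order], weights[order]
    x = np.concatenate(([0.0], np.cumsum(weights) / weights.sum()))
    y = np.concatenate(([0.0], np.cumsum(weights * lgd) / np.sum(weights * lgd)))
    return x, y

def weighted_gini(lgd, weights):
    x, y = weighted_lorenz(lgd, weights)
    return 1.0 - 2.0 * np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)
```

With unit weights the default-weighted case of (12) is recovered; plugging in EADs yields the exposure-weighted curves.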

5. Conclusion

In this paper we have developed a new measure to evaluate LGD model performance. The measure, which we term the Power Ratio, is a counterpart of the Accuracy Ratio known from PD modeling and accounts for concentrations in the LGD distribution. Since the measure is model independent, it is universally applicable; in particular, it can be applied to Through-The-Cycle as well as Point-In-Time models. After presenting the background of the new measure, we derived its analytical properties. Finally, we focused on practical issues and stated alternatives for its explicit calculation from a banking perspective. We see two main fields of application. Firstly, the new measure should be regarded as an extension of existing validation tools: it will support banks in achieving a more multifaceted model assessment and help practitioners to validate their models more accurately. Secondly, the new tool will also help to assess the quality of newly proposed LGD models.

Acknowledgements

The authors thank the referees for a careful reading of the manuscript and the constructive comments.

Cite this paper: Frontczak, R. , Jaeger, M. and Schumacher, B. (2017) From Power Curves to Discriminative Power: Measuring Model Performance of LGD Models. Journal of Mathematical Finance, 7, 657-670. doi: 10.4236/jmf.2017.73034.
References

[1]   Capital Requirements Regulation (CRR): Regulation (EU) No. 575/2013 of the European Parliament and of the Council of 26 June 2013 on Prudential Requirements for Credit Institutions and Investment Firms and Amending Regulation (EU) No. 648/2012.

[2]   Bastos, J. (2010) Forecasting Bank Loans Loss-Given-Default. Journal of Banking and Finance, 34, 2510-2517.
https://doi.org/10.1016/j.jbankfin.2010.04.011

[3]   Jokivuolle, E. and Peura, S. (2003) Incorporating Collateral Value Uncertainty in Loss Given Default Estimates and Loan-to-Value Ratios. European Financial Management, 9, 299-314.
https://doi.org/10.1111/1468-036X.00222

[4]   Jacobs, M. (2010) An Option Theoretic Model for Ultimate Loss-Given-Default with Systematic Recovery Risk and Stochastic Returns on Defaulted Debt.
https://ssrn.com/abstract=1551401

[5]   Frontczak, R. and Rostek, S. (2015) Modeling Loss Given Default with Stochastic Collateral. Economic Modelling, 44, 162-170.
https://doi.org/10.1016/j.econmod.2014.10.006

[6]   Ospina, R. and Ferrari, S. (2010) Inflated Beta Distributions. Statistical Papers, 51, 111-126.
https://doi.org/10.1007/s00362-008-0125-4

[7]   Huang, X. and Oosterlee, C. (2011) Generalized Beta Regression Models for Random Loss Given Default. The Journal of Credit Risk, 7, 1-26.
https://doi.org/10.21314/JCR.2011.150

[8]   Sigrist, F. and Stahel, W. (2011) Using the Censored Gamma Distribution for Modeling Fractional Response Variables with an Application to Loss Given Default. ASTIN Bulletin, 41, 673-710.

[9]   Tong, E., Mues, C. and Thomas, L. (2013) A Zero-Adjusted Gamma Model for Mortgage Loan Loss Given Default. International Journal of Forecasting, 29, 548-562.
https://doi.org/10.1016/j.ijforecast.2013.03.003

[10]   Calabrese, R. and Zenga, M. (2010) Bank Loan Recovery Rates: Measuring and Non-Parametric Density Estimation. Journal of Banking and Finance, 34, 903-911.
https://doi.org/10.1016/j.jbankfin.2009.10.001

[11]   Calabrese, R. (2014) Downturn Loss Given Default: Mixture Distribution Estimation. European Journal of Operational Research, 237, 271-277.
https://doi.org/10.1016/j.ejor.2014.01.043

[12]   Gürtler, M. and Hibbeln, M. (2013) Improvements in Loss Given Default Forecasts for Bank Loans. Journal of Banking and Finance, 37, 2354-2366.
https://doi.org/10.1016/j.jbankfin.2013.01.031

[13]   Bade, B., Rosch, D. and Scheule, H. (2011) Empirical Performance of Loss Given Default Prediction Models. Journal of Risk Model Validation, 5, 25-44.
https://doi.org/10.21314/JRMV.2011.072

[14]   Qi, M. and Zhao, X. (2011) A Comparison of Methods to Model Loss Given Default. Journal of Banking and Finance, 35, 2842-2855.
https://doi.org/10.1016/j.jbankfin.2011.03.011

[15]   Yashkir, O. and Yashkir, Y. (2013) Loss Given Default Modelling: Comparative Analysis. Journal of Risk Model Validation, 7, 25-59.
https://doi.org/10.21314/JRMV.2013.101

[16]   Hartmann-Wendels, T., Miller, P. and Tows, E. (2014) Loss Given Default for Leasing: Parametric and Nonparametric Estimation. Journal of Banking and Finance, 40, 364-375.
https://doi.org/10.1016/j.jbankfin.2013.12.006

[17]   Li, P., Qi, M., Zhang, X. and Zhao, X. (2016) Further Investigation of Parametric Loss Given Default Modeling. Journal of Credit Risk, 12, 17-47.
https://doi.org/10.21314/JCR.2016.215

[18]   Engelmann, B., Hayden, E. and Tasche, D. (2003) Testing Rating Accuracy. Risk, 16, 82-86.

[19]   Engelmann, B., Hayden, E. and Tasche, D. (2003) Measuring the Discriminative Power of Rating Systems. Deutsche Bundesbank Discussion Paper, Series 2 (No. 01).

[20]   Irwin, R. and Irwin, T. (2012) Appraising Credit Ratings: Does the CAP Fit Better than the ROC? IMF Working Paper.

[21]   Kakwani, N. (1980) Income Inequality and Poverty: Methods of Estimation and Policy Applications. Oxford University Press, Oxford.

[22]   Chotikapanich, D. (1993) A Comparison of Alternative Functional Forms for the Lorenz Curve. Economics Letters, 41, 129-138.
https://doi.org/10.1016/0165-1765(93)90186-G

[23]   Basmann, R., Hayes, K., Johnson, J. and Slottje, D. (1990) A General Functional Form for Approximating the Lorenz Curve. Journal of Econometrics, 43, 77-90.
https://doi.org/10.1016/0304-4076(90)90108-6

[24]   Sarabia, J. (2008) Parametric Lorenz Curves: Models and Applications, In: Chotikapanich, D., Ed., Modeling Income Distributions and Lorenz Curves (Economic Studies in Inequality), Springer, 167-190.

[25]   Shalit, H. and Yitzhaki, S. (1984) Mean-Gini, Portfolio Theory, and the Pricing of Risky Assets. Journal of Finance, 39, 1449-1468.
https://doi.org/10.1111/j.1540-6261.1984.tb04917.x

[26]   Shalit, H. and Yitzhaki, S. (2005) The Mean-Gini Efficient Portfolio Frontier. Journal of Financial Research, 28, 59-75.
https://doi.org/10.1111/j.1475-6803.2005.00114.x

[27]   Pham-Gia, T. and Turkkan, N. (1992) Determination of the Beta Distribution from Its Lorenz Curve. Mathematical and Computer Modelling, 18, 73-84.
https://doi.org/10.1016/0895-7177(92)90008-9

 
 