Credit-card debts are among the most significant assets of many banks, and in the modern era credit-card ownership is exceptionally widespread: people often prefer to obtain “advanced consumption” as well as loans from their financial institutions. According to the U.S. Census Bureau, 70.2 percent of American households own a general-purpose credit card such as Visa, MasterCard or Discover. Moreover, by mid-2019, U.S. credit-card debt had already reached $971 billion, a 5% increase over 2018 that reflects a general upward trend. Unlike large loans, such as mortgages and car loans, using a credit card to purchase goods is a more widespread and routine part of daily life.
On average, the revolving balance of an American household is approximately $6,829. Hence, finding a reliable means to assess clients’ ability to repay credit-card debt (and thus avoid costly defaults) is an essential task for banks. Who can be trusted with a card, for example, and what should a client’s credit limit be? According to the Board of Governors of the Federal Reserve System, the charge-off rate in 2019 for all commercial banks in the United States was about 3.7%. Obviously, these institutions should deploy every possible resource to predict and forestall default behaviour.
Currently, loan officers decide whether to approve applications based on credit history and personal experience. Nonetheless, these criteria are far from perfect. First, not every applicant has a credit history, so this approach will not work when such applicants ask for credit cards. Second, loan officers are often subjective: the standards applied by one are not necessarily those of another. In other words, their decisions can be heavily influenced by personal factors. Although it is widely accepted that predicting credit-card default is highly significant for banks, most current research focuses on financial policies and banking regulation and on how these policies affect default probability. Such research prioritises theoretical derivation but pays insufficient attention to rigorous mathematical analysis. Other research, meanwhile, relates loan default to “machine intelligence”, though student loans and mortgages often receive more intensive scrutiny than credit-card debt. Therefore, studying credit-card charge-off rates through mathematical models seems both timely and relevant.
This paper will address the primary factors contributing to default probability by means of the XGBoost model. In addition, it predicts the default probability of a credit-card holder from the relevant information pertaining to that individual, then converts that probability into an intuitive credit score through a certain score mapping. The attributes and credit ratings of clients are intuitively reflected in credit scores: the higher the score, the better the credit; the lower the score, the lower the credit rating.
Moreover, according to users’ credit scores, credit agencies can implement relevant management measures, including reminders, collection procedures and other strategies. Clients with lower scores could be sent payment reminders, while higher-scoring clients need not receive special attention; institutions may take further measures with those whose scores are very low. Following such a strategy would allow banks and other lenders to manage their credit exposure far more efficiently.
Credit cards are rapidly replacing other transaction media in the market economy, and with their ever-increasing use, the threat of default is continuously rising. Issuers face a persistent dilemma: identifying the key drivers of a customer’s likelihood of default, and deciding what credit limit to allocate to each individual customer. Lately, the credit-card market has expanded massively, bringing with it a large number of defaults. Record levels of U.S. credit-card debt are pressuring banks and other financial institutions to develop mechanisms that understand potential customers and reduce the risk of bad debt; issuers need a mechanism that can evaluate the probability of default on the consumer’s side.
This research aims to develop a mechanism that predicts credit-card default in advance and identifies the customer base that can be offered various credit instruments with minimal risk of default.
2. Literature Review
2.1. Asymmetric Information Theory
According to information economics, if the information related to a transaction is asymmetrical, the problems of “adverse selection” and “moral hazard” will arise, which are important factors in the formation of default risk. Research by the American economists Stiglitz and Weiss showed that adverse selection and improper incentives always exist in the credit market. In terms of information, broadly speaking, banks and other lenders are in a position of relative weakness while borrowers are in a position of comparative strength. Banks often do not know the borrower’s repayment motivation, repayment ability or “project risk”. It is hardly a secret that, to obtain a short-term advantage, certain borrowers distort or conceal negative information in their dealings with lenders; more positive “information” may simply be invented. Financial institutions are aware of their information disadvantage. As a result, they tend to charge higher interest rates, which crowd out creditworthy borrowers, while those who continue actively seeking loans are more likely to represent potential non-performing loan risks. In other words, good clients can be squeezed out of the credit market by poor ones.
On the other hand, when financial institutions do provide borrowers with funds, those borrowers may have a strong motivation to hide certain facts about their operations and profitability in order to obtain economic benefit. If the borrower uses the loan for high-risk investment projects after the loan contract is signed, then once that project fails, the risk is transferred to the bank: the borrower may simply refuse to pay either principal or interest, effectively avoiding the debt obligation. As a result of this “moral hazard”, the financial institution must absorb the loss of both interest and principal.
2.2. Previous-Study Review
In 2009, Yeh and Lien applied a standardised dataset and published a research paper comparing the prediction accuracy of credit-card clients’ default probability across six data-mining techniques: K-nearest neighbour (KNN) classifiers, logistic regression (LR), discriminant analysis (DA), naive Bayesian (NB) classifiers, artificial neural networks (ANNs) and classification trees (CTs). In terms of error rates, the study detected no significant difference between the six. There was, conversely, a pronounced difference regarding area ratios; more importantly, artificial neural networks (ANNs) executed classification more accurately than the other five methods. A subsequent study (2016) by Vendakesh and Jacob assessed the comparative performance of classifiers. They found that Random Tree, Random Forest and IBK achieved good accuracy.
Currently, papers addressing the application of machine learning to credit-card default tend to concentrate on comparing machine-learning models to see which exhibit better accuracy. Therefore, instead of comparing different methods, this paper applies one currently popular data-mining technique to see what information it can provide about credit-card default probability.
As Figure 1 shows, credit-card delinquency rates are trending upward at both small and large banks, though the rate is higher for smaller banks. This is mainly because credit-mapping and default-mapping practices are more stringent at larger banks. It has also been observed that smaller banks often offer risky credit to attract a mass consumer base, relaxing security protocols for short-term gains, which eventually leads to long-term debt.
Figure 1. Credit card delinquency rate .
A survey of banks and credit unions states that institutions have suffered extreme losses due to credit-card fraud. Taking 2015 and 2016 as base years for comparison, the results in Figure 2 indicate that credit-card fraud was greater in 2016 than in 2015: a clear indication that fraud in the credit-card segment has grown along with the card’s popularity.
Figure 3 shows credit-card volume in the USA: general-purpose transactions exceed private-label transactions, and both show a positive trend. The general-purpose value has increased significantly; the private-label value has also grown, but not as much as the former.
3.1. Research Variables
The dataset comprises 24 variables, including clients’ personal information (gender, age, marital status, education level) and financial information such as account balance, monthly bills and repayment status. Default payment, a binary variable, is used as the response variable. Detailed descriptions and summary statistics of the explanatory variables are presented in Table 1 and Table 2.
The client’s ability and willingness to repay are reflected in his/her repayment details, which are themselves merely part of the wider financial data. For example, if the user has paid on time for the past six months and can cover the current month’s bills, this probably means that the user has a strong willingness to repay and strong repayment ability. Hence, there is obviously a good chance that he/she will repay next month. Conversely, if the client has delayed repayment once or more in the previous six months, this may bode ill for future repayments.
Figure 2. Credit card fraud index .
Figure 3. Credit card volume and dollar value .
Table 1. Description of explanatory variables in the dataset.
Table 2. Statistics summary of explanatory variables in the dataset.
These financial factors, rather than being separate features, are interrelated. Therefore, simply applying these variables to the model is not enough. This paper uses similar ideas to mine, in depth, the billing information and repayment information of sample users in the past six months to depict the default attributes that users may have.
The research variables can be divided into three types.
1) Proportional category: monthly payment amount/monthly bill amount, and monthly bill amount/credit limit. This category addresses both the usage rate of the credit limit and real repayment ability.
2) Frequency category: the number of months with repayment more than three, four or five months overdue; the number of months with no consumption; the number of months in which the bill was repaid in full; the number of months with a bill exceeding 8000 and exceeding 16,000; and the number of months with a repayment greater than 0. These data can be used to analyse clients’ historical repayment behaviour.
3) Statistical category: the maximum, minimum, average and variance of the monthly bill amount, and the maximum, minimum, average and variance of the monthly repayment amount.
In total, 42 derived variables were obtained once feature engineering had been completed.
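The three feature categories above can be sketched in pandas. The column names (LIMIT_BAL, BILL_AMT1–6, PAY_AMT1–6) follow the dataset naming used later in the paper, but the exact formulas below are illustrative assumptions, not the authors’ precise definitions:

```python
import numpy as np
import pandas as pd

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the three feature categories: proportional,
    frequency and statistical, over six months of billing data."""
    bills = df[[f"BILL_AMT{i}" for i in range(1, 7)]]
    pays = df[[f"PAY_AMT{i}" for i in range(1, 7)]]
    out = pd.DataFrame(index=df.index)

    # 1) Proportional: total repayment / total bill, and mean bill / credit limit
    out["pay_bill_ratio"] = pays.sum(axis=1) / bills.sum(axis=1).replace(0, np.nan)
    out["bill_limit_ratio"] = bills.mean(axis=1) / df["LIMIT_BAL"]

    # 2) Frequency: months with a bill over 8000, months with any repayment
    out["freq_bill_gt_8000"] = (bills > 8000).sum(axis=1)
    out["freq_pay_gt_0"] = (pays > 0).sum(axis=1)

    # 3) Statistical: max / min / mean / variance of monthly bills
    out["bill_max"] = bills.max(axis=1)
    out["bill_min"] = bills.min(axis=1)
    out["bill_mean"] = bills.mean(axis=1)
    out["bill_var"] = bills.var(axis=1)
    return out
```

Each derived column captures one of the behavioural signals described above; the full 42-variable set extends the same pattern to the remaining thresholds and statistics.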
The research variables clarify the purpose of the research, and together they help in understanding the credit viability of the customer. An individual’s personal information and financial analysis give issuers and institutions transparency about the user’s credit habits. Personal data provide demographic information about the customer, which helps in understanding and gaining a new customer base; such information is imperative where the customer has no previous financial or credit record. As customer acquisition is critical to these organisations, issuers scrutinise personal information very minutely. Financial information puts issuers at ease, since the customer’s entire credit history can be viewed: they can check the customer’s credit ranking along with his or her default record and decide whether to issue credit.
Issuing credit merely on the basis of personal information is more of a gamble for the issuers as the financial behavior of the customer is unknown, and hence, it poses a high risk. Therefore, these variables are scrutinized very stringently.
A secondary research approach has been used by taking the already present information of the samples in the form of financial records and personal data. Exploratory research on the basis of the survey has been conducted. An inductive research format has been applied to gain maximum insight into the research problem. Definitive conclusions have been derived from the research with the help of quantitative data.
3.2. XGBoost Model
XGBoost is a machine-learning system centred on boosted trees, proposed by Chen (2016) on the basis of a great deal of previous work on gradient boosting. It contains a set of iteratively built residual trees: each new tree fits the residual left by the preceding N − 1 trees, and summing the outputs of all trees on a new sample produces the final prediction (2016). Unlike the commonly used gradient-boosting decision tree (GBDT), which uses only first-derivative information in optimisation, XGBoost expands the cost function in a second-order Taylor expansion and uses the first and second derivatives simultaneously, which allows it to obtain more reliable results. Its main characteristics are as follows.
1) By supplementing the loss function with a regularisation term, XGBoost acquires a formidable anti-overfitting characteristic.
2) The second-order Taylor expansion is used to make the loss function more accurate.
3) The efficacy of the model iteration is greatly enhanced by the qualities of the parallel operation.
4) XGBoost supports column sampling, which reduces overfitting, reduces calculation and improves iteration efficiency.
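For reference, the second-order expansion mentioned in point 2) can be written out. At iteration t, with g_i and h_i the first and second derivatives of the loss l with respect to the previous prediction, XGBoost minimises the regularised objective (Chen, 2016):

```latex
\mathcal{L}^{(t)} \approx \sum_{i=1}^{n}\Big[\, g_i\, f_t(x_i) + \tfrac{1}{2}\, h_i\, f_t^{2}(x_i) \Big] + \Omega(f_t),
\qquad
g_i = \partial_{\hat{y}_i^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big),
\quad
h_i = \partial^{2}_{\hat{y}_i^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big)
```

where Ω(f) = γT + ½λ‖w‖² penalises the number of leaves T and the leaf weights w, and is the regularisation term referred to in point 1).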
Because of its advantageous algorithm, XGBoost has been deployed in the financial sector and elsewhere with increasing frequency in recent years. The present context allows its popularity and practicality to be more objectively assessed. Therefore, this paper takes XGBoost as its basic tool to complete the users’ credit-card characteristic analysis.
4. Research Findings
In this paper, a total of 30,000 samples of credit-card billing information and repayment information are modelled, together with some basic user information. The label is the repayment status of the user in October 2005: “1” indicates default and “0” indicates non-default. The distribution of repayment and non-repayment is shown in Table 3; as the table shows, the default probability of the sample is 22.12%.
Outlier detection should be undertaken before the original basic data are merged with the feature-engineering data and the XGBoost model is implemented. Figures 4-6 depict the distribution of several explanatory variables that may include outliers. Some variables do indeed contain outliers that need to be discarded.
For LIMIT_BAL, BILL_AMT1 and BILL_AMT2, values above 1,000,000 are treated as outliers; for BILL_AMT3, values above 750,000; for PAY_AMT1, values above 75,000; and for PAY_AMT2, values above 150,000. After the necessary exclusion of these outlier samples, 29,996 samples remain.
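The exclusion step can be sketched directly from the thresholds listed above; the function name and the use of pandas here are illustrative choices:

```python
import pandas as pd

# Thresholds above which a value is treated as an outlier, as listed in the text.
OUTLIER_CAPS = {
    "LIMIT_BAL": 1_000_000,
    "BILL_AMT1": 1_000_000,
    "BILL_AMT2": 1_000_000,
    "BILL_AMT3": 750_000,
    "PAY_AMT1": 75_000,
    "PAY_AMT2": 150_000,
}

def drop_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Remove every row in which any listed column exceeds its cap."""
    mask = pd.Series(True, index=df.index)
    for col, cap in OUTLIER_CAPS.items():
        mask &= df[col] <= cap
    return df[mask]
```

Applied to the original 30,000 samples, this kind of filter yields the 29,996 samples used for modelling.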
The samples were randomly divided into train_data and test_data at a ratio of 8:2. Given the relatively small number of variables and samples, 800 trees were selected, with a maximum tree depth of 3 and a learning rate of 0.03. On the test set the final model achieves an AUC score of 0.779; the Receiver Operating Characteristic (ROC) curve is shown in Figure 7.
The research was conducted on the basis of customers’ personal and financial information. About 30,000 samples were used, the majority of which provided financial information about the customers and only a small amount of personal information. The financial credentials of the customers played a pivotal role in estimating default. Personal information played a less essential part, though it helped in understanding demographic data such as the age, income, geographical location and employer details of defaulters. The research methods applied suggest that customers’ willingness to pay their credit on time is improving: an improvement in debt payment has been established through the research, and the personal data of users contributed little to this conclusion.
Figure 4. Outlier detection of the amount of given credit and age.
Figure 5. Outlier detection of bill statement in September, August, July 2005.
Figure 6. Outlier detection of the amount paid in September, August, July 2005.
Figure 7. Receiver Operating Characteristic (ROC) Curve.
Table 3. Default probability from the collecting data.
The developed credit-mapping system intuitively estimates customers’ default rates. By considering the five factors, the research indicates that credit default is likely to decline in the future: a high credit rating points towards low default, and a low credit rating towards a high default rate.
The model above and its associated results exhibit good discriminating power, with an AUC of 0.779. Moreover, the model ranks the features by the importance index it calculates; Table 4 lists the five most important variables.
For the preceding six months, the number of months with full bill repayment reflects both ability and willingness to repay: hence this variable is highly significant. The ultimate goal of the model is to predict users’ default probability, but a raw probability cannot directly reflect a user’s characteristics. Hence, in the context of a scorecard application, a particular mapping relationship is established between default probability and score.
The research has been successful in utilising customers’ financial information to determine a credit-rating mechanism. The rating will eventually help issuers check a customer’s credit score and evaluate his or her default status. The samples taken for the research indicate a high credit ranking among the customers, a clear sign that credit default is likely to decrease in the future. The sample size is, however, small and the result cannot be guaranteed; nevertheless, a similar mechanism can be used to check credit feasibility and customer viability.
Table 4. Five most important variables affecting credit default probability.
There is a specific hypothesis:
Hypothesis: the expected score is tied to the odds of default, and whenever those odds double, the expected score is reduced by 50 points. That is,
Score = A − B * log(odds) (1)
Score − 50 = A − B * log(2 * odds) (2)
where odds = pro_y/(1 − pro_y) and pro_y is the predicted default probability.
Subtracting (1) from (2) gives B = 50/log 2; taking the base score A = 450 then yields:
Score = 50 * log2((1 − pro_y)/pro_y) + 450 (3)
Therefore, the predicted default probability of each sample is substituted into (3) to obtain the credit-card user’s rating.
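The score mapping in Eq. (3) is simple enough to sketch directly; note that doubling the default odds lowers the score by exactly 50 points, as hypothesised:

```python
import math

def probability_to_score(pro_y: float) -> float:
    """Map a predicted default probability pro_y to a credit score
    via Eq. (3): Score = 50 * log2((1 - pro_y) / pro_y) + 450."""
    return 50 * math.log2((1 - pro_y) / pro_y) + 450

# A default probability of 0.5 (odds = 1) maps to the base score 450;
# doubling the odds (pro_y = 2/3) lowers the score by 50 points to 400.
```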
The relationship between default probability and sample score is shown in Figure 8 for the 6,000 test samples cited above. The abscissa is the user’s credit score and the ordinate is the default ratio. The trend is clear: the default ratio is lower at high scores and higher at low scores, and above 760 points the observed default rate is 0.
The present paper has addressed the five principal factors that affect the probability of credit-card default. This probability has also been converted, via a certain score mapping, into a credit-score system that can be understood intuitively. From the above data analysis, it can be concluded that XGBoost modelling is effective in predicting users’ willingness to repay credit-card debt. As hypothesised, whenever the odds of default double, the expected score falls by 50 points, which provides a new paradigm for the study of credit-card default in the future.
The user’s characteristics, or “qualifications”, and his/her credit rating can be directly expressed via the credit score: the higher the score, the better the credit rating, and the lower the score, the lower the credit rating. More importantly, credit-card companies can arrange relevant credit-management measures according to the user’s credit score, including strategies such as reminders and collection. Clients with higher scores will require relatively little attention, while those with lower scores can receive tailored management proportionate to their higher risk. Companies may need to take additional measures for those clients with extremely low scores. Through this credit-score system, the credit-management efficiency of banks and other credit institutions can be greatly improved.
Figure 8. Relationship of default probability and credit score.