Big Data Analytics for Lenders and Creditors
Credit scoring means applying a statistical model to assign a risk score to a credit application or to an existing credit account. Here we suggest how data science and big data can help make better sense of the various risk factors and produce more accurate predictions.
On the other hand, a decision tree may outperform a scorecard in terms of predictive accuracy because, unlike the scorecard, it detects and exploits interactions between characteristics. In a decision tree model, each answer that an applicant gives determines which question is asked next. If, for example, the applicant is older than 50, the model may suggest granting credit without any further questions, because the average bad rate of that segment of applications is sufficiently low. If, at the other extreme, the applicant is younger than 25, the model may suggest asking about time on the job next. Credit would then perhaps only be granted to those who have exceeded 24 months of employment, because only in that sub-segment of young applicants is the average bad rate sufficiently low.

A decision tree model thus consists of a set of if ... then ... else rules that are still quite straightforward to apply. The decision rules are also easy to understand, perhaps even more so than a decision rule based on a total score made up of many components. However, a decision rule from a tree model, while easy to apply and easy to understand, may be hard to justify for applications that lie on the border between two segments. There will be cases where an applicant says, for example: 'If I had only been 2 months older I would have received a credit without further questions, but now I am asked for additional securities. That is unfair.' That applicant may also be tempted to make a false statement about his age in his next application.

Even if a decision tree is not used directly for scoring, this model type still adds value in a number of ways: the identification of clearly defined segments of applicants with a particularly high or low risk can give dramatic new insight into the risk structure of the population. Decision trees are also used in scorecard monitoring, where they identify segments of applications in which the scorecard underperforms.
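The if/then/else rules described above translate almost directly into code. The following is a minimal sketch using the illustrative thresholds from the text (age 50, age 25, 24 months on the job); these are examples, not a real credit policy:

```python
def tree_decision(age, months_on_job):
    """Toy decision tree for a credit application (illustrative thresholds only)."""
    if age > 50:
        # segment whose average bad rate is sufficiently low
        return "grant"
    elif age < 25:
        # for young applicants, the next question is time on the job
        if months_on_job > 24:
            return "grant"
        else:
            return "decline"
    else:
        # middle segment: the tree would continue with further questions
        return "ask further questions"
```

Note how an applicant aged 24 years and 10 months falls into an entirely different branch than one aged 25, which is exactly the sharp-split problem discussed above.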
Finally, decision trees can often achieve predictive power similar to a scorecard's with far fewer characteristics. Models that require only a few characteristics, sometimes called 'short scores', are becoming especially popular in the context of campaigning and marketing for credit products. However, there is a fundamental problem associated with short scores: they diminish the richness of information that the organization can collect on applicants and thereby erode the basis for future modeling.
With the decision tree, we saw that there is such a thing as a decision rule that is too easy to understand and thereby invites fraud. Ironically, there is no danger of this happening with a neural network. Neural networks are extremely flexible models that combine characteristics in a variety of nonlinear ways. Their predictive accuracy can therefore be far superior to that of scorecards, and they do not suffer from the sharp 'splits' of decision trees. However, it is virtually impossible to explain or understand the score produced for a particular application in any simple way. It can therefore be difficult to justify a decision made on the basis of a neural network model. In some countries it may even be a legal requirement to be able to explain a decision, and such a justification must then be produced with additional methods. A neural network of superior predictive power is therefore best suited for certain behavioral or collection scoring purposes, where the average accuracy of the prediction is more important than insight into the score for each particular case. Neural network models cannot be applied manually like scorecards or simple decision trees; they require software to score the application. Then, however, their use is just as simple as that of the other model types.
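To see why a neural network score resists simple explanation, consider the forward pass of a one-hidden-layer network. The weights below are hypothetical stand-ins for a trained model; the point is that every input flows through nested nonlinear combinations, so there is no per-characteristic point allocation to read off the way there is with a scorecard:

```python
import math

def neural_score(features, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network: each hidden unit mixes
    all inputs through a nonlinearity, and the output mixes the hidden
    units again, so no single characteristic's contribution is separable."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    z = sum(w * h for w, h in zip(W2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

# hypothetical trained weights for two normalized inputs, two hidden units
W1 = [[0.8, -0.3], [-0.5, 0.9]]
b1 = [0.1, -0.2]
W2 = [1.2, -0.7]
b2 = 0.05

score = neural_score([0.4, 0.6], W1, b1, W2, b2)
```

In practice the scoring software evaluates exactly this kind of function, just with many more inputs and hidden units, which is why a neural network cannot be applied manually.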
After building both a scorecard and a decision tree model, we now want to compare the quality of the models on the validation data. One of the standard Enterprise Miner charts in the Assessment node is the concentration curve, shown in Figure 9. It shows how many of all the bads in the population are concentrated in the group of the 2% (4%, 6%, ...) worst applicants as predicted by the model. Sorting applicants by their scorecard scores, for example, concentrates around 30% of all the bads in the 10% of applicants that the scorecard model considers the worst. The decision tree only concentrates about half as many bads in the same number of what it considers the worst applicants (the 10% decile is marked by the vertical black line in Figure 9). In summary, the scorecard is assessed to be superior, because its curve stays above that of the tree.
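The points on a concentration curve can be computed directly from validation data: sort applicants from worst to best predicted risk, then measure what share of all bads falls in each leading fraction. A minimal sketch, assuming a higher model score means higher predicted risk (if the score runs the other way, as scorecard points usually do, reverse the sort):

```python
def concentration_curve(scores, is_bad, fractions=(0.02, 0.04, 0.06, 0.08, 0.10)):
    """For each fraction f, return the share of all bads found among the
    f worst-scored applicants (higher score = worse predicted risk)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total_bads = sum(is_bad)
    curve = {}
    for f in fractions:
        k = int(round(f * len(scores)))          # size of the "worst" group
        bads_in_group = sum(is_bad[i] for i in order[:k])
        curve[f] = bads_in_group / total_bads    # concentration at fraction f
    return curve
```

Plotting these fractions against the captured share of bads for both models reproduces the comparison in Figure 9: the better model's curve lies above the other's.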
Defining decision rules for application approval and risk management
Application approval and risk management do not rely on scores alone, but scores do form the basis of a decision strategy that groups customers into homogeneous segments. Each segment can then be treated with the same action. For example, in the case of approval decisions, customers are often classified, using appropriate cutoff scores, as approved, referred for examination or rejected. Other segmentation strategies can determine the limit amount assigned to a segment or the collection actions taken. An important type of segmentation is the division of customers into risk pools for the purpose of calculating certain risk components: probability of default (PD), loss given default (LGD) and exposure at default (EAD). These risk components are required by the risk-weighted assets (RWA) calculation mandated by the Basel II and III capital requirements regulations. Analysts apply the scorecard and the pooling definition to a historical data set. The long-run historical averages of the default rate, losses and exposures can then be calculated by pool and used as input to the RWA calculation.

There are various ways to group customers into segments using a scorecard. Segmentation often involves setting thresholds. Sometimes analysts define these thresholds manually, and sometimes they use an algorithm to automatically find a decision rule that is optimal in a specific way. The way multiple thresholds are combined further characterizes a decision rule. Typical examples of decision rules include policy rules (exclusions), single score bins, multiple score bins and decision trees.
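The approve / refer / reject classification described above is simply a pair of cutoff scores applied to the scorecard output. A minimal sketch, with hypothetical cutoffs of 520 and 600 (here higher scorecard points mean lower risk, as is conventional):

```python
def approval_decision(score, reject_below=520, approve_from=600):
    """Map a scorecard score to an action using two hypothetical cutoffs."""
    if score < reject_below:
        return "reject"
    elif score < approve_from:
        return "refer"    # borderline segment sent for manual examination
    else:
        return "approve"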
Decision rules can be executed in batch for all customers, so that the assignment of each customer to a group and an action is available in an operational data store for instant retrieval by front-office software. Alternatively, the front-office software can initiate execution of the decision rule to make a decision on an individual customer, possibly using new or updated information supplied by the customer at that time (online). The decision is then passed back immediately to the front-office software. In either case, the decision rule is not executed by the front-office software itself but by middle-layer software on a central server. For existing credit customers, the batch option will be most commonly used, since behavioral information derived from the customer transaction history and other stored customer characteristics is typically more predictive than information a customer might supply in the front office.
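The batch option amounts to precomputing the decision for every customer and storing it for instant retrieval. A minimal sketch, where a plain dictionary stands in for the operational data store and the customer identifiers, attributes and cutoff are all hypothetical:

```python
def batch_assign(customers, decide):
    """Nightly batch run: precompute each customer's action so the
    front office only does a key lookup, never rule execution."""
    return {cust_id: decide(attrs) for cust_id, attrs in customers.items()}

# hypothetical customer base with behavioral scores already computed
operational_store = batch_assign(
    {"C1": {"score": 640}, "C2": {"score": 505}},
    lambda attrs: "approve" if attrs["score"] >= 600 else "review",
)

# front-office software retrieves a precomputed decision by key
decision_for_c1 = operational_store["C1"]
```

The online variant would call `decide` directly with the freshly supplied customer data instead of reading from the store; either way the rule itself lives in the middle layer, not in the front-office code.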
- Are Big Data and Privacy at odds? FICO Interview
- Online course: Credit Risk Modeling
- CRN 50 Big Data Business Analytics Companies