KDnuggets Home » News » 2017 » Mar » Tutorials, Overviews » The Challenges of Building a Predictive Churn Model ( 17:n10 )

The Challenges of Building a Predictive Churn Model


 
 

Unlike other data science problems, there is no one method for predicting which customers are likely to churn in the next month. Here we review the most popular approaches.



Sponsored Post by DataScience.com.

Part of being a data scientist is constantly encountering new problems and not always having the answers. There's nothing wrong with doing a quick web search for a solution, but in many cases, what you'll find isn't technical or specific enough to solve your problem. Take the majority of online materials about churn modeling, for example.


There's a lot on the web about churn for business users, since churn is a metric that affects marketing, customer service, and other largely non-technical departments. On the other extreme, a search for academic literature on churn will produce thousands of papers on innumerable techniques, most of them applied in a very particular context.

Why is it so difficult to find quality and unbiased technical information on churn? Because, like many other problems in the data science world, there is no one method for predicting which customers are likely to churn in the next month. Even the term "churn modeling" has multiple meanings: It can refer to calculating the proportion of customers who are churning, forecasting a future churn rate, or predicting the risk of churn for particular individuals. Most of the time it's the latter, which has a multitude of applications that you can read more about here.

No "Silver Bullet" Methodology


The two most popular broad approaches to churn modeling are machine learning techniques and survival analysis, each of which requires distinct data structures and feature selection procedures. Ultimately, there is no single churn methodology that is proven to work in most situations; either machine learning models or survival regression could be appropriate depending on the application.

Machine learning methods, specifically classification, are widely used due to their high performance and ability to handle complex relationships in data. On the other hand, survival analyses can provide value by answering a different set of questions. Quantities such as the survival and hazard functions can be used to forecast which customers are likely to churn within a particular time period.
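As a minimal sketch of the survival-analysis side, the survival function S(t) can be estimated from customer tenures with the standard Kaplan-Meier product-limit estimator, implemented here from scratch on toy data (the tenures and churn flags are illustrative, not from any real dataset):

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of the survival function S(t).

    durations: tenure lengths (e.g., months until churn or until last observation)
    observed:  1 if the customer actually churned, 0 if still active (censored)
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.sort(np.unique(durations[observed == 1]))
    surv = 1.0
    curve = []
    for t in event_times:
        at_risk = np.sum(durations >= t)                      # still subscribed just before t
        events = np.sum((durations == t) & (observed == 1))   # churn events at t
        surv *= 1.0 - events / at_risk
        curve.append((t, surv))
    return curve

# Toy data: five customers; the last is still active at month 8 (censored)
tenure = [2, 3, 3, 5, 8]
churned = [1, 1, 1, 1, 0]
for t, s in kaplan_meier(tenure, churned):
    print(f"S({t:.0f}) = {s:.3f}")
```

In practice a library such as lifelines provides this estimator (and the Cox model for covariates), but the hand-rolled version shows how censored customers stay in the risk set without counting as churn events.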

In addition to these two approaches, there are many others: Ensemble models can provide superior accuracy, but could be time consuming to train and tune; rule-based techniques, latent probability models, and network-based models have all also shown some promising results. So what should you use to perform your churn analysis? There are a lot of studies out there that can help narrow down the methods that make sense for your use case, but it is still a good idea to compare the performance of several models on your data to find out which is the most effective.
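Comparing several candidate models on your own data can be as simple as scoring each with cross-validation. A sketch with scikit-learn, using a synthetic imbalanced dataset as a stand-in for real customer features and churn labels:

```python
# Compare candidate churn classifiers on the same data with cross-validated AUC.
# X and y are synthetic here; in practice X holds customer features and
# y a binary churned/retained label.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```

The same loop extends naturally to gradient-boosted ensembles or any estimator exposing the scikit-learn interface.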

Laying the Groundwork: Features and Exploratory Analysis


As with many other machine learning models, a churn model is only as good as the features going into it. In addition to domain knowledge, skill and creativity are needed to construct a robust feature set with information that is predictive of a churn event. Many roadblocks can arise at this stage, such as target leakage, unavailable or missing information, or the need for optimal feature transformations.

Even constructing the target variable for the churn event may not always be straightforward. For example, in a setting where customers cancel and renew frequently, how can we define churn? What about in a setting where customers can subscribe and purchase multiple product lines? Careful exploratory analysis, and sometimes auxiliary model building, often have to occur before you embark on building an overall churn model. Exploratory analysis can reveal any irregularities, correlations, outliers, and relationships that domain knowledge alone wouldn't account for.
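To make the target-construction problem concrete, here is one hypothetical labeling rule for a non-subscription business: a customer at risk before a cutoff date counts as churned if they make no purchase in the following 90 days. The column names, dates, and 90-day window are all illustrative assumptions:

```python
import pandas as pd

# Hypothetical order log; column names are illustrative.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3],
    "order_date": pd.to_datetime(
        ["2017-01-05", "2017-02-20", "2016-11-30", "2017-01-15", "2017-03-01"]
    ),
})

cutoff = pd.Timestamp("2017-02-01")
window_end = cutoff + pd.Timedelta(days=90)

# Customers active before the cutoff form the population at risk.
at_risk = orders.loc[orders["order_date"] < cutoff, "customer_id"].unique()
# A customer is retained if they ordered at least once in the 90-day window.
retained = orders.loc[
    orders["order_date"].between(cutoff, window_end), "customer_id"
].unique()

labels = pd.Series(
    {cid: int(cid not in retained) for cid in at_risk}, name="churned"
)
print(labels)  # customer 2, with no orders after the cutoff, is labeled churned
```

Changing the window length or the definition of "active" changes the labels, which is exactly why this step deserves careful exploratory analysis first.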

Validating Churn Model Performance


When estimating model accuracy, it's important to choose both the correct metric and the right validation dataset. Class imbalance and the model's monetary impact should both inform which metric you optimize; for instance, when churners are a small minority, the area under the ROC curve (AUC) gives a more reliable estimate of the model's ability to identify churners than raw accuracy.
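AUC rewards ranking churners above non-churners, so it stays informative even at a low churn rate. A small sketch with scikit-learn, using made-up labels and churn-risk scores:

```python
from sklearn.metrics import roc_auc_score

# 20% churn rate: two churners among ten customers.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
# Hypothetical model churn-risk scores; one churner is ranked imperfectly.
y_score = [0.1, 0.2, 0.15, 0.3, 0.25, 0.1, 0.4, 0.35, 0.8, 0.3]

print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfectly separating churners from non-churners, regardless of how rare churn is.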

Similarly, when model performance is directly tied to quantifiable actions, lift becomes an important metric. For example, an email campaign with a 20% discount code may make monetary sense only for those customers who have a very high risk of churning. This means maximizing model precision is important, and lift captures how well the model identifies churners compared to the results you'd see from sending the discount to a random group of customers. Optimizing lift ultimately helps you maximize your return on investment.
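Lift at the top k fraction is simple to compute: the churn rate among the k highest-scored customers divided by the overall churn rate (what random targeting would achieve). A sketch with toy data:

```python
import numpy as np

def lift_at_k(y_true, y_score, k=0.1):
    """Churn rate in the top-k fraction of scores divided by the overall rate."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    n_top = max(1, int(len(y_true) * k))
    top = np.argsort(y_score)[::-1][:n_top]   # highest-risk customers first
    return y_true[top].mean() / y_true.mean()

# Ten customers, two churners; the model scores both churners highest.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05]

print(lift_at_k(y_true, y_score, k=0.2))  # top 20% is all churners: maximum lift of 5x
```

A lift of 5 means the targeted group churns at five times the base rate, so a discount sent to that group reaches far more at-risk customers per dollar than a random mailing.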

Once you've optimized the correct metric, you still need to measure model performance on new, unseen data. In an ideal case, you'd monitor a deployed model or several versions of the model to identify problems. But when a live test is too costly, careful construction of a validation set can achieve a realistic estimate of model performance.
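One common way to build such a validation set is a time-based split: train on earlier customer snapshots and validate on the most recent period, mimicking how the deployed model would score genuinely new data. A sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical table: one row per customer snapshot, with a churn label.
snapshots = pd.DataFrame({
    "customer_id": range(6),
    "snapshot_date": pd.to_datetime(
        ["2017-01-01", "2017-01-01", "2017-02-01",
         "2017-02-01", "2017-03-01", "2017-03-01"]
    ),
    "churned": [0, 1, 0, 0, 1, 0],
})

# Train on earlier snapshots, validate on the most recent month.
split_date = pd.Timestamp("2017-03-01")
train = snapshots[snapshots["snapshot_date"] < split_date]
valid = snapshots[snapshots["snapshot_date"] >= split_date]
print(len(train), "training rows,", len(valid), "validation rows")
```

Unlike a random split, this guards against leaking future information into training and gives a more honest estimate of performance on the next period's customers.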

All of these steps can be difficult to navigate on a deadline, and scaling this knowledge across a team of data scientists can be both time consuming and distracting. In the coming weeks, we'll continue to explore the intricacies of churn modeling to help equip your team with the right tools to accurately measure when and why your customers churn.