KDnuggets Home » News » 2015 » Apr » Opinions, Interviews, Reports » Interview: Ksenija Draskovic, Verizon on Dissecting the Anatomy of Predictive Analytics Projects ( 15:n12 )

Interview: Ksenija Draskovic, Verizon on Dissecting the Anatomy of Predictive Analytics Projects

We discuss Predictive Analytics use cases at Verizon Wireless, advantages of a unified data view, model selection and common causes of failure.

Ksenija Draskovic is Associate Fellow and Head Data Scientist at Verizon Wireless. She has over 20 years of experience in using multichannel data sources to understand customer behavior, providing key findings and actionable insights to business stakeholders, embedding those insights into business processes, and deploying predictive analytics throughout the organization.

She and her team cover a variety of analytics for the Marketing, Finance, and Real Estate departments, including predictive modeling, Big Data integration, and unstructured data analytics. Ksenija is a regular speaker at Predictive and Big Data Analytics conferences in the US and internationally.

Here is my interview with her:

Anmol Rajpurohit: Q1. What are the key aspirations from Predictive Analytics at Verizon Wireless? What are the most prominent use cases?

Ksenija Draskovic: Predictive analytics is used throughout Verizon's business units as a way to apply the latest technology to understand key business drivers, optimize customer service, address and anticipate customer needs beyond customer expectations, and build trusting, lasting relationships with our customer base.

AR: Q2. What are the data sources for Customer Intelligence Analytical Data Set? What are the advantages of this unified, comprehensive view?

KD: To understand which products and services benefit specific customer groups, and to recognize and anticipate immediate customer needs, the organization's legacy data systems and new Big Data sources need to be integrated into a comprehensive, unified view where the pieces of the puzzle are assembled into a single version of the truth.

Looking at and analyzing limited bits and pieces of data, although valuable within individual departmental areas, rarely gives a complete picture; for that, you have to find a way to integrate different data sources and systems.

AR: Q3. What approach do you follow for model selection and optimization for Predictive Analytics?

KD: Deciding what to model is often easy; usually those are the hot trending topics that everybody is interested in solving, and hence willing to contribute time and resources to. Once the first model is built, the model selection process is indeed where math and science meet business reality. You can spend days and weeks refining and optimizing model results, and although sometimes a few percent of additional accuracy can translate into significant additional ROI, the time lost in doing so can deplete the estimated benefit.

In addition, each model has to go through a 'sanity' check. I tend to favor models that confirm something I already know; however, discovering new facts that deliver an 'aha' moment is also critical. These are the best candidates for story-telling, acceptance, and implementation.

AR: Q4. What are the most common reasons behind the failure of Predictive Analytics projects? How can one avoid those?

KD: Here are some pitfalls that in my opinion can lead to the failure of Predictive Analytics projects:
  • Data scientists/modelers and business users do not understand each other or do not communicate during the discovery process. The most important question that every modeler should ask a business user before even starting work on a modeling request is: “What are you going to do with the results I give you? How are you planning to use them?” The discussion often leads to very different modeling targets from the ones originally requested or scoped.
  • Once the task at hand is scoped and defined, the next possible hurdle is an inadequate data set: not enough historical data, or limited data attributes that don't capture the full scope of the problem at hand. There should be little concern about collecting a large number of data points; machine learning algorithms have evolved over time and can easily handle thousands of data elements with no major issues.
  • A probably less obvious obstacle is an inappropriate modeling tool, a lack of robust machine learning algorithms, or limited hardware capacity for the task. For example, you cannot use clustering algorithms to cluster millions of records on a desktop; it is an impossible task to accomplish in a reasonable time frame.
  • An often overlooked roadblock to full model implementation is a missing environment to consume the modeling results or insights. For example, if a model provides propensity scores, there has to be an environment where the scores are deployed and made easily accessible for business use, model performance monitoring, and overall results/ROI tracking.
  • And of course, communication of predictive modeling results is often confusing and unclear to business users. Data scientists have to find a way to present the results in clear, unambiguous business language, without the complexity of the math and statistics used to derive them. It needs to be a well-told factual story that resonates with the end user and intuitively makes sense.
  • Finally, report the ROI and show how predictive models make a difference within the organization. Once that is communicated, be ready for the avalanche of fresh requests that might land on your doorstep.
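The tooling point above — that a naive clustering run over millions of records is infeasible on a desktop — is usually addressed by algorithms whose per-step cost does not grow with the data set size. The sketch below is a minimal, illustrative mini-batch k-means in NumPy (the function name and parameters are my own, not anything from Verizon's stack): each iteration updates cluster centers from a small random sample instead of the full table.

```python
import numpy as np

def minibatch_kmeans(X, k, batch_size=1_000, n_iter=50, seed=0):
    """Illustrative mini-batch k-means: each iteration fits a random
    sample, so per-step cost is independent of the total record count."""
    rng = np.random.default_rng(seed)
    # Farthest-point seeding so the initial centers are well spread out
    centers = [X[rng.integers(len(X))].astype(float)]
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(X[:, None, :] - np.asarray(centers)[None, :, :], axis=2),
            axis=1,
        )
        centers.append(X[dists.argmax()].astype(float))
    centers = np.asarray(centers)
    counts = np.zeros(k)  # per-center assignment counts -> decaying step size
    for _ in range(n_iter):
        batch = X[rng.choice(len(X), size=min(batch_size, len(X)), replace=False)]
        # Assign each sampled record to its nearest center
        assign = np.linalg.norm(
            batch[:, None, :] - centers[None, :, :], axis=2
        ).argmin(axis=1)
        for j, x in zip(assign, batch):
            counts[j] += 1
            eta = 1.0 / counts[j]  # learning rate shrinks as a center matures
            centers[j] = (1.0 - eta) * centers[j] + eta * x
    return centers
```

On synthetic data drawn from a few well-separated Gaussian blobs, the returned centers land near the blob means; production libraries (e.g. scikit-learn's MiniBatchKMeans) implement the same idea with more safeguards.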

Second part of the interview