How to make AI/Machine Learning models resilient during the COVID-19 crisis

COVID-19-driven concept shift has raised concern over whether AI/ML can continue to drive business value, following cases of inaccurate outputs and misleading results across a variety of fields. Data Science teams must invest effort in post-deployment tracking and management, and build agility into the AI/ML process, to curb problems related to concept shift.



By Mayank Kumar, Data Science Consultant and Leader, UnitedHealth Group.

Recently, I have come across several articles and conversations in which Data Science & Analytics leaders across industries express concern over the performance and reliability of AI/ML. The problem of concept drift (or data drift) is not new, but COVID-19 has made its presence felt very significantly. In this article, I discuss some best practices for building resilient AI/ML solutions that can withstand such data shifts.

Breadth of COVID’s Impact

AI/ML has extremely broad use cases across all industries and verticals. However, not all industries or use cases are being impacted by COVID-triggered data shifts. From what I can assess, only areas where AI/ML is used to capture consumer/end-user behaviour have been affected, because behaviour is the one thing COVID has changed. Your Alexa has not suddenly stopped understanding English, and self-driving cars have not forgotten how to drive, because no user behaviour is involved there. What has changed is our lifestyle, daily needs, online presence, and so on. In a B2B market, what has changed is how companies carry out their business and which products they are consuming versus which ones they are shedding. On top of these changes, the other problem is that the change is volatile: we see shifts on a weekly basis, themselves driven by a host of extraneous factors.

The problem exists primarily because AI/ML uses historical data to profile behaviours and predict future events. If recent behaviours have changed drastically and continue to be volatile, static AI/ML solutions do not stand a chance: they will be inaccurate at best and misleading at worst.

How to Outflank COVID’s Impact?

There is no definitive, exhaustive guidebook for getting around this; depending upon the problem statement at hand, one may have to employ one or more strategies in different permutations and combinations. Below are some of the ways that I think can help.

Monitor your data & features

One of the key things I have learned as a data science leader is that a successful data science team should always have the capability to monitor its underlying data and analytical features. This is a key step in taking control of your advanced analytics strategy, giving you a direct line of sight into how your data is faring and how your models are performing.

I am not talking about ad hoc monitoring, which I sense most of us already do on demand once we notice that an analytical solution is not meeting its end goals. I am talking about proactive measures that can alert solution owners to data shifts, outcome degradation, and the like.
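
To make this concrete, here is a minimal sketch of such a proactive check, assuming you keep a baseline sample of each feature from training time and compare it against recent scoring data. The PSI alert threshold of 0.2 is a common rule of thumb, not a universal standard, and the function names are illustrative.

```python
import numpy as np
from scipy import stats

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline and a current sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Clip to avoid log(0) and division by zero on empty bins.
    e = np.clip(expected / expected.sum(), 1e-6, None)
    a = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

def drift_alert(baseline, current, psi_threshold=0.2, ks_alpha=0.01):
    """Flag a feature if PSI or a two-sample KS test signals a shift."""
    _, p_value = stats.ks_2samp(baseline, current)
    return psi(baseline, current) > psi_threshold or p_value < ks_alpha
```

Hooked into a scheduler, a check like this can page the solution owner the week a feature starts drifting rather than the quarter the business notices.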

Assess flexibility of your AI/ML solutions

Agility is key to everything, and AI/ML is no exception. Ask yourself whether your model requires retraining for every change in feature distribution, or whether some degree of dynamism is built into the way you have engineered your features and your model. How much effort would it take to make your solution more sensitive to recent data so that it tracks recent user behaviour? How easy is it to make human corrections to the weights you give to different decision parameters? Can rule-based business filters be applied on top of your AI/ML workflow?

Dynamic features, self-learning models, the ability to increase the sensitivity of your solutions, and the ease of introducing human corrections into your AI/ML decision flow all determine how agile your data science and analytical solutions are in making quick tweaks to accommodate changing data patterns. A term has been coined for this: MLOps, which, much like DevOps in software engineering, aims to bring that kind of agility into the Data Science domain.
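
As one illustration of such a tweak, the sketch below retrains a model with exponentially decaying sample weights so that post-shift rows dominate the fit without discarding older history. The frame, column names, and half-life are hypothetical and would need tuning to your own setup; scikit-learn is assumed.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def recency_weights(timestamps: pd.Series, half_life_days: float = 30.0) -> np.ndarray:
    """Exponentially decaying weights: a row loses half its influence every half-life."""
    age_days = (timestamps.max() - timestamps).dt.days.to_numpy()
    return 0.5 ** (age_days / half_life_days)

# Hypothetical training frame: a timestamp column 'ts', one feature 'x', binary 'label'.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "ts": pd.date_range("2020-01-01", periods=200, freq="D"),
    "x": rng.normal(size=200),
    "label": rng.integers(0, 2, size=200),
})

# Recent (post-shift) rows carry the most weight in the refit.
weights = recency_weights(df["ts"], half_life_days=30)
model = LogisticRegression().fit(df[["x"]], df["label"], sample_weight=weights)
```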

Revisit the definitions of outcome variables

Because what we call normal has shifted, data scientists should revisit the definitions of their outcome variables to check whether what they have been calling abnormal is the new normal, and vice versa. Can the data now fall into a category that did not exist before? Have some categories vanished? You should also ask how your model treats data points it never saw during training: does it cast them out as positive or negative outliers, or treat them as part of the normal population? That is largely a use-case-based decision.
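
One cheap guardrail, sketched below with hypothetical frame names, is to scan incoming scoring data for category levels that never appeared in training and report what share of rows they cover, so the treatment of such rows becomes an explicit decision rather than an accident.

```python
import pandas as pd

def unseen_category_report(train_df: pd.DataFrame,
                           score_df: pd.DataFrame,
                           categorical_cols: list) -> dict:
    """Report category levels seen at scoring time but never during training."""
    report = {}
    for col in categorical_cols:
        seen = set(train_df[col].dropna().unique())
        new_levels = set(score_df[col].dropna().unique()) - seen
        if new_levels:
            report[col] = {
                "new_levels": sorted(new_levels),
                # Fraction of scoring rows the model has never seen a level for.
                "share_of_rows": float(score_df[col].isin(new_levels).mean()),
            }
    return report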

Shifting from profile-based methods to non-profile-based ones

This advice may not apply everywhere, but wherever possible, one should explore methodologies that do not require training on a "normal" profile and instead have the intrinsic ability to find outliers within the given sample. This applies mainly to outlier-detection systems in unsupervised and hybrid setups.
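
As one example of a non-profile-based method, an Isolation Forest scores outliers within whatever sample it is given, with no historical "normal" profile required. The sketch below uses synthetic data, and the contamination rate is an assumption that would need domain tuning.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for the current period's feature matrix; the model
# isolates outliers within this sample itself rather than comparing it
# against a pre-COVID "normal" profile.
rng = np.random.default_rng(0)
X_current = np.vstack([rng.normal(0, 1, (980, 4)),   # bulk of the population
                       rng.normal(6, 1, (20, 4))])   # a few anomalous rows

iso = IsolationForest(contamination=0.02, random_state=0)
labels = iso.fit_predict(X_current)                  # -1 = outlier, 1 = inlier
print(f"Flagged {(labels == -1).sum()} of {len(X_current)} rows as outliers")
```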

Relying more on real-time validation of your solutions

Given that current data may have shifted drastically from what we have seen in the past, the model performance we established during training, testing, and validation no longer provides the real picture. Data science units should move as much as possible to real-time validation setups using A/B testing, so they can observe how the model fares against a scenario with no AI/ML solution in the current environment.
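A minimal sketch of such a setup, with hypothetical identifiers and metrics: users are deterministically hashed into a "model" arm and a no-model "control" arm, and the two arms' outcome rates are compared to estimate the model's live lift.

```python
import hashlib

def assign_arm(user_id: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into the model or control arm."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10_000
    return "model" if bucket < treatment_share * 10_000 else "control"

def relative_lift(conv_model: int, n_model: int,
                  conv_control: int, n_control: int) -> float:
    """Relative lift of the model arm's outcome rate over the control arm."""
    rate_m, rate_c = conv_model / n_model, conv_control / n_control
    return (rate_m - rate_c) / rate_c

# Illustrative numbers only: 4.6% vs 4.0% conversion is a 15% relative lift.
print(assign_arm("user-123"))
print(relative_lift(460, 10_000, 400, 10_000))
```

Hashing keeps each user in the same arm across sessions, which matters for a clean comparison; add a significance test before acting on small differences.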

Conclusion

As I read somewhere, AI/ML is living and breathing. Like any other living, breathing thing, it requires constant care and intervention; one cannot build an AI/ML solution and walk away. Data science teams, in my view, should invest around 20-30% of their effort in monitoring, management, and potential enhancements to be successful.

Are you already leveraging some of the above to evaluate your AI/ML solution performance? What other techniques is your team employing to curb the potential impact of frequent data shifts?

References

The Pandemic Has Seriously Confused Machine Learning Systems

Why Are Machine Learning Projects So Hard To Manage?

4 Steps To Ensure AI/ML System Survives COVID-19

Concept Drift and the Impact of COVID-19 on Data Science

How COVID-19 Affects Machine Learning

Bio: Mayank Kumar has 10+ years of data science experience in the Healthcare and Pharmaceutical industries, predominantly serving the US marketplace. Currently, Mayank manages multiple engagements to develop targeted analytical solutions across various verticals, leveraging state-of-the-art tools and techniques in Machine Learning, Deep Learning, and Big Data Processing.
