AI is not set-and-forget

Just like a car, an AI-based system can tick along in decent shape for a while. But neglect it too long and you’re in trouble: failing to maintain your AI can destroy the project.



By Paul Barba, Chief Scientist, Lexalytics


Every new car comes with an owner’s manual. Among the wisdom contained in that manual is a maintenance and repair logbook with recommendations for when to change out parts or flush the system. That’s because all mechanical items are subject to wear and tear over time. Things break down, go awry or fall victim to the whims of the weather.

AI is the same – although to an even greater (and more expensive) extent. Just like a car, it can tick along in decent shape for a while. But neglect it too long and you’re in trouble. Unfortunately, failing to maintain your AI is a great way to end up totaling your project.

Here’s how to ensure that your AI can remain robust enough to get you from point A to point B without encountering major issues along the way.

Plan for failure

An AI system is, at heart, a complicated IT project, and it comes with the same concerns and requirements. Your network might go down, a server might fail or a machine may experience a hiccup. All of the hardware and software issues that are standard in IT exist in an AI project – with the added challenge that something may go wrong with your training data.


This is why planning for failure is your friend. Determine ways to detect issues, then develop a series of recovery paths to deal with them. If your system goes offline for an hour, how will you manage? If your model breaks, how will you revert to a working one?
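To make that last question concrete, here is a minimal Python sketch (my illustration, not something from the article) of one possible recovery path: keep versioned model artifacts in a hypothetical models/ directory and fall back to the most recent version that still passes a quick smoke test.

```python
import pickle
from pathlib import Path

MODEL_DIR = Path("models")  # hypothetical registry: models/churn-v1.pkl, churn-v2.pkl, ...

def load_latest_healthy_model(smoke_test):
    """Try the newest saved model first; if it fails a quick smoke test,
    fall back to progressively older versions instead of going down."""
    candidates = sorted(MODEL_DIR.glob("*.pkl"),
                        key=lambda p: p.stat().st_mtime, reverse=True)
    for path in candidates:
        with path.open("rb") as f:
            model = pickle.load(f)
        if smoke_test(model):  # e.g. sanity-check predictions on a few known inputs
            return model, path.name
    raise RuntimeError("No working model version found; escalate to a human.")
```

The file layout and pickle-based storage are assumptions; the point is simply that reverting should be a routine, pre-tested operation rather than something improvised at 2 a.m.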


Work on your reaction times

The more complex a project and the longer it’s left to run unchecked, the more likely you are to hit a point of no return. AIs need feedback to let them know when they’re wandering off topic – and new input to get them back to where they need to be – because the more embedded and interdependent their false understandings get, the harder they are to fix.

This issue only compounds as your AI scales and becomes more complex. When you’re working with one or two models, corrections are easy to make. But when you have 10 models working together and learning from each other, you have on the order of 100 interactions where strange things can happen. Think about it like this: if a student hasn’t quite grasped fractions in grade school, it’s easily corrected. But if they’re taking college-level courses, fixing the problem is a lot harder – especially if they’re tutoring other students!

That’s why early detection and correction are key. Ask yourself how you’re going to identify issues, and how you’re going to correct them when you find them. Will you retrain the AI as you go, or allow it to retrain itself? How will you prepare for these eventual complexities? How will you track your data flow to identify the source of issues and “undo” them?

While AI systems can self-regulate to a degree, having a human at hand to audit potential issues is essential.
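One lightweight way to put that human in the loop – offered purely as an illustrative sketch, with made-up thresholds – is to track rolling accuracy against whatever labeled feedback you collect and flag a person when it sags below the baseline:

```python
from collections import deque

class DriftMonitor:
    """Rolling accuracy check that flags a human when performance drifts.
    Baseline, window and tolerance here are illustrative, not recommendations."""

    def __init__(self, baseline_accuracy=0.90, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_human_audit(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough feedback collected yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance
```

Wiring needs_human_audit() into whatever alerting you already run turns drift into a ticket for a person, rather than something the system quietly retrains around.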


Adopt a change management model

Sometimes your AI may be working just fine, but the world around it has shifted. Perhaps there are new understandings around your data, or there’s a sudden PR storm that necessitates a shift in how you proceed. Maybe the business problems you’re trying to solve have changed since spinning up your models.

Relevant AI is robust AI. Your project needs to work not just at the tech level, but at the operational, engineering, business and marketing levels as well. When implementing your AI, make sure that you have a plan for monitoring, measuring and adjusting to non-technical changes that may upend your project “truths”. If your AI is perfectly optimized for identifying churn based on data from 5 years ago, it’s a dinosaur, not a pioneer.
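A cheap way to put a number on “the world has shifted” – again, a sketch of one common technique (a population stability index), not something prescribed by the article – is to compare the distribution of a key input at training time with what the model sees today:

```python
import numpy as np

def population_stability_index(training_values, recent_values, bins=10):
    """PSI between the feature distribution the model was trained on and the
    one it sees now; a common rule of thumb treats > 0.25 as a major shift."""
    edges = np.histogram_bin_edges(training_values, bins=bins)
    train_pct = np.histogram(training_values, bins=edges)[0] / len(training_values)
    recent_pct = np.histogram(recent_values, bins=edges)[0] / len(recent_values)
    train_pct = np.clip(train_pct, 1e-6, None)   # avoid dividing by or logging zero
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - train_pct) * np.log(recent_pct / train_pct)))
```

Run a check like this on the handful of inputs that matter most – for a churn model, perhaps tenure or usage – and you get an early, quantitative hint that your five-year-old “truths” need revisiting.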

Build a change management model into your AI and you’ll have less to throw away or undo.


Keep your AI trucking on

Sure, maintaining an AI is more complex than maintaining a vehicle. But the principles aren’t so different. By doing the equivalent of inspecting your tires and oil levels, keeping an eye out for the “check engine” light and bringing it in for a regular tune-up, you’ll be able to head off major issues before they occur.

Through auditing, quantitative measuring and proactive organizational responsiveness, you can avoid the equivalent of blowing an AI gasket. Instead, you can ensure that your AI project continues to create value for you. Plan to manage AI changes and issues ahead of time and you’ll be able to maximize the insights and value it can provide.

Bio: Paul Barba is the Chief Scientist of Lexalytics, where he is focused on applying force-multiplying technologies to solve artificial intelligence-related challenges and drive innovation in AI even further. Paul has years of experience developing, architecting, researching and generally thinking about AI/machine learning, text analytics and natural language processing (NLP) software. He has been working on growing system understanding while reducing human intervention and bringing text analytics to web scale.

