Building and Operationalizing Machine Learning Models: Three tips for success
With more enterprises implementing machine learning to improve revenue and operations, operationalizing the ML lifecycle holistically is crucial for data teams that want their projects to be efficient and effective.
By Jason Revelle, CTO, Datatron.
One of the biggest promises of machine learning was that it would make things easier by computerizing human cognition. More enterprises are implementing machine learning (ML) to improve revenue and operations as they digitally transform their businesses. But with all the promise and opportunity behind ML, it can quickly make life harder for the teams tasked with managing it in production.
Across industries, organizations are using ML for all manner of processes: predicting prices, detecting fraud, classifying health risks, processing documents, preventive maintenance, and more. Models are trained and evaluated on historical data until they appear to fit targets for performance and accuracy. The results promise high business value by predicting, classifying, or prescribing future outcomes—and taking action. Enterprises are keen to reap the benefits that ML promises.
However, once the model is “ready,” automating its use through reliable delivery mechanisms introduces operational complexities and risks that need careful attention. Delivery and operational teams must holistically manage the ML lifecycle to make these projects efficient and effective. Data must be available in production and match the quality and distribution of the data the model was trained on. Other complexities emerge as realization sets in: this isn’t quite like other engineering efforts, and you need to start thinking about the problem differently to truly become an AI-powered company. To succeed with machine learning, and specifically with ML models, here are three things you should consider:
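As one illustration of that training-versus-production gap, a lightweight drift check such as the Population Stability Index (PSI) can flag a feature whose live distribution has wandered from what the model saw in training. The thresholds, bin count, and synthetic data below are illustrative defaults, not specifics from the article:

```python
import math
import random

def psi(train, prod, bins=10):
    """Population Stability Index between a feature's training values
    and the same feature observed in production. Common rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo, hi = min(train), max(train)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        # Small floor avoids log(0) / division by zero for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    expected = proportions(train)
    actual = proportions(prod)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
same = [random.gauss(0.0, 1.0) for _ in range(5000)]     # matches training
shifted = [random.gauss(1.5, 1.0) for _ in range(5000)]  # drifted upstream feed

print(f"stable feature PSI:  {psi(train, same):.3f}")
print(f"shifted feature PSI: {psi(train, shifted):.3f}")
```

Running a check like this per feature, on a schedule, gives operations teams an early warning well before model accuracy visibly degrades.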
1. Invest in quickly deploying multiple versions silently until you've got the right fit.
ML models are never right the first time, the second time... or usually even the third time! Data between training and production rarely matches well enough to get it right out of the gate. Make specific, targeted investments in deployment targets that can run models and log their results without production systems or customers ever seeing those results, and in the ability to deploy to them easily and fluidly until you have a model you like. It’s cheaper and more effective in the long term to assume your models will need substantial optimization and tuning, and that you will regularly need to compare the current version against new candidates promising better outcomes.
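This pattern is often called shadow (or silent) deployment: challenger versions see live traffic and their outputs are logged for comparison, but only the champion's answer reaches the caller. A minimal sketch, with hypothetical model names and toy predict functions standing in for real serving infrastructure:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("shadow")

class ShadowRouter:
    """Returns the champion model's prediction while silently running
    challenger versions on the same input and logging their outputs
    for offline comparison."""

    def __init__(self, champion, challengers):
        self.champion = champion        # (name, predict_fn)
        self.challengers = challengers  # list of (name, predict_fn)

    def predict(self, features):
        name, fn = self.champion
        result = fn(features)           # only this reaches the caller
        for shadow_name, shadow_fn in self.challengers:
            try:
                shadow_result = shadow_fn(features)
                log.info("input=%s %s=%s %s=%s",
                         features, name, result, shadow_name, shadow_result)
            except Exception:
                # A broken candidate must never hurt live traffic.
                log.exception("shadow model %s failed", shadow_name)
        return result

# Toy stand-ins for two versions of a fraud model.
champion = ("fraud_v1", lambda x: x["amount"] > 1000)
candidate = ("fraud_v2", lambda x: x["amount"] > 800)

router = ShadowRouter(champion, [candidate])
print(router.predict({"amount": 900}))  # caller sees only fraud_v1's answer
```

The logged pairs of champion and challenger outputs are exactly the data you need to decide when a new version has earned promotion.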
2. Accept that the innovation promised by your data scientists and machine learning engineers usually won’t fit within traditional, application-centric “approved software” policies.
Machine learning is a rapidly growing and diversifying field, with a constantly expanding list of technology providers, both large and small. No one questions that IT needs to maintain proper controls, security scans, and support for your operating environments. However, applying the same controls and processes that govern operating or product hosting technology to your machine learning practice is likely to greatly diminish your returns before you’ve ever crossed the starting line. Additionally, many data scientists are strong technologists and inventors; if they feel pressed to deliver better and better results without any say in what technology they use or how they leverage it, they may well find other employment.
3. Don’t mistake the model development lifecycle for just another software development lifecycle.
Creating machine learning models is a much different process than software development – and trying to treat it the same way will get you in trouble. Many businesses have elected to treat model delivery as just another software release and end up with sequential, extended timelines, gaps in cross-cutting capabilities like monitoring and analysis, and high overhead for knowledge transfer between creators and operators. Specialists who deploy and support your models must understand how the model and the data work, not just triage error codes and service reliability. Find the right talent, build hybrid teams, and invest in tools so you can test and interpret not just whether the software is executing but also whether its responses are accurate and explainable.
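One concrete difference from traditional service monitoring: a model can be "up" and returning HTTP 200s while its answers quietly go wrong. A rolling-accuracy monitor that scores predictions once ground truth arrives captures this; the window size and alert threshold below are illustrative, not prescribed by the article:

```python
from collections import deque

class AccuracyMonitor:
    """Beyond 'is the service up?': track how often recent predictions
    matched the ground truth once it arrived, and flag degradation."""

    def __init__(self, window=200, alert_below=0.90):
        self.outcomes = deque(maxlen=window)  # rolling hit/miss record
        self.alert_below = alert_below

    def record(self, predicted, actual):
        self.outcomes.append(predicted == actual)

    @property
    def accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def degraded(self):
        acc = self.accuracy
        return acc is not None and acc < self.alert_below

monitor = AccuracyMonitor(window=100, alert_below=0.90)
for i in range(100):
    # Simulate labels trickling in: the model is wrong 1 time in 5.
    monitor.record(predicted=1, actual=1 if i % 5 else 0)

print(f"rolling accuracy: {monitor.accuracy:.2f}, degraded: {monitor.degraded()}")
```

A metric like this belongs on the same dashboards as latency and error rates, so hybrid teams see model quality and service health side by side.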
Worth the work
ML models have the potential to bring huge efficiencies and advantages to your organization, but keeping up with all the aspects of building and managing a sound model can become a full-spectrum enterprise problem. Its operational complexities and risks must be recognized early; consider using these and other principles to anticipate where your problems and challenges will arise. If managed properly, ML models can be highly agile, easy to change, and well worth the learning curve.
Bio: Jason Revelle is a technology leader with hybrid product management experience in creating solutions and platforms, and has held engineering and development roles at organizations ranging from small tech companies to large enterprises.