The Last Defense Against Another AI Winter
By Ian Xiao, Engagement Lead at Dessa
TL;DR: Many people worry about another AI Winter. We don’t lack ML pilots, but enterprises are only deploying about 10% of them. We must lower the cost of deployment with five tactical solutions. I hope this post helps ML Executives, Managers, and Practitioners think more deeply and act faster. We are the last line of defense against another AI Winter. Lastly, you will find a real-time survey to see how others think about this problem.
This is a dense post. Here is a Table of Contents to help you navigate:
- A Story
- The Big Picture: the Interest in and Supply of AI
- The Small Picture: the Demand for AI
- A (Very) Brief History of AI Winters & the Core Problem Today
- Five Sub-problems and Tactical Solutions
- A Real-Time Survey for Community Input
Special thanks to Heathcliff Lewis for his valuable inputs. His team is doing something incredible in Canada!
Disclaimer: This post is not endorsed or sponsored by any of the firms I work for or by any of the tools I mention. I use the terms AI, Data Science, and ML interchangeably.
Like What You Read? Follow me on Medium, LinkedIn, or Twitter. Also, do you want to learn business thinking and communication skills as a Data Scientist? Check out my “Influence with Machine Learning” guide.
1. The Story
Michelle, a senior executive at a top Canadian bank with an aggressive ML agenda, recently read my “Data Science is Boring”, which discussed the realities of deploying ML solutions. We had a very good discussion afterward.
Michelle oversees a portfolio of ML Proofs-of-Concept (POCs). Each POC aims to determine whether a given ML technology is valuable to the business within 4–6 months. Her goal is to deploy, not just complete, more POCs per year. Her current deployment rate is around 13%.
It comes down to two questions: Why can’t we deploy more ML solutions? Is another AI Winter here?
My short answer is this: Yes, another AI Winter will arrive if you don’t deploy more ML solutions. You and your Data Science teams are the last line of defense against the AI Winter. You need to solve five key challenges to keep the momentum up. Otherwise, you and your data science teams will lose the sexiest job of the 21st century (obviously, I didn’t say that).
2. The Big Picture: the Interest in and Supply of AI
We have been experiencing an “AI Spring” (i.e. lots of excitement about AI) since 2012, thanks to technological breakthroughs, the commercialization of Deep Learning, and cheap computation. This uptick in interest in AI was largely driven by the work of Alex Krizhevsky (a student of Geoff Hinton and a co-worker of mine) and investment from firms like Google and Nvidia.
We have had similar AI Springs every decade since the 1960s. However, an AI Winter, defined by 1) skepticism and 2) cuts in funding, followed every time.
Are people skeptical? It seems so (or at least they are starting to be). There is a wide spectrum of opinions in the market today. One way to summarize it is to look at Google Search Trends. Although this is an oversimplification, we can see the Big Picture: overall interest is still high, but it seems to be flattening out.
Is funding being cut? Not yet. There are two critical streams: VC and Enterprise funding. According to a KPMG report, the overall VC market has cooled down a bit if we compare the capital invested in Q1 2018 vs. Q1 2019 and the historical deal counts. But there is still lots of VC money, and AI remains the hottest area (well, until VCs find a better opportunity). From a supply standpoint, AI start-ups and talent are likely to keep the momentum up.
On the other hand, Enterprises define the true demand for AI, and its fate, because 1) they are the customers of many AI start-ups, and 2) they hire the most ML talent. Unfortunately, there isn’t much public data on how enterprises fund AI initiatives internally. We can extrapolate by looking at the fundamentals: are enterprises deploying AI solutions to realize, not just illustrate, the promised value? If so, they will keep or increase the funding, given their profit-driven objectives.
3. A Small(er) Picture: the Demand for AI
Let’s zoom in and look at how enterprises have been adopting and deploying AI capabilities in recent years.
Caveats: a) Surveys don’t represent the full picture. Some companies certainly deploy more than 10%; I’ve seen companies that deploy 25–40%, but they are usually smaller companies. b) We don’t know if a 10% deployment rate is enough. There is limited public data to show, for example, the deployment rates of ML vs. non-ML POCs, or whether the returns from the 10% that are deployed cover the total cost of the POC program; but the general sentiment is that “we can do better than this”. c) Each survey covers different companies but generally represents large enterprises in North America.
My key takeaway is this: if enterprises don’t deploy more ML solutions, the internal demand for AI will decrease. This will have a ripple effect. ML talents will lose patience and leave; VCs will move investments to other more promising opportunities; Executives will lose confidence and cut funding to AI initiatives. History will repeat: another AI Winter will certainly come. I can feel the chill.
4. A (Very) Brief History of AI Winter & the Core Problem in Enterprises Today
There were many reasons why AI Winters happened; they could be political, technical, or societal. Libby Kinsey wrote an article that analyzes how today is different. The good news: many limiting factors from the past, such as data (there are more services and tools to provide good training data), processing power, commercial readiness, and the overall level of digitization, have improved. The bad news: we still need to get through a big hurdle (some old issues still exist, but they can be managed relatively better).
In enterprises, which is the lens I am looking through, the core problem is the economics of deploying AI, just as with adopting any other technology. This is the key hurdle that we, collectively as an industry, must overcome. Many of the economic factors are addressable if we take action now.
Joan Didion, my favorite writer, said: “Life changes in the instant. The ordinary instant”. We can’t predict when things tip over. Regardless of AI Winter, we should always be mindful, proactive, and prepared.
So, let’s think more deeply about why enterprises are only deploying ~10% of their ML POCs, and what we can do about it, now.
5. Let’s get specific and tactical
In short, deploying ML solutions is still too expensive. We can break it into five sub-problems, understand the core questions, and solve each one accordingly.
1) Process: The path from POC to deployment isn’t clear. Most enterprises source POC ideas across the organization, prioritize them, and fund a few promising ones. Once the pilots are completed, people pop some champagne and show some fancy presentations; then, silence. Many teams don’t know what the next steps are; they don’t know where to get funding; they don’t know whom to work with to turn the POC into a production-grade solution. This is a problem on its own; see point 3).
Core question(s): how to go from POC to production systems?
Solutions: Earmark funding for deployment upfront. Set clear deployment criteria that trigger the release of funding (e.g. at least a 2% accuracy improvement over the old model). Use a gated approach to releasing subsequent funding. Set up an intake process to engage IT and Operations experts early for consultation. Have a process to plan for resourcing if the POC ends up moving forward to deployment.
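A deployment criterion like the one above can be expressed as a simple, automatable gate check. Here is a minimal sketch in Python; the function name, the accuracy metric, and the 2% threshold are illustrative choices, not a standard API:

```python
def release_deployment_funding(new_accuracy: float,
                               old_accuracy: float,
                               min_improvement: float = 0.02) -> bool:
    """Return True when a POC clears the pre-agreed deployment bar,
    e.g. at least a 2% accuracy improvement over the old model."""
    return (new_accuracy - old_accuracy) >= min_improvement

# A POC that lifts accuracy from 84% to 87% clears the 2% bar;
# one that only reaches 85% does not.
print(release_deployment_funding(0.87, 0.84))  # True
print(release_deployment_funding(0.85, 0.84))  # False
```

The point is less the arithmetic than the governance: agreeing on the metric and threshold before the POC starts removes the ambiguity that stalls funding decisions later.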
2) Incentive: POC programs have the wrong KPIs. Often, the ML POC program is part of a bigger enterprise innovation mandate. By definition, the ideas need to be a bit “out there”. The goal is to learn rather than to deploy. This sets the wrong incentives and expectations. So, data science teams focus on trying cutting-edge techniques rather than balancing innovation and engineering; they deliver solutions that are demo-able rather than integratable; they share learnings about techniques rather than plans for incorporating the techniques into core business operations. Incentives drive behavior; behavior drives results.
Core question(s): how to incentivize teams to build more deployable solutions?
Solutions: Switch the KPI from “Learning” to “Deployable Innovation”. Use my MJIT method to strike a balance between innovation and deployability 😎. Emphasize thoughtful engineering (just enough for deployment; no over-engineering before proving value). Standardize deliverables to include, for example, a deployment-ready application (which should already be demo-able), an integration plan, and a business case covering learnings, pros and cons, and risks.
3) Teams: Many POC teams don’t have the right skillsets. Many data science teams only want to build models; they don’t want to do engineering or operations. Incentives, as discussed in 2), and general expectations play critical roles. Without incorporating the right engineering practices, teams raise the barrier to deployment. Imagine this scenario: you spend 4 months creating a great POC, and executives love it. But then you realize you need to spend at least 18 more months to re-design it, line up the right teams, and re-build it with proper engineering due diligence. This ruins the Return on Investment.
Core question(s): how to get teams to build deployable solutions? How to build such a team?
Solutions: Hire Data Scientists with experience in and a passion for engineering. Encourage Data Scientists to learn Full-Stack ML (this is a good starting point). If you can’t find them or they are too expensive, create a hybrid team by leveraging experts from both the engineering and operations teams. If none of these options work, DM me on LinkedIn; I am happy to chat 😉.
4) Tech: There is a big gap in infrastructure. Development (DEV) and production (PROD) environments have different data and tooling. As a result, lots of extra refactoring and testing are required when moving a solution from DEV to PROD. From a data perspective, most production data cannot be used in DEV (for good reasons). ML performance can vary significantly when a model runs on PROD data. From a tooling standpoint, there are many new tools available in DEV for innovation purposes, but PROD still uses legacy tools that optimize for stability and scalability (this is not a bad thing).
Core question(s): What is the best technology stack to enable innovation and steady-state operation? How to integrate and simplify them?
Solutions: Create a sandbox environment to host sanitized, up-to-date, PROD-like data. Have a guideline to help teams choose the right tools across the ML workflow (e.g. always use good old SQL for data pipelining in DEV if PROD does not support Python Pandas; switching languages for such a critical component is a real pain). Allow and encourage teams to use Docker-based architectures for flexible deployment of the higher-level application stack, even though some Infrastructure & Security teams may not like it. Incorporate ML DevOps practices (Eric Broda wrote a good piece on this, and so did Martin Fowler).
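To make the SQL-over-Pandas guideline concrete, here is a minimal sketch (table and column names are made up, and sqlite3 stands in for whatever warehouse PROD actually runs) showing the same aggregation written both ways:

```python
import sqlite3
import pandas as pd

# Toy transaction records standing in for PROD data.
df = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "amount": [10.0, 20.0, 5.0, 5.0, 15.0],
})

# The quick DEV habit: a pandas aggregation.
dev_result = df.groupby("customer", as_index=False)["amount"].sum()

# The PROD-friendly version: the same logic in plain SQL, which most
# legacy warehouses support even when they cannot run pandas.
conn = sqlite3.connect(":memory:")
df.to_sql("transactions", conn, index=False)
prod_result = pd.read_sql(
    "SELECT customer, SUM(amount) AS amount "
    "FROM transactions GROUP BY customer ORDER BY customer",
    conn,
)

# Both pipelines produce identical results.
assert dev_result.equals(prod_result)
```

Writing the pipeline in SQL from day one means the DEV logic ports to PROD without the rewrite (and re-testing) that makes deployment so expensive.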
5) Politics: The resistance to change is strong. I debated including this a lot because it seems generic and overly obvious, but I think it’s still worth addressing. Like any introduction of new ideas, tools, or processes, it creates a level of uncertainty due to skepticism, unfamiliarity, or misunderstanding. Fear of failure gets in the way of important and rational decisions. As a result, teams spend extra time navigating internal politics, and good POCs miss the launch window.
Core question(s): How to get buy-in from stakeholders?
Solutions: Align values and interests. Have the right use cases with clear and strong value propositions. Get both executives and operational stakeholders from the up- and downstream processes involved early. Co-design the solution with them. Get early buy-in by considering their expert inputs through the intake process mentioned in 1). Have a phased roll-out approach; this is not a new idea, but it is worth reiterating. Hire good consultants who are less tied up in the internal politics to knock doors down (and hedge the risk 😉). Check out the approach I outlined in The Last-Mile Problem of AI.
6. A Real-Time Survey
These are my observations and suggestions. They are not exhaustive and are subject to my experience and biases. I’d like to take the opportunity to get inputs from the community. I invite you to a 10-second survey. You can see what others think once you share your inputs. As always, please leave a comment if you have any feedback or ideas that I missed.
I will follow up with another post in a few weeks to share the survey results (and the answers to “Other”). Follow me on Medium so you get a notification.
To Sum Up
If we don’t deploy more ML solutions, people will lose confidence and businesses will shift their attention to more promising opportunities, just like in the AI Winters of the past. I believe many issues can be addressed immediately. Some are specific to ML technology, but many are timeless enterprise challenges. Although it may sound naive, let’s steer the course of history and avoid another AI Winter! ML Executives, Managers, and Practitioners: we are the last defense against the AI Winter.
Thanks for making it this far.
You may also like my other writings:
Data Science is Boring
My normal (boring) days in doing Data Science and how I cope with it
The Last-Mile Problem of AI
One Thing Many Data Scientists Don’t Think Enough About
Bio: Ian Xiao is an Engagement Lead at Dessa, deploying machine learning at enterprises. He leads business and technical teams to deploy Machine Learning solutions and improve Marketing & Sales for F100 enterprises.
Original. Reposted with permission.