Cloud Machine Learning’s Ostrich Mania & Uncanny Valley
Cloud machine learning services are popping up by the tens, providing automated data science solutions. What will the anticipated customers want? They may follow a peculiar distribution reminiscent of the uncanny valley.
(Image from Wikipedia)
It's a charming idea with just enough empirical validation and just enough organic intuition that it arrived in this world practically gift-wrapped for the editors of the New Yorker or NPR's Radiolab. (Note: I have since Googled for stories, and NPR has covered the concept on All Things Considered, On the Media, and Radiolab!)
So what does this have to do with cloud machine learning services? Generally, when we consider preferences over attributes, either an attribute is outright preferable, outright undesirable, or our preferences peak somewhere in the middle, and tail off to either side. For example, if you were choosing among stocks to invest in, holding everything else constant, you'd strictly prefer a company which has more profit to one with less. Similarly, ceteris paribus, you'd strictly prefer a company with less debt to one with more. On the other hand, when choosing the size of a car you might drive, your preference might increase as the size grew from too small to just right, and then your preference would dip as the car grew too large to fit in a parking space.
The uncanny valley represents a distinct, less intuitive distribution of preferences. In this case, we most prefer either extreme to some point in the undesirable dip in the middle.
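The three preference shapes described above can be sketched as stylized utility functions over a normalized attribute. This is a hypothetical illustration of the shapes only, not data from any study:

```python
# Three stylized preference shapes over an attribute x in [0, 1]:
# monotonic (more is always better), single-peaked (best in the middle),
# and valley-shaped (either extreme beats the middle).

def monotonic(x):
    # e.g., profit: preference rises strictly with the attribute
    return x

def single_peaked(x):
    # e.g., car size: preference peaks at x = 0.5, falls off to either side
    return 1.0 - 4.0 * (x - 0.5) ** 2

def valley(x):
    # uncanny-valley-like: preference bottoms out at x = 0.5
    return 4.0 * (x - 0.5) ** 2

for x in (0.0, 0.5, 1.0):
    print(x, monotonic(x), single_peaked(x), valley(x))
```

The valley function is just the single-peaked one flipped upside down, which is the whole point: the middle, not the extremes, is where preference collapses.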
I believe that a distribution of preferences resembling the uncanny valley may exist for cloud machine learning services. On one end, clearly there is a market for magic machine learning-based solutions which solve problems without the user having to know anything about the learning process. Many such services don't even require the user to supply their own data. Shazam predicts songs given mp3s, without requiring the user to know anything at all about data sets, loss functions, models, or training algorithms. On the extreme other end, there are clearly motivated industries seeking expert machine learning talent to custom-build systems for their tasks. But in the middle, I believe many companies may risk developing services which are too technical for the novice and too restrictive for an expert.
The Takeaway
In the first part of this article, I discussed the rush to launch companies providing machine learning as a service, all in anticipation of future demand which has only begun to materialize. In the second, I introduced a theory that preferences of users for machine learning services may resemble the uncanny valley, dipping in the middle. Customers may want services that are either mostly automated, or highly powerful and flexible, but not something uncomfortably in the middle. If the theory holds, it would be advisable for ML companies rushing into this space to bear in mind that preferences over different levels of automation may follow this peculiar distribution.
As a concrete example, I recently took Amazon ML, IBM Watson Analytics, Azure ML, and ForecastThis's DSX for a spin. DSX cycles rapidly through hundreds or thousands of models (note: impractical for truly big data), returning performance statistics to the user. It reports log loss, accuracy, F1 score, and more. However, most details of the models are concealed. Anyone who cared about the log loss would need to know precisely which loss function was actually optimized during training. For some models, good results might correspond to infinite log loss!
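To see why, here is a minimal sketch in plain Python (with made-up numbers, not DSX output) of how a model that emits hard 0/1 "probabilities" can score 99% accuracy while its log loss is infinite: a single confidently wrong prediction sends the metric to infinity.

```python
import math

def log_loss(y_true, p_pred):
    """Mean negative log-likelihood for binary labels.

    Returns inf if the model assigns probability 0 to any true class.
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        q = p if y == 1 else 1.0 - p
        if q == 0.0:
            return math.inf
        total += -math.log(q)
    return total / len(y_true)

# A hypothetical model outputting hard 0/1 "probabilities":
# 99 confident correct predictions and 1 confident mistake.
y_true = [1] * 100
p_pred = [1.0] * 99 + [0.0]

accuracy = sum(int(p >= 0.5) == y for y, p in zip(y_true, p_pred)) / len(y_true)
print(accuracy)                   # 0.99
print(log_loss(y_true, p_pred))   # inf
```

This is why knowing which loss was actually optimized matters: a classifier trained on, say, hinge loss may predict well by accuracy yet produce uncalibrated scores for which log loss is meaningless or unbounded.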
For a novice, the system may be confusing. For an expert, there is enough information to provoke questions but not enough functionality to answer them. Azure ML is considerably more hands-on, allowing users to choose algorithms and hyperparameters by hand. In our metaphor, this would be a point slightly to the right of the valley. Still, at times I couldn't shake the feeling that many prospective users of this service might just prefer to write code themselves.
Amazon's service, in contrast, falls to the left of the dip, offering a far more restricted service, accessible with default settings to any user with data in AWS. Truly far to the "magic" end of the spectrum, the business model adopted by MetaMind and Clarifai makes virtually no demands on the user. Upload an image, get an annotation. Any software developer could leverage this ability even without knowing what a training set was in the first place.