Shortcomings of Deep Learning
Current Deep Learning successes such as AlphaGo rely on massive amounts of labeled data, which are easy to obtain in games but often hard to come by in other contexts. You can't play 20 questions with nature and win!
By Oren Etzioni, CEO of the Allen Institute for AI, Founder of Farecast, Professor at UW CSE
Deep Learning has been incredibly successful in recent years, but it is still merely a tool for classifying items into categories (or for nonlinear regression).
We have seen outstanding results in mapping images, audio segments, and even board positions into categories with ever-increasing accuracy, but AI needs to go way beyond classification and regression.
Let's talk about AlphaGo, which is a phenomenal technical achievement by the team at DeepMind.
Yet the overblown claims about the impressive success of AlphaGo are a case of a person climbing to the top of a tree and shouting "I'm on my way to the moon!"
Here's why:
- AlphaGo relied on a massive amount of labeled data, which is easily available in games but often unavailable in other contexts. Consider, for example, classifying citations into "influential" versus "not" in Semantic Scholar. While unlabeled data is plentiful, labeled data is difficult to obtain.
- We don't know how to build sophisticated background knowledge or reasoning capabilities into deep learning systems.
- AlphaGo relied on a set of manually-crafted neural networks that have to be changed from application to application.
- AlphaGo relied on people to specify its input representation and its output target categories--it cannot specify its own.
- In many cases, even specifying the appropriate categories is difficult due to nuance, ambiguity, etc.
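The constraints above can be made concrete with a toy supervised classifier for the citation example. This is a minimal sketch, not how Semantic Scholar actually works: the feature names, the labels, and the nearest-centroid method are all illustrative assumptions. Note how much a human must hand-specify before any learning happens: the input representation, the output categories, and the labeled examples themselves.

```python
def to_features(citation):
    # Human-specified input representation (hypothetical features):
    # the learner cannot invent its own.
    return (citation["count"], citation["in_abstract"])

def train_centroids(labeled):
    # Nearest-centroid classifier: average the feature vectors per category.
    sums, counts = {}, {}
    for citation, label in labeled:
        x = to_features(citation)
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(centroids, citation):
    # Predict the category whose centroid is nearest in feature space.
    x = to_features(citation)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lab])))

# Labeled data must be hand-annotated -- plentiful in games, scarce elsewhere.
# Human-specified output categories: "influential" vs. "not".
labeled = [
    ({"count": 12, "in_abstract": 1}, "influential"),
    ({"count": 1,  "in_abstract": 0}, "not"),
]
centroids = train_centroids(labeled)
print(classify(centroids, {"count": 9, "in_abstract": 1}))  # prints "influential"
```

Swapping in a deep network instead of centroids would not change the point: the features, categories, and labels would still come from people, not from the learner.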
Most of these comments are not specific to AlphaGo or Deep Learning, but are broadly applicable to all supervised learning programs. As Allen Newell said (in a different context), "you can't play 20 questions with nature and win!"
With all due respect to the brilliant Geoff Hinton, thought is not a vector, and AI is not a problem in statistics.
Bio: Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence and a Professor in the University of Washington's Computer Science department. He has received numerous awards, founded several companies including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013), and authored over 100 technical papers. Oren received his Ph.D. from CMU in 1991 and his B.A. from Harvard in 1986.
Original on Quora.
Related:
- Deep Learning Research Review: Generative Adversarial Nets
- AlphaGo is not the solution to AI
- Google’s Great Gains in the Grand Game of Go