Deep Learning is not Enough
Deep Learning has real successes, but is not enough to reach Artificial Intelligence, according to the latest KDnuggets Poll. For more complex problems, should pure neural-net approaches be combined with symbolic, knowledge-based methods? Deep Learning has achieved impressive results in many areas, including:
- image recognition (reportedly used to scan every image uploaded to Facebook for face and object recognition),
- automatic caption generation (for example, this work by Andrej Karpathy and Li Fei-Fei - image below),
- speech recognition (Google Now, Siri, Baidu),
- machine translation,
- robotics / motor control,
- games (exceeding human-level in many Atari games and defeating a European Go champion),
- and many other areas (watch Oriol Vinyals of Google's excellent presentation on recent advances in Deep Learning).
Fig 1. Image, automatically annotated by Deep Learning.
Deep Learning is probably the hottest technology now, and Google Trends shows the interest in it is skyrocketing.
The promise of Deep Learning is to be the engine for Universal Machine Learning.
With all those successes there is an inevitable amount of hype.
The latest KDnuggets Poll dove into the Deep Learning hype and asked:
Deep Learning: does reality match the hype?
Here are the results, based on 634 votes.
| Answer | Votes |
|---|---|
| Yes, DL advances are real and likely to lead to true Artificial Intelligence | 20% |
| Partially, DL advances are real but the reality does not match the hype | 58% |
| No, DL is mostly hype | 14% |
| Not sure (52) | 8% |
The majority of Data Scientists recognize that Deep Learning advances are real, but not sufficient.
Note that only 14% are skeptics who think that Deep Learning is mostly hype. This is normal with every new technology. Most of these skeptics probably don't have smartphones, or don't realize that smartphones already use Deep Learning for speech recognition. 20 years ago, some of these skeptics probably doubted that the web would amount to anything.
What does Deep Learning need to make the next big advances?
In an important recent article, Chasm that AI Hasn't Yet Crossed, Gary Marcus examines the victory of Google DeepMind's AlphaGo program over the European champion in Go.
Marcus writes that AlphaGo isn't a pure neural net, but a hybrid, melding deep reinforcement learning with classical AI methods like tree search. He argues that the pure deep-net approach that was so successful for Atari games does not work for more complex problems like Go, and that the future lies with hybrid approaches.
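The hybrid idea Marcus describes can be sketched in a few lines: a classical tree search whose leaf positions are scored by a learned value function instead of being played out to the end. The sketch below is purely illustrative, not AlphaGo's actual architecture: the game is a toy Nim-like game, and `value_estimate` is a hypothetical stand-in for a trained neural network.

```python
# Minimal sketch of a hybrid: classical negamax tree search with leaf
# evaluations from a (stubbed) learned value function. All names and the
# toy game are illustrative assumptions, not AlphaGo's real components.

def value_estimate(state):
    """Stand-in for a neural value network: a crude score in [-1, 1] for
    the player to move. A real system would call a trained model here."""
    return 0.0  # "uncertain" -- this stub heuristic knows nothing

def legal_moves(state):
    """Toy Nim-like game: remove 1 or 2 stones; taking the last stone wins."""
    pile, player = state
    return [(pile - take, -player) for take in (1, 2) if pile - take >= 0]

def search(state, depth):
    """Negamax tree search; below `depth`, fall back on the value estimate.
    Returns the value of `state` for the player to move (+1 win, -1 loss)."""
    pile, _player = state
    if pile == 0:
        return -1.0  # no stones left: the player to move has already lost
    if depth == 0:
        return value_estimate(state)  # learned evaluation replaces rollout
    return max(-search(child, depth - 1) for child in legal_moves(state))

# A deep enough search recovers the exact game theory of this toy game:
# piles divisible by 3 are losses for the player to move, the rest are wins.
print(search((3, +1), depth=3))  # -1.0
print(search((4, +1), depth=4))  #  1.0
```

The design point this illustrates is exactly the trade-off Marcus highlights: the tree search supplies precise look-ahead where exhaustive reasoning is feasible, while the learned evaluation stands in for it where the tree is too deep to expand.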
What do you think?