What To Expect from Deep Learning in 2016 and Beyond
Predictions from some of the top names in deep learning, including Ilya Sutskever and Andrej Karpathy, about what to expect in the field over the next 5 years.
As 2015 draws to a close, all eyes are on the year’s accomplishments, as well as on the technology trends forecast for 2016 and beyond. One field that has frequently been in the spotlight over the last year is deep learning, an increasingly popular branch of machine learning that looks set to continue advancing and to spread into a growing number of industries and sectors.
Over the last year we’ve had the privilege of hearing from many of the great minds working in artificial intelligence and computer science, at RE•WORK events, and we look forward to meeting and learning from many more in 2016!
As part of our ongoing speaker Q&A series, we asked some of the top names in deep learning for their predictions for the field over the next 5 years.
What developments can we expect to see in deep learning in the next 5 years?
Ilya Sutskever, Research Director of OpenAI: We should expect to see much deeper models, models that can learn from many fewer training cases compared to today’s models, and substantial advances in unsupervised learning. We should expect to see even more accurate and useful speech and visual recognition systems.
Sven Behnke, Full Professor and Head of the Autonomous Intelligent Systems Group at University of Bonn: I expect deep learning methods to be applied to increasingly multi-modal problems with more structure in the data. This will open new application domains for deep learning, such as robotics, data mining, and knowledge discovery.
Christian Szegedy, Senior Research Scientist at Google: Current deep learning algorithms and neural networks are far from their theoretically possible performance. Today, we can design vision networks that are 5-10 times cheaper and use 15 times fewer parameters while outperforming their much more expensive counterparts from one year ago, solely by virtue of improved network architectures and better training methodologies. I am convinced that this is just the start: deep learning algorithms will become so efficient that they will be able to run on cheap mobile devices, even without extra hardware support or prohibitive memory overhead.
Andrej Karpathy, Computer Science Ph.D. student at Stanford University and Research Scientist at OpenAI: Instead of describing several interesting on-the-horizon developments at a high level, I’ll focus on one in more detail. One trend I’m seeing is that architectures are quickly becoming bigger and more complex. We’re building towards large neural systems where we swap neural components in and out, pretrain parts of the networks on various datasets, add new modules, finetune everything jointly, and so on. For example, Convolutional Networks were once among the largest and deepest neural network architectures, but today they are abstracted away as a small box in the diagrams of most newer architectures. In turn, many of these architectures tend to become just another small box in the next year’s innovations. We’re learning what the lego blocks are, and how to wire and nest them effectively to build large castles.
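The compositional pattern Karpathy describes can be sketched in a few lines of plain Python. This is purely illustrative and not from the article: the `Module` and `Sequential` classes, the module names, and the parameter counts are hypothetical stand-ins for the kind of component system that frameworks of the time (e.g. Torch, Theano) provided.

```python
# Illustrative sketch of neural "lego blocks": pretrained components are
# nested inside larger systems, optionally frozen, then finetuned jointly.
# All names and parameter counts here are made up for the example.

class Module:
    """A reusable network component with its own parameters."""
    def __init__(self, name, n_params, frozen=False):
        self.name = name
        self.n_params = n_params
        self.frozen = frozen  # frozen modules keep pretrained weights fixed

    def trainable_params(self):
        return 0 if self.frozen else self.n_params

class Sequential(Module):
    """A larger system built by wiring smaller boxes together."""
    def __init__(self, name, modules):
        self.name = name
        self.modules = modules
        self.frozen = False

    def trainable_params(self):
        return sum(m.trainable_params() for m in self.modules)

# A ConvNet, once an entire architecture, is now just one box...
convnet = Module("pretrained_convnet", n_params=60_000_000, frozen=True)

# ...nested alongside newer components and a fresh task-specific head.
model = Sequential("captioner", [
    convnet,
    Module("lstm_decoder", n_params=8_000_000),
    Module("task_head", n_params=1_000_000),
])

print(model.trainable_params())  # only the non-frozen parts train at first

# "Finetune everything jointly": unfreeze the pretrained box as well.
convnet.frozen = False
print(model.trainable_params())
```

The design choice being illustrated is that each box exposes the same small interface, so an entire pretrained architecture can be dropped into a bigger system exactly like a single layer.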
Pieter Abbeel, Associate Professor in Computer Science at UC Berkeley and Co-Founder of Gradescope: Lots of verticals based on current deep supervised learning technology, as well as scaling to video, figuring out how to make deep learning outperform current approaches to natural language processing, and significant advances in deep unsupervised learning and deep reinforcement learning.
Eli David, CTO of Deep Instinct: Over the past two years we have observed accelerating success for deep learning in most areas where it is applied. Even if we don’t achieve the Holy Grail of human-level cognition within the next five years (though this will most probably happen in our lifetimes), we will see huge improvements in many additional domains. Specifically, I think the most promising area will be unsupervised learning, as most of the data in the world is unlabeled, and our own brain’s neocortex is primarily a very good unsupervised learning box.
While Deep Instinct is the first company using deep learning for cybersecurity, I would expect more companies to employ it in the upcoming years. However, the barrier to entry for deep learning is still quite high, especially for cybersecurity companies, which do not typically use AI methods (only a few solutions use even classical machine learning), so it will take a few more years until deep learning becomes a commodity technology in widespread use within cybersecurity.
Daniel McDuff, Director of Research at Affectiva: Deep learning is already promising to be the dominant form of machine learning within computer vision, speech analysis and a number of other areas. I hope that the ability to build accurate recognition systems with the computing power available from one or two GPUs will allow researchers to develop and deploy new software in the real world. I expect that more focus will be given to unsupervised and/or semi-supervised training algorithms, as the amount of data only continues to increase.
Jörg Bornschein, Global Scholar with the Canadian Institute for Advanced Research (CIFAR): Predicting the future is always hard. I expect that unsupervised, semi-supervised and reinforcement-learning approaches will play much more prominent roles than they do today. When we consider machine learning as a component in larger systems, e.g. in robotic control systems, or as a part that steers and focuses the computational resources of a larger system, it seems obvious that purely supervised approaches are conceptually too limited to solve these problems appropriately.
Ian Goodfellow, Senior Research Scientist at Google: I expect within five years, we will have neural networks that can summarize what happens in a video clip, and will be able to generate short videos. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well. I also predict that neural networks will become an important tool in other scientific disciplines. For example, neural networks could be trained to model the behavior of genes, drugs, and proteins and then used to design new medicines.
Nigel Duffy, CTO of Sentient Technologies: To date the Big Data ecosystem has been focused on the collection, management, and curation of large amounts of data. Obviously, there has also been a lot of work on analysis and prediction. Fundamentally though, business users don’t care about any of that. Business users only care about outcomes, i.e., “will this data change the way I behave, will it change the decisions I make”. We believe that these are the key questions to be addressed in the next 5 years. And we believe that AI will be the bridge between data and better decisions.
Obviously, deep learning will play a significant role in that evolution, but it will do so in combination with other AI approaches. Over the next 5 years we will increasingly see hybrid systems where deep learning is used to handle some hard perceptual tasks while other AI and machine learning (ML) techniques are used to address other parts of the problem, e.g., reasoning.
Koray Kavukcuoglu & Alex Graves, Research Scientists at Google DeepMind: A lot will happen in the next five years. We expect both unsupervised learning and reinforcement learning to become more prominent. We also expect an increase in multimodal learning, and a stronger focus on learning that persists beyond individual datasets.
Charlie Tang, PhD student in the Machine Learning group at the University of Toronto: Deep learning algorithms will gradually be adopted for more tasks and will “solve” more problems. For example, 5 years ago algorithmic face recognition accuracy was still somewhat worse than human performance. Currently, however, superhuman performance is reported on the main face recognition dataset (LFW) and the standard image classification dataset (ImageNet). In the next 5 years, harder and harder problems such as video recognition, medical imaging and text processing will be successfully tackled by deep learning algorithms. We can also expect deep learning algorithms to be ported to commercial products, much like the face detector was incorporated into consumer cameras over the past 10 years.
To learn more about the future impact of artificial intelligence and deep learning on business and society, join us at one of our 2016 events:
- Deep Learning Summit, San Francisco, 28-29 January
- Virtual Assistant Summit, San Francisco, 28-29 January
- Women in Machine Intelligence Dinner, London, 17 February
- Deep Learning in Healthcare Summit, London, 7-8 April
- Deep Learning Summit, Boston, 12-13 May
- Machine Intelligence Summit, Berlin, 29-30 June
- IoT Meets AI Dinner, London, 15 September
- Deep Learning Summit, London, 22-23 September
- Deep Learning Summit, Singapore, 20-21 October
For further discussions on deep learning, machine intelligence and more, join our group on LinkedIn!