Predictions for Deep Learning in 2017

The first hugely successful consumer application of deep learning will come to market, a dominant open-source deep-learning tool and library will take the developer community by storm, and more deep-learning predictions.



Deep learning is all the rage as we move into 2017. Grounded in multilayer neural networks, the technology underpins artificial intelligence, cognitive computing, and real-time streaming analytics in many of the most disruptive new applications.

For data scientists, deep learning will be a top professional focus going forward. Here are my predictions for the chief trends in deep learning in the coming year:

  • The first hugely successful consumer application of deep learning will come to market: I predict that deep learning’s first avid embrace by the general public will come in 2017, and that it will center on processing the glut of photos that people capture with their smartphones and share on social media. In this regard, the golden deep-learning opportunities will be in apps that facilitate image search, auto-tagging, auto-correction, embellishment, photorealistic rendering, resolution enhancement, style transformation, and fanciful figure inception. Where audio processing is concerned, deep learning’s first mainstream success in 2017 may very well be in composing music that feels like it was created by an actual human musician. Deep learning may also enter our lives in the coming year as the intelligence driving a new generation of wearables that help disabled people to see, hear, and otherwise sense their surroundings. The technology will definitely find its way into toys, games, and consumer appliances in 2017, especially those that incorporate embedded cameras, microphones, and Internet of Things endpoints. To some degree, consumers may also encounter deep learning in 2017 in autonomous vehicles, though these products will take several years to enter the mainstream as their developers tackle a rat’s nest of technological, regulatory, legal, cultural, and other issues.
  • A dominant open-source deep-learning tool and library will take the developer community by storm: As 2016 draws to a close, we’re seeing more solution providers open-source their deep-learning tools, libraries, and other intellectual property. This past year, Google open-sourced its DeepMind Lab and TensorFlow code, Apple published its deep-learning research, and the non-profit OpenAI group began building its deep-learning benchmarking technology. Already, developers have a choice of open-source tools for developing deep-learning applications in Spark, Scala, Python, and Java, with support for other languages sure to follow. In addition to DeepMind Lab and TensorFlow, open tools for deep-learning development currently include Deeplearning4j, Keras, Caffe, Theano, Torch, OpenBLAS, and MXNet.

    In 2017, open-source development options for deep-learning developers will continue to proliferate. Nevertheless, we’re sure to see at least one of them become the de facto standard by this time next year. By the end of the decade, no data science workbench will be complete without at least one open-source deep-learning tool and library that integrates closely with Spark, Zeppelin, R, and Hadoop. In that regard, I predict that Apache Spark will evolve over the next 12 to 24 months to beef up its native support for deep learning. (For a taste of how approachable these libraries have become, see the first sketch after this list.)

  • A new generation of low-cost commercial off-the-shelf deep-learning chipsets will come to market: Deep learning relies on the application of multilevel neural-network algorithms to high-dimensional data objects. As such, it requires the execution of fast matrix manipulations in highly parallel architectures in order to identify complex, elusive patterns such as objects, faces, voices, and threats. For high-dimensional deep learning to become more practical and pervasive, the underlying pattern-crunching hardware needs to become faster, cheaper, more scalable, and more versatile. The hardware also needs to become capable of processing data sets that will continue to grow in dimensionality as new sources are added, merged with other data, and analyzed by deep-learning algorithms of greater sophistication. That hardware, from chipsets and servers to massively parallel clusters and distributed clouds, will need to keep crunching through ever higher-dimensional data sets that also scale inexorably in volume, velocity, and variety.

    Widespread adoption and embedding of deep-learning technology will depend on the continued commoditization and miniaturization of low-cost hardware technologies that accelerate algorithmic processing. In 2017, we’ll see mass deployment of a new generation of neural chipsets, graphics processing units, and other high-performance computing architectures optimized for deep learning. These increasingly nanoscale components will provide the foundation for most new deep-learning applications in embedded, mobile, and Internet of Things form factors. (For a sense of just how matrix-bound these workloads are, see the second sketch after this list.)

  • The algorithmic repertoire of deep learning will grow more diverse and sophisticated: Deep learning remains a fairly arcane, specialized, and daunting technology to most data professionals. The growing adoption of deep learning in 2017 will compel data scientists and other developers to grow their expertise in such cutting-edge techniques as recurrent neural networks, deep convolutional networks, deep belief networks, restricted Boltzmann machines, and stacked auto-encoders. As discussed in this recent KDnuggets blog, deep learning professionals will also need to wrap their heads around sophisticated new approaches ranging from genetic programming and particle swarm optimization to agent-based computational economics and evolutionary algorithms.

    Data scientists will need to stay on top of innovative new approaches for performing automated feature extraction, transfer learning, high-dimensionality reduction, and accelerated distributed training in deep learning. Developers working on deep-learning projects will confront many challenges that require them to blend tools and techniques from various traditional AI “schools,” such as the “connectionists,” “symbolists,” and “evolutionaries.” To guide the increasingly complex design and optimization of deep-learning applications, data scientists will need to converge around standardized architectural patterns, such as those discussed in this recent article. (One of these techniques, automated feature extraction with an autoencoder, appears as the third sketch after this list.)

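To give a concrete sense of the open-source-library prediction above, here is a minimal sketch of a small image classifier defined with Keras, one of the tools named in that item. It is illustrative only: the layer sizes, the 64x64 input shape, and the 10-class output are my own assumptions, not a reference design.

```python
# Minimal sketch: a small convolutional image classifier built with Keras.
# All sizes here (64x64 RGB inputs, 10 output classes) are illustrative assumptions.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),  # learn local image features
    MaxPooling2D((2, 2)),                                            # downsample feature maps
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')                                  # e.g., 10 photo tags
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, epochs=5) would train on labeled images (data not shown).
```

The point is less the particular network than how few lines an open-source library now demands to express one.
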
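To illustrate the hardware prediction, the second sketch shows, under assumed batch and layer sizes, why deep learning is dominated by large matrix multiplications, which is exactly the workload that neural chipsets and GPUs accelerate.

```python
# Minimal sketch of why deep learning is matrix-bound: a single dense layer's
# forward pass is one large matrix multiplication plus a nonlinearity.
# The batch and layer sizes below are arbitrary illustrative assumptions.
import numpy as np

batch_size, input_dim, output_dim = 256, 4096, 1024

x = np.random.randn(batch_size, input_dim)   # a batch of high-dimensional inputs
W = np.random.randn(input_dim, output_dim)   # learned weights of one layer
b = np.zeros(output_dim)                     # learned biases

z = x @ W + b                                # ~1 billion multiply-adds for this one layer
a = np.maximum(z, 0.0)                       # ReLU activation

print(a.shape)  # (256, 1024): repeated per layer, per training step, billions of times over
```

Multiply that single step across dozens of layers, millions of training examples, and many training epochs, and the case for cheap, parallel, matrix-optimized silicon makes itself.
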
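Finally, to illustrate one item in the growing algorithmic repertoire, here is a minimal autoencoder sketch for automated feature extraction and dimensionality reduction. The 784-to-32 compression assumes flattened 28x28 images and is purely illustrative.

```python
# Minimal sketch: an autoencoder that learns a low-dimensional code for
# high-dimensional inputs (automated feature extraction / dimensionality reduction).
# The 784-to-32 sizes assume flattened 28x28 images and are illustrative only.
from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(784,))
encoded = Dense(128, activation='relu')(inputs)    # first compression stage
encoded = Dense(32, activation='relu')(encoded)    # 32-dimensional learned code
decoded = Dense(128, activation='relu')(encoded)   # mirror the encoder
decoded = Dense(784, activation='sigmoid')(decoded)

autoencoder = Model(inputs, decoded)               # trained to reconstruct its own input
encoder = Model(inputs, encoded)                   # reusable feature extractor

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=10) would learn the compressed representation;
# encoder.predict(x_new) then yields 32-dimensional features for downstream models.
```
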
Please visit this site to learn how you can get started fast with deep learning on Power Systems.

And check out this site to learn how you can use Watson Data Platform to put machine learning and cognitive computing to work in your business in 2017 and beyond.