Top 13 Python Deep Learning Libraries
Part 2 of a new series investigating the top Python Libraries across Machine Learning, AI, Deep Learning and Data Science.
Python continues to lead the way when it comes to Machine Learning, AI, Deep Learning and Data Science tasks. According to builtwith.com, 45% of technology companies prefer to use Python for implementing AI and Machine Learning.
Because of this, we’ve decided to start a series investigating the top Python libraries across several categories:
Top 8 Python Machine Learning Libraries ✅
Top 13 Python Deep Learning Libraries ✅ (this post)
Top X Python Reinforcement Learning and evolutionary computation Libraries – COMING SOON!
Top X Python Data Science Libraries – COMING SOON!
Of course, these lists are entirely subjective, as many libraries could easily place in multiple categories. For example, TensorFlow is included in this list but Keras has been omitted and features in the Machine Learning library collection instead. This is because Keras is more of an 'end-user' library, like scikit-learn, whereas TensorFlow appeals more to researchers and Machine Learning engineers.
As always, please feel free to vent your frustrations/disagreements/annoyance in the comments section below!
Fig. 1: Top 13 Python Deep Learning Libraries, by Commits and Contributors. Circle size is proportional to number of stars.
Now, let’s get onto the list (GitHub figures correct as of October 23rd, 2018):
1. TensorFlow (Contributors – 1700, Commits – 42256, Stars – 112591)
“TensorFlow is an open source software library for numerical computation using data flow graphs. The graph nodes represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture enables you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.”
2. PyTorch (Contributors – 806, Commits – 14022, Stars – 20243)
“PyTorch is a Python package that provides two high-level features:
 Tensor computation (like NumPy) with strong GPU acceleration
 Deep neural networks built on a tape-based autograd system
You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.”
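The tape-based autograd system mentioned above can be seen in a tiny sketch: operations on tensors flagged with `requires_grad` are recorded as they run, and `backward()` replays that tape in reverse to compute gradients:

```python
import torch

# Operations on tensors with requires_grad=True are recorded on a "tape".
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # y = x^2 + 2x

# backward() replays the tape in reverse to compute dy/dx.
y.backward()
print(x.grad.item())  # dy/dx = 2x + 2 = 8.0 at x = 3
```

Because the tape is rebuilt on every forward pass, models can use ordinary Python control flow (loops, conditionals) and still get correct gradients.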
3. Apache MXNet (Contributors – 628, Commits – 8723, Stars – 15447)
“Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly.”
4. Theano (Contributors – 329, Commits – 28033, Stars – 8536)
“Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multidimensional arrays efficiently. It can use GPUs and perform efficient symbolic differentiation.”
5. Caffe (Contributors – 270, Commits – 4152, Stars – 25927)
“Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.”
6. fast.ai (Contributors – 226, Commits – 2237, Stars – 8872)
“The fastai library simplifies training fast and accurate neural nets using modern best practices. See the fastai website to get started. The library is based on research into deep learning best practices undertaken at fast.ai, and includes "out of the box" support for vision, text, tabular, and collab (collaborative filtering) models.”
7. CNTK (Contributors – 189, Commits – 15979, Stars – 15281)
“The Microsoft Cognitive Toolkit (https://cntk.ai) is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK allows users to easily realize and combine popular model types such as feedforward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs).”
8. TFLearn (Contributors – 118, Commits – 599, Stars – 8632)
“TFlearn is a modular and transparent deep learning library built on top of Tensorflow. It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed up experimentation, while remaining fully transparent and compatible with it.”
9. Lasagne (Contributors – 64, Commits – 1157, Stars – 3534)
“Lasagne is a lightweight library to build and train neural networks in Theano. It supports feedforward networks such as Convolutional Neural Networks (CNNs), recurrent networks including Long Short-Term Memory (LSTM), and any combination thereof.”
10. nolearn (Contributors – 14, Commits – 389, Stars – 909)
“nolearn contains a number of wrappers and abstractions around existing neural network libraries, most notably Lasagne, along with a few machine learning utility modules. All code is written to be compatible with scikit-learn.”
11. Elephas (Contributors – 13, Commits – 249, Stars – 1046)
“Elephas is an extension of Keras, which allows you to run distributed deep learning models at scale with Spark. Elephas currently supports a number of applications, including:
 Data-parallel training of deep learning models
 Distributed hyperparameter optimization
 Distributed training of ensemble models”
12. spark-deep-learning (Contributors – 12, Commits – 83, Stars – 1131)
“Deep Learning Pipelines provides high-level APIs for scalable deep learning in Python with Apache Spark. The library comes from Databricks and leverages Spark for its two strongest facets:
 In the spirit of Spark and Spark MLlib, it provides easy-to-use APIs that enable deep learning in very few lines of code.
 It uses Spark's powerful distributed engine to scale out deep learning on massive datasets.”
13. Distributed Keras (Contributors – 5, Commits – 1125, Stars – 523)
“Distributed Keras is a distributed deep learning framework built on top of Apache Spark and Keras, with a focus on "state-of-the-art" distributed optimization algorithms. We designed the framework in such a way that a new distributed optimizer could be implemented with ease, thus enabling a person to focus on research.”
Keep an eye out for the next part of this series, which focuses on Reinforcement Learning and evolutionary computation libraries, and will be published over the next few weeks!