5 Deep Learning Projects You Can No Longer Overlook
There are a number of "mainstream" deep learning projects out there, but many more niche projects flying under the radar. Have a look at 5 such projects worth checking out.
Deep learning libraries and frameworks such as Theano, Keras, Caffe, and TensorFlow have gained enormous popularity recently. In fact, Google's TensorFlow is the most starred machine learning repository on Github. By a lot. Despite being in the wild for little more than 6 months, TensorFlow has captured such a formidable market share that one could argue it has become the default deep learning library for a large swath of seasoned neural network veterans and newcomers alike.
It's not the only library to consider, obviously. There are many others, a few of which are mentioned above. But there are many more smaller projects, ranging from complete libraries implemented from scratch, to high-level building blocks that sit atop established deep learning projects to fit particular niches. Below you will find a mix of these project types, noted for a variety of reasons as encountered over time spent online.
Maybe you'll find something that fills a need in this list of 5 deep learning projects you should not overlook any longer. Items are in no particular order, but I like to number things, and so number things I shall.
Leaf is a neural network framework, described in its Github repo README as:
Open Machine Intelligence Framework for Hackers. (GPU/CPU)
Somewhat interestingly, Leaf, which is quite a new project but has already gathered over 4000 repo stars, is written in Rust. Rust itself is only about 6 years old, with development sponsored by Mozilla. For those unfamiliar with Rust, it is a systems language with similarities to C and C++, self-described as:
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
A book, Leaf Machine Learning for Hackers, is freely available online, and is likely a good first stop for those looking to give Leaf a try. I would guess that Leaf won't gain a lot of converts from outside of the Rust ecosystem, even given the claims and quantitative support that Leaf is faster than most other similar frameworks out there (see the above image). However, the number of Rust users continues to grow, and no doubt some of them will be interested in building neural nets. It's good to know they have a quality native framework to employ in this pursuit.
From tiny-cnn's Github repo:
tiny-cnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resource, embedded systems and IoT devices.
tiny-cnn is relatively quick without a GPU; it boasts a 98.8% accuracy on MNIST in 13 minutes of CPU training. It's also simple to use. Since it's header-only, you simply include the tiny_cnn.h header and write your C++ code, with nothing else to install. tiny-cnn supports a whole host of network architectures, activation functions, and optimization algorithms.
Here's a quick example of constructing a Multi-layer Perceptron:
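The sketch below is adapted from tiny-cnn's documentation of the time; the template parameters and layer names may differ across versions, so treat the specifics as assumptions rather than the current API:

```cpp
#include <cassert>
#include "tiny_cnn/tiny_cnn.h"

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

void construct_mlp() {
    // Mean-squared-error loss, trained with plain gradient descent
    network<mse, gradient_descent> mynet;

    // 32x32 inputs -> 300 hidden units -> 10 outputs, tanh activations,
    // chained together with the << operator
    mynet << fully_connected_layer<tan_h>(32 * 32, 300)
          << fully_connected_layer<tan_h>(300, 10);

    assert(mynet.in_dim() == 32 * 32);
    assert(mynet.out_dim() == 10);
}
```

Note that nothing beyond the header is needed: the network definition, loss, and optimizer are all expressed in a few lines of plain C++11.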
Check out the documentation, as well as this project using tiny-cnn to implement a convolutional neural net on Android. If you are set on implementing neural networks in C++, tiny-cnn is worth a look.
Layered is authored by independent machine learning researcher Danijar Hafner, who recently contributed to KDnuggets the article "Introduction to Recurrent Networks in TensorFlow."
Clean implementation of feed forward neural networks.
Hafner wrote Layered in Python 3 as a "clean and modular implementation of feed forward neural networks." He states that he undertook the project as a means for better understanding deep learning concepts himself, and recommends doing so if you are interested in gaining a real appreciation of how deep neural networks actually function.
Here is an example of a simple neural network implementation in Layered:
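A hypothetical sketch in the spirit of Layered's README follows; the module paths and class names (`Network`, `Layer`, and the activation classes) are assumptions based on the project's description, not verified against its actual API:

```python
# Hypothetical sketch -- module paths and class names are assumptions
from layered.network import Network, Layer
from layered.activation import Identity, Relu, Softmax

num_inputs, num_outputs = 784, 10  # e.g. MNIST-sized problem

# A feed forward network: identity input layer, rectifier hidden
# layers, and a softmax output layer
network = Network([
    Layer(num_inputs, Identity),
    Layer(500, Relu),
    Layer(300, Relu),
    Layer(num_outputs, Softmax),
])
```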
The project currently supports identity, rectifier, sigmoid, and softmax activation functions, along with squared error and cross-entropy cost functions. As such, if you are looking for a no-nonsense, from-scratch implementation of neural network functionality, Layered would be a good place to start. That the project also works and is actively developed are a pair of reasons to do more than use it as a learning tool.
Hafner also has a number of tutorials on practical deep learning with TensorFlow, which I encourage you to have a look at.
Brain is a neural network library written in JavaScript, for use with Node.js or in the browser. Here is an example of approximating the exclusive or (XOR) function with Brain:
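This mirrors the XOR example from Brain's README; the API sketched here is from that era of the project and may differ in later forks:

```javascript
var brain = require('brain');

// Train a network on the four XOR input/output pairs
var net = new brain.NeuralNetwork();
net.train([
  {input: [0, 0], output: [0]},
  {input: [0, 1], output: [1]},
  {input: [1, 0], output: [1]},
  {input: [1, 1], output: [0]}
]);

// Run the trained network on one input
var output = net.run([1, 0]);  // a value close to 1
```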
Brain supports hidden layers, and by default uses one (unless otherwise specified). Training a network is easy (shown above), with options easily being set and passed as a hash:
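The options hash looks roughly like this (option names taken from Brain's README; defaults may vary by version):

```javascript
net.train(data, {
  errorThresh: 0.005,  // error threshold to reach before stopping
  iterations: 20000,   // maximum training iterations
  log: true,           // periodically console.log() progress
  logPeriod: 10,       // number of iterations between logging
  learningRate: 0.3    // influences speed of convergence
});
```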
train() also returns a hash with training outcomes, such as the final error and the number of iterations run. Networks can be serialized to and from JSON.
Fast, scalable, easy-to-use Python based Deep Learning Framework by Nervana.
neon's big sell is speed: "For fast iteration and model exploration, neon has the fastest performance among deep learning libraries." That alone is reason to give this library a look, if you are currently unfamiliar with it.
Developed by Nervana Systems, neon supports convolutional networks, RNNs, LSTMs, GRUs, and more. neon has a lot more going for it as well: a great workflow overview, thorough documentation, and a number of useful tutorials.
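As a taste of the API, here is a minimal model definition adapted from neon's documentation of the time; the module paths and parameter names below are assumptions based on that era's releases and should be checked against the version you install:

```python
# Sketch of a small MLP in neon -- names assumed from 2016-era docs
from neon.backends import gen_backend
from neon.initializers import Gaussian
from neon.layers import Affine
from neon.transforms import Rectlin, Softmax
from neon.models import Model

# Set up a compute backend (CPU here; 'gpu' selects Nervana's fast kernels)
be = gen_backend(backend='cpu', batch_size=128)

# One rectifier hidden layer, softmax output layer
init = Gaussian(scale=0.01)
layers = [Affine(nout=100, init=init, activation=Rectlin()),
          Affine(nout=10, init=init, activation=Softmax())]
mlp = Model(layers=layers)
```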
You can also check out a number of Jupyter notebook versions of the tutorials from a neon Deep Learning meetup, which is nice. If speed in training neural networks is important to you, and you're in the Python ecosystem, check out neon.