
Top 5 Deep Learning Resources, January

There is an increasing volume of deep learning research, articles, blog posts, and news constantly emerging. Our Deep Learning Reading List aims to make this information easier to digest.

With the seemingly exponential growth in deep learning research papers, articles, blog posts, tutorials, and other resources emerging from the woodwork these days, cutting through the volume in a reasonable amount of time would take a full-time job and a competent assistant. At KDnuggets, we sympathize. That's why we have decided to collect, in a single location, some of the most interesting, innovative, or otherwise noteworthy recent deep learning resources.

Each of the selected entries in the reading list will include a brief overview, a link to the original resource, and occasional links to further related items for interested readers. The list does not claim to be exhaustive; in fact, it most definitely is not! But for those with limited time to invest in tracking down deep learning information, this post should prove useful.

Let's not waste any more time. The top selected deep learning resources for January are outlined below.

Kanji Examples

1. Recurrent Net Dreams Up Fake Chinese Characters in Vector Format with TensorFlow

Resource Type: Blog Post
Author: hardmaru
Date: 28 Dec 2015

This blogger, who goes only by the handle hardmaru, outlines their implementation of a Long Short-Term Memory (LSTM) network trained to reproduce Kanji character sequences from training examples, some results of which are shown in the image above. Says hardmaru:

In this blog post, I will describe how to train a recurrent neural network to generate fake, but plausible Chinese characters, in vector .svg format.

I created a tool called sketch-rnn that would attempt to learn some structure from a large collection of related .svg files, and be able to generate and dream up new vectorised drawings that is similar to the training set.

Before getting into the technical details, hardmaru first gives some motivational background on how children learn to reproduce Kanji characters through rote memorization and practice, drawing at least a loose parallel between child learning and the LSTM model. The model is then given a full technical treatment, including its use of Gaussian mixture distributions (mixture density networks), as is the training data. hardmaru implements the model in TensorFlow.
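To make the "mixture density network" idea concrete, here is a minimal pure-Python sketch (not hardmaru's TensorFlow code) of the Gaussian mixture likelihood such a model maximizes: at each step, the LSTM would emit mixture weights, means, and standard deviations, and training minimizes the negative log-likelihood of the observed pen offsets under that mixture. All parameter values below are made up for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, weights, means, sigmas):
    """Density of a Gaussian mixture: sum_k pi_k * N(x | mu_k, sigma_k)."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

def mdn_nll(x, weights, means, sigmas):
    """Negative log-likelihood -- the MDN training loss for one observation."""
    return -math.log(mixture_density(x, weights, means, sigmas))

# A 2-component mixture; in the real model the network would emit these
# parameters from its LSTM state at every timestep.
weights = [0.6, 0.4]
means = [0.0, 2.0]
sigmas = [1.0, 0.5]
print(round(mdn_nll(0.0, weights, means, sigmas), 4))
```

In sketch-rnn the actual output is a 2-D mixture over (dx, dy) pen offsets plus pen-state probabilities, but the loss has the same negative-log-likelihood shape as this 1-D toy.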

The author has also recently shared related blog posts that use deep neural nets to accomplish other tasks.

This is an interesting read, especially if you are a fan of the recent deep generative works such as Inceptionism, Deep Forger, or the deep convolutional generative adversarial networks (DCGANs) paper immediately below.

Bedroom Generation

2. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Resource Type: Academic Paper
Authors: Alec Radford, Luke Metz, Soumith Chintala
Date: 19 Nov 2015

This resource is a bit older than the others listed here, dating all the way back to November of 2015. This paper's corresponding code has been discussed in a recent KDnuggets monthly /r/MachineLearning subreddit summary; however, it is both fantastically cool and technically relevant, and so it has been included here to ensure that you haven't overlooked it.

From the abstract:

We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
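One of the architectural constraints the abstract refers to is replacing pooling layers with fractionally-strided ("transposed") convolutions, which let the generator double its spatial resolution at each layer, from a projected 4x4 feature map up to a 64x64 image. The sketch below traces that size progression; the kernel-5 / stride-2 / padding-2 / output-padding-1 values are a common convention for this doubling, not figures taken verbatim from the authors' code.

```python
def deconv_out(size, kernel=5, stride=2, pad=2, output_pad=1):
    """Spatial output size of a fractionally-strided (transposed) convolution."""
    return (size - 1) * stride - 2 * pad + kernel + output_pad

# DCGAN-style generator: z is projected and reshaped to a 4x4 feature map,
# then four fractionally-strided conv layers each double the resolution.
size = 4
sizes = [size]
for _ in range(4):
    size = deconv_out(size)
    sizes.append(size)
print(sizes)  # prints [4, 8, 16, 32, 64]
```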

The corresponding code repository with the authors' implementation of what is described in the paper is located here. More recently, an implementation of a deep convolutional generative adversarial network in TensorFlow (by a different author) has been shared.

A concise description of what the paper covers can also be found in the original code repo's readme. The same write-up includes application examples: generating bedroom images, walking through bedrooms by generating advancing positional views, generating album covers, and performing facial arithmetic.
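The "facial arithmetic" mentioned above is plain vector arithmetic on the generator's latent z vectors (the paper averages the z of several exemplars per concept before subtracting and adding, to stabilize the result). A toy sketch, with made-up 3-D vectors standing in for the real, much higher-dimensional latents:

```python
def vector_arithmetic(a, b, c):
    """Element-wise a - b + c, e.g. smiling_woman - neutral_woman + neutral_man."""
    return [x - y + z for x, y, z in zip(a, b, c)]

# Hypothetical averaged latent vectors for three visual concepts.
smiling_woman = [1.0, 0.0, 2.0]
neutral_woman = [0.0, 0.0, 2.0]
neutral_man   = [0.0, 1.0, -2.0]

z = vector_arithmetic(smiling_woman, neutral_woman, neutral_man)
print(z)  # feed z to the trained generator to (ideally) render a smiling man
```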

Interested in more like this? Check out the following similar resources: