Top 5 Deep Learning Resources, January

There is an increasing volume of deep learning research, articles, blog posts, and news constantly emerging. Our Deep Learning Reading List aims to make this information easier to digest.


3. Autoencoding Beyond Pixels Using a Learned Similarity Metric

Resource Type: Academic Paper
Authors: Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Ole Winther
Date: 31 Dec 2015

The previous paper transitions nicely into this one. Researchers from the Department of Applied Mathematics and Computer Science at the Technical University of Denmark argue that the similarity metrics currently used in deep generative models can be outperformed by learned similarity measures, and that higher-level, sufficiently invariant image representations are ideal for the purpose.

The researchers state that element-wise distance metrics are problematic, and that while attempts to move past the pixel level are numerous, they generally rely on less-than-optimal hand-engineered solutions. Rather than hand-engineering a suitable similarity measure, the research aims to learn one, and the authors find that jointly training a variational autoencoder (VAE) with a generative adversarial network (GAN) allows the GAN discriminator to be used to measure sample similarity.

From the abstract:

We present an autoencoder that leverages the power of learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN) we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors that better capture the data distribution while offering invariance towards e.g. translation.

The authors regard the results of their research as an extension of the VAE framework, but recognize that it is more accurately described as a combination of a VAE decoder and a GAN generator. They believe their preliminary results are of convincing quality, but acknowledge the lack of a good comparative measure for true evaluation.
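The core idea, replacing a pixel-wise reconstruction error with an error measured in a learned feature space, can be sketched in a few lines of numpy. Here the linear map `W` is a stand-in for a hidden layer of a trained GAN discriminator; the dimensions and data are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def elementwise_error(x, x_rec):
    # Pixel-space squared error: the metric the paper argues against.
    return float(np.mean((x - x_rec) ** 2))

def feature_error(x, x_rec, W):
    # Feature-space squared error: the same reconstruction, but compared
    # in the hidden representation of a (stand-in) discriminator layer.
    return float(np.mean((W @ x - W @ x_rec) ** 2))

# Toy "images" as flat vectors; W stands in for learned discriminator weights.
x = rng.normal(size=64)
x_rec = x + rng.normal(scale=0.1, size=64)   # an imperfect reconstruction
W = rng.normal(size=(16, 64)) / np.sqrt(64)

print(elementwise_error(x, x_rec))
print(feature_error(x, x_rec, W))
```

In the paper itself, the feature map is not a fixed matrix but a layer of the jointly trained discriminator, so the notion of similarity improves as training progresses.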

This paper is definitely a recommended read for those with an interest in representation learning. Refer to the previous entry for related items.

4. Attention and Memory in Deep Learning and NLP

Resource Type: Blog Post
Author: Denny Britz
Date: 3 Jan 2016

This post outlines the use of attention mechanisms in neural networks. Author Denny Britz points out that Ilya Sutskever recently cited attention mechanisms as "one of the most exciting advancements" of late, and the rest of the post goes on to explain the technology. Very loosely, attention mechanisms are inspired by human visual attention; in this context, attention refers to a model "attending to" its input in a more interactive manner than simple batch processing, which also aids model interpretation.

Britz introduces attention mechanisms in the context of Neural Machine Translation. Instead of encoding full sentences into fixed-length vectors for language-to-language translation (the "traditional" method), attention mechanisms allow for a different approach. Says Britz:

Rather, we allow the decoder to "attend" to different parts of the source sentence at each step of the output generation. Importantly, we let the model learn what to attend to based on the input sentence and what it has produced so far.

Crucially, translation in this manner uses weighted combinations of all input states, whereas the traditional method uses only the final input state. This matters when translating between languages whose sentences do not share a similar word-order structure.
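That weighted combination can be sketched in a few lines of numpy. The dot-product scoring function and the dimensions here are illustrative assumptions, not code from Britz's post:

```python
import numpy as np

def attend(encoder_states, decoder_state):
    """Return a context vector: a softmax-weighted sum of ALL encoder states."""
    scores = encoder_states @ decoder_state      # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax: weights sum to 1
    context = weights @ encoder_states           # weighted combination of states
    return context, weights

rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 8))   # 5 source positions, 8-dim states
decoder_state = rng.normal(size=8)         # what the decoder has produced so far
context, weights = attend(encoder_states, decoder_state)
```

At each output step the decoder would call `attend` with its current state, so different source positions receive high weight at different steps, which is exactly the learned "what to attend to" behavior Britz describes.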

Britz then considers the costs of attention, and also points out that attention has applications not just in machine translation but in any recurrent neural network. He also covers some drawbacks of both the mechanism and its name (is attention a misnomer?), as well as related (memory and attention) concepts.

Some additional papers Britz references that use attention in other ways:

This is a well-written post that leaves you with an understanding of what attention mechanisms are and how they are used, all in a short 10-minute read.


5. TensorFlow White Paper Notes

Resource Type: Technical Paper Summary
Author: Sam Abrahams
Date: 28 Dec 2015

Developer Sam Abrahams offers a fresh perspective on TensorFlow by sharing annotated notes on the TensorFlow white paper, along with some additional figures and resources. The notes follow the white paper section by section, serving both as a sort of CliffsNotes and as an expanded explanation, linking to external resources and official TensorFlow documentation where appropriate.
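To give a flavor of the "data flow graph" abstraction at the heart of the white paper, here is a toy graph evaluator in plain Python. This is a conceptual sketch only, not the TensorFlow API:

```python
# A toy data-flow graph: nodes are operations, edges carry values.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # a callable, or None for a constant
        self.inputs = inputs  # upstream nodes feeding this one
        self.value = None     # set only for constants

    @staticmethod
    def constant(value):
        n = Node(None)
        n.value = value
        return n

def evaluate(node):
    """Run the graph: compute a node once its inputs are computed."""
    if node.op is None:
        return node.value
    return node.op(*(evaluate(i) for i in node.inputs))

# Build the graph for (a * b) + c, then run it. Note that graph
# construction and graph execution are separate phases, mirroring
# TensorFlow's define-then-run model.
a = Node.constant(2.0)
b = Node.constant(3.0)
c = Node.constant(4.0)
result = Node(lambda x, y: x + y, Node(lambda x, y: x * y, a, b), c)
print(evaluate(result))  # 10.0
```

Keeping construction and execution separate is what lets a system like TensorFlow optimize, partition, and distribute the graph before any numbers flow through it.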

For those interested in learning more about TensorFlow, which is Google's de facto deep learning framework ("numerical computation library using data flow graphs"), this is as good a place to start as any. Note that KDnuggets also has a number of additional articles linking to TensorFlow resources and tutorials, as well as a few opinion pieces on TensorFlow and its continued development (the titles of which, in chronological order, read like a 'phoenix from the ashes' redemption trilogy):

This concludes our inaugural deep learning reading list. Should it prove useful, it will be back next month.