Deep Learning Research Review: Generative Adversarial Nets
This edition of Deep Learning Research Review explains recent research papers in the deep learning subfield of Generative Adversarial Networks. Don't have time to read some of the top papers? Get the overview here.
Generative Adversarial Text to Image Synthesis
Introduction
This paper was released just this past June and looks into the task of converting text descriptions into images. For example, the input to the network could be “a flower with pink petals” and the output would be a generated image that contains those characteristics. So this task involves two main components. One is utilizing forms of natural language processing to understand the input description, and the other is a generative network that is able to output an accurate and natural image representation. One note that the authors make is that the task of going from text to image is actually a lot harder than that of going from image to text (remember Karpathy’s paper). This is because of the huge number of possible pixel configurations and because we can’t really decompose the task into just predicting the next word (the way that image to text works).
Approach
The approach the authors take is training a GAN that is conditioned on text features created by a recurrent text encoder (won't go too much into this, but here's the paper for those interested). Both the generator and the discriminator use these features at points in their respective network architectures. This is what enables the GAN to make the correlation between the input description and the generated image.
Network Architecture
Let’s look at the generator first. We have our noise vector z along with a text encoding as the inputs to the network. Basically, this text encoding is a way of encapsulating information about the input description so that it can then be concatenated to the noise vector (see image below for a visualization). Deconv layers are then used to transform the input vector into the synthetic image.
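To make that a bit more concrete, here’s a minimal PyTorch-style sketch of such a text-conditioned generator. This is not the authors’ exact code; the 100-dim noise vector, 128-dim projected text embedding, and 64x64 output size are just illustrative choices.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Concatenate the noise vector z with a (projected) text embedding,
    then upsample with transposed conv ("deconv") layers to a 64x64 RGB image.
    All layer sizes are illustrative, not the paper's exact values."""
    def __init__(self, z_dim=100, text_dim=128, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            # input: (z_dim + text_dim) x 1 x 1
            nn.ConvTranspose2d(z_dim + text_dim, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),              # -> 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),              # -> 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),              # -> 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),                  # -> 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                           # -> 64x64 RGB image
        )

    def forward(self, z, text_embedding):
        # Concatenate noise and text code, then reshape to a 1x1 spatial "image"
        x = torch.cat([z, text_embedding], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)
```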
The discriminator takes in an image and passes it through a series of conv layers (with BatchNorm and leaky ReLUs). When the spatial dimensions finally get down to 4x4, the network performs a depth concatenation with that text encoding we were talking about earlier. After this, there are 2 more conv layers and the output is (as always) a score for the realness of the image.
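And here’s the matching sketch of the discriminator side, again with made-up layer sizes, showing the depth concatenation of the (spatially tiled) text embedding once the feature map reaches 4x4:

```python
import torch
import torch.nn as nn

class TextConditionedDiscriminator(nn.Module):
    """Downsample the image to a 4x4 feature map, tile the text embedding
    spatially, depth-concatenate it, and output a realness score.
    Layer sizes are illustrative."""
    def __init__(self, text_dim=128, ndf=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),                            # -> 32x32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),   # -> 16x16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),   # -> 8x8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),   # -> 4x4
        )
        self.joint = nn.Sequential(
            nn.Conv2d(ndf * 8 + text_dim, ndf * 8, 1, 1, 0, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0),   # 4x4 -> 1x1 realness score
            nn.Sigmoid(),
        )

    def forward(self, image, text_embedding):
        h = self.conv(image)                                       # N x (ndf*8) x 4 x 4
        t = text_embedding.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, 4, 4)
        return self.joint(torch.cat([h, t], dim=1)).view(-1)       # N scores in (0, 1)
```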
Training
One of the interesting things about this model is the way that it has to be trained. If you think closely about the task at hand, the generator has to get two jobs right. One is that it has to generate natural and plausible looking images. The other is that the images must correlate to the given text description. The discriminator, thus, must also take these two things into account, making sure that “fake” or unnatural images are rejected as well as images that mismatch the text. In order to create these versatile models, the authors train with three types of data: {real image, right text}, {fake image, right text}, and {real image, wrong text}. With that last training data type, the discriminator must learn to reject mismatched images (even if they look very natural).
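Here’s a rough sketch of what that three-term discriminator objective could look like with binary cross-entropy. The 0.5 weighting on the two “negative” terms and the exact interleaving of updates are simplifications of the paper’s matching-aware training procedure, so treat this as illustrative:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_images, fake_images, right_text, wrong_text):
    """Three kinds of pairs shown to the discriminator D:
      {real image, right text} -> should score 1 (real and matching)
      {fake image, right text} -> should score 0 (generated)
      {real image, wrong text} -> should score 0 (real but mismatched caption)
    `wrong_text` holds embeddings of captions that do NOT describe real_images."""
    ones = torch.ones(real_images.size(0), device=real_images.device)
    zeros = torch.zeros_like(ones)

    loss_real = F.binary_cross_entropy(D(real_images, right_text), ones)
    loss_fake = F.binary_cross_entropy(D(fake_images.detach(), right_text), zeros)
    loss_mismatch = F.binary_cross_entropy(D(real_images, wrong_text), zeros)

    return loss_real + 0.5 * (loss_fake + loss_mismatch)
```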
Super Resolution using GANs
Introduction
As a testament to the type of rapid innovation that takes place in this field, the team at Twitter Cortex released this paper only a couple of weeks ago. The model being proposed in this paper is a super-resolution generative adversarial network, or SRGAN (Will we ever run out of these acronyms?). The main contribution is a brand new loss function (better than plain old MSE) that enables the network to recover realistic textures and fine grained details from images that have been heavily downsampled.
Approach
Let’s first take a look at this new perceptual loss function that was introduced. This loss function can be divided into two parts, the adversarial loss and the content loss. From a high level, the adversarial loss encourages images that look natural (look like they’re from the distribution) and the content loss makes sure that the super-resolved image shares similar features with the original image.
Network Architecture
Okay, now let’s get into the specifics. Let’s start off with a high resolution version of a given image and then a lower resolution version. We want to train our generator so that, given the low resolution image, it produces an output that’s as close to the high res version as possible. This output is called a super-resolved image. The discriminator will then be trained to distinguish between these images. Same old same old, right? The generator network architecture uses a set of B residual blocks that contain conv layers, BatchNorm, and ReLUs. Once the low res image passes through those blocks, there are two deconv layers that increase the resolution. Then, looking at the discriminator, we have eight convolutional layers that lead into a sigmoid activation function, which outputs the probability that the image is real (high res) or artificial (super res).
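Here’s a rough PyTorch sketch of that generator. B = 16, the 64 feature channels, and the 9x9 head/tail convolutions are my own illustrative choices, and I’m using plain transposed convolutions for the two upsampling steps described above:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU -> Conv -> BatchNorm, plus a skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, 1, 1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, 1, 1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)

class SRGenerator(nn.Module):
    """B residual blocks followed by two upsampling ("deconv") stages,
    for a 4x overall increase in resolution. Sizes are illustrative."""
    def __init__(self, B=16, channels=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 9, 1, 4), nn.ReLU(inplace=True))
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(B)])
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, 2, 1), nn.ReLU(inplace=True),  # 2x
            nn.ConvTranspose2d(channels, channels, 4, 2, 1), nn.ReLU(inplace=True),  # 4x
        )
        self.tail = nn.Conv2d(channels, 3, 9, 1, 4)

    def forward(self, low_res):
        h = self.head(low_res)
        return self.tail(self.upsample(h + self.body(h)))   # super-resolved image
```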
Loss Function
Now let’s look at that new loss function. It is actually a weighted sum of individual loss functions. The first is called a content loss. Basically, it is a Euclidean distance loss between the feature maps (in a pretrained VGG network) of the new reconstructed image (output of the network) and the actual high res training image. From what I understand, the main goal is to ensure that the content of the two images is similar by looking at their respective feature activations after feeding them into a trained convnet (Comment below if anyone has other ideas!). The other major loss function the authors defined is the adversarial loss. This one is similar to what you normally expect from GANs. It encourages outputs that are similar to the original data distribution through negative log likelihood. A regularization loss caps off the trio of functions. With this novel loss function, the generator makes sure to output higher res images that look natural and still stay faithful to the content of the low res version.
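Putting that together, here’s a hedged sketch of how the content + adversarial combination could be wired up. The particular slice of VGG19 features and the 1e-3 adversarial weight are assumptions on my part, and the regularization term mentioned above is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class PerceptualLoss(nn.Module):
    """Content loss (mean squared error between VGG feature maps of the
    super-resolved and true high res images, i.e. a distance in feature space)
    plus a small adversarial term. The VGG layer choice and the weighting
    are illustrative."""
    def __init__(self, adv_weight=1e-3):
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad = False   # frozen, pretrained feature extractor
        self.vgg = vgg
        self.adv_weight = adv_weight

    def forward(self, super_res, high_res, disc_scores_on_sr):
        # Content loss: compare feature activations, not raw pixels
        # (assumes inputs are already preprocessed the way VGG expects)
        content = F.mse_loss(self.vgg(super_res), self.vgg(high_res))
        # Adversarial loss: push the discriminator's score on the
        # super-resolved image toward "real"
        adversarial = F.binary_cross_entropy(
            disc_scores_on_sr, torch.ones_like(disc_scores_on_sr))
        return content + self.adv_weight * adversarial
```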
Quick general side note: GANs use a largely unsupervised training process (all you need to have is a dataset of real images, no labels or anything). This means that we can take advantage of a lot of the unstructured image data that is available today. After training, we can use the output or intermediate layers as feature extractors that can be used for other classifiers, which now won’t need as much training data to achieve good accuracy.
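For example, something along these lines, where the names are hypothetical and the discriminator is assumed to have already been trained as part of a GAN:

```python
import torch
import torch.nn as nn

def extract_features(trained_discriminator_conv, images):
    """Run images through the (frozen) conv trunk of a trained discriminator
    and flatten the feature maps into one vector per image."""
    with torch.no_grad():
        feats = trained_discriminator_conv(images)   # N x C x H x W
    return feats.flatten(start_dim=1)                # N x (C*H*W)

# A small classifier trained on these features typically needs far less
# labeled data than training a convnet from scratch would.
classifier = nn.Linear(in_features=512 * 4 * 4, out_features=10)  # e.g. 10 classes
```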
Paper that I couldn’t get to, but still insanely cool: DCGANs. The authors didn’t do anything crazy. They just trained a really, really large convnet, but the trick is that they had the right hyperparameters to really make the training work (namely BatchNorm, Adam, and leaky ReLUs).
GANs that could change the fashion industry: https://www.youtube.com/watch?v=9c4z6YsBGQ0
Bio: Adit Deshpande is currently a second year undergraduate student majoring in computer science and minoring in Bioinformatics at UCLA. He is passionate about applying his knowledge of machine learning and computer vision to areas in healthcare where better solutions can be engineered for doctors and patients.
Original. Reposted with permission.