Top /r/MachineLearning Posts, March: Hugs, Deep Learning Navigation, 3D Face Capture, AlphaGo!
What's huggable, adversarial images for deep learning, overview of real-time 3D face capture and reenactment, deep learning quadcopter navigation, and a whole lot of AlphaGo!
In March on /r/MachineLearning, we learn what's huggable, find adversarial images for deep learning, get an overview of real-time 3D face capture and reenactment, see deep learning navigate a quadcopter, and AlphaGo, AlphaGo, AlphaGo!
Note: given the high number of top-ranking AlphaGo-related posts in /r/MachineLearning this month (and for good reason), several of the highest-performing posts are consolidated below to avoid repetition.
The top 5 /r/MachineLearning posts of the past month are:
1. Can I Hug That? I trained a classifier to tell you whether or not what's in an image is huggable. +586
This is a link to a set of images from a classifier trained to determine whether what's in a given photo is huggable, each labeled with its huggability score. The comments include some insight into the classifier's construction, along with some other entertaining discussion.
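The post shares only results, not code, but one plausible construction is fine-tuning an ImageNet-pretrained network into a binary "huggability" scorer. Here's a minimal sketch along those lines; the torchvision setup and the `huggable_dataset` folder layout are assumptions for illustration, not the author's actual method:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Start from an ImageNet-pretrained backbone and replace the final
# layer with a single "huggability" logit.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: huggable/ and not_huggable/ subfolders.
dataset = ImageFolder("huggable_dataset", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()

# At inference time, the sigmoid of the logit serves as a 0-1
# "huggability score" of the kind shown in the linked images.
```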
2. AlphaGO WINS! +578
This is the first of the AlphaGo posts, started way back when AlphaGo had just beaten Lee Sedol in their very first match. The post discusses what had just happened and what might happen next, and reads entertainingly in hindsight. Here are a few more related posts, for those interested in AlphaGo's recent exploits:
- AlphaGo wins match 2 - a link to a video of the actual match; comments here
- AlphaGo is 3-0 - a discussion of AlphaGo clinching the best-of-five match
- AlphaGo lost the 4th game: AlphaGo 3-1 Lee Sedol - a discussion of game 4, and the first signs of a crack in AlphaGo's armor
3. Adversarial images for deep learning +456
This is a link to the now-classic 'Chihuahua or muffin?' image, the best adversarial example image of the month alongside 'Pug or bread?'. The comments are here, with a host of additional adversarial images linked within.
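'Chihuahua or muffin?' is a grid of naturally confusable photos rather than a crafted attack, but adversarial images in the formal sense are typically generated by perturbing an input along the gradient of the classifier's loss. A minimal sketch of the fast gradient sign method (Goodfellow et al., 2014), assuming any pretrained classifier stands in for the model under attack:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm(image, label, epsilon=0.01):
    """Perturb `image` (a 1x3xHxW tensor in [0, 1]) so the model is
    more likely to misclassify it; `label` is the true class index."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The resulting image usually looks unchanged to a human, which is precisely what makes such examples unsettling.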
4. Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR 2016 Oral) +425
This is a link to a very cool video demonstrating a computer vision system outlined here. From the authors:
Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure.
The video is only a few minutes long and definitely worth your time, if only for a high-level overview.
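The "dense photometric consistency measure" the authors mention amounts to comparing, pixel by pixel, a rendering of the parametric face model against the observed video frame. A rough sketch of such a residual (the rendering step itself is elided, and all names here are hypothetical):

```python
import numpy as np

def photometric_residual(rendered, observed, face_mask):
    """Mean squared RGB difference over the visible face region.

    rendered, observed: HxWx3 float arrays in [0, 1]
    face_mask: HxW boolean array marking pixels covered by the face model
    """
    diff = rendered[face_mask] - observed[face_mask]
    return np.sum(diff ** 2) / max(face_mask.sum(), 1)
```

Minimizing a measure like this over the model's expression parameters, for both source and target video, is what lets the system transfer expressions frame by frame.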
5. Quadcopter Navigation in the Forest using Deep Neural Networks +357
This is a link to another video, this time of a quadcopter navigating a forest via deep learning, just as the title suggests. The video is only 5 minutes long, but it is much more than a simple system demo: it provides insight into how the classifier was trained, the problems encountered during development, and their solutions. Catch the discussion here.
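As the video explains, the system frames trail-following as image classification: given the onboard camera frame, does the trail lie to the left, ahead, or to the right? A hedged sketch of how such class probabilities might be mapped to a steering command (the function and constants are hypothetical, not taken from the paper):

```python
import numpy as np

def steering_command(class_probs, max_yaw_rate=0.5):
    """Map softmax probabilities [left, straight, right] to a yaw rate.

    Positive output means turn right; the probabilities act as soft votes.
    """
    p_left, p_straight, p_right = class_probs
    return max_yaw_rate * (p_right - p_left)

# Example: the net is fairly sure the trail bends left, so turn left.
print(steering_command(np.array([0.7, 0.2, 0.1])))  # -0.3
```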
And here's a parting image, from a latecomer post that arrived too late in the month to reach the top. It was posted with the title "This xkcd was released less than 2 years ago," and is particularly relevant to recent developments.

Here is the link to the original comic. The discussion is worth a look as well.