Top /r/MachineLearning Posts, 2016: Google Brain AMA; Google Machine Learning Recipes; StarCraft II AI Research Environment
Google Brain AMA; Google Machine Learning Recipes; StarCraft II AI Research Environment; Huggable Image Classifier; xkcd: Linear Regression; AlphaGO WINS!; TensorFlow Fizzbuzz
/r/MachineLearning continues to be one of the best places for breaking machine learning news, projects, discussions, and exclusive content such as AMAs, whether you are a veteran machine learning scientist, a beginner, or anywhere in between. As the new year begins, we look back at the top stories on /r/MachineLearning from 2016. To help convey a sense of what a particular thread is about, I have included a pertinent quote from each item where appropriate.
The top 10 /r/MachineLearning posts of 2016 were:
1. AMA: We are the Google Brain team. We'd love to answer your questions about machine learning.
We’re a group of research scientists and engineers that work on the Google Brain team. Our group’s mission is to make intelligent machines, and to use them to improve people’s lives. For the last five years, we’ve conducted research and built systems to advance this mission.
2. Google Brain will be doing an AMA in /r/MachineLearning on August 11
See above :)
3. Google has started a new video series teaching machine learning and I can actually understand it.
Watching that made me feel like it was PBS of the future.
4. DeepMind and Blizzard to release StarCraft II as an AI research environment
Perfect next step after Go. An imperfect-information, real-time game where previous states matter, rather than a perfect-information, turn-based Markov decision process. Hopefully this will push automated planning much further along.
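For readers wondering what an "AI research environment" looks like in practice, below is a purely illustrative agent-environment loop in the style of reinforcement learning toolkits such as OpenAI Gym. The `env` interface and `RandomAgent` here are hypothetical stand-ins; DeepMind's actual StarCraft II API had not been released at the time of this post.

```python
import random

class RandomAgent:
    """Trivial baseline: picks uniformly among the currently legal actions."""
    def act(self, observation, legal_actions):
        return random.choice(legal_actions)

def run_episode(env, agent):
    """One full game: observe, act, collect reward, until the episode ends."""
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = agent.act(observation, env.legal_actions(observation))
        observation, reward, done = env.step(action)
        total_reward += reward  # e.g. +1 for winning the match, 0 otherwise
    return total_reward
```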
5. Can I Hug That? I trained a classifier to tell you whether or not what's in an image is huggable.
Did you have to label thousands of training pictures as huggable or not?
Actually, just 160 images for the HUG category and the NOT-HUG category.
Images in HUG were tagged with: puppy, kitten, bunny, bear, cloud, dandelion, pillow, fluffy
Images in NOT-HUG were tagged with: cactus, porcupine, nails, pufferfish, broken glass, lego, knife, shark teeth
Okay so it's not actually learning huggability
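The OP doesn't say which model or framework was used, but with only ~160 images per class the standard approach is transfer learning: reuse a network pretrained on ImageNet and train just a small classification head on top. Here is a minimal sketch with tf.keras; the `data/hug` and `data/nothug` directories are hypothetical:

```python
import tensorflow as tf

# Hypothetical layout: data/hug/*.jpg and data/nothug/*.jpg (~160 images each).
# Directories are sorted alphabetically, so "hug" -> label 0, "nothug" -> label 1.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", label_mode="binary", image_size=(224, 224), batch_size=16)

# Reuse an ImageNet-pretrained network and train only a small head on top;
# training from scratch on ~320 images would overfit badly.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs P(label 1) = P(not huggable)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Which is exactly the caveat in the exchange above: the model learns to separate fluffy-tagged images from spiky-tagged ones, not "huggability" itself.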
6. xkcd: Linear Regression
7. AlphaGO WINS!
Yes, playing millions of matches against itself is the main way it has improved since it became better than the best computer competitors. Playing Fan Hui was a benchmark to see if the improvement was real. Between then and now, the system has been constantly improving by playing against itself.
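The self-play loop that quote describes can be sketched in a few lines; `play_game` and `update_policy` are hypothetical placeholders, and AlphaGo's real pipeline (policy and value networks combined with Monte Carlo tree search) is far more involved:

```python
import copy

def self_play_improve(policy, play_game, update_policy, iterations=1000):
    """Toy version of improvement through self-play: the current policy plays
    games against a frozen snapshot of itself, and the game records become
    new training data for the next update."""
    for _ in range(iterations):
        opponent = copy.deepcopy(policy)                  # frozen snapshot
        games = [play_game(policy, opponent) for _ in range(100)]
        update_policy(policy, games)                      # e.g. policy-gradient step on outcomes
    return policy
```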
8. Great summary of deep learning
So... what does Yann LeCun think HE does?
ELI5: why doesn't deep learning suffer from the curse of dimensionality?
Because deep learning was unpopular at the time, so none of the other machine learning algorithms wanted it to come along on the expedition when they opened the tomb of dimensionality.
9. Synopsis of top Go professional's analysis of Google DeepMind's Go AI
/u/yetipirate suggested this synopsis might interest some people here as well, since it digests the salient points of a two-hour video with lots of Go jargon into a more manageable post. Hence I'm posting it here; I hope you all enjoy it. Feel free to ask me any questions about Go, but I'm not that strong myself, so YMMV. Anyway, without further ado:
10. TensorFlow Fizzbuzz
Best TensorFlow Tutorial ever.
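The post treats FizzBuzz as a supervised learning problem: binary-encode each integer, train a small network on the numbers 101-1023, and have it predict one of four output classes for 1-100. A sketch in that spirit, written against the modern Keras API rather than the 2016-era TensorFlow code in the original post:

```python
import numpy as np
import tensorflow as tf

NUM_DIGITS = 10  # binary-encode integers in [0, 1024)

def binary_encode(i):
    return [(i >> d) & 1 for d in range(NUM_DIGITS)]

def fizz_buzz_label(i):
    if i % 15 == 0: return 3  # "fizzbuzz"
    if i % 5 == 0:  return 2  # "buzz"
    if i % 3 == 0:  return 1  # "fizz"
    return 0                  # the number itself

# Train on 101..1023 and hold out 1..100, the numbers we actually want to print.
X_train = np.array([binary_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
y_train = np.array([fizz_buzz_label(i) for i in range(101, 2 ** NUM_DIGITS)])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_DIGITS,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(4),  # logits over the four answer types
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(X_train, y_train, epochs=200, batch_size=128, verbose=0)

# Predict FizzBuzz for 1..100 -- no modular arithmetic at inference time.
preds = model.predict(np.array([binary_encode(i) for i in range(1, 101)]), verbose=0)
for i, p in zip(range(1, 101), preds.argmax(axis=1)):
    print([str(i), "fizz", "buzz", "fizzbuzz"][p])
```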
If you don't already, you should make /r/MachineLearning a regular stop on your daily browsing journey.