Top /r/MachineLearning Posts, April: Why Momentum Really Works; Machine Learning with Scikit-Learn & TensorFlow

Why Momentum Really Works; O'Reilly's Hands-On Machine Learning with Scikit-Learn and TensorFlow; Implemented BEGAN and saw a cute face at iteration 168k; Self-driving car course; Exploring the mysteries of Go; DeepMind Solves AGI

In April on /r/MachineLearning, find out why momentum really works, learn about a well-received new book from O'Reilly on machine learning with Scikit-Learn and TensorFlow, find out about a self-driving car course, have some fun with generative adversarial networks, explore the mysteries of Go, and read about DeepMind solving AGI... and summoning the demons.

Reddit ML

The top /r/MachineLearning posts of April are:

1. Why Momentum Really Works


Gabriel Goh, in the online machine learning journal Distill, discusses gradient descent and momentum. In the discussion he covers the dynamics of momentum, its limitations, dampening, momentum with stochastic gradients, and more. Complete with interactive visualizations -- the real strength of Distill's chosen publication medium -- this is a thorough overview which makes for an interesting read.
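For readers who want the one-line version of the method Goh analyzes, here is a minimal sketch of gradient descent with classical (heavy-ball) momentum. The function name and parameter values are illustrative, not taken from the article:

```python
import numpy as np

def gd_momentum(grad, w0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with classical (heavy-ball) momentum.

    grad: function returning the gradient at w
    lr:   step size (alpha)
    beta: momentum coefficient; beta=0 recovers plain gradient descent
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)   # exponentially decaying accumulation of past gradients
        w = w - lr * v           # step along the accumulated direction
    return w

# Toy example: minimize f(w) = w^2, whose gradient is 2w; the iterates spiral in to 0.
w_star = gd_momentum(lambda w: 2 * w, w0=[5.0])
```

The accumulated velocity `v` is what lets momentum keep moving through flat regions and damp oscillations across steep ravines, the dynamics the Distill article visualizes interactively.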

2. O'Reilly's Hands-On Machine Learning with Scikit-Learn and TensorFlow

This is a link to the book's page on the O'Reilly website. The book, by Aurélien Géron, covers machine learning using TensorFlow and Scikit-learn and has been getting very good reviews; I'd love to get my hands on it myself, but it was back-ordered on Amazon at the time of writing (I am happy to report I have since ordered a copy).

As the OP posed a question asking if anyone had feedback on the book, the corresponding Reddit discussion is composed of some back and forth on readers' opinions.

3. Implemented BEGAN and saw a cute face at iteration 168k. Haven't seen her since :(


Is this a movie plot where a researcher falls in love with a virtual persona generated by an algorithm?

Or, she keeps showing up in other people's research. Harmless at first, but then people start noticing small changes to their code that they didn't make.

I found my true love in the manifolds of a deep neural network, but she was gone by the next epoch.

Just a few of the gems from this otherwise relatively pointless discussion...

4. Self-driving car course with Python, TensorFlow, OpenCV, and Grand Theft Auto 5


Harrison Kinsley aka sentdex, known for all sorts of helpful Python-related instructional videos on his YouTube channel, has unleashed a new series aimed at helping folks understand autonomous driving and related AI in video games. Harrison sticks to his proven format of employing and explaining the use of Python and related libraries to accomplish his instructional goals in this ongoing series. You can find a text overview of the course here, and some related code here.

Check out Harrison's extensive collection of video tutorials on his YouTube channel.

5. Exploring the mysteries of Go with AlphaGo and China's top players


This post from DeepMind revisits last year's AlphaGo successes, and explains the game of Go with help from some of China's top players. Not much new ground covered here, but the main takeaway is likely this:

Clearly, there remains much more to learn from this partnership between Go’s best human players and its most creative A.I. competitor. That’s why we’re so excited to announce AlphaGo’s next step: a five-day festival of Go and artificial intelligence in the game's birthplace, China.

6. DeepMind Solves AGI, Summons Demon

Approximately Correct's and KDnuggets' Zachary Lipton provides this incredible write-up of how DeepMind has solved artificial general intelligence (AGI). Securing an interview with the lead researcher of the project was quite a feat for the young yet already prestigious upstart website Approximately Correct, especially given the following:

Dr. Falscher Wissenschaftler, the DeepMind scientist behind the discovery, granted Approximately Correct an exclusive interview prior to the press release. We caught up with him at his flat in London’s Barnsbury neighborhood. Wiry and tall, Wissenschaftler rarely talks to the press. His friends describe him as “fiercely logical”. He doesn’t often make eye-contact, but when he does, his lucidity penetrates your corporeal form, briefly revealing a glimpse into his elegant, mathematical world.

Writes Lipton:

By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.

Before arming for the inevitable war with SkyNet, the astute reader would do well to note the date of the article's publication.