

Top /r/MachineLearning Posts, February: AlphaGo, Distributed TensorFlow, Neural Network Image Enhancement


In February on /r/MachineLearning, we get a run-down of the AlphaGo matches, Distributed TensorFlow is released, convolutional neural networks are cleaning up Star Wars images, vintage science is on parade, military machine learning is criticized, and the overwhelmed researcher is given advice.



February on /r/MachineLearning brings us more AlphaGo, sees Distributed TensorFlow get a release, finds deep neural networks enhancing images, offers a video from the '70s that gives us insight into science four decades ago, provides advice for the overwhelmed machine learning researcher, and sees military machine learning come under criticism.

The top /r/MachineLearning posts of the past month are:

1. Synopsis of top Go professional's analysis of Google DeepMind's Go AI +526

The thread's author explains that he had a discussion in January on another subreddit about computerized Go, and that during that time he made a synopsis of this video analyzing the games between the AlphaGo AI and human professional Fan Hui. After the news of AlphaGo's success gained widespread attention, he shared his written video synopsis with users on /r/MachineLearning. The synopsis includes a breakdown of each game, along with some insight from an experienced Go player.

Go board

2. Distributed TensorFlow just open-sourced +343

After months of anticipation following the release of the initial single-node version, Google has just released the distributed version of TensorFlow into the wild. This is a direct link to the directory in the TensorFlow GitHub repository which houses the distributed runtime, and its readme provides a useful overview of the distributed version. The comment thread on the /r/MachineLearning post also presents some interesting questions and conversation regarding distributed TensorFlow, which you may want to have a look at as well.
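To give a feel for what the distributed runtime enables, here is a minimal between-graph sketch using the tf.train.ClusterSpec and tf.train.Server API. The cluster layout, hostnames, ports, and variable names below are illustrative assumptions, and the API surface shown reflects how the distributed runtime came to be exposed in TensorFlow releases, not necessarily the exact state of the linked directory:

```python
import tensorflow as tf

# Hypothetical cluster: one parameter server, two workers.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process in the cluster runs one server, identified by its
# job name and task index; this process plays worker 0.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

with tf.device("/job:ps/task:0"):
    # Shared state lives on the parameter server.
    counter = tf.Variable(0, name="counter")

with tf.device("/job:worker/task:0"):
    # Computation placed on this worker; reads and writes of the shared
    # variable travel over gRPC to the parameter server.
    increment = counter.assign_add(1)

with tf.Session(server.target) as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(increment))
```

The point of the split is that variables are pinned to one job while compute replicas run on others, which is the basic data-parallel training pattern the distributed runtime is built around.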

3. BB-8 Image Super-Resolved +287

BB-8

This thread links directly to a full-sized version of the above image of BB-8 (of Star Wars: The Force Awakens fame), showing a fuzzy, unclear original side-by-side with a version enhanced via a convolutional neural network. This paper from November 2015 describes the enhancement process used, with the paper's abstract shown below:

We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.

For more discussion on this topic, the process, and the image, have a look at the comment thread on the /r/MachineLearning post.
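To make the paper's central trick concrete, here is a toy rendition of the recursive idea in TensorFlow: one convolution's weights are applied repeatedly, so effective depth grows without adding parameters, and every recursion's output is kept for the per-depth supervision the authors describe. The layer sizes, recursion count, and names are my own illustrative choices, not the paper's implementation:

```python
import tensorflow as tf

def recursive_block(features, num_recursions=16):
    """Apply one shared 3x3 convolution repeatedly (toy DRCN-style recursion).

    Reusing `shared_filter` at every step deepens the network without
    introducing new parameters for the additional convolutions.
    """
    shared_filter = tf.Variable(
        tf.truncated_normal([3, 3, 64, 64], stddev=0.01),
        name="shared_filter")
    intermediate = []
    h = features
    for _ in range(num_recursions):
        h = tf.nn.relu(tf.nn.conv2d(h, shared_filter,
                                    strides=[1, 1, 1, 1], padding="SAME"))
        # Keep each recursion's output: in the paper, every depth is
        # supervised separately ("recursive-supervision"), and the final
        # prediction also adds the input back in ("skip-connection"),
        # both of which ease the vanishing/exploding gradient problem.
        intermediate.append(h)
    return intermediate
```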

4. 1976 Matrix Singular Value Decomposition Film +209

This is a direct link to a documentary from 1976 on the singular value decomposition, a matrix factorization widely used today in numerous applications; one case of particular interest to data science and machine learning is the recommender system. The film is definitely of vintage stock and provides some insight into 1970s science. As a bonus, at 6 minutes in length, it doesn't take long to get through.
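For readers who want the modern one-liner for what the film walks through, here is the factorization in NumPy, applied to a small made-up ratings-style matrix and truncated to rank 2, which is the basic move behind SVD-based recommenders (the matrix and the rank are illustrative):

```python
import numpy as np

# A made-up 4x4 "users x items" ratings matrix.
A = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])

# Factor A = U * diag(s) * Vt, then keep only the top k singular
# values to obtain the best rank-k approximation of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(A_k, 2))
```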

5. The NSA’s SKYNET program may be killing thousands of innocent people +204

This is a link to an Ars Technica UK article, which carries the subheading:

"Ridiculously optimistic" machine learning algorithm is "completely bullshit," says expert.

This quote sets the tone of the article. I'll leave it at that.

6. Anyone else feeling overwhelmed by the fast pace of the field? +197

This is simply a discussion of the rapid advancements in the field of machine learning (mostly deep learning, to be specific), and how people cope with keeping up with as much of it as possible. The thread starts with the author posing this question:

How do you keep yourself updated in the field while simultaneously working on your own research and still having time to leave your desk every once in a while?

What follows is a lively discussion in which a number of suggestions and strategies for keeping up (and for not bothering to keep up) are shared. Worth a look if you are feeling like the author (spoiler alert: almost everyone is).
