
Top /r/MachineLearning Posts, August: Google Brain AMA, Image Completion with TensorFlow, Japanese Cucumber Farming


Google Brain AMA; Image Completion with Deep Learning in TensorFlow; Japanese Cucumber Farming; Andrew Ng's machine learning class in Python; Google Brain datasets for robotics research



In August on /r/MachineLearning we heard from the Google Brain team, were treated to Python code for a popular machine learning course's implemented exercises, saw how to use deep neural networks in TensorFlow to perform image completion, gained access to a few Google Brain robotics datasets... and talked Japanese cucumber farming.

The top 5 /r/MachineLearning posts of the past month are:

1. Google Brain Team AMA +1177

Google Brain

This is a lengthy and in-depth AMA with the Google Brain Team, including rockstars such as Jeff Dean, Geoff Hinton, Vincent Vanhoucke, Chris Olah, and Quoc Le. The team fields questions on organizational issues, research directions, keeping up with the flood of research being produced, and much more. It's not a 2-minute read, but I shouldn't have to tell you that it's definitely worth a look.

2. All of Andrew Ng's machine learning class in Python +495

If you have taken Andrew Ng's course, you know that the language of instruction and assignments is Octave, which is open source and Matlab-like. Unfortunately, Octave sees little use in either industry or research, and Matlab is not the powerhouse it once was. Many (most?) machine learning models today are implemented in Python (definitely not the only option, but a big one).

As Ng's course is often a first stop, or a supporting stop along the way, for those learning machine learning, John Wittenauer decided to implement the course exercises in Python. It's a great initiative, and though he started nearly 2 years ago (he uploaded his most recent installment just recently), the code is all still very relevant, and is probably worth keeping as a reference at the very least if you plan on working through Andrew Ng's course exercises.

Kudos to John for his efforts.
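
For a feel of what the exercises involve, the course opens with univariate linear regression fit by batch gradient descent. The snippet below is a minimal NumPy sketch of that first exercise, written from scratch for illustration rather than taken from Wittenauer's notebooks (the learning rate and toy data are my own choices):

import numpy as np

def compute_cost(X, y, theta):
    # Squared-error cost: J(theta) = 1/(2m) * sum((X.theta - y)^2)
    m = len(y)
    error = X.dot(theta) - y
    return np.sum(error ** 2) / (2 * m)

def gradient_descent(X, y, theta, alpha=0.1, iters=1500):
    # Batch gradient descent on the squared-error cost.
    m = len(y)
    history = []
    for _ in range(iters):
        theta = theta - (alpha / m) * X.T.dot(X.dot(theta) - y)
        history.append(compute_cost(X, y, theta))
    return theta, history

# Toy usage on synthetic data: fit y = 3 + 2x plus noise.
m = 50
x = np.random.rand(m, 1)
X = np.hstack([np.ones((m, 1)), x])   # prepend a column of ones for the intercept
y = 3 + 2 * x + 0.1 * np.random.randn(m, 1)
theta, history = gradient_descent(X, y, np.zeros((2, 1)))
print(theta.ravel())                  # roughly [3, 2]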

3. Image Completion with Deep Learning in TensorFlow [OC] +320

Image completion

The title on this one is pretty straightforward, but I've culled an excerpt from the blog post for some more specific info:

Content-aware fill is a powerful tool designers and photographers use to fill in unwanted or missing parts of images. Image completion and inpainting are closely related technologies used to fill in missing or corrupted parts of images. There are many ways to do content-aware fill, image completion, and inpainting. In this blog post, I present Raymond Yeh and Chen Chen et al.’s paper “Semantic Image Inpainting with Perceptual and Contextual Losses,” which was just posted on arXiv on July 26, 2016. This paper shows how to use deep learning for image completion with a DCGAN. This blog post is meant for a general technical audience with some deeper portions for people with a machine learning background.

Deep learning is rendering moot entire episodes of Seinfeld.
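For readers with a machine learning background, the core trick of the paper is compact: keep the pre-trained DCGAN frozen and optimize the latent vector z so that the generated image matches the known pixels (the contextual loss) while still looking real to the discriminator (the perceptual loss). Below is a rough sketch of that inner loop, written against the TensorFlow 1.x-style API of the time; generator() and discriminator() are hypothetical functions standing in for the pre-trained DCGAN, and the weighting lam is an assumed value, so treat this as an illustration of the idea rather than the authors' implementation:

import tensorflow as tf   # TensorFlow 1.x-style API

lam = 0.1                                      # weight on the perceptual term (assumed)
z = tf.Variable(tf.random_normal([1, 100]))    # latent code we optimize
g_z = generator(z)                             # hypothetical: rebuilds the trained generator
d_gz = discriminator(g_z)                      # hypothetical: trained discriminator output in (0, 1)

# Contextual loss: the completion must agree with the uncorrupted pixels.
# `y` is the corrupted image tensor, `mask` is 1 on known pixels, 0 in the hole.
contextual_loss = tf.reduce_sum(tf.abs(mask * (g_z - y)))
# Perceptual loss: the completion should look real to the discriminator.
perceptual_loss = tf.reduce_mean(tf.log(1.0 - d_gz + 1e-8))
loss = contextual_loss + lam * perceptual_loss

# Only z is updated; all GAN weights stay frozen.
train_z = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[z])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # (restore the pre-trained GAN weights here)
    for _ in range(1000):
        sess.run(train_z)
    # Blend the generated pixels into the hole, keep the originals elsewhere.
    completed = sess.run(mask * y + (1.0 - mask) * g_z)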

4. How a Japanese cucumber farmer is using deep learning and TensorFlow +290

Cucumbers

Sorting cucumbers is (apparently) rote, tedious work. So, why not have an artificial neural network implemented in TensorFlow take care of that for you? That's exactly what a farmer in Japan did, and this blog post outlines the technical implementation (hardware, too), as well as results.

Vincent Vanhoucke (in the comments section) quickly points out that this is not a publicity stunt, and that Google had nothing to do with it :)
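To make the idea concrete, a classifier for this kind of sorting task is essentially a small convolutional network trained on labeled photos of each cucumber grade. The sketch below is purely illustrative: the class count, image size, and layer sizes are my assumptions (the farmer's actual setup is described in the linked post), and it uses the tf.keras API for brevity rather than the lower-level TensorFlow code of 2016:

import tensorflow as tf

NUM_CLASSES = 9          # assumed number of cucumber grades
IMG_SIZE = 80            # assumed input resolution

# A small CNN: two conv/pool stages, then a dense classifier head.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # given a labeled photo set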

5. Google Brain released two large datasets for robotics research +198

Robotics

This is a link to a post on Vincent Vanhoucke's Google+ profile (is this guy everywhere this month or what?), which announces and outlines two new robotics research datasets that Google Brain has shared:

Grasping: A collection of 650k grasp attempts, data used in: http://arxiv.org/abs/1603.02199

Push: A collection of 59k examples of pushing motions, data used in: http://arxiv.org/abs/1605.07157

Vanhoucke states that the data were collected in a controlled environment, using a wide array of objects.

Google: getting us one step closer to SkyNet every day.
