# Top /r/MachineLearning Posts, August: Deep Learning paints in style of many famous painters

Deep Learning algorithm generating paintings in the styles of famous artists, Genetic algorithms pioneer John Holland passes away, Beginner Python data analysis tutorial, LSTM networks explained, and Google Thought Vectors.

**By Matthew Mayo**

In August on /r/MachineLearning, we see a neural algorithm turning photos into stylized paintings, check out a Python data analysis tutorial notebook, explore LSTM networks, learn about Google's thought vectors, and say goodbye to an influential figure.

**1. Neural Algorithm That Paints Photos Based on Given Painting Style +435**

Neural networks and their art are all the rage lately. In a fresh take, German researchers share an algorithm that has neural networks repaint photos in the style of a given input painting. Picasso's Obama, or Munch's take on the Eiffel Tower? No problem! The paper is located here, and code (Lua, Torch and CUDA) can be found here.
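At the heart of the method is matching the Gram-matrix statistics of convolutional feature maps between the style image and the generated image. Here is a minimal NumPy sketch of that style loss; the feature maps below are random stand-ins for real CNN activations, so this illustrates the math only, not the full optimization loop:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T                   # (channels x channels) feature correlations

def style_loss(gen_features, style_features):
    """Squared error between Gram matrices, normalized as in the paper."""
    c, h, w = gen_features.shape
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return np.sum((g_gen - g_style) ** 2) / (4.0 * c ** 2 * (h * w) ** 2)

rng = np.random.default_rng(0)
gen = rng.standard_normal((3, 8, 8))
style = rng.standard_normal((3, 8, 8))
print(style_loss(gen, style))    # non-negative scalar
print(style_loss(style, style))  # exactly 0.0 for identical features
```

In the full algorithm this loss (summed over several layers, plus a content loss) is minimized with respect to the pixels of the generated image by gradient descent.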

**2. John Henry Holland Passes Away +242**

Modern-day renaissance man John Henry Holland passed away this month at 86. Holland was known for his work in complex systems, as well as for being the father of the genetic algorithm. While this post is simply a link to a tweet, much more on this story can be read here. RIP Mr. Holland.
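For readers unfamiliar with Holland's best-known contribution, a toy genetic algorithm fits in a few lines. This sketch evolves bitstrings toward all ones using the classic ingredients he formalized: fitness-based selection, crossover, and mutation (all parameters here are illustrative choices, not from any particular text):

```python
import random

def evolve(length=20, pop_size=30, generations=60, seed=42):
    """Toy genetic algorithm: maximize the number of 1-bits in a bitstring."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    for _ in range(generations):
        def pick():
            # tournament selection: better of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):              # bit-flip mutation
                if rng.random() < 0.01:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # close to the maximum of 20 after 60 generations
```

The same select-recombine-mutate loop, with a domain-specific fitness function, underlies the many applications Holland's work inspired.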

**3. Example Machine Learning Notebook +182**

This IPython Notebook tutorial demonstrates a basic data analysis pipeline, from start to finish, using the Python stack. Put together by Randal Olson, graduate student at the University of Pennsylvania, the tutorial works with the well-known Iris dataset along with mainstay libraries such as NumPy and scikit-learn. Aimed at beginners, it is noteworthy in that it covers everything from defining the question through to ensuring reproducibility.
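The modeling step of such a pipeline condenses to a few lines of scikit-learn. This sketch uses the same Iris dataset; the choice of classifier and split parameters here are illustrative, not necessarily what the notebook uses:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# load the Iris dataset bundled with scikit-learn
iris = load_iris()

# hold out a stratified test set for honest evaluation
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0,
    stratify=iris.target)

# fit a simple classifier and score it on the held-out data
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The notebook's larger point is that this step sits between question definition, data cleaning, and exploratory analysis on one side and reproducible reporting on the other.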

**4. Understanding LSTM Networks +172**

This article explains Long Short-Term Memory (LSTM) networks in an embarrassingly accessible manner. While the mathematics is treated, writer Christopher Olah does a great job of explaining in plain English what distinguishes LSTM networks from other recurrent neural networks. His step-by-step walkthrough of an LSTM is an exercise in clarity.
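The walkthrough's gate equations translate directly into code. A minimal NumPy sketch of a single LSTM step, with randomly initialized weights standing in for trained ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W: (4*hidden, input+hidden), b: (4*hidden,)."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:n])          # forget gate: what to drop from the cell state
    i = sigmoid(z[n:2*n])       # input gate: what new information to store
    o = sigmoid(z[2*n:3*n])     # output gate: what to expose as output
    g = np.tanh(z[3*n:])        # candidate values for the cell state
    c = f * c_prev + i * g      # update the cell state
    h = o * np.tanh(c)          # new hidden state / output
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 4, 5
W = 0.1 * rng.standard_normal((4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(3):              # run a short input sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
print(h.shape, c.shape)
```

The cell state `c` is the "conveyor belt" the article describes: it carries information across time steps, modified only by the multiplicative gates, which is what lets LSTMs learn long-range dependencies that plain RNNs struggle with.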

**5. Google and Its “Thought Vectors” +158**

Geoffrey Hinton is at it again. Building on previous work in neural networks and deep learning, Hinton says that Google is working on a new algorithm for encoding thoughts as numbers: hence, "thought vectors." Though the work is at an early stage, such an algorithmic breakthrough could help overcome two currently unsolved problems in AI: the ability to make leaps in logic, and the mastery of conversational language.
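The "vectors" framing echoes existing word-embedding work, where meaning encoded as numbers can be compared geometrically. A toy illustration of that general idea using cosine similarity; the vectors below are invented for this example and have nothing to do with Google's actual algorithm:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d "meaning" vectors, made up purely for illustration.
king  = np.array([0.9, 0.8, 0.1, 0.2])
queen = np.array([0.9, 0.1, 0.8, 0.2])
apple = np.array([0.1, 0.1, 0.1, 0.9])

print(cosine(king, queen))  # related concepts score higher...
print(cosine(king, apple))  # ...than unrelated ones
```

The hope behind thought vectors is that whole thoughts, not just words, could live in such a space, making relationships between them computable.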
