Up to Speed on Deep Learning: August Update
Check out this thorough roundup of deep learning stories that made news in August, and see if there are any items of note that you missed.
Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 2nd. In case you missed it, here’s the July update (part 2), here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.
An Intuitive Explanation of Convolutional Neural Networks by Ujjwal Karn. A thorough overview of CNNs: what they do, why they’re important, how they work, some history, and their underlying concepts. Inspired by Denny Britz’s Understanding Convolutional Neural Networks for NLP. Denny’s blog, WildML, is also an excellent resource with many deep learning explanations and tutorials.
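The operation at the heart of a CNN is the convolution itself: a small learned kernel slides over the image and responds wherever a matching pattern appears. As a rough illustration (a toy pure-Python version, not code from the article above):

```python
def conv2d(image, kernel):
    """Slide a kernel over an image (no padding, stride 1) and sum the
    element-wise products at each position -- the core operation a CNN's
    convolutional layers apply, with kernel weights learned from data."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A hand-crafted vertical-edge detector applied to an image that is
# dark on the left and bright on the right: the response peaks at the edge.
image = [[0, 0, 1, 1]] * 3
kernel = [[-1, 1]] * 3   # responds to left-to-right increases
print(conv2d(image, kernel))   # → [[0, 3, 0]]
```

In a real CNN the kernel values are not hand-crafted like this edge detector; they are learned by backpropagation, and many kernels run in parallel to produce a stack of feature maps.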
Image Completion with Deep Learning in TensorFlow by Brandon Amos. A tutorial that explains how to perform image completion and inpainting with deep learning. These closely related techniques fill in missing or corrupted parts of images, which matters to designers and photographers who often need to remove unwanted regions or restore damaged areas. The code is also available on GitHub. Based on Raymond Yeh and Chen Chen et al.’s paper Semantic Image Inpainting with Perceptual and Contextual Losses.
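The intuition behind the paper’s contextual loss can be sketched in a few lines: the generated image is penalized only where pixels are actually known, leaving the missing region unconstrained for the perceptual (adversarial) loss to fill in realistically. This is an illustrative toy version over flat pixel lists, not the authors’ implementation:

```python
def contextual_loss(generated, corrupted, mask):
    """Sum of absolute pixel differences between the generated image and
    the corrupted input, restricted to known pixels (mask == 1). Missing
    pixels (mask == 0) contribute nothing, so the generator is free to
    hallucinate content there."""
    return sum(abs(g - c) for g, c, m in zip(generated, corrupted, mask) if m)

# Pixels 3-4 are missing (mask 0): only the known pixels are penalized,
# so the mismatch at position 2 costs 1 and the missing region costs 0.
print(contextual_loss([9, 8, 5, 5], [9, 9, 0, 0], [1, 1, 0, 0]))  # → 1
```

In the paper this masked reconstruction term is combined with a perceptual loss from a GAN discriminator, and the optimization searches the generator’s latent space for the completion that minimizes both.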
Deep Learning Summer School lecture notes. Held the first week of August in Montreal and organized by Aaron Courville and Yoshua Bengio, professors at the University of Montreal, this summer school provides a broad overview of current research in deep neural networks. Speakers include preeminent deep learning researchers from Google, Facebook, Twitter, NVIDIA, and many others. All the lecture slides are available for review.
Vincent AI Artist (GitHub repository) by Saikat Basak. Vincent is an attempt to implement “a neural algorithm of artistic style”. A convolutional neural network (CNN) separates ‘style’ and ‘content’ from artistic images, and combines that artistic style with another image to create a unique expression. Leverage this repo to build your own version of Prisma.
Robotics Science and Systems (RSS 2016) Workshop notes and videos. This workshop held in Ann Arbor, MI on June 18, 2016 convened a broad set of experts to discuss the topic of deep learning in robotics, particularly around computer vision. Speakers such as Pieter Abbeel of UC Berkeley and Ashutosh Saxena of Brain of Things spoke about their research in the field. Their recorded talks and slides are available for review.
Isaac Madan is an investor at Venrock, an early-stage venture capital firm with investments in companies like Dollar Shave Club, Cloudflare, AppNexus, Dataminr, and Pearl Automation, and previously Nest, Apple, Intel, etc. Isaac’s background is in machine learning & artificial intelligence, having previously been an entrepreneur and data scientist. He can be reached via email at firstname.lastname@example.org.
Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.
Original. Reposted with permission.