Up to Speed on Deep Learning: July Update
Check out this thorough roundup of deep learning stories that made news in July. See if there are any items of note you missed.
Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on June 20th. In case you missed it, here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.
Google’s DeepMind partners with the National Health Service’s Moorfields Eye Hospital to apply machine learning to spot common eye diseases earlier. The five-year research project will draw on one million anonymous eye scans held in Moorfields’ patient database, with the aim of speeding up the complex and time-consuming process of analyzing eye scans (news article). The hope is that this leads to a better understanding of eye disease, earlier detection, and better treatment. We previously wrote about the challenges inherent to deep learning in medical imaging here.
Andrew Ng announces the pre-launch of his book Machine Learning Yearning, which shares practical advice and experience around building AI systems to help practitioners get up to speed faster. Over 35,000 people had signed up to receive a free draft copy as of June 21.
Google releases Wide & Deep Learning as part of the TensorFlow API. The project combines the power of both memorization and generalization, to better reflect the properties that make the human brain such an effective learning machine. They provide an in-depth example that illustrates the project’s purpose and potential via a fictional food delivery app.
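The core idea is simple to sketch: a linear "wide" component over sparse cross features (memorization) is summed with the output of a "deep" MLP over dense features (generalization) before a single sigmoid. Below is a minimal plain-numpy illustration of that combination — our own simplified sketch, not the TensorFlow API, and all names and dimensions here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only).
n_cross = 100   # vocabulary of sparse one-hot cross features (wide part)
n_dense = 8     # dense input features (deep part)
n_hidden = 16

# Wide part: a linear model over sparse cross features (memorization).
w_wide = rng.normal(size=n_cross)

# Deep part: a small MLP over dense features (generalization).
W1 = rng.normal(size=(n_dense, n_hidden))
W2 = rng.normal(size=n_hidden)

def wide_and_deep(cross_idx, dense_x):
    """Sum the wide (linear) and deep (MLP) logits into one sigmoid."""
    wide_logit = w_wide[cross_idx].sum()       # sparse dot product
    h = np.maximum(dense_x @ W1, 0.0)          # ReLU hidden layer
    deep_logit = h @ W2
    return 1.0 / (1.0 + np.exp(-(wide_logit + deep_logit)))

p = wide_and_deep(cross_idx=[3, 42], dense_x=rng.normal(size=n_dense))
print(round(float(p), 3))
```

In the real system both components are trained jointly, so the wide part memorizes feature co-occurrences while the deep part generalizes via learned embeddings.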
The Harvard NLP and Visual Computing groups announce LSTMVis, a visual analysis tool for recurrent neural networks (RNNs). RNNs learn a black-box hidden state representation, and changes in these representations are challenging to study. The tool makes it easier to visually observe and isolate patterns in state changes. The Verge provides additional context around the black-box aspect of AI systems here.
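To make the "hidden state" concrete: the kind of inspection LSTMVis enables amounts to logging an RNN's hidden vector at each timestep and looking for dimensions whose activations change in interpretable patterns. A toy numpy sketch of that logging (our own illustration, not the tool's code; a vanilla RNN rather than an LSTM, with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vanilla RNN: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h)
n_in, n_hidden, T = 4, 6, 10
W_x = rng.normal(scale=0.5, size=(n_in, n_hidden))
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))

h = np.zeros(n_hidden)
states = []  # log the hidden state at every timestep
for t in range(T):
    x_t = rng.normal(size=n_in)
    h = np.tanh(x_t @ W_x + h @ W_h)
    states.append(h.copy())

states = np.array(states)  # shape (T, n_hidden)

# Dimensions with the largest average change between timesteps are the
# kind of candidates a tool like LSTMVis lets you isolate visually.
deltas = np.abs(np.diff(states, axis=0)).mean(axis=0)
print(states.shape, int(deltas.argmax()))
```

LSTMVis layers an interactive interface over exactly this sort of per-timestep state trace, letting you select spans of input and see which hidden dimensions move together.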
Explanation, Review, and Cool Stuff
Experience and Advice for Using GPUs in Deep Learning. Tim Dettmers provides a comprehensive analysis of various GPUs and advice on how to best use them for deep learning. For example, he answers questions like "Should I get multiple GPUs?" and "What kind of accelerator should I get?" — along with discussion of convolutional neural networks, speed, and memory considerations.
ICML 2016 not by the day, by Stephanie Hyland. A review of the 2016 International Conference on Machine Learning (ICML), highlighting the important trends and papers that emerged.
It’s ML, not magic. Stephen Merity addresses over-hype and mysticism around artificial intelligence. He articulates both the reason why we see this hype, as well as the types of questions we should ask to gut-check and better understand the potential of AI.
Chasing Cats. Robert Bond of NVIDIA develops an end-to-end cat surveillance system for his front yard, which is a nice articulation of the full pipeline from camera to processor to neural network (to sprinklers).
Isaac Madan is an investor at Venrock, an early-stage venture capital firm with investments in companies like Dollar Shave Club, Cloudflare, AppNexus, Dataminr, and Pearl Automation, and previously Nest, Apple, Intel, etc. Isaac’s background is in machine learning & artificial intelligence, having previously been an entrepreneur and data scientist. He can be reached via email at [email protected].
Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.
Original. Reposted with permission.