
AI Conference in San Francisco, Sep 2017 – highlights and key ideas

Highlights from the recent AI Conference include the inevitable merger of IQ and EQ in computing, deep learning to fight cancer, AI as the new electricity and advice from Andrew Ng, deep reinforcement learning advances and frontiers, and Tim O'Reilly's analysis of concerns that AI is the single biggest threat to the survival of humanity.

By Jitendra S. Mudhol, Founder and CEO, CollaMeta

The Artificial Intelligence (AI) Conference San Francisco 2017 was held from September 17 – 20 at the Hilton.  It was presented by O'Reilly Media and Intel Nervana.  Organizers said it sold out with over 1400 attendees, and some talks were standing room only.

This is a flyby view of the Conference.  You can check out the schedule of the various sessions here.

The first pre-conference day (September 17) was packed with all-day tutorials covering Deep Learning, Natural Language Processing (NLP) and Neural Networks for Time Series.  September 18 featured shorter sessions, with experts from MIT, UC Berkeley and elsewhere covering Reinforcement Learning, Probabilistic Programming, word2vec and Topological Data Analysis.

Tuesday, September 19, 2017

Program Chairs Roger Chen and Ben Lorica kicked off the Conference on Tuesday, September 19.


The inevitable merger of IQ and EQ in technology

Rana el Kaliouby, the Co-Founder and CEO of Affectiva, set the ball rolling.  Affectiva came out of the MIT Media Lab and has released a cloud API that helps machines understand emotion in human speech by analyzing prosodic features (tone, pitch, energy, tempo) and mapping them to speech emotions.

Engineering the future of AI for businesses

(Sponsored Talk): Ruchir Puri (IBM) spoke about the opportunities and the challenges for businesses interested in AI.

The state of AI adoption

Roger Chen and Ben Lorica swept through Google Trends, Indeed job openings, the MIT survey on companies adopting AI, startup funding and talent shortages in the AI field.

Deep learning to fight cancer: Fireside chat

The fireside chat between Peter Norvig (Google) and self-taught high schooler Abu Qader, the CTO of GliaLab, brought out interesting nuggets about using Machine Learning to detect and fight cancer.  O'Reilly has put out a snippet of this video on YouTube here.  At the recent Google I/O 2017, Sundar Pichai shared Abu Qader's remarkable story.

Fast forwarding AI in the datacenter

Lisa Spelman, Vice President at Intel, spoke about businesses benefiting from AI and the role Intel is playing in data centers.

How AI is ushering in a new era of healthcare

Vijay Pande from Andreessen Horowitz walked us through how Machine Learning is steering healthcare towards prevention, citing examples from portfolio startups spanning drug discovery and computational biomedicine: Freenome, Omada Health and Patient Ping.  He continued the dive in a separate session.

AI is the new electricity

Andrew Ng held the audience's attention, white-boarding his thoughts and emphasizing that AI is the new electricity. Some key points: virtually all the value generated by AI so far has come from supervised learning, and the most lucrative applications are probably internet advertising and loan-application decisions. He named four promising areas: supervised learning, transfer learning, unsupervised learning and reinforcement learning.  A specific word of advice for learners: go through research papers and try to replicate the findings.


Backing off toward simplicity: Understanding the limits of deep learning

Stephen Merity (Salesforce Research) discussed how Deep Learning is being applied across many different kinds of problems: it is well suited to some but by no means all, and it also faces limitations under production constraints.

A visual and intuitive understanding of deep learning

Otavio Good (Google) demonstrated how Word Lens (now part of Google Translate) detects and translates printed text, and what its capabilities and limitations are.

Deep reinforcement learning: Recent advances and frontiers

Li Erran Li (Uber) covered multiple interesting points, starting from how deep RL can learn actions directly from pixels (Atari games and Go), through Q-learning, a model-free RL technique (Mnih et al., Nature, 2015: "Human-level control through deep reinforcement learning"), to the Asynchronous Advantage Actor-Critic (A3C) algorithm.  It was a fascinating walk through these advances.
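For readers new to Q-learning, the core update Li described (nudging Q(s, a) toward the observed reward plus the discounted value of the best next action) can be sketched in a few lines of plain Python.  This is a generic tabular sketch on a toy problem of my own, not code from the talk:

```python
import random

# Tabular Q-learning on a toy chain MDP: states 0..4, actions 0 (left) and
# 1 (right); reward 1.0 only for reaching the rightmost state.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):                    # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should move right in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(policy)
```

DQN, the subject of the Nature paper, replaces the table above with a deep network that generalizes across states, which is what made the pixels-to-actions results possible.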

Deep learning in the enterprise: Opportunities and challenges

Ron Bodkin (Teradata) listed four main areas where Deep Learning is applied in the enterprise: time series, long-tail distributions, complex correlations and structured data.  Among the challenges, he highlighted how the Machine Learning code itself is only a small part of an enterprise implementation.  Such an endeavor also covers configuration, data collection, feature extraction, data verification, machine resource management, analysis tools, process management tools, the serving infrastructure and, finally, monitoring.

Deep learning in enterprise IoT: Use cases and challenges

Jisheng Wang (Aruba) shared how his team at Aruba applied DL in enterprise IoT applications, specifically for IoT device identification and IoT security.  He started off by noting that the total economic impact of IoT in manufacturing is expected to reach $1.2 to $3.7 trillion by 2025.  The top areas: operations optimization, predictive maintenance, inventory optimization and health/safety.

Wednesday, September 20, 2017

How to escape saddle points efficiently

Michael Jordan (UC Berkeley), true to his self-description as a contrarian, shared some of the progress on the theory of Machine Learning, which has been lagging behind the applications that most of the talks focused on.  His talk centered on a key problem in optimization: saddle points, where the gradient vanishes without reaching a minimum, causing the learning curve to flatten out.
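To make the saddle-point idea concrete, here is a small illustration in the spirit of the "escape saddle points by adding noise" result.  The toy function, step size and noise scale below are my own illustrative choices, not the actual algorithm or constants from the talk:

```python
import math
import random

def grad(x, y):
    # f(x, y) = x**2/2 - y**2/2 + y**4/4 has a saddle point at (0, 0)
    # and two true minima at (0, 1) and (0, -1).
    return x, -y + y**3

def descend(x, y, perturb, steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        gx, gy = grad(x, y)
        if perturb and math.hypot(gx, gy) < 1e-3:
            # Near a critical point the gradient is tiny: inject a small
            # random kick so a saddle cannot trap the iterate.
            x += rng.uniform(-0.01, 0.01)
            y += rng.uniform(-0.01, 0.01)
        else:
            x, y = x - lr * gx, y - lr * gy
    return x, y

# Started on the saddle's attracting direction, plain GD stalls at (0, 0)...
print(descend(1.0, 0.0, perturb=False))
# ...while the perturbed version escapes and settles near (0, 1) or (0, -1).
print(descend(1.0, 0.0, perturb=True))
```

The gradient signal alone cannot distinguish the saddle from a minimum, which is exactly why the random kick, cheap as it is, makes the difference.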

Why democratizing AI matters: Computing, data algorithms and talent

Jia Li (Google) ardently argued for open sourcing and democratizing access to three critical ingredients of AI in order to harness its potential: computing power, breakthrough algorithms to crunch the data, and good talent, which remains in perennially short supply.

AI mimicking nature: Flying and talking

Lili Cheng (Microsoft) focused on engine-less sailplanes that fly autonomously by looking for thermals and riding them.  Her team built low-power models that search for such thermal pockets to harness, using a Markov Decision Process along with communication modules, long-range radar and other onboard systems.

Accelerating AI

Steve Jurvetson (DFJ) started off with Ray Kurzweil's famous chart of 120 years of Moore's Law, showing compounding computation capabilities and the astounding potential ahead.  In industry after industry, AI will determine the winner.  In Automotive, for instance, the leader will not be the company with the best internal combustion engine but the one with the best AI software stack.

Fireside chat – Naveen Rao (Intel) and Steve Jurvetson (DFJ)

A short but fascinating chat that traced Naveen's journey with Nervana, his new role on Intel's AI team, and how he looks to AI to serve up the future.

Build smart applications with your new super power: Cloud AI

Philippe Poutonnet (Google) showed how Deep Learning is being adopted rapidly inside Google (more than 4000 team project directories contain Brain models).  Smart Reply in Gmail, which uses Machine Learning, accounts for 10% of all responses sent on mobile.  Other applications include Photos, Translate and elements across Google Cloud Platform.  He shared a specific example of working with Airbus to clean up satellite images of landscapes by differentiating clouds from snow using TensorFlow.

Our Skynet moment

Tim O'Reilly (O'Reilly Media) dissected Elon Musk's concern that AI is the single biggest threat to the survival of humanity.  Pulling in responses from other experts in the field, he dived into how a runaway objective function differs from sentient AI.  He also explored fears of the next world war being triggered by AI, via landmines, autonomous guns, intelligent drones and cyber warfare, and biological views of humans as ecosystems of microorganisms.  He concluded optimistically that the future will be what we make of it.

Self-supervised visual learning

Alyosha Efros (UC Berkeley) began with autoencoders as the simplest form of self-supervised learning and moved on from Generative Adversarial Networks (GANs) to conditional GANs, where the goal is to make the Discriminator give up, unable to tell real from generated.  As examples, he showed Cézanne-like paintings created by the models.  He concluded with edges2cats, a neural network trained on stock images of cats that turns simple line drawings into photorealistic feline images.

AI for manufacturing: Today and tomorrow

David Rogers (Sight Machine) shared that the Manufacturing vertical is data rich, information poor (DRIP) and how a typical Digital Manufacturing journey looks.  He explained how Sight Machine used Digital Twins to help the early adopters in their journey to Industry 4.0.

HPC Opportunities in Deep Learning

Greg Diamos (Baidu) highlighted why we care about HPC in Deep Learning: the error rate (for machine learning on audio, for example) follows a power-law decline across three orders of magnitude of training data, so one needs ever more computing power to crunch ever larger datasets.  Compared with supercomputers, GPUs and TPUs still have a long way to go, and algorithms need to advance to exploit the hardware fully.
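As a back-of-the-envelope illustration of what a power-law error curve implies about data (and hence compute) appetite, with constants invented for illustration rather than Baidu's measured fits:

```python
# Illustrative power-law scaling of error with training-set size:
# error(N) ~ a * N**(-b).  The constants a and b are made up for this
# sketch; they are not Baidu's measured values.
a, b = 10.0, 0.3

def error(n_examples):
    return a * n_examples ** (-b)

# Halving the error means a * N2**(-b) = 0.5 * a * N1**(-b),
# so the data must grow by N2 / N1 = 2 ** (1 / b).
factor = 2 ** (1 / b)
print(round(factor, 1))   # -> 10.1: roughly 10x more data to halve the error

# Sanity check from the other direction: 10x the data roughly halves it here.
print(round(error(10_000) / error(100_000), 2))   # -> 2.0
```

The steep exponent is the whole story: each successive halving of the error demands an order of magnitude more data, which is why the talk framed Deep Learning as an HPC problem.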

All the linear algebra you need for AI

Rachel Thomas (fast.ai) spoke to a packed house, covering matrix and vector operations in Python (using TensorFlow), including broadcasting, and how to apply them in developing Machine Learning solutions.
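Broadcasting, one highlight of that session, is quick to demonstrate.  A minimal sketch using NumPy, whose broadcasting rules TensorFlow follows as well:

```python
import numpy as np

# Broadcasting sketch: subtracting a (3,)-shaped vector of column means from
# a (4, 3) matrix stretches the smaller array across the rows without copying.
X = np.arange(12, dtype=float).reshape(4, 3)   # shape (4, 3)
col_means = X.mean(axis=0)                     # shape (3,)
centered = X - col_means                       # (4, 3) - (3,) broadcasts
print(centered.mean(axis=0))                   # each column now has mean 0
```

The same one-liner without broadcasting would need an explicit loop over rows or a tiled copy of `col_means`, which is exactly the boilerplate these rules eliminate.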

Reinforcement Learning in the cloud

Melanie Warrick (Google) provided an excellent overview of Reinforcement Learning, walking through the Markov Decision Process (MDP), exploring the differences between policy learning and policy gradients, Deep Q-Networks (DQN) and a bit about the Asynchronous Advantage Actor-Critic (A3C) agent.
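To give a flavor of the policy-gradient side of that comparison, here is a minimal REINFORCE sketch on a two-armed bandit.  All the numbers (reward probabilities, learning rate, iteration count) are illustrative choices of mine, not material from the talk:

```python
import math
import random

# REINFORCE on a 2-armed bandit: arm 1 pays off more often, and the softmax
# policy should learn to prefer it.
rng = random.Random(0)
theta = [0.0, 0.0]            # softmax preferences for the two arms
MEANS = [0.2, 0.8]            # payoff probability of each arm
lr = 0.1

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    a = 0 if rng.random() < probs[0] else 1     # sample action from the policy
    r = 1.0 if rng.random() < MEANS[a] else 0.0  # sample its reward
    # REINFORCE update: grad of log pi(a) w.r.t. theta_i is 1[i == a] - pi_i
    for i in range(2):
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])

# The policy now strongly prefers the better arm.
print(round(softmax(theta)[1], 2))
```

Unlike DQN, which learns action values and acts greedily on them, this updates the action probabilities directly, which is the distinction the session drew.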

Industrial Robotics and deep reinforcement learning

Derik Pridmore (Osaro) presented a fascinating convergence of three things: the latest advances in DL and RL for robotics, the gap between those advances and real deployed industrial robots, and how his team at Osaro is working to close that gap.

Apart from the sessions, the Demos, Meet the Expert and Book Signings added value to the relatively small floor of exhibitors.  Two people I spoke to said that their Meet the Expert session was short but useful.  One even remarked, “As they say, a brief conversation with a real expert is well worth reading lots of white papers and case studies.  It lets me get to the heart of my own context.”

I took informal polls, trying to get a pulse of how attendees were taking in the Conference.  On both days, at the lunch table, I asked how many were hands-on with implementing Machine Learning.  It was two out of nine on the first day and two out of six on the second.  The rest were in Marketing, Brand Building, Business Development or Investing.  And the most valuable session?  One investor said it was the Intel Saffron presentation.  Another said it was Andrew Ng.  A third said it was Ion Stoica's session on Ray, the distributed execution framework for reinforcement learning applications from UC Berkeley.

Most definitely, the AI Conference is an event that adds tremendous value by connecting you to some of the latest advances in the field and to others working in AI.

Bio: Jitendra Mudhol and his team members at CollaMeta are passionate about designing and developing Machine Learning applications in Manufacturing and Utilities.  You may reach him at jsmudhol at collameta dot com.