Key Takeaways from AI Conference in San Francisco 2017 – Day 2
Highlights and key takeaways from day 2 of AI Conference San Francisco 2017, including current state review, future trends, and top recommendations for AI initiatives.
Lili Cheng, CVP, AI and Research Division, Microsoft highlighted the deep connection between AI and nature in her keynote on “AI mimicking nature”. Her team was fascinated by the technology behind aircraft that fly autonomously without engines by riding thermal updrafts. Instead of being limited to playing games, they wanted to expand the boundaries of software in the real world.
They took up the challenge of making a remote-controlled sailplane fly autonomously, nonstop, across thousands of miles using the power of AI. This required superhuman sensing, prediction, planning, and careful power management and harvesting, all on tight onboard computational resources with little room for error.
Using a low-power device, they built onboard capabilities for adaptive route and control computation. The batteries were designed to be recharged in flight through solar panels.
Microsoft has shared the code for this project on GitHub under the name AirSim.
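To make the sailplane's challenge concrete, here is a minimal sketch of the kind of onboard decision loop such a vehicle needs. All names, thresholds, and the sensor model below are illustrative assumptions, not taken from Microsoft's AirSim codebase:

```python
# Hypothetical sketch of an autonomous soaring decision loop.
# The function names, thresholds, and sensor model are illustrative
# assumptions only, not from Microsoft's AirSim codebase.

def estimate_lift(climb_rate_mps, sink_rate_mps=0.6):
    """Estimated air-mass lift: observed climb plus the glider's own sink."""
    return climb_rate_mps + sink_rate_mps

def choose_action(climb_rate_mps, battery_frac, lift_threshold=0.5):
    """Pick a high-level action from the current sensor readings.

    - Circle to stay inside a thermal when lift is strong.
    - Glide toward the next waypoint otherwise.
    - Loiter to recharge via solar panels when the battery runs low.
    """
    if battery_frac < 0.2:
        return "loiter_and_recharge"
    if estimate_lift(climb_rate_mps) > lift_threshold:
        return "circle_in_thermal"
    return "glide_to_waypoint"

# Strong lift and a healthy battery: keep circling in the thermal.
print(choose_action(climb_rate_mps=1.0, battery_frac=0.9))  # circle_in_thermal
```

Even this toy version shows why the problem is hard: every control decision trades progress against energy, and the real system must also predict where the next thermal will be.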
Steve Jurvetson, Partner, DFJ delivered an interesting talk on “Accelerating AI”. Referring to Ray Kurzweil’s version of Moore’s Law, he highlighted the steady exponential increase in compute capability per dollar, persisting even through the shifts in the dominant technologies of each era. This has consistently created opportunities for innovation as compute became more and more affordable.
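The curve Jurvetson described is simple compound doubling. A quick back-of-the-envelope calculation (using an assumed 1.5-year doubling period for illustration, not Kurzweil's exact historical fit) shows how quickly compute per dollar compounds:

```python
# Back-of-the-envelope compounding of compute per dollar.
# The 1.5-year doubling period is an illustrative assumption,
# not Kurzweil's exact historical fit.

def compute_growth(years, doubling_period_years=1.5):
    """Multiplicative increase in compute per dollar after `years`."""
    return 2 ** (years / doubling_period_years)

# Over 15 years with a 1.5-year doubling period: 2**10 = 1024x.
print(compute_growth(15))  # 1024.0
```

A thousandfold improvement over 15 years is why workloads that were once hopeless, such as training deep networks, become routine.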
The recent advances in machine learning and deep learning can be applied across a wide range of fields such as microbial design, satellite imagery, cybersecurity, and drug discovery. Such AI methods are also increasing the need for greater computation power which requires specialized computational substrates. Within the ASIC domain, we have different categories: digital, analog, and quantum.
Classical ASIC implementations have been suited to supervised, discriminative, deterministic, and parallelizable workloads. Recent research is showing the promise of quantum ASICs for unsupervised, generative, and probabilistic workloads, which are inherently difficult to run on GPUs. Google is currently working on its own quantum computers, which could offer massively more computational power than current technology.
Philippe Poutonnet, Head of Product Marketing, Cloud AI, Google delivered his keynote on “Build smart applications with your new super power: Cloud AI”. In recent years, deep learning has been rapidly adopted at Google, and today almost all Google products use it. The AI-powered Smart Reply feature in Inbox by Gmail currently accounts for about 10% of all responses sent on mobile.
He described the portfolio of Google Cloud services right from data ingestion all the way to production deployment.
He shared a case study on Airbus about removing cloud cover from satellite imagery to get a clear view of the Earth’s surface. Google Cloud AI tools reduced the development time from 180 months to 3 months while simultaneously dropping the error rate from 11% to 3%.
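The quoted improvements compound nicely; dividing the figures gives a 60x schedule reduction alongside the error-rate cut:

```python
# Ratios implied by the Airbus case-study figures quoted above.
dev_time_before_months = 180
dev_time_after_months = 3
error_before = 0.11
error_after = 0.03

speedup = dev_time_before_months / dev_time_after_months
error_reduction = error_before / error_after

print(speedup)                      # 60.0
print(round(error_reduction, 2))    # 3.67
```

In other words, the team shipped sixty times faster and still ended up with a model making roughly a quarter of the errors.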
Tim O'Reilly, Founder and CEO, O'Reilly Media gave the closing keynote on “Our Skynet moment”. When James Cameron’s Terminator movies introduced Skynet, a hostile, self-aware AI, few imagined that we would be living that science fiction within decades.
We are seeing highly polarized views on AI. Some leaders, such as Elon Musk, consider it one of the biggest threats to mankind, whereas others, such as Andrew Ng, have dismissed that as a premature concern.
We are too focused on the evil aspects of AI itself, rather than worrying about what those in power will do with AI. Face recognition technology has already been used to identify and target people, such as protesters wearing caps and scarves to cover their faces.
Nations are increasingly building autonomous weapons, including drones that can think for themselves. Unless we take swift policy action, we will have to brace for unintended consequences.
Lastly, he talked about his book “WTF?: What’s the Future and Why It’s Up to Us”, in which he explores the question “What if we are thinking about AI in the wrong way?”