
Key Takeaways from AI Conference in San Francisco 2017 – Day 2


Highlights and key takeaways from day 2 of the AI Conference San Francisco 2017, including a review of the current state, future trends, and top recommendations for AI initiatives.



Last week, experts from the AI world came together at the Artificial Intelligence Conference in San Francisco to discuss insights, opportunities, challenges, and trends in the rapidly expanding field of AI. The conference included hands-on training sessions, tutorials, a startup showcase (won by PipelineAI), keynotes, sessions, an expo, and social events.

Here is my report on Key Takeaways from AI Conference in San Francisco 2017 – Day 1.

Across the keynotes and sessions (on 9/19 and 9/20), the following points appeared in multiple talks and provide a sense of the current prevailing trends:

  • It’s the platform, stupid: Tech giants (Microsoft, Amazon, Google, Intel, IBM, and others) are investing heavily, in R&D as well as marketing, in bundled Cloud-plus-AI offerings. They are vigorously courting independent developers and start-ups with open-source libraries offering advanced capabilities, easy-access APIs to proprietary code, and free or cheap cloud compute and storage to pull them onto their platforms.
  • Deep Learning isn’t everything: The AI discussion is disproportionately focused on Deep Learning, partly due to its recent splendid success in a few niche areas. It will take some time for this obsession to fade, as interest in AI moves from experimentation to implementation.
  • AI – great potential but no clear path: Despite enormous progress in the last few years, many challenges remain in deploying AI at enterprise scale; the scarcity of good-quality labeled data is among the biggest.
  • Mainstream adoption is lagging: A majority of C-level executives agree that AI will have an impact on their industry. However, investment in and commitment to AI projects have been slow outside the high-tech sector.

Here are the key takeaways from Day 2 (Wednesday, September 20, 2017):

Michael Jordan, Distinguished Professor at UC Berkeley, gave his keynote on “How to escape saddle points efficiently”. This is a great time for AI and Machine Learning, given the immense interest and the pace of technological advances. However, theory and our understanding are lagging behind the challenges.

For a long time, one of the key focus areas of ML optimization has been how to avoid local minima. But many ML problems either have no spurious local minima, or finding the minima is not too hard. Instead, such techniques are often stalled by saddle points, which flatten out the learning curve: once stuck at a saddle point, you might stay there for a long time without realizing that better solutions exist elsewhere. The problem is particularly hard in high dimensions.
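
For concreteness, here is the canonical textbook example of a saddle point (my illustration, not a formula from the talk):

\[ f(x, y) = x^2 - y^2, \qquad \nabla f(0, 0) = (0, 0), \qquad \nabla^2 f(0, 0) = \mathrm{diag}(2, -2) \]

The gradient vanishes at the origin even though the origin is not a minimum: the Hessian has one negative eigenvalue, so the objective still decreases along the y-direction, but a gradient-based method arriving near the origin sees an almost-zero gradient and makes very slow progress.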


Recent papers show that gradient descent asymptotically avoids saddle points, but that it can still take exponential time to escape them. In a quick overview of gradient descent, he highlighted one of its great advantages: it is not slowed down significantly by a high number of dimensions.
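
For reference, the basic gradient descent update, and the standard dimension-free guarantee for L-smooth functions that underlies this point (a textbook result, not a formula from the talk), are:

\[ x_{t+1} = x_t - \eta \nabla f(x_t), \qquad \eta = \frac{1}{L} \;\Rightarrow\; \min_{t \le T} \|\nabla f(x_t)\| \le \varepsilon \text{ within } T = O\!\left(\frac{L\,(f(x_0) - f^{*})}{\varepsilon^{2}}\right) \text{ iterations} \]

Note that the iteration count depends on the smoothness constant and the target accuracy, but not explicitly on the dimension.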

He described the Perturbed Gradient Descent approach developed by his team and explained how it helps overcome the challenge of saddle points.
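
To make the idea concrete, here is a minimal sketch of perturbed gradient descent on the toy saddle above. It is illustrative only: the step size, gradient threshold, and perturbation radius are arbitrary placeholders rather than the carefully chosen values in the paper (Jin et al., 2017). The key trick is that when the gradient is nearly zero, a small random perturbation is injected, which with high probability puts some mass on the negative-curvature escape direction.

```python
import numpy as np

def f_grad(x):
    # Gradient of the toy objective f(x, y) = x^2 - y^2 (saddle at the origin).
    return np.array([2.0 * x[0], -2.0 * x[1]])

def perturbed_gradient_descent(x0, step=0.1, g_thresh=1e-3, radius=1e-2,
                               n_iters=500, seed=0):
    """Gradient descent that adds a small random perturbation whenever the
    gradient is nearly zero. Hyperparameters are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        g = f_grad(x)
        if np.linalg.norm(g) < g_thresh:
            # Near-zero gradient: could be a minimum or a saddle. Perturbing
            # within a small ball lets the iterate fall off a saddle.
            x = x + rng.uniform(-radius, radius, size=x.shape)
        else:
            x = x - step * g
        if abs(x[1]) > 1.0:  # clearly escaped the saddle on this toy problem
            break
    return x

# Plain gradient descent started exactly at the saddle (0, 0) never moves,
# since the gradient there is exactly zero; the perturbed variant escapes.
print(perturbed_gradient_descent([0.0, 0.0]))
```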


Jia Li, Head of R&D, Cloud AI and Machine Learning at Google, gave an inspirational keynote on “Why democratizing AI matters: Computing, data, algorithms, and talent”. We are already seeing wide-ranging applications of AI; however, much more is yet to come as AI is applied to less AI-savvy fields such as agriculture and healthcare. Experts in those fields do not know much about AI, while AI experts, in turn, do not know much about those fields.

Jia mentioned that one of her key lessons has been that the right dataset can make an AI problem substantially easier to solve. The great progress in computer vision is a good example. To teach a computer to comprehend images, we need to simulate the human representation of the world by providing the computer with multiple, properly tagged images of every object, shot in a variety of ways. ImageNet is one such dataset, and it has contributed to the steady decrease in the error rates of computer vision systems.


However, vision involves deeper problems than mere image classification, for example, image captioning and object relationships. Projects such as Microsoft COCO and Visual Genome are helping us solve those problems.

Can we learn anything meaningful from large-scale data without manual labeling? YFCC100M is one such dataset. Recent research shows that computers can make sense of noisy internet data by learning from a clean subset together with a knowledge graph such as Wikipedia.


Beyond data, computing and algorithms also need to be made easily accessible to democratize AI. Google has recently added TPUs to Google Cloud so that people working on AI do not need to worry about hardware. On algorithms, she mentioned that her team switched from phrase-based machine translation to Neural Machine Translation (NMT), which has led to significant improvements.

Finally, she emphasized the need to make AI learning resources available to everyone.