5 Tribes of Machine Learning: Nov 24 ACM Webinar with Pedro Domingos, moderated by Gregory Piatetsky
Prof. Pedro Domingos, a leading AI/Machine Learning researcher, will talk about the 5 main schools in machine learning, each with its own master algorithm, a possible universal Master Algorithm, and implications for society. KDnuggets Editor Gregory Piatetsky will moderate.
"The Five Tribes of Machine Learning (And What You Can Learn from Each),"
presented on Tuesday, November 24, 2015 at 12 pm ET (11 am CT/10 am MT/9 am PT/5 pm GMT) by Pedro Domingos, Professor of Computer Science at the University of Washington in Seattle and winner of the SIGKDD Innovation Award.
Gregory Piatetsky-Shapiro, President of KDnuggets, founder of the Knowledge Discovery in Databases (KDD) conferences, and co-founder of ACM SIGKDD, moderates.
(Scroll down to read Pedro Domingos' opinion on whether Deep Learning is THE Master Algorithm.)
(If you'd like to attend but can't make the live virtual event, register anyway to receive a recording of the webinar when it becomes available.)
Note: You can stream this and all ACM Learning Webinars on your mobile device, including smartphones and tablets.
Abstract: There are five main schools of thought in machine learning, and each has its own master algorithm - a general-purpose learner that can in principle be applied to any domain. The symbolists have inverse deduction, the connectionists have backpropagation, the evolutionaries have genetic programming, the Bayesians have probabilistic inference, and the analogizers have support vector machines. What we really need, however, is a single algorithm combining the key features of all of them. In this webinar Pedro Domingos will summarize the five paradigms and describe his work toward unifying them, including in particular Markov logic networks. Pedro will conclude by speculating on the new applications that a universal learner will enable, and how society will change as a result.
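As a small, purely illustrative aside (not material from the webinar itself): the Bayesians' master algorithm, probabilistic inference, boils down to updating beliefs with Bayes' rule, which can be sketched in a few lines of plain Python. The spam-filter numbers below are made up for the example.

```python
# Illustrative sketch of Bayesian updating (Bayes' rule), the core of the
# Bayesians' "probabilistic inference":
#   P(H|E) = P(E|H) * P(H) / P(E)

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Hypothetical spam filter: start 50/50, then observe a word that appears
# in 90% of spam but only 10% of legitimate mail.
posterior = bayes_update(prior=0.5, likelihood_h=0.9, likelihood_not_h=0.1)
print(round(posterior, 2))  # 0.9
```

Each additional piece of evidence feeds the posterior back in as the new prior, which is how a naive Bayes classifier accumulates evidence word by word.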
Webinar Duration: 60 minutes (including audience Q&A)
Pedro Domingos is a professor of computer science at the University of Washington in Seattle. He is a winner of the SIGKDD Innovation Award, the highest honor in data science. He is a Fellow of the Association for the Advancement of Artificial Intelligence, and has received a Fulbright Scholarship, a Sloan Fellowship, the National Science Foundation's CAREER Award, and numerous best paper awards. He received his Ph.D. from the University of California at Irvine and is the author or co-author of over 200 technical publications. He has held visiting positions at Stanford, Carnegie Mellon, and MIT. He co-founded the International Machine Learning Society in 2001. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science, including scaling learning algorithms to big data, maximizing word of mouth in social networks, unifying logic and probability, and deep learning.
Gregory Piatetsky-Shapiro is President of KDnuggets and an expert in business analytics, data mining, and data science. He was Chief Scientist at two startups, is the founder of the Knowledge Discovery in Databases (KDD) conferences, and is co-founder and past chair (2005-2009) of ACM SIGKDD, the leading professional organization for Knowledge Discovery and Data Mining. Gregory received the ACM SIGKDD Service Award (2000) and the IEEE ICDM Outstanding Service Award (2007) for contributions to the data mining field and community.
Gregory Piatetsky: Soon after his book The Master Algorithm was published, I asked Pedro whether Deep Learning is THE Master Algorithm. Here is his answer:
The connectionists certainly have the wind in their sails these days, and deep learning, with its stunning successes in one area after another, is a tantalizing preview of what having the Master Algorithm will be like. But at the end of the day it's still a far cry from the real thing, because it only solves one of the major problems a general-purpose learner needs to solve: assigning credit for successes and blame for errors to the different parts of a complex system.
It doesn't allow different pieces of knowledge to be composed in arbitrary ways (like symbolist algorithms do), evolve structure (like evolutionary algorithms), properly handle uncertainty (like Bayesian methods), or generalize to very different situations (like analogical reasoning).
Right now the connectionists are moving from vision and speech, where their early successes were, to things like language understanding and commonsense reasoning, but I think it's going to be much harder going there, because of all the missing pieces. And even vision and speech, contrary to some claims we hear, are still very far from solved.
But if we can combine deep learning with the key features of the other approaches, we will certainly be on the path to the Master Algorithm.
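The credit-assignment role Pedro attributes to backpropagation can be seen in a toy example: a tiny two-layer network trained on XOR, where the chain rule distributes the output error ("blame") back to every individual weight. This is an illustrative sketch in plain Python, not code from the book or webinar.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny network: 2 inputs -> 2 hidden units -> 1 output
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = total_loss()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: the chain rule assigns each weight its share
        # of the blame for the output error.
        dy = 2 * (y - t) * y * (1 - y)            # output-layer delta
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # hidden-layer delta
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = total_loss()
print(after < before)  # training reduces the squared error
```

The backward pass is the whole point: without it there is no principled way to decide which of the many weights in a deep network caused a given mistake, which is exactly the problem Pedro says backpropagation solves (and the only one of the five tribes' problems it solves).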