Does Deep Learning Come from the Devil?

Deep learning has revolutionized computer vision and natural language processing. Yet the mathematics explaining its success remains elusive. At the Yandex conference on machine learning prospects and applications, Vladimir Vapnik offered a critical perspective.




Over the past week in Berlin, I attended Machine Learning: Prospects and Applications, a conference of invited speakers from the academic machine learning community. Organized by Yandex, Russia's largest search engine, the conference prominently featured the themes of Deep Learning and Intelligent Learning, two concepts that were often taken to be in opposition. Although I attended as a speaker and a participant on the deep learning panel, the highlight of the conference was witnessing the clash between the empiricist and mathematical philosophies espoused by many leading theorists and practitioners.

The first day, which featured deep learning, was capped by an evening panel discussion. Moderated by Dr. Li Deng, the discussion challenged speakers from the deep learning community, including myself, to explain machine learning's mathematical underpinnings and to offer a vision of its future. Questions about model interpretability, a topic I addressed in a previous post, were abundant, particularly concerning applications to medicine. On Wednesday, a second evening of discussion was held. Here, Vladimir Vapnik, co-inventor of the support vector machine and widely considered one of the fathers of statistical learning theory, held forth on his theory of knowledge transfer from an intelligent teacher. He also offered a philosophical view spanning machine learning, mathematics, and the source of intelligence. Perhaps most controversially, he took on deep learning, challenging its ad hoc approach.


This past summer, I posted an article suggesting that deep learning's success more broadly reflects the triumph of empiricism in the setting of big data. I argued that, absent the risk of overfitting, the set of methods that can be validated on real data may be far larger than the set we can guarantee to work from mathematical first principles. Following the conference, I'd like to revisit this topic by presenting an alternative perspective, specifically the challenges Vladimir Vapnik put forth there.
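
Before turning to those challenges, it may help to pin down the overfitting claim above with a minimal sketch in standard learning-theory notation (my illustration, not something presented at the conference): a holdout estimate concentrates around the true risk no matter how the model was found, whereas an a priori guarantee must pay for the capacity of the entire model class.

```latex
% Illustration (mine, not from the talks), assuming i.i.d. data and a
% loss bounded in [0,1]. For a single fixed hypothesis h evaluated on a
% fresh holdout set of n examples, Hoeffding's inequality bounds the gap
% between empirical risk $\hat{R}(h)$ and true risk $R(h)$ with no
% reference to how h was produced:
\[
  \Pr\bigl( \lvert \hat{R}(h) - R(h) \rvert > \epsilon \bigr)
  \le 2\exp(-2 n \epsilon^2).
\]
% A first-principles guarantee must instead hold uniformly over a
% hypothesis class $H$; one standard VC-style form charges for the
% capacity of $H$ via its VC dimension $d$:
\[
  R(h) \le \hat{R}(h)
  + \sqrt{\frac{d\bigl(\ln(2n/d) + 1\bigr) + \ln(4/\delta)}{n}}
  \qquad \text{for all } h \in H, \text{ with probability } 1-\delta.
\]
% With millions of holdout examples, the first bound certifies almost
% any method we care to evaluate, while the second vouches a priori only
% for classes of modest capacity -- the asymmetry described above.
```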

To preempt any confusion: I am a deep learning researcher. I do not dismiss deep learning, and I respect both its pioneers and its torchbearers. But I also believe we should be open to the possibility that some mathematical theory will eventually either explain its success more fully or point the way to a new approach. Clearly, there is value in digesting both the arguments for the deep learning approach and those critical of it, and in that spirit I present some highlights from the conference, particularly from Professor Vapnik's remarks.

Big Data and Deep Learning as Brute Force

Although Professor Vapnik offered several angles on deep learning, perhaps this one is the most central: during the audience discussion on Intelligent Learning, Vapnik invoked Einstein's metaphorical notion of God. In short, Vapnik posited that ideas and intuitions come either from God or from the devil. The difference, he suggested, is that God is clever, while the devil is not.


Vapnik suggested that throughout his career as a mathematician and machine learning researcher, the devil has always appeared in the form of brute force. While acknowledging the impressive performance of deep learning systems on practical problems, he suggested that big data and deep learning both have the flavor of brute force. One audience member asked whether Professor Vapnik believed that evolution (which presumably produced human intelligence) was a brute force algorithm. In keeping with his stated distaste for speculation, Professor Vapnik declined to offer any guesses about evolution. It also seems appropriate to mention that Einstein's intuitions about how God might design the universe, while remarkably fruitful, did not always pan out. Most notably, his intuition that "God does not play dice" appears to conflict with our modern understanding of quantum mechanics (see this great, readable post on the topic by Stephen Hawking).

While I may not agree that deep learning necessarily equates to brute force, I see more clearly the argument against modern attitudes toward big data. As Professor Vapnik and Professor Nathan Intrator of Tel Aviv University both suggested, a baby doesn't need billions of labeled examples in order to learn. In other words, it may be easy to learn effectively with gigantic labeled datasets, but by relying upon them, one may miss something fundamental about the nature of learning. Perhaps if our algorithms can learn only from gigantic datasets what should be intrinsically learnable from hundreds of examples, we have succumbed to laziness.

Deep Learning or Deep Engineering

Another perspective Professor Vapnik offered is that deep learning is not science. More precisely, he said that it distracts from the core mission of machine learning, which he posited to be the understanding of mechanism. In more elaborate remarks, he suggested that studying machine learning is like trying to build a Stradivarius, while engineering solutions to practical problems is more like being a violinist. In this sense, a violinist may produce beautiful music and have an intuition for how to play, yet not formally understand what they are doing. By extension, he suggested that many deep learning practitioners have a great feeling for data and for engineering, but similarly do not truly know what they are doing.

Do Humans Invent Anything?

A final sharp idea Professor Vapnik raised was whether we discover or invent algorithms and models. In Vapnik's view, we do not really invent anything. Addressing the audience, he said that he is "not so smart as to invent anything" and, by extension, presumably no one else is either. More diplomatically, he suggested that the things we invent (if any) are trivial next to those intrinsic in nature, and that real knowledge derives only from an understanding of mathematics. Deep learning, in which models are frequently invented and branded and techniques patented, seems somewhat artificial compared to more mathematically motivated machine learning. Around this time, he challenged the audience to offer a definition of deep learning; most audience members, it seemed, were reluctant to do so. At other times, audience members challenged his view by invoking deep learning's biological inspiration. To this, Vapnik asked, "do you know how the brain works?"

Zachary Chase Lipton is a PhD student in the Computer Science and Engineering department at the University of California, San Diego. Funded by the Division of Biomedical Informatics, he is interested in both theoretical foundations and applications of machine learning. In addition to his work at UCSD, he has interned at Microsoft Research Labs and as a Machine Learning Scientist at Amazon, is a Contributing Editor at KDnuggets, and has signed on as an author at Manning Publications.
