Does Deep Learning Come from the Devil?

Deep learning has revolutionized computer vision and natural language processing. Yet the mathematics explaining its success remains elusive. At the Yandex conference on machine learning prospects and applications, Vladimir Vapnik offered a critical perspective.

By Zachary Chase Lipton

Over the past week in Berlin, I attended Machine Learning: Prospects and Applications, a conference of invited speakers from the academic machine learning community. Organized by Yandex, Russia's largest search engine, the conference prominently featured the themes Deep Learning and Intelligent Learning, two concepts that were often taken to be in opposition. Although I attended as a speaker and participant on the deep learning panel, the highlight of the conference was witnessing the clash of philosophies between empiricism and mathematics expressed by many leading theorists and practitioners.

The first day, which featured deep learning, was capped by an evening panel discussion. Moderated by Dr. Li Deng, the discussion challenged speakers from the deep learning community, including myself, to explain machine learning's mathematical underpinnings and to offer a vision of its future. Questions about model interpretability, a topic I addressed in a previous post, were abundant, particularly concerning applications to medicine. On Wednesday, a second evening of discussion was held. Here, Vladimir Vapnik, the co-inventor of the support vector machine and widely considered among the fathers of statistical learning theory, held forth on his theory of knowledge transfer from an intelligent teacher. Additionally, he offered a philosophical view spanning machine learning, mathematics, and the source of intelligence. Perhaps most controversially, he took on deep learning, challenging its ad hoc approach.

This past summer, I posted an article suggesting that deep learning's success more broadly reflected the triumph of empiricism in the setting of big data. I argued that, absent the risk of overfitting, the set of methods that can be validated on real data might be much larger than the set we can guarantee to work from mathematical first principles. Following the conference, I'd like to revisit this topic by presenting an alternative perspective, specifically the challenges put forth by Vladimir Vapnik.

To preempt any confusion: I am a deep learning researcher. I do not personally dismiss deep learning, and I respect both its pioneers and torchbearers. But I also believe we should be open to the possibility that some mathematical theory will eventually either explain its success more fully or point the way to a new approach. Clearly, there is value in digesting both the arguments for the deep learning approach and those critical of it, and in that spirit I present some highlights from the conference, particularly from Professor Vapnik's remarks.

Big Data and Deep Learning as Brute Force

Although Professor Vapnik had several angles on deep learning, this is perhaps the most central: during the audience discussion on Intelligent Learning, Vapnik invoked Einstein's metaphorical notion of God. In short, Vapnik posited that ideas and intuitions come either from God or from the devil. The difference, he suggested, is that God is clever, while the devil is not.

Vapnik suggested that, throughout his career as a mathematician and machine learning researcher, the devil has always appeared in the form of brute force. While acknowledging the impressive performance of deep learning systems at solving practical problems, he suggested that big data and deep learning both have the flavor of brute force. One audience member asked whether Professor Vapnik believed that evolution (which presumably resulted in human intelligence) was a brute force algorithm. In keeping with his stated distaste for speculation, Professor Vapnik declined to offer any guesses about evolution. It also seems appropriate to mention that Einstein's intuitions about how God might design the universe, while remarkably fruitful, did not always pan out. Most notably, Einstein's intuition that "God does not play dice" appears to conflict with our modern understanding of quantum mechanics (see this great, readable post on the topic by Stephen Hawking).

While I may not agree that deep learning necessarily equates to brute force, I see more clearly the argument against modern attitudes towards big data. As Dr. Vapnik and Professor Nathan Intrator of Tel Aviv University both suggested, a baby doesn't need billions of labeled examples in order to learn. In other words, it may be easy to learn effectively with gigantic labeled datasets, but by relying upon them, one may miss something fundamental about the nature of learning. Perhaps, if our algorithms can learn only with gigantic datasets what should be intrinsically learnable from hundreds of examples, we have succumbed to laziness.

Deep Learning or Deep Engineering?

Another perspective Professor Vapnik offered is that deep learning is not science. More precisely, he said that it distracts from the core mission of machine learning, which he posited to be the understanding of mechanism. In more elaborate remarks, he suggested that the study of machine learning is like trying to build a Stradivarius, while engineering solutions to practical problems is more like being a violinist. In this sense, a violinist may produce beautiful music and have an intuition for how to play, but not formally understand what she is doing. By extension, he suggested that many deep learning practitioners have a great feeling for data and for engineering, but similarly do not truly know what they are doing.

Do Humans Invent Anything?

A final sharp idea raised by Professor Vapnik was whether we discover or invent algorithms and models. In Vapnik's view, we do not really invent anything. Specifically, he told the audience that he is "not so smart as to invent anything"; by extension, presumably, neither is anyone else. More diplomatically, he suggested that the things we invent (if any) are trivial next to those which are intrinsic in nature, and that the only source of real knowledge derives from an understanding of mathematics. Deep learning, in which models are frequently invented, branded, and techniques patented, seems somewhat artificial compared to more mathematically motivated machine learning. Around this time, he challenged the audience to offer a definition of deep learning; most audience members, it seemed, were reluctant to offer one. At other times, audience members challenged his view by invoking deep learning's biological inspiration. To this, Dr. Vapnik asked, "Do you know how the brain works?"

Zachary Chase Lipton is a PhD student in the Computer Science Engineering department at the University of California, San Diego. Funded by the Division of Biomedical Informatics, he is interested in both theoretical foundations and applications of machine learning. In addition to his work at UCSD, he has interned at Microsoft Research Labs and as a Machine Learning Scientist at Amazon, is a Contributing Editor at KDnuggets, and has signed on as an author at Manning Publications.

