The Promise of Deep Learning
Humans have long dreamed of creating machines that think. More than 100 years before the first programmable computer was built, inventors wondered whether devices made of rods and gears might become intelligent. And when Alan Turing, one of the pioneers of computing in the 1940s, set a goal for computer science, he described a test, later dubbed the Turing Test, which measured a computer’s performance against the behavior of humans.
In the early days of my academic field, artificial intelligence, scientists tackled problems that were difficult for humans but relatively easy for computers, such as large-scale mathematical calculations. In more recent years, we're taking on tasks that are easy for people to perform but hard to describe to a machine, tasks humans solve "without thinking," such as recognizing spoken words or faces in a crowd.
That more difficult quest gave rise to the domain of machine learning: the ability of machines to learn. This is what interests me. My goal is not really to make machines that think the way humans do; my aim is to understand the fundamental principles that may enable an entity, machine or living being, to be intelligent. Long ago I made the bet that this would come about through such an entity's ability to learn, and my focus is on building machines that can learn and understand the world by themselves, that is, learn to make sense of it.
The reason I'm laying out this chronology is that I believe we're at a turning point in the history of artificial intelligence, and, indeed, of computing itself. Thanks to more powerful computers, the availability of large and varied datasets, and advances in algorithms, we're able to cross a threshold that has long held back computer science. Machine learning is shifting from a highly manual process, in which humans had to design good representations for each task of interest, to an automated one, in which machines learn more the way babies do: through experience, building internal representations that help them make sense of the world. This is the field of deep learning.
Deep learning isn’t brand new. Indeed, when I was a student in the 1980s, it was the concept of neural networks, the precursor of deep learning, that got me interested in pursuing an academic career in computer science. What’s new is that the accumulation of many scientific and technical advances has yielded breakthroughs in AI applications such as speech recognition, computer vision, and natural language processing. This has brought into the field a large group of researchers, mostly graduate students, and we’re now making progress in deep learning at a gallop.
We're able to do that because of advances in creating hierarchies of concepts and representations that computers discover by themselves. The hierarchies allow a computer to learn complicated concepts by building them out of simpler ones. This is also how humans learn and build their understanding of the world: they gradually refine their mental model to better fit what they observe, and they compose older ideas into new ones that better fit the evidence, the data.
For example, a deep learning system can represent the concept of an image of a cat by combining simpler concepts, such as corners and contours, which are in turn defined in terms of edges. But we don't have to teach it explicitly about these intermediate concepts; it learns them on its own. Nor do we have to show the system pictures of every possible cat color, shape, and behavior for it to correctly identify a Siamese cat somersaulting in a photograph. When it "sees" a cat, it "knows" it is one.
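To make the idea of a learned hierarchy concrete, here is a minimal sketch in plain Python with NumPy of how a stack of layers composes simple features into more abstract ones. The layer widths and the edge/contour labels in the comments are illustrative assumptions, not the architecture of any particular system, and this network is untrained, so its features are random.

```python
import numpy as np

def relu(z):
    # A simple nonlinearity; each layer re-combines the features below it.
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Hypothetical layer widths: pixels -> edge detectors -> contour
# detectors -> object-level features. The sizes are illustrative only.
sizes = [784, 256, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(pixels):
    # Compose representations layer by layer. In a trained network, early
    # layers tend to respond to edges, middle ones to corners and contours,
    # and the deepest to object parts; the hierarchy is discovered from
    # data, not programmed by hand.
    h = pixels
    for W in weights:
        h = relu(h @ W)
    return h

scores = forward(rng.standard_normal(784))  # untrained: random features
print(scores.shape)  # (10,)
```

The point of the sketch is the composition itself: no line of code mentions "edges" or "cats"; those intermediate concepts emerge, during training, in the values of the weight matrices.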
I'm privileged to be part of a troika of computer scientists, along with Geoffrey Hinton and Yann LeCun, who are widely credited with spearheading advances in this field. We co-authored a paper, Deep Learning, which was published in the journal Nature in May 2015 and laid out the promise of our branch of AI. But this isn't a field where a few "media stars" are doing all that needs to be done. Producing the advances that are possible, and finding applications for them, will require thousands of scientists and engineers, in academia and in industry.
That's why I've been dedicated to rallying people to our exciting project. I'm co-authoring a book, Deep Learning, with Ian Goodfellow and Aaron Courville. Our core audiences are university students studying machine learning and software engineers working in the wide variety of industries that are likely to find important uses for it. This book-in-progress is posted on the Web, and we welcome people to read it, learn from it, and give us feedback.
Which brings me to another key point: I'm an advocate of open science. Like open source developers, participants in the open science movement believe we should share knowledge as soon as we gain it, both to quicken the pace at which the boundaries of science are pushed and for the benefit of all. Many of my research colleagues and I contribute all of our deep learning inventions to the Theano project and its derivatives on GitHub. There, anybody who is building deep learning systems can use the algorithms and programming tools, and we urge them to contribute back to the project; hundreds already do so.
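To give a flavor of those tools, here is a minimal sketch in Theano's symbolic style: you describe a computation as an expression graph, compile it, and then call it like an ordinary function. The tiny softmax model and the variable names are illustrative assumptions of mine, not code taken from the project itself.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic input: a minibatch of flattened images (shape chosen for illustration).
x = T.matrix('x')

# Shared (learnable) parameters for a tiny softmax classifier.
W = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(10, dtype=theano.config.floatX), name='b')

# Build a symbolic expression graph rather than computing values directly.
p_y_given_x = T.nnet.softmax(T.dot(x, W) + b)

# Compile the graph into a callable function (Theano can target CPU or GPU).
predict = theano.function(inputs=[x], outputs=T.argmax(p_y_given_x, axis=1))

print(predict(np.zeros((3, 784), dtype=theano.config.floatX)))
```

Because the whole computation is a graph, the library can differentiate it automatically and optimize it for the available hardware, which is a large part of what makes such tools useful to people building deep learning systems.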
Just as sharing is essential to open science, so is collaboration, the kind that's done transparently. The whole enterprise of science is a giant brainstorm. The Montreal Institute for Learning Algorithms (MILA), with its 60 researchers, including 5 professors, contributes to it through numerous collaborative research projects with scientists in universities and industry.
The newest of our collaborative research partners is IBM. We look forward to working with scientists and engineers in IBM Research and the Watson Group on a very ambitious research agenda, including deep learning for language, speech, and vision. We believe that, together, we'll be able to scale up and extend deep learning methods by using powerful computers to take on very large datasets. That will help machines learn more, across broader domains, faster, and from a larger set of data sources, including the vast amounts of unlabeled data that have not been curated by humans.
I’m tremendously excited about the future of deep learning. We’ve made rapid progress, and while we’re far from solving the great riddle of what it will take to enable machines to truly understand the world, I’m very hopeful that we’ll crack it.
And then the floodgates will open. Once computers truly understand text, speech, images, and sounds, they will become our indispensable assistants. This will revolutionize the way we interact with computers, helping us live more conveniently in our day-to-day lives and perform more effectively at work. It will enable society to take on some of the grand challenges that matter to us, such as curing deadly diseases and spreading knowledge and wealth more broadly. Just as importantly, it will help us understand who we are, and the part of who we are that has always fascinated me: how intelligence arises. This has been my dream for more than 30 years, and it's fast becoming our reality.