Training Neural Networks: Q&A with Ian Goodfellow, Google

Neural networks require considerable time and computational firepower to train. Previously, researchers believed that neural networks were costly to train because gradient descent slows down near local minima or saddle points. At the RE.WORK Deep Learning Summit in San Francisco, Ian Goodfellow, Research Scientist at Google, will challenge that view and look deeper to find the true bottlenecks in neural network training.
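To see what "slows down near a saddle point" means concretely, consider gradient descent on the toy surface f(x, y) = x² − y², whose only critical point, the origin, is a saddle. A short illustrative sketch (an editorial example, not taken from the talk):

```python
# Toy illustration: gradient descent on f(x, y) = x^2 - y^2, whose only
# critical point (0, 0) is a saddle. Started exactly on the x-axis, the
# iterates crawl toward the saddle and the gradient norm shrinks, so the
# progress made per step slows toward zero.

def grad(x, y):
    return 2 * x, -2 * y  # analytic gradient of f(x, y) = x^2 - y^2

x, y, lr = 1.0, 0.0, 0.1
for step in range(1, 51):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:+.5f}, y={y:+.5f}, |grad|={(gx**2 + gy**2)**0.5:.5f}")
```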

Before joining Google, Ian earned a PhD in machine learning from Université de Montréal, advised by Yoshua Bengio and Aaron Courville. During his studies, which were funded by a Google PhD Fellowship in Deep Learning, he wrote Pylearn2, an open-source deep learning research library, and introduced a variety of new deep learning algorithms. Before that, he obtained a BSc and MSc in Computer Science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group.

We caught up with Ian ahead of the summit in January 2016 to hear more about his current work and thoughts on the future of deep learning.

What are you currently working on in deep networks?
I am interested in developing generic methods that make any neural network train faster and generalize better. To improve generalization, I study the way neural networks respond to “adversarial examples” that are intentionally constructed to confuse the network. To improve optimization, I study the structure of neural network optimization problems and determine which factors cause learning to be slow.
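Goodfellow does not spell out the construction here, but the best-known recipe is his own fast gradient sign method (Goodfellow et al., "Explaining and Harnessing Adversarial Examples", 2014): perturb the input by a small step in the direction of the sign of the loss gradient. A minimal PyTorch sketch, with a randomly initialized model and random data standing in for a real classifier and image:

```python
import torch
import torch.nn.functional as F

# Fast gradient sign method (FGSM): x_adv = x + eps * sign(dL/dx).
# The model and data below are random stand-ins purely for illustration.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
x = torch.rand(1, 784, requires_grad=True)   # stand-in "image" in [0, 1]
y = torch.tensor([3])                        # stand-in true label

loss = F.cross_entropy(model(x), y)
loss.backward()                              # populates x.grad

eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# With a trained model and a small eps, the prediction often flips even
# though the perturbation is barely perceptible; with this random model
# the outcome is arbitrary, but the mechanics are the same.
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```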

What are the key factors that have enabled recent advancements in deep learning? 
The basic machine learning algorithms have been in place since the 1980s, but until very recently, we were applying these algorithms to neural networks with fewer neurons than a leech. Unsurprisingly, such small networks performed poorly. Fast computers with larger memory capacity and better software infrastructure have allowed us to train neural networks that are large enough to perform well. Larger datasets are also very important. Some changes in machine learning algorithms, like designing neural network layers to be very linear, have also led to noticeable improvements.
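"Very linear" is most likely a reference to piecewise-linear activations such as ReLU, which replace saturating nonlinearities like the sigmoid. A rough sketch of why that matters (an illustration, not from the interview): compare how much gradient survives a deep stack under each activation.

```python
import torch

# Compare the gradient reaching the input of a 20-layer stack under a
# saturating activation (sigmoid) versus a piecewise-linear one (ReLU).
def input_grad_norm(activation_cls, depth=20, width=64):
    torch.manual_seed(0)
    layers = []
    for _ in range(depth):
        layers += [torch.nn.Linear(width, width), activation_cls()]
    net = torch.nn.Sequential(*layers)
    x = torch.randn(1, width, requires_grad=True)
    net(x).sum().backward()
    return x.grad.norm().item()

# The sigmoid's derivative is at most 0.25, so gradients shrink rapidly
# with depth; ReLU passes gradients through active units unchanged, so
# the surviving gradient is typically orders of magnitude larger.
print("sigmoid:", input_grad_norm(torch.nn.Sigmoid))
print("relu:   ", input_grad_norm(torch.nn.ReLU))
```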

What are the main types of problems now being addressed in the deep learning space?
There is a gold rush to be the first to apply existing deep learning algorithms to new application areas. Every day there are new articles about deep learning for counting calories from photos, deep learning for separating two voices in a recording, and so on.

What are the practical applications of your work and what sectors are most likely to be affected?
My work is generic enough that it impacts everything we use neural networks for. Anything you want to do with a neural net, I aim to make faster and more accurate.

What developments can we expect to see in deep learning in the next 5 years?
I expect that within five years we will have neural networks that can summarize what happens in a video clip and that can generate short videos of their own. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well. I also predict that neural networks will become an important tool in other scientific disciplines. For example, neural networks could be trained to model the behavior of genes, drugs, and proteins and then used to design new medicines.

What advancements excite you most in the field?
Recent extensions of variational autoencoders and generative adversarial networks have greatly improved the ability of neural networks to generate realistic images. Generating data has been studied constantly for decades, and we still do not seem to have quite the right algorithm for it, but the last year or so has shown that we are getting much closer.
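As a refresher on the adversarial idea (Goodfellow and collaborators introduced GANs in 2014): a generator maps noise to samples while a discriminator is trained to distinguish those samples from real data, and the two improve against each other. A minimal, illustrative PyTorch sketch that fits a one-dimensional Gaussian; the architecture and hyperparameters are arbitrary choices:

```python
import torch

torch.manual_seed(0)
G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
D = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)          # target distribution: N(3, 0.5^2)
    fake = G(torch.randn(64, 8))                   # generator samples from noise

    # Discriminator step: push real toward label 1, fake toward label 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator (the non-saturating GAN loss).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f}  (target 3.00, 0.50)")
```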

Ian Goodfellow will be speaking at the Deep Learning Summit in San Francisco on 28-29 January 2016, alongside speakers from Baidu, Twitter, Clarifai, MIT, and more.
