Training Neural Networks: Q&A with Ian Goodfellow, Google
Neural networks require considerable time and computational firepower to train. Previously, researchers believed that neural networks were costly to train because gradient descent slows down near local minima or saddle points. At the RE.WORK Deep Learning Summit in San Francisco, Ian Goodfellow, Research Scientist at Google, will challenge that view and look deeper to find the true bottlenecks in neural network training.
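To make the saddle-point slowdown concrete, here is a toy illustration (not from the talk): on the quadratic saddle f(x, y) = x^2 - y^2, plain gradient descent started near the saddle point takes many iterations before the unstable direction carries it away. The function, starting point, and step size below are arbitrary choices for illustration.

```python
import numpy as np

# Toy saddle: f(x, y) = x**2 - y**2 has a saddle point at the origin.
# The x direction is attracting; the y direction is the (slow) escape route.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

p = np.array([1.0, 1e-6])   # start almost exactly on the attracting axis
lr = 0.1
for step in range(60):
    p = p - lr * grad(p)
    if step % 20 == 0:
        print(step, p)
# x collapses quickly, but y needs roughly log(1/y0) / (2 * lr) steps to grow,
# so the iterate lingers near the saddle for a long time.
```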
Before joining the Google team, Ian earned a PhD in machine learning from Université de Montréal, under his advisors Yoshua Bengio and Aaron Courville. During his studies, which were funded by the Google PhD Fellowship in Deep Learning, he wrote Pylearn2, an open source deep learning research library, and introduced a variety of new deep learning algorithms. Previously, he obtained a BSc and MSc in Computer Science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group.
We caught up with Ian ahead of the summit in January 2016 to hear more about his current work and thoughts on the future of deep learning.
What are you currently working on in deep networks?
I am interested in developing generic methods that make any neural network train faster and generalize better. To improve generalization, I study the way neural networks respond to “adversarial examples” that are intentionally constructed to confuse the network. To improve optimization, I study the structure of neural network optimization problems and determine which factors cause learning to be slow.
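The adversarial examples Goodfellow describes can be constructed very cheaply. The sketch below uses the fast gradient sign method, a construction from his own work on adversarial examples, though the interview does not name it; the model, epsilon, and data here are placeholders for illustration, not anything from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Build an adversarial example with the fast gradient sign method.

    model : a differentiable classifier returning logits
    x     : input batch
    y     : integer class labels, shape (N,)
    eps   : size of the perturbation in the max-norm ball
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss the attacker wants to increase
    loss.backward()
    # Step in the direction that increases the loss fastest under a max-norm budget.
    x_adv = x + eps * x.grad.sign()
    return x_adv.detach()

# Hypothetical usage with a tiny stand-in model and a random "image" batch.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_example(model, x, y)
print((x_adv - x).abs().max())   # perturbation stays within eps
```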
What are the key factors that have enabled recent advancements in deep learning?
The basic machine learning algorithms have been in place since the 1980s, but until very recently, we were applying these algorithms to neural networks with fewer neurons than a leech. Unsurprisingly, such small networks performed poorly. Fast computers with larger memory capacity and better software infrastructure have allowed us to train neural networks that are large enough to perform well. Larger datasets are also very important. Some changes in machine learning algorithms, like designing neural network layers to be very linear, have also led to noticeable improvements.
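The "very linear" layers Goodfellow refers to are usually read as piecewise-linear activations such as rectified linear units. The short comparison below is an illustration of the difference, not something from the interview: a stack of saturating sigmoids shrinks the gradient at every layer, while a stack of ReLUs passes it through unchanged.

```python
import torch

depth = 20

# Saturating activation: each sigmoid layer multiplies the gradient by s * (1 - s) <= 0.25,
# so almost nothing reaches the input after 20 layers.
x = torch.tensor(2.0, requires_grad=True)
h = x
for _ in range(depth):
    h = torch.sigmoid(h)
h.backward()
print("sigmoid gradient after 20 layers:", x.grad.item())

# Piecewise-linear activation: on the active side ReLU has slope 1,
# so the gradient survives the full depth.
x = torch.tensor(2.0, requires_grad=True)
h = x
for _ in range(depth):
    h = torch.relu(h)
h.backward()
print("relu gradient after 20 layers:", x.grad.item())
```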
What are the main types of problems now being addressed in the deep learning space?
There is a gold rush to be the first to use existing deep learning algorithms on new application areas. Every day, there are new articles about deep learning for counting calories from photos, deep learning for separating two voices in a recording, etc.
What are the practical applications of your work and what sectors are most likely to be affected?
My work is generic enough that it impacts everything we use neural networks for. Anything you want to do with a neural net, I aim to make faster and more accurate.
What developments can we expect to see in deep learning in the next 5 years?
I expect that within five years we will have neural networks that can summarize what happens in a video clip and that can generate short videos. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well. I also predict that neural networks will become an important tool in other scientific disciplines. For example, neural networks could be trained to model the behavior of genes, drugs, and proteins and then used to design new medicines.
What advancements excite you most in the field?
Recent extensions of variational auto-encoders and generative adversarial networks have greatly improved the ability of neural networks to generate realistic images. Generating data is a problem that has been studied for decades, and we still do not seem to have the right algorithm for it. The last year or so has shown that we are getting much closer, though.
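For readers unfamiliar with generative adversarial networks, the following is a heavily simplified sketch, not from the interview: it alternates a discriminator update and a generator update on toy 1-D data, so the generator learns to mimic a target distribution. The architectures, data distribution, and hyperparameters are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Minimal GAN training loop on toy 1-D data: the generator learns to mimic
# samples from N(4, 1.25) starting from standard normal noise.
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator (logits)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 4.0 + 1.25 * torch.randn(64, 1)   # samples from the data distribution
    noise = torch.randn(64, 1)
    fake = G(noise)

    # Discriminator: label real data 1, generated data 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 1))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```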
Ian Goodfellow will be speaking at the Deep Learning Summit in San Francisco on 28-29 January 2016, alongside speakers from Baidu, Twitter, Clarifai, MIT and more.