The unstable gradient problem: vanishing and exploding gradients in deep neural networks
The unstable gradient problem: The fundamental problem here isn't so much the vanishing gradient problem or the exploding gradient problem. It's that the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out. Without some mechanism or underlying reason for that balancing to occur, it's highly unlikely to happen simply by chance. In short, the real problem here is that neural networks suffer from an unstable gradient problem. As a result, if we use standard gradient-based learning techniques, different layers in the network will tend to learn at wildly different speeds.
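To get a feel for how unlikely that balancing is, here is a minimal sketch (my own illustration, not from the text): multiply together one randomly drawn factor per layer, standing in for the per-layer terms (weight times activation-function derivative) whose product an early layer's gradient is proportional to, and check how often the result stays anywhere near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers = 30      # a deep network, depth chosen arbitrarily for illustration
num_trials = 10_000

# One stand-in factor per layer, drawn at random; the product of all of them
# plays the role of the product of terms feeding an early layer's gradient.
factors = rng.uniform(0.1, 2.0, size=(num_trials, num_layers))
products = factors.prod(axis=1)

balanced = np.mean((products > 0.5) & (products < 2.0))
print(f"median product across {num_layers} layers: {np.median(products):.3g}")
print(f"fraction of trials where the product stays within [0.5, 2]: {balanced:.1%}")
```

The exact numbers depend on how the factors are drawn, but the qualitative outcome is the same: the product of many independent per-layer factors almost never ends up close to 1, so without some mechanism enforcing balance, some layers' gradients will be far larger or far smaller than others'.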
Again, early hidden layers learn much more slowly than later hidden layers. In this case, the first hidden layer is learning roughly 100 times slower than the final hidden layer. No wonder we were having trouble training these networks earlier!
We have here an important observation: in at least some deep neural networks, the gradient tends to get smaller as we move backward through the hidden layers. This means that neurons in the earlier layers learn much more slowly than neurons in later layers. And while we've seen this in just a single network, there are fundamental reasons why this happens in many neural networks. The phenomenon is known as the vanishing gradient problem.*

*See Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, by Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber (2001). That paper studied recurrent neural nets, but the essential phenomenon is the same as in the feedforward networks we are studying. See also Sepp Hochreiter's earlier Diploma Thesis, Untersuchungen zu dynamischen neuronalen Netzen (1991, in German).
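The effect is easy to observe directly. The sketch below is a minimal NumPy illustration, using hypothetical layer sizes and the quadratic cost rather than the book's actual code: it runs a single backward pass through a randomly initialized sigmoid network and prints the norm of the gradient with respect to each layer's biases, which serves as a rough measure of how quickly that layer learns.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
sizes = [784, 30, 30, 30, 30, 10]          # hypothetical layer sizes
weights = [rng.normal(0.0, 1.0, (n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0.0, 1.0, (n, 1)) for n in sizes[1:]]

x = rng.normal(0.0, 1.0, (sizes[0], 1))    # a single random "input"
y = np.zeros((sizes[-1], 1)); y[3] = 1.0   # an arbitrary one-hot "label"

# Forward pass, keeping the weighted inputs z and activations a of every layer.
activations, zs = [x], []
for w, b in zip(weights, biases):
    zs.append(w @ activations[-1] + b)
    activations.append(sigmoid(zs[-1]))

# Backward pass for the quadratic cost: delta^L = (a^L - y) * sigma'(z^L),
# then delta^l = ((w^{l+1})^T delta^{l+1}) * sigma'(z^l).
delta = (activations[-1] - y) * sigmoid(zs[-1]) * (1.0 - sigmoid(zs[-1]))
norms = [np.linalg.norm(delta)]
for l in range(2, len(sizes)):
    sp = sigmoid(zs[-l]) * (1.0 - sigmoid(zs[-l]))
    delta = (weights[-l + 1].T @ delta) * sp
    norms.append(np.linalg.norm(delta))

# The gradient w.r.t. layer l's biases equals delta^l, so its norm is a
# rough speed-of-learning measure for that layer.
for l, g in enumerate(reversed(norms), start=1):
    print(f"layer {l}: ||dC/db|| = {g:.3e}")
```

The exact numbers depend on the random draw, but in typical runs the norms shrink noticeably at each step back through the network, matching the observation above that the first hidden layer learns far more slowly than the last.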
Why does the vanishing gradient problem occur? Are there ways we can avoid it? And how should we deal with it in training deep neural networks? In fact, we'll learn shortly that it's not inevitable, although the alternative is not very attractive, either: sometimes the gradient gets much larger in earlier layers! This is the exploding gradient problem, and it's not much better news than the vanishing gradient problem. More generally, it turns out that the gradient in deep neural networks is unstable, tending to either explode or vanish in earlier layers. This instability is a fundamental problem for gradient-based learning in deep neural networks. It's something we need to understand, and, if possible, take steps to address.
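The dichotomy is easy to see for sigmoid units, where each per-layer factor is roughly a weight times the sigmoid's derivative, and that derivative never exceeds 1/4. The short sketch below is an illustration under the assumption that the weighted inputs sit near zero (where the derivative peaks), not an excerpt from the chapter: the same product of factors vanishes for a small weight and explodes for a large one.

```python
import numpy as np

def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

z = 0.0                # assume weighted inputs stay near 0, where sigma' = 1/4
for w in (0.5, 8.0):   # one "small" and one "large" weight, chosen for illustration
    factor = w * sigmoid_prime(z)
    print(f"w = {w}: per-layer factor {factor:.2f}, "
          f"product over 20 layers {factor ** 20:.2e}")
```

With the small weight the per-layer factor is well below 1 and the product collapses toward zero; with the large weight it exceeds 1 and the product grows geometrically. Either way, the early layers see a gradient wildly out of scale with the later ones.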
One response to vanishing (or unstable) gradients is to wonder if they're really such a problem. Momentarily stepping away from neural nets, imagine we were trying to numerically minimize a function f(x) of a single variable. Wouldn't it be good news if the derivative f′(x) was small? Wouldn't that mean we were already near an extremum? In a similar way, might the small gradient in early layers of a deep network mean that we don't need to do much adjustment of the weights and biases?
Source: http://neuralnetworksanddeeplearning.com/chap5.html