Several Difficult Points of RNNs (Recurrent Neural Networks)
1. Vanishing gradients
The gradient of the RNN's error at some time step t with respect to the recurrent weights is:
\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\) (Equation 1),
where \(h\) is the hidden-node output, \(y_t\) is the network's output at time t, and \(W\) is the hidden-to-hidden weight matrix. The factor \(\frac{\partial h_t}{\partial h_k}\) is the chain-rule expansion of the derivative over the time interval [k, t]; this interval can be long, and that is what causes vanishing or exploding gradients. Expanding \(\frac{\partial h_t}{\partial h_k}\) over time: \(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}]\). What is this diag matrix? An example makes it clear. Suppose we want \(\frac{\partial h_5}{\partial h_4}\). Recall how \(h_5\) was obtained in the forward pass: \(h_5=W\sigma(h_4)+W^{hx}x_5\), so \(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\). Note that \(\sigma(h_4)\) and \(h_4\) are both vectors (of dimension D), so \(\frac{\partial \sigma(h_4)}{\partial h_4}\) is a Jacobian matrix: \(\frac{\partial \sigma(h_4)}{\partial h_4}=\) \(\begin{bmatrix} \frac{\partial\sigma_1(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_1(h_{41})}{\partial h_{4D}} \\ \vdots&\ddots&\vdots \\ \frac{\partial\sigma_D(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_D(h_{4D})}{\partial h_{4D}}\end{bmatrix}\). Clearly, all off-diagonal entries are 0, because the sigmoid logistic function \(\sigma\) is applied element-wise, which is exactly why the Jacobian collapses to a diag matrix.
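To make the diagonal structure concrete, here is a minimal numpy sketch (my own, not from the original notes) comparing the analytic diagonal Jacobian of an element-wise sigmoid with a numerically estimated one:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

D = 4
h = np.random.randn(D)

# Analytic Jacobian of the element-wise sigmoid: a diagonal matrix
# with sigma'(h_i) = sigma(h_i) * (1 - sigma(h_i)) on the diagonal.
s = sigmoid(h)
J_analytic = np.diag(s * (1.0 - s))

# Numerical Jacobian via central differences, one column per input.
eps = 1e-6
J_numeric = np.zeros((D, D))
for j in range(D):
    e = np.zeros(D)
    e[j] = eps
    J_numeric[:, j] = (sigmoid(h + e) - sigmoid(h - e)) / (2 * eps)

print(np.allclose(J_analytic, J_numeric, atol=1e-8))  # True: off-diagonals are 0
```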
From here the derivation of vanishing/exploding gradients is straightforward, so I won't repeat it; see equation (14) onward in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
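As a quick sanity check on that derivation (a sketch of my own, assuming tanh units so that the derivative near \(h=0\) is close to 1, in which case each factor \(W^T diag[\sigma']\) is approximately \(W^T\) and the product over [k, t] reduces to powers of \(W^T\)):

```python
import numpy as np

np.random.seed(0)
D = 10

def jacobian_product_norm(spectral_radius, steps=50):
    # Random recurrent matrix rescaled to a target spectral radius.
    W = np.random.randn(D, D)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    # Near h = 0 a tanh unit has derivative ~1, so the Jacobian
    # product over [k, t] is approximately (W^T)^(t-k).
    P = np.linalg.matrix_power(W.T, steps)
    return np.linalg.norm(P)

print(jacobian_product_norm(0.9))  # tiny: the gradient vanishes
print(jacobian_product_norm(1.3))  # huge: the gradient can explode
```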
2. When weights are shared (tied), the gradient of the tied weight = the sum of the gradients of the individual weights
An example makes this clear. Suppose the forward pass is \(y=F[W_1f(W_2x)]\) with weights \(W_1\) and \(W_2\) tied, and we want the gradient \(\frac{\partial y}{\partial W}\).
Method 1:
First compute the gradient with respect to the \(W_1\) copy: \(\frac{\partial F[]}{\partial W_1} = F'[]f() \)
Then the gradient with respect to the \(W_2\) copy: \(\frac{\partial F[]}{\partial W_2} = F'[] (W_1f'()x) \)
Adding the two gives \(F'[]f()+F'[] (W_1f'()x)=F'[](f()+W_1f'()x)\).
Since \(W_1\) and \(W_2\) are tied, i.e. \(W_1=W_2=W\), this becomes \(F'[](f()+Wf'()x) = \frac{\partial y}{\partial W} \).
Method 2:
Now take a different route: assume up front that \(W_1\) and \(W_2\) are tied (write both as \(W\)) and compute the gradient directly with the product rule:
\(\frac{\partial y}{\partial W} = F'[]\left( \frac{\partial W}{\partial W}f() + W \frac{\partial f()}{\partial W} \right) = F'[](f()+Wf'()x) \)
As you can see, the two methods give the same result. So when a weight is shared, the gradient with respect to the tied weight equals the sum of the gradients with respect to the individual copies.
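The identity is easy to verify numerically. Below is a scalar sketch (my own, with \(F = f = \tanh\) chosen arbitrarily) comparing the sum of the two per-copy gradients against a finite-difference derivative of the tied function:

```python
import numpy as np

# Scalar version of y = F[W1 * f(W2 * x)] with F = f = tanh.
F = f = np.tanh
dF = df = lambda z: 1.0 - np.tanh(z) ** 2

x, W = 0.7, 0.3

inner = W * x          # W2 * x
outer = W * f(inner)   # W1 * f(W2 * x)

# Gradient w.r.t. the W1 copy (W2 copy held fixed).
g1 = dF(outer) * f(inner)
# Gradient w.r.t. the W2 copy (W1 copy held fixed).
g2 = dF(outer) * W * df(inner) * x

# Direct derivative of the tied function via central differences.
eps = 1e-6
y = lambda w: F(w * f(w * x))
g_tied = (y(W + eps) - y(W - eps)) / (2 * eps)

print(g1 + g2, g_tied)  # the two agree up to finite-difference error
```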
3. How do LSTMs & Gated Recurrent Units avoid vanishing gradients?
To understand this, you will have to go through some math. The most accessible article w.r.t. recurrent gradient problems IMHO is Pascanu's ICML 2013 paper [1].
A summary: vanishing/exploding gradients come from the repeated application of the recurrent weight matrix [2]. A spectral radius of the recurrent weight matrix bigger than 1 makes exploding gradients possible (it is a necessary condition), while a spectral radius smaller than 1 makes them vanish (a sufficient condition).
Now, if gradients vanish, that does not mean that all gradients vanish; only some of them do, and gradient information local in time will still be present. That means you might still have a non-zero gradient, but it will not contain long-term information. That's because some gradient g + 0 is still g. (In Equation 1 above the contributions are summed, so some terms being 0 does not drive the whole sum to 0.)
If gradients explode, all of them do. That is because some gradient g + infinity is infinity. (In Equation 1, since the contributions are summed, a single infinite term makes the whole sum infinite.)
That is the reason why LSTM does not protect you from exploding gradients: the LSTM still uses a recurrent weight matrix (the hidden output \(h^{(t)} = o^{(t)} \circ \tanh(c^{(t)})\) is fed back through weights at the next step), not only the internal state-to-state connection \(c^{(t)} = f^{(t)} \circ c^{(t-1)} + i^{(t)} \circ \tilde{c}^{(t)}\). Successful LSTM applications typically use gradient clipping.
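A minimal sketch of gradient clipping by global norm (the function name and threshold are my own choices, not any particular library's API):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so that their joint L2 norm
    does not exceed max_norm: the usual remedy for exploding
    gradients when training RNNs/LSTMs."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads

# Example: an oversized gradient (norm 30) is scaled back to norm 5.
g = [np.full((3, 3), 10.0)]
print(np.linalg.norm(clip_by_global_norm(g)[0]))  # 5.0
```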
LSTM does overcome the vanishing gradient problem, though. If you look at the derivative of the internal state at T with respect to the internal state at T-1, there is no repeated weight application: the derivative is simply the value of the forget gate. To keep it from shrinking to zero, the forget gate needs to be initialised properly at the start (e.g. with a positive bias so it opens near 1).
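A small sketch of that argument (my own, treating the gates as constants so that \(\partial c^{(t)}/\partial c^{(t-1)}\) is exactly the forget gate):

```python
import numpy as np

T, D = 100, 8

def cell_state_gradient(forget_value):
    # With the gates held fixed, backprop along the cell state just
    # multiplies by the forget gate at each step: no weight matrix,
    # hence no repeated weight application.
    grad = np.ones(D)
    for _ in range(T):
        f = np.full(D, forget_value)  # forget gate activation
        grad = f * grad
    return np.linalg.norm(grad)

print(cell_state_gradient(1.0))  # preserved across 100 steps
print(cell_state_gradient(0.5))  # decays fast; hence the common
                                 # positive forget-bias initialisation
```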
That makes it clear why the cell states can act as "a wormhole through time": they can bridge long time lags and then (if the time is right) re-inject the information into the other parts of the net by opening the output gate.
[1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." arXiv preprint arXiv:1211.5063 (2012).
[2] Gradients might also "vanish" due to saturating nonlinearities, but that is something that can also happen in shallow nets and can be overcome with more careful weight initialisation.
ref: Recursive Deep Learning for Natural Language Processing and Computer Vision.pdf
ref: CS224D-3-note bp.pdf
To be continued...
makefile文件保存了编译器和连接器的参数选项,还表述了所有源文件之间的关系(源代码文件需要的特定的包含文件,可执行文件要求包含的目标文件模块及库等).创建程序(make程序)首先读取makefi ...