A few difficult points about RNNs (Recurrent Neural Networks)
1. Vanishing gradients
The gradient of the RNN's error at time step t with respect to \(W\) is:
\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\) (Formula 1),
where \(h\) is the output of the hidden nodes, \(y_t\) is the network's output at time t, and \(W\) is the hidden-to-hidden weight matrix. The factor \(\frac{\partial h_t}{\partial h_k}\) is the chain-rule expansion of the derivative over the time span [k, t]; this span can be long, which is what causes vanishing or exploding gradients. Expanding \(\frac{\partial h_t}{\partial h_k}\) over time: \(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag \left[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}\right]\). What exactly is this diag matrix? An example makes it clear. Suppose we want \(\frac{\partial h_5}{\partial h_4}\). Recall how \(h_5\) is obtained in the forward pass: \(h_5=W\sigma(h_4)+W^{hx}x_5\), so \(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\). Note that \(\sigma(h_4)\) and \(h_4\) are both vectors (of dimension D), so \(\frac{\partial \sigma(h_4)}{\partial h_4}\) is a Jacobian matrix: \(\frac{\partial \sigma(h_4)}{\partial h_4}=\) \(\begin{bmatrix} \frac{\partial\sigma(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma(h_{41})}{\partial h_{4D}} \\ \vdots&\ddots&\vdots \\ \frac{\partial\sigma(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma(h_{4D})}{\partial h_{4D}}\end{bmatrix}\). Clearly, all off-diagonal entries are 0, because the sigmoid logistic function \(\sigma\) is applied element-wise.
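To make the diagonal structure concrete, here is a tiny NumPy sketch (my own illustration, not from the lecture notes; the vector h4 is made up) that builds \(diag[\sigma'(h_4)]\) and shows that the off-diagonal entries are zero:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_jacobian(h):
    # The sigmoid is applied element-wise, so output i depends only on input i
    # and the Jacobian is diagonal: diag[sigma'(h)], with sigma'(h) = sigma(h)(1 - sigma(h)).
    s = sigmoid(h)
    return np.diag(s * (1.0 - s))

h4 = np.array([0.3, -1.2, 0.8])   # made-up hidden pre-activation, D = 3
J = sigmoid_jacobian(h4)
print(J)                          # off-diagonal entries are exactly 0
# The per-step factor in the expansion above is then W.T @ J.
```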
The rest of the derivation of vanishing/exploding gradients is straightforward, so I won't repeat it here; see equation (14) onward in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
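If you would rather see the effect numerically than read the derivation, here is a small NumPy sketch (my own illustration; the dimensions, horizon, and spectral radii are arbitrary) that multiplies T copies of the per-step factor \(W^T diag[\sigma'(\cdot)]\) and tracks the norm of the product:

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 20, 50

def jacobian_product_norm(spectral_radius):
    # Random recurrent matrix W, rescaled so its spectral radius is the requested value.
    W = rng.standard_normal((D, D))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    # For illustration, evaluate every per-step Jacobian at h = 0, where
    # sigmoid'(0) = 0.25, so each factor dh_j/dh_{j-1} is simply 0.25 * W^T.
    step = 0.25 * W.T
    prod = np.eye(D)
    for _ in range(T):
        prod = step @ prod
    return np.linalg.norm(prod)

print(jacobian_product_norm(0.9))   # 0.25 * 0.9 < 1: the norm collapses toward 0 (vanishing)
print(jacobian_product_norm(8.0))   # 0.25 * 8.0 > 1: the norm blows up (exploding)
```

After 50 steps the first product is astronomically small while the second is huge, which is exactly the vanishing/exploding behaviour discussed in section 3 below.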
2. With shared (tied) weights, the gradient of the tied weight = the sum of the gradients of the individual weights
An example makes this clear: suppose the forward pass is \(y=F[W_1f(W_2x)]\), with weights \(W_1\) and \(W_2\) tied, and we want the gradient \(\frac{\partial y}{\partial W}\).
Method 1:
First compute the gradient \(\frac{\partial y}{\partial W_1} = F'[]f() \)
Then compute the gradient \(\frac{\partial y}{\partial W_2} = F'[](W_1f'()x) \)
Adding the two expressions gives \(F'[]f()+F'[](W_1f'()x)=F'[](f()+W_1f'()x)\)
Since the weights \(W_1\) and \(W_2\) are tied, this equals \(F'[](f()+Wf'()x) = \frac{\partial y}{\partial W} \)
Method 2:
Now take a different approach: assume from the start that \(W_1\) and \(W_2\) are tied, and differentiate directly, using the product rule because \(W\) appears both outside and inside \(f\):
\(\frac{\partial y}{\partial W} = F'[]\left( \frac{\partial W}{\partial W}f() + W \frac{\partial f()}{\partial W} \right) = F'[](f()+Wf'()x) \)
As you can see, the two methods give the same result. So when a weight is shared, the gradient with respect to the shared weight equals the sum of the gradients of the individual (untied) copies.
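As a sanity check on this identity, here is a tiny scalar NumPy sketch (the values and the choice F = f = tanh are made up for illustration): it compares the sum of the two per-copy gradients with the gradient computed directly under tying, and with a finite-difference estimate.

```python
import numpy as np

# Scalar example: y = F(W1 * f(W2 * x)) with F = f = tanh and W1 = W2 = W (tied).
x, W = 0.7, 0.4
f, fp = np.tanh, lambda z: 1 - np.tanh(z) ** 2      # f and f'
F, Fp = np.tanh, lambda z: 1 - np.tanh(z) ** 2      # F and F'

inner = W * x                   # W2 * x
a = W * f(inner)                # argument of F

grad_W1 = Fp(a) * f(inner)                             # only the outer copy treated as variable
grad_W2 = Fp(a) * W * fp(inner) * x                    # only the inner copy treated as variable
tied_direct = Fp(a) * (f(inner) + W * fp(inner) * x)   # Method 2: differentiate with tying

print(grad_W1 + grad_W2, tied_direct)        # the two numbers agree

# Finite-difference check: move both copies together by eps.
eps = 1e-6
y = lambda w: F(w * f(w * x))
print((y(W + eps) - y(W - eps)) / (2 * eps))  # matches the analytic tied gradient
```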
3. LSTM & Gated Recurrent units 是如何避免vanish的?
To understand this, you will have to go through some math. The most accessible article wrt recurrent gradient problems IMHO is Pascanu's ICML2013 paper [1].
A summary: vanishing/exploding gradients come from the repeated application of the recurrent weight matrix [2]. A spectral radius of the recurrent weight matrix greater than 1 makes exploding gradients possible (it is a necessary condition), while a spectral radius smaller than 1 makes them vanish (a sufficient condition).
Now, if gradients vanish, that does not mean that all gradients vanish. Only some of them do; gradient information that is local in time will still be present. That means you might still have a non-zero gradient, but it will not contain long-term information. That's because some gradient g + 0 is still g. (In Formula 1 above, the contributions are summed, so some terms being 0 does not force the whole sum to be 0.)
If gradients explode, all of them do. That is because some gradient g + infinity is infinity. (In Formula 1 above, because the contributions are summed, any term that becomes infinite makes the whole sum infinite.)
That is the reason why LSTM does not protect you from exploding gradients: LSTM also uses a recurrent weight matrix (the gates and the candidate state are computed from \(h(t)=o(t)\circ\tanh(c(t))\) of the previous step through recurrent weights), not only the internal state-to-state connection \(c(t)=f(t)\circ c(t-1)+i(t)\circ\tilde{c}(t)\). Successful LSTM applications typically use gradient clipping.
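Gradient clipping here means rescaling the whole gradient whenever its norm exceeds a threshold, as proposed in [1]; a minimal sketch (the threshold and the example gradient are made up):

```python
import numpy as np

def clip_gradient(grad, threshold=5.0):
    # Norm-based clipping: if the L2 norm exceeds the threshold, rescale the
    # gradient so its norm equals the threshold, keeping its direction unchanged.
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([30.0, -40.0])   # an "exploded" gradient with norm 50
print(clip_gradient(g))       # rescaled to norm 5, same direction
```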
LSTM overcomes the vanishing gradient problem, though. That is because if you look at the derivative of the internal state at T with respect to the internal state at T-1, there is no repeated weight application. That derivative is simply the value of the forget gate. To keep it from going to zero, the forget gate has to be initialised properly at the beginning (in practice, with a positive bias so it starts out close to 1).
That makes it clear why the cell states can act as "a wormhole through time": they can bridge long time lags and then (if the time is right) "re-inject" the stored information into the other parts of the net by opening the output gate.
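To see the "no repeated weight application" point in code, here is a small NumPy sketch (my own illustration; the forget-gate values are made up): along the direct cell-state path, \(\frac{\partial c(t)}{\partial c(t-1)} = diag(f(t))\), so the factor accumulated over many steps is just the element-wise product of forget gates and contains no \(W\) at all.

```python
import numpy as np

T, D = 50, 4
rng = np.random.default_rng(1)

# Made-up forget-gate activations in (0, 1) for each time step.
forget_gates = rng.uniform(0.9, 1.0, size=(T, D))

# Along the cell-state path, dc_t/dc_{t-1} = diag(f_t): no recurrent weight matrix.
# The accumulated factor over T steps is the element-wise product of the gates.
path_factor = np.ones(D)
for f_t in forget_gates:
    path_factor *= f_t

print(path_factor)   # decays only as fast as the product of the forget gates themselves
```

Compare this with the vanilla-RNN product in section 1, where every step contributes another factor of \(W^T diag[\sigma'(\cdot)]\).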
[1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." arXiv preprint arXiv:1211.5063 (2012).
[2] Gradients might "vanish" also due to saturating nonlinearities, but that is something that can also happen in shallow nets and can be overcome with more careful weight initialisation.
ref: Recursive Deep Learning for Natural Language Processing and Computer Vision.pdf
CS224D-3-note bp.pdf
To be continued...