Initialization of deep networks

24 Feb 2015 · Gustav Larsson

As we all know, the solution that a non-convex optimization algorithm (like stochastic gradient descent) converges to depends on the initial values of the parameters. This post is about choosing initialization parameters for deep networks and how that choice affects convergence. We will also discuss the related topic of vanishing gradients.

First, let's go back to the time of sigmoidal activation functions and initialization of parameters using IID Gaussian or uniform distributions with fairly arbitrarily set variances. Building deep networks was difficult because of exploding or vanishing activations and gradients. Let's take activations first: if all your parameters are too small, the variance of your activations will drop in each layer. This is a problem if your activation function is sigmoidal, since it is approximately linear close to 0, so you gradually lose your non-linearity and there is no benefit to having multiple layers. If, on the other hand, your activations become larger and larger, then they will saturate and become meaningless, with gradients approaching 0.
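To make the first failure mode concrete, here is a minimal numpy sketch (my illustration, not taken from any of the papers below): push a zero-mean signal through a stack of tanh layers, a zero-centered sigmoidal non-linearity, initialized with an arbitrarily small IID Gaussian, and watch the activation variance collapse layer by layer into the linear regime. The layer width and the 0.01 standard deviation are hypothetical choices made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 256                                     # layer width (arbitrary choice)
x = rng.standard_normal((1000, n))          # a batch of zero-mean inputs

for layer in range(1, 11):
    W = rng.normal(0.0, 0.01, size=(n, n))  # variance set "too small"
    x = np.tanh(x @ W)                      # tanh: zero-centered sigmoidal
    print(f"layer {layer:2d}: activation std = {x.std():.6f}")

With weights this small, the printed standard deviation shrinks by a large constant factor every layer; the analogous experiment with large weights shows the activations saturating at ±1 instead.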

Let us consider one layer and forget about the bias. Note that the following analysis and conclusion are taken from Glorot and Bengio [1]. Consider a weight matrix $W \in \mathbb{R}^{m \times n}$, where each element is drawn from an IID Gaussian with variance $\mathrm{Var}(W)$. Note that we are being a bit abusive with notation, letting $W$ denote both a matrix and a univariate random variable. We also assume that our inputs and our weights are uncorrelated and both are zero-mean. If we consider one filter (row) in $W$, say $w$ (a random vector), then the ratio of the output signal's variance to the input signal's variance is:

$$\frac{\mathrm{Var}(w^\top x)}{\mathrm{Var}(X)} = \frac{\sum_{i=1}^{n} \mathrm{Var}(w_i x_i)}{\mathrm{Var}(X)} = \frac{n \,\mathrm{Var}(W)\,\mathrm{Var}(X)}{\mathrm{Var}(X)} = n \,\mathrm{Var}(W),$$

where the middle step uses $\mathrm{Var}(w_i x_i) = \mathrm{Var}(W)\,\mathrm{Var}(X)$, which holds because the weights and inputs are independent and zero-mean.

As we build a deep network, we want the variance of the signal going forward in the network to remain the same, so it would be advantageous if $n\,\mathrm{Var}(W) = 1$. The same argument can be made for the gradients, the signal going backward in the network, and the conclusion is that we would also like $m\,\mathrm{Var}(W) = 1$. Unless $n = m$, it is impossible to satisfy both of these conditions; in practice, it works well if both are approximately satisfied. One thing that has never been clear to me is why it is only necessary to satisfy these conditions when picking the initialization values of $W$. It would seem that we have no guarantee that the conditions will remain true as the network is trained.
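As a concrete sketch (mine, not code from [1]): Glorot and Bengio resolve the $n \neq m$ tension by compromising with $\mathrm{Var}(W) = 2/(n+m)$, and their "normalized" variant draws from a uniform distribution with that same variance. A minimal numpy version might look like this:

import numpy as np

rng = np.random.default_rng(0)

def xavier_gaussian(fan_in, fan_out):
    # Compromise between fan_in * Var(W) = 1 and fan_out * Var(W) = 1
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

def xavier_uniform(fan_in, fan_out):
    # U(-a, a) has variance a^2 / 3, so this also gives Var(W) = 2 / (fan_in + fan_out)
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_out, fan_in))

W = xavier_uniform(fan_in=512, fan_out=256)
print(W.std(), np.sqrt(2.0 / (512 + 256)))   # empirical std vs. target std

The function names and array shapes here are my own; frameworks differ in whether their defaults use the fan-in, the fan-out, or their average.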

Nevertheless, this Xavier initialization (after Glorot's first name) is a neat trick that works well in practice. However, along came rectified linear units (ReLU), a non-linearity that is scale-invariant around 0 and does not saturate at large input values. This seemingly solved both of the problems the sigmoid function had; or were they just alleviated? I am unsure how widely used Xavier initialization is; if it is not common, perhaps that is because ReLU seemingly eliminated the problem.

However, take one of the most competitive networks of late, VGG [2]. They do not use this kind of initialization, and they report that it was tricky to get their networks to converge. They say that they first trained their most shallow architecture and then used it to help initialize the second one, and so forth. They presented six networks, so it seems like an awfully complicated training process to get to the deepest one.

A recent paper by He et al. [3] presents a pretty straightforward generalization of ReLU and Leaky ReLU. What is more interesting is their emphasis on the benefits of Xavier initialization even for ReLU. They re-did the derivations for ReLUs and discovered that the conditions are the same up to a factor of 2. The difficulty Simonyan and Zisserman had training VGG is apparently avoidable, simply by using Xavier initialization (or, better yet, the ReLU-adjusted version). Using this technique, He et al. reportedly trained a whopping 30-layer deep network to convergence in one go.
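A small numpy sketch of that ReLU-adjusted condition (my illustration, not the authors' code): since ReLU zeroes out half of a zero-mean signal, the forward condition becomes $n\,\mathrm{Var}(W) = 2$, i.e. a zero-mean Gaussian with standard deviation $\sqrt{2/n}$ for fan-in $n$. Stacking ReLU layers initialized this way keeps the pre-activation variance roughly constant:

import numpy as np

rng = np.random.default_rng(0)

def he_gaussian(fan_in, fan_out):
    std = np.sqrt(2.0 / fan_in)              # the extra factor of 2 for ReLU
    return rng.normal(0.0, std, size=(fan_out, fan_in))

y = rng.standard_normal((1000, 512))         # pre-activations of a first layer
for layer in range(1, 11):
    x = np.maximum(0.0, y)                   # ReLU
    y = x @ he_gaussian(512, 512).T
    print(f"layer {layer:2d}: pre-activation variance = {y.var():.3f}")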

Another recent paper tackling the signal-scaling problem is by Ioffe and Szegedy [4]. They call the change in the distribution of each layer's inputs internal covariate shift and claim this forces learning rates to be unnecessarily small. They suggest that if all layers have the same scale and remain so throughout training, a much higher learning rate becomes practically viable. You cannot just standardize the signals, since you would lose expressive power (the bias disappears, and in the case of sigmoids we would be constrained to the linear regime). They solve this by re-introducing two parameters per layer, a scale and a bias, applied after the standardization. Training reportedly becomes about six times faster, and they present state-of-the-art results on ImageNet. However, I'm not certain this is the solution that will stick.
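Regardless, the core operation is simple enough to sketch in a few lines of numpy (my paraphrase of the idea in [4], training-mode forward pass only, ignoring the running averages used at test time): standardize each feature over the mini-batch, then apply the learned scale gamma and bias beta so no expressive power is lost.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)    # standardized activations
    return gamma * x_hat + beta              # learned scale and shift

rng = np.random.default_rng(0)
x = 5.0 + 3.0 * rng.standard_normal((128, 64))
y = batch_norm_forward(x, gamma=np.ones(64), beta=np.zeros(64))
print(y.mean(), y.std())                     # roughly 0 and 1 with these gamma, beta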

I reckon we will see a lot more work on this frontier in the next few years. Especially since it also relates to the currently wildly popular Recurrent Neural Network (RNN), which connects output signals back as inputs. The way you train such a network is to unroll the time axis, treating the result as an extremely deep feedforward network. This greatly exacerbates the vanishing gradient problem. A popular solution, called Long Short-Term Memory (LSTM), is to introduce memory cells, which are a type of teleport that allows a signal to jump ahead many time steps. This means the gradient is retained for all those time steps and can be propagated back to a much earlier time without vanishing.
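To see the scale of the problem the LSTM is working around, here is a toy numpy illustration (mine, with deliberately slightly-too-small recurrent weights to make the effect obvious): backpropagating through the unrolled network multiplies in one Jacobian factor per time step, so the gradient norm decays exponentially with the number of steps.

import numpy as np

rng = np.random.default_rng(0)
n = 64
W = rng.normal(0.0, 0.7 / np.sqrt(n), size=(n, n))  # recurrent weights, a bit too small

h = rng.standard_normal(n)
grad = np.ones(n)                          # gradient arriving at the last time step
for t in range(1, 51):
    h = np.tanh(W @ h)                     # one unrolled step forward
    grad = W.T @ (grad * (1.0 - h ** 2))   # one Jacobian factor of the backward pass
    if t % 10 == 0:
        print(f"{t:2d} steps back: gradient norm = {np.linalg.norm(grad):.2e}")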

This area is far from settled, and until it is, I think I will be sticking to Xavier initialization. If you are using Caffe, the one take-away of this post is to use the following on all your layers:

weight_filler {
  type: "xavier"
}

References

  1. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International conference on artificial intelligence and statistics, 2010, pp. 249–256.

  2. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.

  3. K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs], Feb. 2015.

  4. S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167 [cs], Feb. 2015.
