Stochastic Optimization Techniques
Neural networks are often trained stochastically, i.e. using a method in which the objective function changes at each iteration because the model is trained on a different subset of the data each time. This is motivated by (at least) two factors: First, the training dataset is often too large to fit in memory and/or be optimized over efficiently. Second, the objective function is typically nonconvex, so using different data at each iteration can help keep the model from settling in a local minimum. Furthermore, training neural networks is usually done using only the first-order gradient of the loss function with respect to the parameters; the large number of parameters in a neural network makes computing the Hessian matrix impractical. Because vanilla gradient descent can diverge or converge extremely slowly if its learning rate hyperparameter is set inappropriately, many alternative methods have been proposed that aim to produce desirable convergence with less dependence on hyperparameter settings. These methods often effectively compute and apply a preconditioner to the gradient, adaptively change the learning rate over time, or approximate the Hessian matrix.
In the following, we will use $\theta_t$ to denote some generic parameter of the model at iteration $t$, to be optimized according to some loss function $\mathcal{L}$ which is to be minimized.
Stochastic Gradient Descent
Stochastic gradient descent (SGD) simply updates each parameter by subtracting the gradient of the loss with respect to the parameter, scaled by the learning rate $\eta$, a hyperparameter. If $\eta$ is too large, SGD will diverge; if it's too small, it will converge slowly. The update rule is simply $$ \theta_{t + 1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t) $$
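To make the update concrete, here is a minimal NumPy sketch of one SGD step on a toy quadratic loss; the function names, the toy loss, and the learning-rate value are illustrative choices, not taken from any referenced implementation:

```python
import numpy as np

def sgd_step(theta, grad_fn, eta=0.1):
    """One SGD step: theta_{t+1} = theta_t - eta * grad L(theta_t)."""
    return theta - eta * grad_fn(theta)

# Toy quadratic loss L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
grad_fn = lambda theta: theta
theta = np.array([1.0, -2.0])
for t in range(100):
    theta = sgd_step(theta, grad_fn)
print(theta)  # close to the minimizer at the origin
```

The same pattern, a pure step function plus whatever state must be carried across iterations, is reused in the sketches below.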
Momentum
In SGD, the gradient $\nabla \mathcal{L}(\theta_t)$ often changes rapidly from one iteration $t$ to the next because the loss is computed over different data. This is often partially mitigated by carrying over a fraction of the previous update step, scaled by a momentum hyperparameter $\mu$, as follows:
\begin{align*} v_{t + 1} &= \mu v_t - \eta \nabla \mathcal{L}(\theta_t) \\ \theta_{t + 1} &= \theta_t + v_{t+1} \end{align*}
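A corresponding sketch of the momentum update, with illustrative names and hyperparameter values; the velocity $v$ is state carried across iterations:

```python
import numpy as np

def momentum_step(theta, v, grad_fn, eta=0.1, mu=0.9):
    """v_{t+1} = mu * v_t - eta * grad L(theta_t); theta_{t+1} = theta_t + v_{t+1}."""
    v = mu * v - eta * grad_fn(theta)
    return theta + v, v
```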
It has been argued that including the previous gradient step has the effect of approximating some second-order information about the gradient.
Nesterov's Accelerated Gradient
In Nesterov's Accelerated Gradient (NAG), the gradient of the loss at each step is computed at $\theta_t + \mu v_t$ instead of $\theta_t$. Since the momentum parameter update can be written $\theta_{t + 1} = \theta_t + \mu v_t - \eta \nabla \mathcal{L}(\theta_t)$, NAG effectively computes the gradient at the new parameter location, minus the not-yet-computed gradient term. In practice, this causes NAG to behave more stably than regular momentum in many situations. A more thorough analysis can be found in 1). The update rules are then as follows:
\begin{align*} v_{t + 1} &= \mu v_t - \eta \nabla\mathcal{L}(\theta_t + \mu v_t) \\ \theta_{t + 1} &= \theta_t + v_{t+1} \end{align*}
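A sketch of the NAG update; the only change from the momentum sketch above is where the gradient is evaluated (names and values are again illustrative):

```python
import numpy as np

def nag_step(theta, v, grad_fn, eta=0.1, mu=0.9):
    """Same as momentum, but the gradient is evaluated at the look-ahead point theta + mu * v."""
    v = mu * v - eta * grad_fn(theta + mu * v)
    return theta + v, v
```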
Adagrad
Adagrad effectively rescales the learning rate for each parameter according to the history of the gradients for that parameter. This is done by dividing each term in $\nabla \mathcal{L}$ by the square root of the sum of squares of its historical gradients. Rescaling in this way effectively lowers the learning rate for parameters which consistently have large gradient values. It also effectively decreases the learning rate over time, because the sum of squares continues to grow with each iteration. After initializing the rescaling term $g_0 = 0$, the updates are as follows: \begin{align*} g_{t + 1} &= g_t + \nabla \mathcal{L}(\theta_t)^2 \\ \theta_{t + 1} &= \theta_t - \frac{\eta\nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t + 1}} + \epsilon} \end{align*} where the division is elementwise and $\epsilon$ is a small constant included for numerical stability. Adagrad has nice theoretical guarantees and empirical results 2) 3).
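A sketch of the Adagrad update, with the squared-gradient accumulator $g$ carried across iterations (constants are illustrative):

```python
import numpy as np

def adagrad_step(theta, g, grad_fn, eta=0.1, eps=1e-8):
    """Accumulate squared gradients in g and rescale each coordinate's step by 1/sqrt(g)."""
    grad = grad_fn(theta)
    g = g + grad ** 2
    return theta - eta * grad / (np.sqrt(g) + eps), g
```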
RMSProp
In its originally proposed form 4), RMSProp is very similar to Adagrad. The only difference is that the $g_t$ term is computed as an exponentially decaying average instead of an accumulated sum. This makes $g_t$ an estimate of the second moment of $\nabla \mathcal{L}$ and avoids the issue of the learning rate effectively shrinking over time. The name “RMSProp” comes from the fact that the update step is normalized by a decaying RMS of recent gradients. The update is as follows:
\begin{align*} g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ \theta_{t + 1} &= \theta_t - \frac{\eta\nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t + 1}} + \epsilon} \end{align*}
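A sketch of this basic RMSProp update; the decay rate and $\epsilon$ shown are illustrative defaults, not prescribed by the original slides:

```python
import numpy as np

def rmsprop_step(theta, g, grad_fn, eta=0.01, gamma=0.9, eps=1e-8):
    """Like Adagrad, but g is an exponentially decaying average of squared gradients."""
    grad = grad_fn(theta)
    g = gamma * g + (1 - gamma) * grad ** 2
    return theta - eta * grad / (np.sqrt(g) + eps), g
```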
In the original lecture slides where it was proposed, $\gamma$ is set to $0.9$. In 5), it is shown that the $\sqrt{g_{t + 1}}$ term approximates (in expectation) the diagonal of the absolute value of the Hessian matrix (assuming the update steps are $\mathcal{N}(0, 1)$ distributed). It is also argued that the absolute value of the Hessian is better suited to nonconvex problems, which may have many saddle points.
Alternatively, in 6), a first-order moment estimate $m_t$ is added. It is included in the denominator of the preconditioner so that the step is effectively normalized by the standard deviation of $\nabla \mathcal{L}$. A $v_t$ term is also included for momentum. This gives
\begin{align*} m_{t + 1} &= \gamma m_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t) \\ g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ v_{t + 1} &= \mu v_t - \frac{\eta \nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t+1} - m_{t+1}^2 + \epsilon}} \\ \theta_{t + 1} &= \theta_t + v_{t + 1} \end{align*}
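A sketch of this variant, following the equations above; $m$, $g$, and $v$ are all state carried across iterations, and the hyperparameter values are illustrative:

```python
import numpy as np

def rmsprop_momentum_step(theta, m, g, v, grad_fn, eta=1e-4, gamma=0.95, mu=0.9, eps=1e-4):
    """Normalize the step by an estimate of the gradient's standard deviation, sqrt(g - m^2)."""
    grad = grad_fn(theta)
    m = gamma * m + (1 - gamma) * grad
    g = gamma * g + (1 - gamma) * grad ** 2
    v = mu * v - eta * grad / np.sqrt(g - m ** 2 + eps)
    return theta + v, m, g, v
```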
Adadelta
Adadelta 7) uses the same exponentially decaying moving-average estimate of the gradient second moment $g_t$ as RMSProp. It also computes a moving average $x_t$ of the updates $v_t$, similar to momentum, except that when updating this quantity it squares the current step, which I don't have any intuition for.
\begin{align*} g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ v_{t + 1} &= -\frac{\sqrt{x_t + \epsilon} \nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t+1} + \epsilon}} \\ x_{t + 1} &= \gamma x_t + (1 - \gamma) v_{t + 1}^2 \\ \theta_{t + 1} &= \theta_t + v_{t + 1} \end{align*}
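A sketch of the Adadelta update; both running averages $g$ and $x$ are carried across iterations, and the constants are illustrative:

```python
import numpy as np

def adadelta_step(theta, g, x, grad_fn, gamma=0.95, eps=1e-6):
    """Scale the step by the RMS of recent updates (x) over the RMS of recent gradients (g)."""
    grad = grad_fn(theta)
    g = gamma * g + (1 - gamma) * grad ** 2
    v = -np.sqrt(x + eps) / np.sqrt(g + eps) * grad
    x = gamma * x + (1 - gamma) * v ** 2
    return theta + v, g, x
```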
Adam
Adam is somewhat similar to Adagrad/Adadelta/RMSProp in that it computes a decayed moving average of the gradient and the squared gradient (first and second moment estimates) at each time step. It differs mainly in two ways: First, the first-order moment moving-average coefficient is decayed over time. Second, because the first and second moment estimates are initialized to zero, some bias correction is used to counteract the resulting bias towards zero. The use of the first and second moments, in most cases, ensures that the gradient descent step size is typically $\approx \pm \eta$ and that its magnitude is less than $\eta$. However, as $\theta_t$ approaches a true minimum, the uncertainty in the gradient increases and the step size decreases. The method is also invariant to the scale of the gradients. Given hyperparameters $\gamma_1$, $\gamma_2$, $\lambda$, and $\eta$, and setting $m_0 = 0$ and $g_0 = 0$ (note that the paper denotes $\gamma_1$ as $\beta_1$, $\gamma_2$ as $\beta_2$, $\eta$ as $\alpha$ and $g_t$ as $v_t$), the update rule is as follows: 8)
\begin{align*} m_{t + 1} &= \gamma_1 m_t + (1 - \gamma_1) \nabla \mathcal{L}(\theta_t) \\ g_{t + 1} &= \gamma_2 g_t + (1 - \gamma_2) \nabla \mathcal{L}(\theta_t)^2 \\ \hat{m}_{t + 1} &= \frac{m_{t + 1}}{1 - \gamma_1^{t + 1}} \\ \hat{g}_{t + 1} &= \frac{g_{t + 1}}{1 - \gamma_2^{t + 1}} \\ \theta_{t + 1} &= \theta_t - \frac{\eta \hat{m}_{t + 1}}{\sqrt{\hat{g}_{t + 1}} + \epsilon} \end{align*}
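A sketch of the Adam update using 1-indexed iterations, so the bias-correction terms become $1 - \gamma_1^t$ and $1 - \gamma_2^t$; the defaults shown are the commonly used values, but the sketch itself is illustrative rather than a reference implementation:

```python
import numpy as np

def adam_step(theta, m, g, t, grad_fn, eta=0.001, gamma1=0.9, gamma2=0.999, eps=1e-8):
    """Bias-corrected first (m) and second (g) moment estimates; t is the 1-indexed iteration."""
    grad = grad_fn(theta)
    m = gamma1 * m + (1 - gamma1) * grad
    g = gamma2 * g + (1 - gamma2) * grad ** 2
    m_hat = m / (1 - gamma1 ** t)
    g_hat = g / (1 - gamma2 ** t)
    return theta - eta * m_hat / (np.sqrt(g_hat) + eps), m, g
```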
ESGD
Adasecant
vSGD
Rprop