Other techniques for regularization

A rough translation, for reference only; please do not repost.

www.cnblogs.com/santian/p/5457412.html

Dropout: Dropout is a radically different technique for regularization. Unlike L1 and L2 regularization, dropout doesn't rely on modifying the cost function. Instead, in dropout we modify the network itself. Let me describe the basic mechanics of how dropout works, before getting into why it works, and what the results are.

Suppose we're trying to train a network:

Dropout: Dropout is a regularization technique completely different from L1 and L2 regularization. It does not modify the cost function; instead it modifies the network itself. Before I describe how dropout works and what results it produces, let us suppose we are training the following network.

In particular, suppose we have a training input x and corresponding desired output y. Ordinarily, we'd train by forward-propagating x through the network, and then backpropagating to determine the contribution to the gradient. With dropout, this process is modified. We start by randomly (and temporarily) deleting half the hidden neurons in the network, while leaving the input and output neurons untouched. After doing this, we'll end up with a network along the following lines. Note that the dropout neurons, i.e., the neurons which have been temporarily deleted, are still ghosted in:

In particular, suppose we have a training input x and its corresponding desired output y. Ordinarily we would feed x forward through the (randomly initialized) network and then backpropagate to obtain each weight's contribution to the gradient, that is, use the chain rule to get the partial derivatives of the error with respect to the weights. With dropout the procedure is quite different: at the start we randomly (and temporarily) delete half of the hidden neurons, leaving the input and output layers untouched. After applying dropout to the network we end up with something like the figure below; the neurons drawn with dashed lines are the ones that have been temporarily deleted.
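To make the mechanics concrete, here is a minimal NumPy sketch of "temporarily deleting" half the hidden neurons with a random 0/1 mask. This is not code from the text: the 784-30-10 layer sizes, the sigmoid activation, and all variable names are illustrative assumptions borrowed from the book's earlier MNIST examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 784-30-10 network (assumed sizes, as in the MNIST examples).
n_in, n_hid, n_out = 784, 30, 10
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
b1 = np.zeros((n_hid, 1))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))
b2 = np.zeros((n_out, 1))

# Randomly (and temporarily) "delete" half of the hidden neurons:
# a column of 0s and 1s, where 0 marks a dropped-out neuron.
mask = np.zeros((n_hid, 1))
mask[rng.choice(n_hid, size=n_hid // 2, replace=False)] = 1.0

# Forward pass of one input x through the thinned network; the input and
# output layers are untouched, and dropped hidden neurons contribute nothing.
x = rng.random((n_in, 1))
a1 = sigmoid(W1 @ x + b1) * mask
a2 = sigmoid(W2 @ a1 + b2)
```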

We forward-propagate the input x through the modified network, and then backpropagate the result, also through the modified network. After doing this over a mini-batch of examples, we update the appropriate weights and biases. We then repeat the process, first restoring the dropout neurons, then choosing a new random subset of hidden neurons to delete, estimating the gradient for a different mini-batch, and updating the weights and biases in the network.

We forward-propagate the input x through the modified network, and then backpropagate the result, also through the modified network. After doing this over a mini-batch of examples, we update the appropriate weights and biases. We then repeat the process: first restoring the dropped-out neurons, then choosing a new random subset of hidden neurons to delete, estimating the gradient for a different mini-batch, and finally updating the network's weights and biases.
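Continuing the sketch above (it reuses W1, b1, W2, b2, n_hid, rng, and sigmoid), the loop below shows one way this procedure might look: for every mini-batch a fresh random half of the hidden neurons is dropped, the forward and backward passes go through that thinned network, and the weights and biases are then updated. The quadratic cost and the toy random data standing in for MNIST are simplifying assumptions, not part of the original text.

```python
eta = 0.5                                    # illustrative learning rate

def dropout_sgd_step(X, Y):
    """One mini-batch step: X is (784, m), Y is (10, m) one-hot columns.

    A fresh random half of the hidden neurons is dropped for this batch only.
    """
    global W1, b1, W2, b2
    m = X.shape[1]
    mask = np.zeros((n_hid, 1))
    mask[rng.choice(n_hid, size=n_hid // 2, replace=False)] = 1.0

    # Forward pass through the thinned network.
    a1 = sigmoid(W1 @ X + b1) * mask
    a2 = sigmoid(W2 @ a1 + b2)

    # Backpropagate through the same thinned network (quadratic cost for brevity).
    delta2 = (a2 - Y) * a2 * (1 - a2)
    delta1 = (W2.T @ delta2) * a1 * (1 - a1) * mask   # no gradient reaches dropped neurons

    # Update weights and biases using this mini-batch's gradient estimate.
    W2 -= (eta / m) * delta2 @ a1.T
    b2 -= (eta / m) * delta2.sum(axis=1, keepdims=True)
    W1 -= (eta / m) * delta1 @ X.T
    b1 -= (eta / m) * delta1.sum(axis=1, keepdims=True)

# Repeat over many mini-batches, re-sampling the dropped neurons each time.
for _ in range(100):
    X = rng.random((n_in, 10))                         # toy stand-in for MNIST data
    Y = np.eye(n_out)[:, rng.integers(0, n_out, 10)]
    dropout_sgd_step(X, Y)
```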

By repeating this process over and over, our network will learn a set of weights and biases. Of course, those weights and biases will have been learnt under conditions in which half the hidden neurons were dropped out. When we actually run the full network that means that twice as many hidden neurons will be active. To compensate for that, we halve the weights outgoing from the hidden neurons.

By repeating this process over and over, our network learns a set of weights and biases. Of course, those parameters were learned under conditions in which half of the hidden neurons were dropped out (temporarily deleted). When we actually run the full network, twice as many hidden neurons are active. To compensate for that, we halve the weights going out of the hidden neurons.
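Continuing the same sketch, a test-time pass might compensate exactly as the text describes, by halving the weights that leave the hidden layer; this is still an illustrative sketch under the assumptions above, not the paper's exact procedure.

```python
def predict_full_network(X):
    """Run the full, un-thinned network at test time.

    During training only half the hidden neurons were active at any one time,
    so with all of them active the hidden layer feeds roughly twice as much
    signal into the output layer; halving the outgoing weights W2 compensates.
    """
    a1 = sigmoid(W1 @ X + b1)            # no mask: every hidden neuron participates
    a2 = sigmoid((0.5 * W2) @ a1 + b2)   # halve the weights out of the hidden layer
    return np.argmax(a2, axis=0)         # predicted digit for each column of X
```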

This dropout procedure may seem strange and ad hoc. Why would we expect it to help with regularization? To explain what's going on, I'd like you to briefly stop thinking about dropout, and instead imagine training neural networks in the standard way (no dropout). In particular, imagine we train several different neural networks, all using the same training data. Of course, the networks may not start out identical, and as a result after training they may sometimes give different results. When that happens we could use some kind of averaging or voting scheme to decide which output to accept. For instance, if we have trained five networks, and three of them are classifying a digit as a "3", then it probably really is a "3". The other two networks are probably just making a mistake. This kind of averaging scheme is often found to be a powerful (though expensive) way of reducing overfitting. The reason is that the different networks may overfit in different ways, and averaging may help eliminate that kind of overfitting.

The dropout procedure may seem strange and ad hoc. Why would we expect it to help with regularization? To explain what is going on, let us briefly stop thinking about dropout and instead imagine training neural networks in the standard way. In particular, suppose we train several different neural networks, all on the same training data. Because of random parameter initialization (or other reasons), the trained networks may give different results. When that happens we can average the networks' outputs, or use some voting rule to decide which output to accept. For example, if we have trained five networks and three of them classify a digit as a "3", then it probably really is a "3"; the other two networks are probably just making a mistake. This kind of averaging scheme is often found to be a powerful way to reduce overfitting (although training several networks is expensive). The reason is that the different networks may overfit in different ways, and averaging can help eliminate that kind of overfitting.
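As a tiny illustration of the voting idea (the five predictions below are purely hypothetical, not from the text), majority voting over independently trained classifiers might look like this:

```python
import numpy as np

# Hypothetical digit predictions from five independently trained networks
# for the same input image.
predictions = np.array([3, 3, 3, 8, 5])

# Majority vote: the most common prediction wins.
values, counts = np.unique(predictions, return_counts=True)
print(values[np.argmax(counts)])   # -> 3
```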

What's this got to do with dropout? Heuristically, when we dropout different sets of neurons, it's rather like we're training different neural networks. And so the dropout procedure is like averaging the effects of a very large number of different networks. The different networks will overfit in different ways, and so, hopefully, the net effect of dropout will be to reduce overfitting.

What does this have to do with dropout? Heuristically, dropping out different sets of neurons is rather like training several different neural networks. So the dropout procedure is like averaging the effects of a very large number of different networks. The different networks overfit in different ways, and so, hopefully, the net effect of dropout is to reduce overfitting.

A related heuristic explanation for dropout is given in one of the earliest papers to use the technique (ImageNet Classification with Deep Convolutional Neural Networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, 2012): "This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons." In other words, if we think of our network as a model which is making predictions, then we can think of dropout as a way of making sure that the model is robust to the loss of any individual piece of evidence. In this, it's somewhat similar to L1 and L2 regularization, which tend to reduce weights, and thus make the network more robust to losing any individual connection in the network.

A related heuristic explanation is given in one of the earliest papers to use the technique (ImageNet Classification with Deep Convolutional Neural Networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, 2012): this technique reduces complex co-adaptations between neurons, because a neuron cannot rely on particular other neurons being present; it is therefore forced to learn robust features that are useful together with many different random subsets of the other neurons. In other words, if we think of the network as a model making predictions, dropout is a way of ensuring the model stays robust when any individual piece of evidence is lost. In that respect its effect resembles L1 and L2 regularization, which tend to reduce the weights and thus make the network more robust to losing any individual connection.

Of course, the true measure of dropout is that it has been very successful in improving the performance of neural networks. The original paper introducing the technique (Improving neural networks by preventing co-adaptation of feature detectors, by Geoffrey Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, 2012; note that the paper discusses a number of subtleties that I have glossed over in this brief introduction) applied it to many different tasks. For us, it's of particular interest that they applied dropout to MNIST digit classification, using a vanilla feedforward neural network along lines similar to those we've been considering. The paper noted that the best result anyone had achieved up to that point using such an architecture was 98.4 percent classification accuracy on the test set. They improved that to 98.7 percent accuracy using a combination of dropout and a modified form of L2 regularization. Similarly impressive results have been obtained for many other tasks, including problems in image and speech recognition, and natural language processing. Dropout has been especially useful in training large, deep networks, where the problem of overfitting is often acute.

Of course, what really makes dropout a powerful tool is that it has been very successful at improving the performance of neural networks. The original paper introducing the technique (Improving neural networks by preventing co-adaptation of feature detectors, by Hinton, Srivastava, Krizhevsky, Sutskever, and Salakhutdinov, 2012) reports results on many different tasks. Of particular interest to us is the improvement it gave on MNIST digit classification with a plain feedforward network: the best result achieved up to that point with such an architecture was 98.4 percent accuracy, and combining dropout with a modified form of L2 regularization raised it to 98.7 percent. Similarly striking results have been obtained on other tasks, including image recognition, speech recognition, and natural language processing. Overfitting is often acute in large, deep networks, and dropout has been especially useful for training them.
