【CV Knowledge Learning】Early stop, regularization, fine-tuning and some other tricks to know
Deep learning has quite a few tricks, and they are often surprisingly useful, so it is worth knowing some of them. The normalization and initialization covered in the previous post are tricks of this kind. Below I summarize some more tricks from my notes on Deep Learning Summer School, Montreal 2016. Corrections from passing experts are welcome~~~
Early stop
"Early stopping" is easy to understand: stop training the network just before the error on the validation set starts to rise. Come to think of it, splitting the data into train, validation and test sets is itself a kind of trick. One look at the figure below makes it clear:
(figure: training vs. validation error curves, with training stopped where the validation error starts to rise)
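A minimal sketch of the idea in Python (the patience counter and the two callbacks are my own assumptions, not something from the summer school talk):

```python
import copy
import numpy as np

def train_with_early_stopping(model, train_one_epoch, validation_error,
                              max_epochs=200, patience=10):
    """Stop training once the validation error has not improved for `patience` epochs."""
    best_error = np.inf
    best_model = None
    epochs_since_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)                 # one pass over the training set
        error = validation_error(model)        # error on the held-out validation set
        if error < best_error:
            best_error = error
            best_model = copy.deepcopy(model)  # keep the best model seen so far
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break                          # validation error has started to rise
    return best_model, best_error
```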
L1, L2 regularization
I first came across regularization while studying the regression models in PRML; I did not fully understand it back then, but now I can explain the principle in detail. (I will write the rest of this article in English.)
At the very outset, let's take an insight into L1 and L2 regularization. Assume the loss function of a linear regression model is the sum-of-squares error
$$E(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\{y(\mathbf{x}_n, \mathbf{w}) - t_n\}^2.$$
In fact, L1 and L2 regularization can be seen as introducing a prior distribution for the parameters: L1 regularization can be interpreted as a Laplace prior, and L2 regularization as a Gaussian prior. Such tricks are used to reduce the complexity of a model.
L2 regularization
As before, we make the hypothesis that the target variable $t$ is given by a deterministic function $y(\mathbf{x}, \mathbf{w})$ with additive Gaussian noise $\epsilon$, resulting in
$$t = y(\mathbf{x}, \mathbf{w}) + \epsilon,$$
where $\epsilon$ is a zero-mean Gaussian random variable with precision $\beta$. Thus we know
$$p(t \mid \mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}\bigl(t \mid y(\mathbf{x}, \mathbf{w}), \beta^{-1}\bigr),$$
and for a data set $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ with targets $\mathbf{t} = \{t_1, \dots, t_N\}$ its log-likelihood can be written as
$$\ln p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) = -\frac{\beta}{2}\sum_{n=1}^{N}\{y(\mathbf{x}_n, \mathbf{w}) - t_n\}^2 + \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi).$$
Finding the maximum-likelihood solution of this problem is therefore equivalent to minimizing the least-squares function
$$E_D(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\{y(\mathbf{x}_n, \mathbf{w}) - t_n\}^2.$$
If a zero-mean Gaussian prior with variance $\alpha^{-1}$ is introduced for $\mathbf{w}$,
$$p(\mathbf{w} \mid \alpha) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I}),$$
we get the posterior distribution for $\mathbf{w}$ (recall that posterior $\propto$ likelihood $\times$ prior):
$$p(\mathbf{w} \mid \mathbf{t}, \mathbf{X}, \alpha, \beta) \propto p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta)\, p(\mathbf{w} \mid \alpha).$$
Rewriting this in log form, maximizing the posterior is equivalent to minimizing
$$\frac{\beta}{2}\sum_{n=1}^{N}\{y(\mathbf{x}_n, \mathbf{w}) - t_n\}^2 + \frac{\alpha}{2}\mathbf{w}^T\mathbf{w},$$
that is, the sum-of-squares error plus an L2 penalty with coefficient $\lambda = \alpha/\beta$. This derivation process accounts for why L2 regularization can be interpreted as a Gaussian prior.
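To make the correspondence concrete, here is a small numpy sketch (the synthetic data and the values of α and β are my own choices) that computes the MAP estimate for a linear model; it is exactly the ridge-regression solution with λ = α/β:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 5
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
beta = 25.0                                   # noise precision
t = X @ w_true + rng.normal(scale=beta ** -0.5, size=N)

alpha = 1.0                                   # prior precision of w
lam = alpha / beta                            # the L2 coefficient from the derivation above

# MAP / ridge solution: (X^T X + lam * I) w = X^T t
w_map = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ t)
print(w_map)                                  # close to w_true, slightly shrunk towards zero
```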
L1 regularization
Generally, solving the L1-regularized problem is called the LASSO problem, which forces some coefficients to zero and thus creates a sparse model. You can find a reference here.

(figure: the L1 and L2 constraint regions intersecting the contours of the error function, drawn in two dimensions)

From the above figures (two-dimensional), we can see that for L1 regularization the intersection points tend to lie on the axes, which forces some of the variables to be exactly zero. Additionally, we can see that L2 regularization does not produce a sparse model (the right-hand figure).
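As a quick illustration of that geometric picture (scikit-learn and the regularization strengths below are my own choices, not part of the original post), L1 drives the coefficients of the uninformative features to exactly zero, while L2 only shrinks them:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -3.0, 1.5]                 # only the first 3 features matter
y = X @ w_true + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)            # L1 penalty
ridge = Ridge(alpha=0.1).fit(X, y)            # L2 penalty
print("L1 coefficients:", np.round(lasso.coef_, 3))   # several exact zeros
print("L2 coefficients:", np.round(ridge.coef_, 3))   # small but non-zero
```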
Fine tuning
Fine-tuning is easy to understand: it can be interpreted as a way to initialize the weight parameters before proceeding with supervised learning. Parameter values can be migrated from a well-trained network, for example one trained on ImageNet. However, sometimes you may need an autoencoder instead, an unsupervised learning method that seems less popular now. An autoencoder learns the latent structure of the data in its hidden layers; its input and output should be the same.
In fact, the supervised stage of fine-tuning just performs the regular forward and backward passes, as in an ordinary feed-forward network.
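A rough sketch of this kind of fine-tuning in PyTorch (the framework, the ResNet-18 backbone and the 10-class head are my own assumptions, not something the post specifies): load ImageNet weights, freeze them, and train only a new classifier head.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Start from weights pre-trained on ImageNet
# (the `pretrained` argument name may differ across torchvision versions).
model = models.resnet18(pretrained=True)

# Freeze the transferred layers so only the new head is learned at first.
for param in model.parameters():
    param.requires_grad = False

num_classes = 10                              # hypothetical number of target classes
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the parameters of the new head.
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```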
Then, I want to show you an image of an autoencoder; you can also get more detailed information through this website.
(figure: an autoencoder, whose output layer reconstructs its own input)
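A minimal sketch of such an autoencoder (the layer sizes and the MSE reconstruction loss are my own assumptions), just to show that the training target of the output is the input itself:

```python
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),           # encoder: compress to a latent code
    nn.Linear(128, 784), nn.Sigmoid(),        # decoder: reconstruct the input
)

x = torch.rand(32, 784)                       # a batch of flattened images
loss = nn.MSELoss()(autoencoder(x), x)        # input and output should be the same
```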
Data augmentation
If the training data is not large enough, it is necessary to expand it by flipping, translating or rotating images to avoid overfitting. However, according to the paper "Visualizing and Understanding Convolutional Networks", the network output is fairly stable to translation and scaling but not to rotation, except for images with rotational symmetry.
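A toy numpy sketch of such augmentation (flips, small shifts and 90-degree rotations only; real pipelines usually use a library such as torchvision transforms):

```python
import numpy as np

def augment(img, rng):
    """Return a randomly flipped / shifted / rotated copy of an H x W x C image."""
    out = img
    if rng.random() < 0.5:
        out = out[:, ::-1]                               # horizontal flip
    out = np.roll(out, rng.integers(-4, 5), axis=1)      # small horizontal translation
    out = np.rot90(out, k=rng.integers(0, 4))            # rotation by a multiple of 90 degrees
    return out

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32, 3))
augmented = np.stack([augment(img, rng) for img in images])
```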
Pooling
I have introduced why pooling works well in an earlier article on this blog; if interested, you may need some patience to look through the catalogue. In the paper "Visualizing and Understanding Convolutional Networks" you will see that pooling is sometimes a good way to obtain invariance; I want to write down my notes on this classical paper in the next article.
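For reference, a tiny numpy example of non-overlapping max pooling; small shifts of a feature inside a pooling window leave the output unchanged, which is where the invariance comes from:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max pooling over an H x W feature map (H, W divisible by k)."""
    H, W = x.shape
    return x.reshape(H // k, k, W // k, k).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(fmap))                       # [[ 5.  7.] [13. 15.]]
```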
Dropout and Drop layer
Dropout, proposed by Hinton et al., has become a prevalent technique to prevent overfitting. During training, dropout deactivates each node with a probability p that is fixed before training, which makes the network thinner. But why does it work? There are two main reasons: (1) for a batch of input data, the nodes of the thinner network can be trained better, since each surviving node sees more data on average; (2) dropout injects noise into the network, which also helps.
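A minimal numpy sketch of a dropout forward pass (the "inverted" variant shown here rescales the kept activations during training, a common modern formulation rather than the exact one from Hinton's paper):

```python
import numpy as np

def dropout_forward(a, p, rng, train=True):
    """Deactivate each node with probability p during training."""
    if not train or p == 0.0:
        return a                                   # nothing is dropped at test time
    mask = (rng.random(a.shape) >= p) / (1.0 - p)  # rescale so expectations are unchanged
    return a * mask

rng = np.random.default_rng(0)
activations = rng.normal(size=(4, 8))
print(dropout_forward(activations, p=0.5, rng=rng))
```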
Inspired by dropout and ResNet, Huang et al. proposed the drop-layer technique (stochastic depth), which makes use of the shortcut connections in ResNet. With some probability p it forces the information to flow directly through the shortcut and blocks the normal path, just as dropout does for individual nodes. This makes the network shorter during training and can be used to train very deep networks, even with more than 1000 layers.
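A rough sketch of a residual block with this layer-dropping behaviour (the linearly decaying survival-probability schedule from the paper is simplified to a single constant here):

```python
import numpy as np

def drop_layer_block(x, residual_fn, survival_prob, rng, train=True):
    """A ResNet block that is randomly skipped during training (stochastic depth)."""
    if train:
        if rng.random() < survival_prob:
            return x + residual_fn(x)       # block is active for this mini-batch
        return x                            # only the shortcut is used; the block is dropped
    # At test time the residual branch is kept but scaled by its survival probability.
    return x + survival_prob * residual_fn(x)
```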
Depth is important
The chart below illustrates why depth is important:
(chart: classification accuracy versus number of layers)
As the number of layers increases, the classification accuracy rises. When visualizing the features extracted at each layer with a deconvnet ("Visualizing and Understanding Convolutional Networks"), the deeper the layer, the more abstract the extracted features. And abstract features strengthen robustness, i.e. invariance to transformations.