WHAT I READ FOR DEEP-LEARNING
Today I spent some time on two new papers: one proposing a new way of training very deep neural networks (Highway Networks), and one proposing a new activation function for auto-encoders (ZERO-BIAS AUTOENCODERS AND THE BENEFITS OF CO-ADAPTING FEATURES) that avoids the use of regularization methods such as Contraction or Denoising.
Let's start with the first one. Highway Networks proposes a new activation type similar to LSTM networks, and the authors claim that this peculiar activation is robust to any choice of initialization scheme and to the learning problems that occur in very deep NNs. It is also interesting to see that they trained models with more than 100 layers. The basic intuition here is to learn a gating function, attached to a real activation function, that decides whether to pass the activation or the input itself. Here is the formulation:
$$y = H(x, W_H) \cdot T(x, W_T) + x \cdot \big(1 - T(x, W_T)\big)$$
T(x, W_T) is the gating function and H(x, W_H) is the real activation. In the paper they use a Sigmoid activation for gating and a Rectifier for the normal activation. I also implemented it with Lasagne and tried to replicate the results (I aim to release the code later). It is really impressive to see its ability to learn with 50 layers (the most my PC can handle).
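Below is a minimal NumPy sketch of a single highway layer's forward pass, just to make the formulation concrete; the variable names are mine, not the paper's, and it assumes input and output widths are equal so the carry path works.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def highway_layer(x, W_H, b_H, W_T, b_T):
    """y = H(x, W_H) * T(x, W_T) + x * (1 - T(x, W_T))"""
    H = relu(x.dot(W_H) + b_H)       # the "real" activation (rectifier)
    T = sigmoid(x.dot(W_T) + b_T)    # transform gate in (0, 1)
    return H * T + x * (1.0 - T)     # gated mix of activation and input

# toy usage: 4 examples, 8 units
rng = np.random.RandomState(0)
d = 8
x = rng.randn(4, d)
W_H = rng.randn(d, d) * 0.1
W_T = rng.randn(d, d) * 0.1
y = highway_layer(x, W_H, np.zeros(d), W_T, np.full(d, -2.0))
```

The gate bias starts at a negative value so that T(x) is close to zero and each layer initially just carries its input through; the paper suggests this kind of negative initialization to bias the network towards carry behavior early in training.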
The other paper, ZERO-BIAS AUTOENCODERS AND THE BENEFITS OF CO-ADAPTING FEATURES, suggests the use of unbiased rectifier units for the inference stage of AEs. You can train your model with biased rectifier units, but at inference (test) time you should extract features by ignoring the bias term. They show that doing so gives better recognition results on the CIFAR dataset. They also devise a new activation function with a similar intuition to Highway Networks: again, there is a gating unit that thresholds the normal activation function.
$$h = \mathbf{1}(z > \theta) \cdot z, \qquad z = Wx$$
$$\hat{x} = W^\top \big( \mathbf{1}(z^2 > \theta) \cdot z \big)$$
The first equation is the threshold function with a predefined threshold θ (they use 1 in their experiments). The second equation shows the reconstruction of the proposed model. Note that in this equation they threshold on the square of a linear activation; they call this model TLin, and they also use the plain linear function in the indicator, which is called TRec. What this activation does is suppress small activations, so the model is implicitly regularized without any additional regularizer. This is actually good for learning an overcomplete representation of the given data.
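To make the two activations concrete, here is a small NumPy sketch of TRec, TLin, and the reconstruction as I read them; the function names and the tied-weight assumption are mine, not from the paper's code.

```python
import numpy as np

def trec(z, theta=1.0):
    # TRec: keep the linear response only where it exceeds the threshold
    return z * (z > theta)

def tlin(z, theta=1.0):
    # TLin: threshold the *squared* response, so strongly negative
    # activations survive as well as strongly positive ones
    return z * (z ** 2 > theta)

def zae_reconstruct(x, W, theta=1.0, activation=tlin):
    # zero-bias autoencoder: h = activation(W x) with no bias term,
    # reconstruction x_hat = W^T h (tied weights assumed here)
    h = activation(W.dot(x), theta)
    return W.T.dot(h)

# toy usage: 16 features over an 8-dimensional input
rng = np.random.RandomState(0)
W = rng.randn(16, 8) * 0.5
x = rng.randn(8)
x_hat = zae_reconstruct(x, W)
```

The thresholding itself is what zeroes out small activations, and that is the implicit regularization the authors rely on instead of a contraction or denoising penalty.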
For more than this silly intro, please refer to the papers, and warn me about any mistakes.
These two papers show an emerging trend in the Deep Learning community: the use of complex activation functions. We can call it controlling each unit's behavior in a smart way, instead of letting units fire naively. My intuition also agrees with this idea. I believe we need even more sophisticated smart units in our deep models, like Spike and Slab networks.