NEGOUT: SUBSTITUTE FOR MAXOUT UNITS

Maxout [1] units are a well-known and frequently used tool in Deep Neural Networks. For those unfamiliar with them, here is a basic explanation: a Maxout unit is a set of internal activation units that compete with each other for each instance; the winner's activation is propagated as the output while the losers are kept silent. At the backpropagation phase, this means we update only the winner unit. It also means that, implicitly, we always prefer to back-propagate the gradient signal through the strongest path. This is an important property of Maxout units, especially for very deep models, which are prone to gradient instability.
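As a minimal sketch of this mechanism (my own NumPy illustration, not the implementation from the paper), a Maxout unit evaluates several competing linear filters, propagates only the maximum, and routes the gradient only through the winner:

```python
import numpy as np

def maxout_forward(x, W, b):
    """Forward pass of a single Maxout unit.

    x: input vector, shape (d,)
    W: weights of k competing linear filters, shape (k, d)
    b: filter biases, shape (k,)
    Returns the winning activation and the winner's index.
    """
    z = W @ x + b                 # activations of the k competing filters
    winner = int(np.argmax(z))    # the strongest filter wins
    return z[winner], winner

def maxout_backward(grad_out, x, winner, k):
    """Backward pass: only the winning filter receives gradient."""
    d = x.shape[0]
    dW = np.zeros((k, d))
    db = np.zeros(k)
    dW[winner] = grad_out * x     # gradient flows through the strongest path only
    db[winner] = grad_out
    return dW, db
```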

Although Maxout units have very good properties such as these (please refer to the paper for more details), I remain sceptical about their ability to encode the underlying information and pass it to the next layer. Here is a very simple example. Suppose we have two competing functions (filters) in a Maxout unit. One of these functions is receptive to edge structures, whereas the other is receptive to corners. For one instance, the first filter might be the winner with a value of, say, ~3, which means the Maxout output is also ~3. For another instance, the other function might be the winner with approximately the same value, ~3. If we assume that each NN layer is a classifier that takes the previous layer's output as a feature vector (not an unreasonable assumption, I think), then we are basically giving the same value, in the feature dimension corresponding to our Maxout unit, for two different detections. We cannot expect the next layer to be able to discern this signal. (A toy numeric check after the figure below makes this concrete.)

Edge detector is the winner.

Corner detector is the winner, but the output is the same.
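To make the aliasing concrete, here is a toy numeric check (the filter activations are hypothetical, chosen only for illustration): two instances with different winners produce exactly the same Maxout output, so the next layer sees identical feature values.

```python
# Toy activations of the two competing filters (hypothetical values).
edge_wins   = max(3.0, 1.2)   # instance 1: the edge filter wins   -> 3.0
corner_wins = max(0.8, 3.0)   # instance 2: the corner filter wins -> 3.0

# Different detections, identical output: the next layer cannot tell them apart.
assert edge_wins == corner_wins == 3.0
```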

One can argue that we should evaluate a Maxout unit as a whole, as something reminiscent of an OR function on top of multiple filters. This is a valid argument that I cannot refute directly, but the problem I indicated above still stands. Besides, why would we waste our expensive NN parameters if we could come up with a better encoding scheme for Maxout units?

Here is one alternative approach for a better encoding of competing functions, which I call NegOut. Let's assume a fixed ordering of the two competing functions as 1st and 2nd. If the winner is the 1st function, NegOut outputs the 1st function's value; otherwise, it outputs the 2nd function's value with its sign negated. NegOut rests on two assumptions: first, the competing functions are always non-negative (like ReLU functions); second, there are exactly 2 competing functions.
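A minimal sketch of the NegOut rule as described above (my own illustration, assuming exactly two non-negative competing activations, e.g. ReLU outputs):

```python
def negout(h1, h2):
    """NegOut over two non-negative competing activations: if the 1st
    function wins, output its value; otherwise output the 2nd, negated."""
    assert h1 >= 0 and h2 >= 0, "competing functions must be non-negative"
    return h1 if h1 >= h2 else -h2

# The two cases from the toy example above now map to different outputs:
print(negout(3.0, 1.2))   #  3.0 -> the 1st (edge) function wins
print(negout(0.8, 3.0))   # -3.0 -> the 2nd (corner) function wins, sign flipped
```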

NegOut activation with different winners.

If we consider the backpropagation signal, the only difference from a Maxout unit is that we negate the gradient signal for the 2nd competing unit when it is the winner.
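In code, the backward rule is just as simple; this sketch (again my own, under the same two-unit assumption) returns the gradients with respect to the two competing activations:

```python
def negout_backward(grad_out, h1, h2):
    """Gradient of NegOut w.r.t. the two competing activations. As in
    Maxout, only the winner receives gradient; the only change is the
    negated sign when the 2nd unit wins."""
    if h1 >= h2:
        return grad_out, 0.0    # 1st unit wins: plain pass-through
    return 0.0, -grad_out       # 2nd unit wins: negate the gradient
```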

As you can see from the figure, the inherent property here is that the unit outputs different values for different winning detectors: the sign captures which detector won (the structural difference), while the magnitude captures the strength of the winning activation.

I performed some experiments on CIFAR-10 and MNIST comparing a Maxout network with a NegOut network, using the exact same architectures described in the Maxout paper [1]. The table below summarizes the results I observed in these initial runs, without any fine-tuning or hyper-parameter optimization yet. More comparisons on larger datasets are still in progress.

Results on CIFAR-10 and MNIST, averaged over 5 different runs.

NegOut gives better results on CIFAR-10, although it is slightly worse on MNIST. Again, notice that no tuning has taken place for the NegOut network, whereas the Maxout network is optimized as described in the paper [1]. In addition, the NegOut network uses 2 competing units (as it is constrained to by its nature) in the last FC layer, compared to the Maxout network, which uses 5 competing units. My expectation is that the difference will grow on larger models and datasets, since representational power matters more for good results as we scale up.

Here I have tried to give a basic sketch of my recent work; it is by no means complete. Further observations and experiments are still running. I also need to include LWTA [2] in the comparison to be fairer and to cover a wider range of competing-unit schemes. Please feel free to share your thoughts. Any contribution is appreciated.

PS: Lately, I have devoted myself to analyzing the internal dynamics of Neural Networks with different architectures, layers, and activation functions. The aim is to look under the hood and analyze ideas that work well in Deep Neural Networks yet are justified only intuitively. I expect to share more of my findings on my blog.

[1] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, Y. Bengio. Maxout Networks. arXiv preprint arXiv:1302.4389.

[2] R. K. Srivastava, J. Masci, F. Gomez, J. Schmidhuber. Understanding Locally Competitive Networks. arXiv preprint arXiv:1410.1165. http://arxiv.org/abs/1410.1165
