Overview

In the previous sections, you constructed a 3-layer neural network comprising an input layer, a hidden layer, and an output layer. While fairly effective for MNIST, this 3-layer model is a shallow network; by this, we mean that the features (the hidden-layer activations a(2)) are computed using only "one layer" of computation (the hidden layer).

In this section, we begin to discuss deep neural networks, meaning ones in which we have multiple hidden layers; this will allow us to compute much more complex features of the input. Because each hidden layer computes a non-linear transformation of the previous layer, a deep network can have significantly greater representational power (i.e., can learn significantly more complex functions) than a shallow one.
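Concretely, using the same notation as above (with a^{(l)} denoting the activations of layer l and a^{(1)} = x the input), each layer's activations are computed from those of the previous layer:

    a^{(l+1)} = f( W^{(l)} a^{(l)} + b^{(l)} )

where W^{(l)} and b^{(l)} are the weights and biases connecting layer l to layer l+1, and f is the activation function (e.g., the sigmoid) applied element-wise. A deep network simply applies this step more times before producing its output.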

Note that when training a deep network, it is important to use a non-linear activation function in each hidden layer. This is because a composition of linear functions is itself just another linear function of the input, so a stack of linear layers would be no more expressive than a single layer of hidden units.
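To see why, consider a sketch with two layers and the identity (i.e., no) activation function:

    a^{(3)} = W^{(2)} ( W^{(1)} x + b^{(1)} ) + b^{(2)} = ( W^{(2)} W^{(1)} ) x + ( W^{(2)} b^{(1)} + b^{(2)} )

which is a single affine map of x; the two "layers" together are exactly as expressive as one.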

Advantages of deep networks

The primary advantage is that a deep network can compactly represent a significantly larger set of functions than a shallow one. Formally, one can show that there are functions which a k-layer network can represent compactly (with a number of hidden units that is polynomial in the number of inputs), but that a (k − 1)-layer network cannot represent unless it has an exponentially large number of hidden units.

By using a deep network, in the case of images, one can also start to learn part-whole decompositions. For example, the first layer might learn to group together pixels in an image in order to detect edges (as seen in the earlier exercises). The second layer might then group together edges to detect longer contours, or perhaps detect simple "parts of objects." An even deeper layer might then group together these contours or detect even more complex features.

Finally, cortical computations (in the brain) also have multiple layers of processing. For example, visual images are processed in multiple stages by the brain, by cortical area "V1", followed by cortical area "V2" (a different part of the brain), and so on.

Difficulty of training deep architectures

The main learning algorithm that researchers used was to randomly initialize the weights of a deep network, and then train it on a labeled training set using a supervised learning objective, for example by applying gradient descent to try to drive down the training error. However, this usually did not work well, for several reasons:

Availability of data
Local optima
Diffusion of gradients

When using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers (the first few layers of the deep network) is very small. Thus, when using gradient descent, the weights of the earlier layers change slowly, and the earlier layers fail to learn much. This problem is often called the "diffusion of gradients."
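A minimal numpy sketch of this effect (illustrative only, not the exercise code; the depth, width, weight scale, and error signal are made up): with sigmoid hidden units and small random weights, the back-propagated error signal shrinks layer by layer.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_layers, width = 10, 50                       # illustrative sizes
    Ws = [0.1 * rng.standard_normal((width, width)) for _ in range(n_layers)]

    # Forward pass: a[0] is the input, a[l] the activations of layer l.
    a = [rng.standard_normal(width)]
    for W in Ws:
        a.append(sigmoid(W @ a[-1]))

    # Backward pass: start from an arbitrary error signal at the top layer and apply
    # the usual recurrence delta_l = (W_l^T delta_{l+1}) * f'(z_l) for hidden layers.
    delta = np.ones(width)
    for l in range(n_layers - 1, 0, -1):
        delta = (Ws[l].T @ delta) * a[l] * (1 - a[l])   # sigmoid' = a * (1 - a)
        print(f"layer {l}: ||delta|| = {np.linalg.norm(delta):.3e}")

Running this prints the norm of the error signal at each layer; it decays roughly geometrically, which is why the earliest layers receive almost no learning signal.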

A closely related problem to the diffusion of gradients is that if the last few layers in a neural network have a large enough number of neurons, it may be possible for them to model the labeled data alone without the help of the earlier layers. Hence, training the entire network at once with all the layers randomly initialized ends up giving similar performance to training a shallow network (the last few layers) on corrupted input (the result of the processing done by the earlier layers).

Greedy layer-wise training

The main idea is to train the layers of the network one at a time, so that we first train a network with 1 hidden layer, and only after that is done, train a network with 2 hidden layers, and so on. At each step, we take the old network with k − 1 hidden layers, and add an additional k-th hidden layer (that takes as input the previous hidden layer k − 1 that we had just trained). Training can be supervised (say, with classification error as the objective function at each step), but more frequently it is unsupervised (as in an autoencoder; details to be provided later). The weights from training the layers individually are then used to initialize the weights in the final/overall deep network, and only then is the entire architecture "fine-tuned" (i.e., trained together to optimize the labeled training set error).
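A highly simplified numpy sketch of the greedy layer-wise procedure (assumptions: a plain batch-gradient-descent autoencoder with a linear decoder, made-up layer sizes, learning rate, and data; this is a sketch of the idea, not the UFLDL exercise code):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_autoencoder(X, n_hidden, lr=0.5, n_steps=200):
        """Train a one-hidden-layer autoencoder on X (rows = examples) with plain
        batch gradient descent; return the learned encoder weights and bias."""
        m, n_in = X.shape
        W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
        b1 = np.zeros(n_hidden)
        W2 = 0.1 * rng.standard_normal((n_hidden, n_in))
        b2 = np.zeros(n_in)
        for _ in range(n_steps):
            H = sigmoid(X @ W1 + b1)            # encode
            Xhat = H @ W2 + b2                  # decode (linear output units)
            err = (Xhat - X) / m                # gradient of 0.5 * mean squared error
            dW2, db2 = H.T @ err, err.sum(axis=0)
            dH = (err @ W2.T) * H * (1 - H)     # back-propagate through the sigmoid
            dW1, db1 = X.T @ dH, dH.sum(axis=0)
            W1 -= lr * dW1
            b1 -= lr * db1
            W2 -= lr * dW2
            b2 -= lr * db2
        return W1, b1

    # Greedy layer-wise pretraining: hidden layer k is trained as an autoencoder
    # on the activations produced by the already-trained layers 1..k-1.
    X = rng.random((500, 64))                   # stand-in for unlabeled inputs
    hidden_sizes = [32, 16]                     # illustrative layer sizes
    pretrained, A = [], X
    for n_hidden in hidden_sizes:
        W, b = train_autoencoder(A, n_hidden)
        pretrained.append((W, b))
        A = sigmoid(A @ W + b)                  # becomes the input to the next layer

    # `pretrained` now holds initial weights for the deep network's hidden layers;
    # an output layer would be added on top and the whole stack fine-tuned with
    # backpropagation on the labeled training set.

The key design point is that each new layer only ever sees the (fixed) output of the layers below it during pretraining; all layers are adjusted jointly only in the final fine-tuning stage.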

Availability of data

While labeled data can be expensive to obtain, unlabeled data is cheap and plentiful. The promise of self-taught learning is that by exploiting the massive amount of unlabeled data, we can learn much better models. By using unlabeled data to learn a good initial value for the weights in all the layers (except for the final classification layer that maps to the outputs/predictions), our algorithm can learn and discover patterns from far more data than purely supervised approaches can use. This often results in much better classifiers being learned.

Better local optima

After having trained the network on the unlabeled data, the weights are now starting at a better location in parameter space than if they had been randomly initialized. We can then further fine-tune the weights starting from this location. Empirically, it turns out that gradient descent from this location is much more likely to lead to a good local minimum, because the unlabeled data has already provided a significant amount of "prior" information about what patterns there are in the input data.
