Overview

In the previous sections, you constructed a 3-layer neural network comprising an input, a hidden, and an output layer. While fairly effective for MNIST, this 3-layer model is still a shallow network; by this, we mean that the features (the hidden-layer activations a^(2)) are computed using only "one layer" of computation (the hidden layer).

In this section, we begin to discuss deep neural networks, meaning ones in which we have multiple hidden layers; this will allow us to compute much more complex features of the input. Because each hidden layer computes a non-linear transformation of the previous layer, a deep network can have significantly greater representational power (i.e., can learn significantly more complex functions) than a shallow one.

Note that when training a deep network, it is important to use a non-linear activation function in each hidden layer. This is because multiple layers of linear functions would together compute only a linear function of the input (i.e., composing multiple linear functions yields just another linear function), and would thus be no more expressive than a single layer of hidden units.
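To see this concretely, here is a small sketch (not part of the original text) showing numerically that two stacked linear layers collapse into a single one; the sizes and random values are arbitrary placeholders.

```python
import numpy as np

# Two stacked *linear* layers are exactly equivalent to a single linear layer,
# so without a non-linearity, depth adds no representational power.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)

two_layers = W2 @ (W1 @ x + b1) + b2          # a "deep" but purely linear network
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)    # an equivalent single linear layer

print(np.allclose(two_layers, one_layer))     # True
```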

Advantages of deep networks

The primary advantage is that a deep network can compactly represent a significantly larger set of functions than a shallow one. Formally, one can show that there are functions which a k-layer network can represent compactly (with a number of hidden units that is polynomial in the number of inputs), but which a (k − 1)-layer network cannot represent unless it has an exponentially large number of hidden units.

By using a deep network, in the case of images, one can also start to learn part-whole decompositions. For example, the first layer might learn to group together pixels in an image in order to detect edges (as seen in the earlier exercises). The second layer might then group together edges to detect longer contours, or perhaps detect simple "parts of objects." An even deeper layer might then group together these contours or detect even more complex features.

Finally, cortical computation in the brain also proceeds through multiple layers of processing. For example, visual images are processed in multiple stages: first by cortical area "V1", then by cortical area "V2" (a different part of the brain), and so on.

Difficulty of training deep architectures

The main learning algorithm that researchers were using was to randomly initialize the weights of a deep network, and then train it on a labeled training set with a supervised learning objective, for example by applying gradient descent to try to drive down the training error. However, this usually did not work well. There were several reasons for this.

Availability of data

With a purely supervised approach, one relies only on labeled data for training, and labeled data is often expensive and scarce. Because a deep network has a large number of parameters, fitting it to too little labeled data easily results in overfitting.

Local optima

Training a deep network involves a highly non-convex optimization problem; when all the weights are randomly initialized, gradient descent frequently gets stuck in poor local optima.

Diffusion of gradients

When using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers (the first few layers of the deep network) is very small. Thus, when using gradient descent, the weights of the earlier layers change slowly, and the earlier layers fail to learn much. This problem is often called the "diffusion of gradients."
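The sketch below (not part of the original tutorial) illustrates this effect: an error signal is propagated backwards through a randomly initialized stack of sigmoid layers, and because each layer multiplies it by the sigmoid derivative (at most 0.25), its magnitude shrinks roughly geometrically toward the input. The depth, layer width, and initialization are arbitrary placeholders.

```python
import numpy as np

# Illustration of the diffusion of gradients in a deep stack of sigmoid layers.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
num_layers, layer_size = 10, 50

# Random weights and a random input, just to obtain typical activations.
weights = [rng.normal(0, 1.0 / np.sqrt(layer_size), (layer_size, layer_size))
           for _ in range(num_layers)]

# Forward pass, remembering each layer's activations.
activations = [rng.normal(size=layer_size)]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: start from an error signal of 1 at the output and propagate
# it toward the input, printing its typical magnitude at every layer.
delta = np.ones(layer_size)
for layer in range(num_layers, 0, -1):
    sigma_prime = activations[layer] * (1 - activations[layer])  # sigmoid derivative
    delta = weights[layer - 1].T @ (delta * sigma_prime)
    print(f"layer {layer - 1}: mean |gradient| = {np.mean(np.abs(delta)):.2e}")
```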

A closely related problem to the diffusion of gradients is that if the last few layers in a neural network have a large enough number of neurons, it may be possible for them to model the labeled data alone without the help of the earlier layers. Hence, training the entire network at once with all the layers randomly initialized ends up giving similar performance to training a shallow network (the last few layers) on corrupted input (the result of the processing done by the earlier layers).

Greedy layer-wise training

The main idea is to train the layers of the network one at a time, so that we first train a network with 1 hidden layer, and only after that is done, train a network with 2 hidden layers, and so on. At each step, we take the old network with k − 1 hidden layers, and add an additional k-th hidden layer (that takes as input the previous hidden layer k − 1 that we had just trained). Training at each step can be supervised (say, with classification error as the objective function), but more frequently it is unsupervised (as in an autoencoder; details to be provided later). The weights from training the layers individually are then used to initialize the weights in the final/overall deep network, and only then is the entire architecture "fine-tuned" (i.e., trained together to optimize the labeled training set error).
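The following is a minimal sketch of this training schedule, written from the description above (it is not the UFLDL exercise code): each hidden layer is pretrained as a simple sigmoid autoencoder on the previous layer's features, and the whole stack plus a softmax output layer is then fine-tuned on the labels. All layer sizes, learning rates, and the synthetic data are placeholder assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, hidden_size, epochs=200, lr=0.5, seed=0):
    """Train one autoencoder on X (unsupervised); return its encoder (W, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1, b1 = rng.normal(0, 0.1, (d, hidden_size)), np.zeros(hidden_size)
    W2, b2 = rng.normal(0, 0.1, (hidden_size, d)), np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                  # encode
        Xhat = sigmoid(H @ W2 + b2)               # decode (reconstruct X)
        dZ2 = (Xhat - X) * Xhat * (1 - Xhat) / n  # squared-error gradient
        dZ1 = (dZ2 @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dZ2
        b2 -= lr * dZ2.sum(axis=0)
        W1 -= lr * X.T @ dZ1
        b1 -= lr * dZ1.sum(axis=0)
    return W1, b1

def fine_tune(X, y, encoders, num_classes, epochs=200, lr=0.5, seed=1):
    """Supervised fine-tuning of the pretrained stack plus a softmax layer."""
    rng = np.random.default_rng(seed)
    Wout = rng.normal(0, 0.1, (encoders[-1][0].shape[1], num_classes))
    bout = np.zeros(num_classes)
    Y = np.eye(num_classes)[y]                    # one-hot labels
    for _ in range(epochs):
        acts = [X]                                # forward through all layers
        for W, b in encoders:
            acts.append(sigmoid(acts[-1] @ W + b))
        logits = acts[-1] @ Wout + bout
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)         # softmax probabilities
        delta = (P - Y) / len(X)                  # cross-entropy gradient
        grad_Wout, grad_bout = acts[-1].T @ delta, delta.sum(axis=0)
        delta = delta @ Wout.T                    # backprop into the stack
        Wout -= lr * grad_Wout
        bout -= lr * grad_bout
        for i in range(len(encoders) - 1, -1, -1):
            W, b = encoders[i]
            dZ = delta * acts[i + 1] * (1 - acts[i + 1])
            delta = dZ @ W.T
            encoders[i] = (W - lr * acts[i].T @ dZ, b - lr * dZ.sum(axis=0))
    return encoders, Wout, bout

# Greedy schedule: train hidden layer 1 on the raw input, then hidden layer 2
# on layer 1's features, and so on; only afterwards fine-tune everything.
rng = np.random.default_rng(42)
X = rng.random((500, 20))                         # placeholder unlabeled data
y = (X.sum(axis=1) > 10).astype(int)              # placeholder labels
encoders, features = [], X
for hidden_size in (16, 8):                       # one pass per new hidden layer
    W, b = pretrain_layer(features, hidden_size)
    encoders.append((W, b))
    features = sigmoid(features @ W + b)          # input to the next layer
encoders, Wout, bout = fine_tune(X, y, encoders, num_classes=2)
```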

Availability of data

While labeled data can be expensive to obtain, unlabeled data is cheap and plentiful. The promise of self-taught learning is that by exploiting the massive amount of unlabeled data, we can learn much better models. By using unlabeled data to learn a good initial value for the weights in all the layers (except for the final classification layer that maps to the outputs/predictions), our algorithm is able to learn and discover patterns from vastly more data than purely supervised approaches can. This often results in much better classifiers being learned.

Better local optima

After having trained the network on the unlabeled data, the weights are now starting at a better location in parameter space than if they had been randomly initialized. We can then further fine-tune the weights starting from this location. Empirically, it turns out that gradient descent from this location is much more likely to lead to a good local minimum, because the unlabeled data has already provided a significant amount of "prior" information about what patterns there are in the input data.
