Reposted from: http://handong1587.github.io/deep_learning/2015/10/09/training-dnn.html

Training Deep Neural Networks

Published: 09 Oct 2015 · Category: deep_learning

Tutorials

Popular Training Approaches of DNNs — A Quick Overview

https://medium.com/@asjad/popular-training-approaches-of-dnns-a-quick-overview-26ee37ad7e96#.pqyo039bb

Activation functions

Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)

Rectifier Nonlinearities Improve Neural Network Acoustic Models (leaky-ReLU, aka LReLU)

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)

Empirical Evaluation of Rectified Activations in Convolutional Network (ReLU/LReLU/PReLU/RReLU)

Deep Learning with S-shaped Rectified Linear Activation Units (SReLU)

Parametric Activation Pools greatly increase performance and consistency in ConvNets

Noisy Activation Functions
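
For orientation, the rectifier variants surveyed above differ mainly in how they treat negative inputs. A minimal NumPy sketch (the slope and range values are illustrative defaults, not the papers' tuned settings):

```python
import numpy as np

def relu(x):
    # Nair & Hinton (2010): max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    # Maas et al. (2013): small fixed slope a on the negative side
    return np.where(x > 0, x, a * x)

def prelu(x, a):
    # He et al. (2015): same form, but a is a learned parameter
    return np.where(x > 0, x, a * x)

def rrelu(x, lower=1/8, upper=1/3, training=True):
    # Xu et al. (2015): slope sampled uniformly per element at train
    # time, fixed to the mean of the range at test time
    if training:
        a = np.random.uniform(lower, upper, size=x.shape)
    else:
        a = (lower + upper) / 2
    return np.where(x > 0, x, a * x)
```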

Weights Initialization

An Explanation of Xavier Initialization

Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?

All you need is a good init

Data-dependent Initializations of Convolutional Neural Networks

What are good initial weights in a neural network?

RandomOut: Using a convolutional gradient norm to win The Filter Lottery
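
As a reference point for the initialization papers above, a minimal NumPy sketch of Xavier (Glorot) and He initialization (the layer sizes in the usage line are illustrative):

```python
import numpy as np

def xavier_init(fan_in, fan_out):
    # Glorot & Bengio: keep activation variance roughly constant
    # across layers; Var(W) = 2 / (fan_in + fan_out)
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return np.random.randn(fan_in, fan_out) * std

def he_init(fan_in, fan_out):
    # He et al. (the PReLU paper above): Var(W) = 2 / fan_in, derived
    # for ReLU units, which zero out roughly half of their inputs
    std = np.sqrt(2.0 / fan_in)
    return np.random.randn(fan_in, fan_out) * std

W1 = xavier_init(784, 256)  # e.g. first layer of an MNIST MLP
```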

Batch Normalization

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (ImageNet top-5 error: 4.82%)

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
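
A minimal sketch of the batch normalization forward pass at training time (at inference, running averages of the batch statistics would replace mu and var):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # Ioffe & Szegedy: normalize each feature over the mini-batch,
    # then restore representational power via learned gamma and beta
    mu = x.mean(axis=0)                   # per-feature batch mean
    var = x.var(axis=0)                   # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(64, 100)              # batch of 64, 100 features
out = batch_norm_forward(x, np.ones(100), np.zeros(100))
```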

Loss Function

The Loss Surfaces of Multilayer Networks

Optimization Methods

On Optimization Methods for Deep Learning

On the importance of initialization and momentum in deep learning

Invariant backpropagation: how to train a transformation-invariant neural network

A practical theory for designing very deep convolutional neural network

Stochastic Optimization Techniques

Alec Radford’s animations for optimization algorithms

http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html

Faster Asynchronous SGD (FASGD)

An overview of gradient descent optimization algorithms (★★★★★)

Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters

Writing fast asynchronous SGD/AdaGrad with RcppParallel
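
To make the update rules concrete, a minimal sketch of SGD with classical momentum, the form analyzed in the Sutskever et al. paper above (the toy objective and hyperparameters are illustrative):

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, mu=0.9):
    # classical momentum: v <- mu*v - lr*grad ; w <- w + v
    v = mu * v - lr * grad
    return w + v, v

# toy quadratic: minimize 0.5 * ||w||^2, whose gradient is w itself
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(200):
    w, v = sgd_momentum_step(w, grad=w, v=v)
print(w)  # approaches [0, 0]
```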

Regularization

DisturbLabel: Regularizing CNN on the Loss Layer [University of California & MSR] (2016)
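
A rough sketch of the DisturbLabel idea, assuming integer class labels (the disturbance rate alpha here is illustrative): with probability alpha, a ground-truth label is replaced by one drawn uniformly over all classes, regularizing the network at the loss layer.

```python
import numpy as np

rng = np.random.default_rng()

def disturb_label(labels, num_classes, alpha=0.1):
    # with probability alpha, replace the ground-truth label by a
    # label drawn uniformly from all classes (possibly the same one)
    labels = labels.copy()
    mask = rng.random(len(labels)) < alpha
    labels[mask] = rng.integers(0, num_classes, size=mask.sum())
    return labels
```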

Dropout

Improving neural networks by preventing co-adaptation of feature detectors (Dropout)

Regularization of Neural Networks using DropConnect

Regularizing neural networks with dropout and with DropConnect

Fast dropout training

Dropout as data augmentation

A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

Improved Dropout for Shallow and Deep Learning
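
A minimal sketch of inverted dropout, the common implementation of the idea from the Hinton et al. paper above (the drop probability is illustrative):

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    # randomly zero units to prevent co-adaptation; scaling the
    # surviving units by 1/(1-p) at train time means the test-time
    # forward pass needs no rescaling ("inverted" dropout)
    if not training:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask
```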

Gradient Descent

Fitting a model via closed-form equations vs. Gradient Descent vs. Stochastic Gradient Descent vs. Mini-Batch Learning. What is the difference? (Normal Equations vs. GD vs. SGD vs. MB-GD; a toy comparison sketch follows at the end of this section)

http://sebastianraschka.com/faq/docs/closed-form-vs-gd.html

An Introduction to Gradient Descent in Python

Train faster, generalize better: Stability of stochastic gradient descent

A Variational Analysis of Stochastic Gradient Algorithms

The vanishing gradient problem: Oh no — an obstacle to deep learning!

Gradient Descent For Machine Learning

http://machinelearningmastery.com/gradient-descent-for-machine-learning/

Revisiting Distributed Synchronous SGD
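
A toy linear-regression comparison of the four fitting strategies from the first item in this section, normal equations vs. batch, stochastic, and mini-batch gradient descent (sizes, learning rate, and epoch count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

# closed form (normal equations): solve X^T X w = X^T y
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

def fit_gd(batch_size, lr=0.1, epochs=50):
    # batch_size = n gives batch GD, 1 gives SGD, else mini-batch GD
    w, n = np.zeros(3), len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

w_batch = fit_gd(batch_size=200)  # full-batch gradient descent
w_sgd = fit_gd(batch_size=1)      # stochastic gradient descent
w_mini = fit_gd(batch_size=32)    # mini-batch gradient descent
# all four estimates land close to true_w
```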

Accelerate Training

Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices

Image Data Augmentation

DataAugmentation ver1.0: Image data augmentation tool for training image recognition algorithms

Caffe-Data-Augmentation: a Caffe branch with built-in data augmentation, using a configurable stochastic combination of 7 data augmentation techniques
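
In the same spirit as the tools above, a minimal NumPy sketch that applies a stochastic combination of three common image augmentations (probabilities and magnitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng()

def augment(img):
    # img: H x W x C uint8 array
    if rng.random() < 0.5:            # horizontal flip
        img = img[:, ::-1]
    if rng.random() < 0.5:            # random shift, padded back to size
        top, left = rng.integers(0, 8, size=2)
        img = np.pad(img[top:, left:],
                     ((0, top), (0, left), (0, 0)), mode='edge')
    if rng.random() < 0.5:            # brightness jitter
        img = np.clip(img.astype(np.int16) + rng.integers(-30, 30),
                      0, 255).astype(np.uint8)
    return img
```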

Papers

Scalable and Sustainable Deep Learning via Randomized Hashing

Tools

pastalog: Simple, realtime visualization of neural network training performance

torch-pastalog: A Torch interface for pastalog - simple, realtime visualization of neural network training performance
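
For reference, logging to pastalog from Python looks roughly like this, adapted from the project README (assumes a pastalog server already running locally on port 8120):

```python
from pastalog import Log

# connect to a running pastalog server and name this model's series
log_a = Log('http://localhost:8120', 'modelA')

# post scalar metrics as training progresses; the browser UI updates live
log_a.post('train_loss', value=100.1, step=1)
log_a.post('val_accuracy', value=0.42, step=1)
```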
