I first got into DL in early November 2013, but between a busy boss and various other problems, my understanding is nowhere near as deep as that of the CSDN experts like zouxy09. Mostly I feel I've made little progress and have been wasting time (embarrassing, after all this while...). So I'm starting this post to record how I stumbled along this path, one misstep at a time, and to make my own thinking more organized. Dear readers, go easy on me; if there are mistakes, I'll fix them right away — thanks in advance for pointing them out. (Honestly, whether anyone reads this at all is a question in itself. Haha.)

Recommended: tornadomeet's study materials on cnblogs

http://www.cnblogs.com/tornadomeet/category/497607.html

zouxy09's study materials on CSDN

http://blog.csdn.net/zouxy09

sunmenggmail's roundup of DL papers on CSDN

http://blog.csdn.net/sunmenggmail/article/details/20904867

falao_beiliu's materials on CSDN

http://blog.csdn.net/mytestmy/article/category/1465487

Rachel-Zhang, the DL goddess from Zhejiang University

http://blog.csdn.net/abcjennifer/article/details/7826917

A newly founded DL forum in China — worth keeping an eye on.

http://dl.xsoftlab.net/

Below are survey articles; these are the only ones I can recall at the moment.

2009 Learning Deep Architectures for AI

http://deeplearning.net/reading-list/

2010 Deep Machine Learning – A New Frontier in Artificial Intelligence Research

http://deeplearning.net/reading-list/

2011 An Introduction to Deep Learning

https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2011-4.pdf

2012 Representation Learning: A Review and New Perspectives

http://deeplearning.net/reading-list/

2012 深度学习研究综述 (A Survey of Deep Learning Research)

2014  Deep Learning in Neural Networks: An Overview

http://arxiv.org/abs/1404.7828

2014 Object Detection with Deep Learning CVPR 2014 Tutorial

2014 Deep Learning: Methods and Applications

Microsoft's Li Deng works mainly on speech, but he has also written quite a few surveys and the like: http://research.microsoft.com/en-us/people/deng/

2014 A Tutorial Survey of Architectures, Algorithms, and Applications for Deep Learning

http://research.microsoft.com/en-us/people/deng/

Actually, many slide decks are excellent too — for example Hinton's, Andrew Ng's, Yann LeCun's, and Yoshua Bengio's; you can find plenty of useful articles from their home pages. They are masters of the field, no question, and what's more valuable is that as teachers they have also recorded videos and produced a lot of accessible introductory material for beginners like us. There are also Wu Lide's lecture videos (available on Youku, though with a lot of background noise).

http://blog.coursegraph.com/公开课可下载资源汇总 (a very complete collection of downloadable course videos, e.g. Ng's machine learning, Hinton's machine learning, and various natural language processing courses.)

Reading lists: there is the one at http://deeplearning.net/reading-list/, and also the book list Yoshua Bengio recommends to new students (some links are dead; I only discovered this list this month, so if it's too old or otherwise outdated, please ignore me.)

Reading lists for new LISA students

Research in General

● How to write a great research paper

Basics of machine learning

● http://www.iro.umontreal.ca/~bengioy/DLbook/math.html

● http://www.iro.umontreal.ca/~bengioy/DLbook/ml.html

Basics of deep learning

● http://www.iro.umontreal.ca/~bengioy/DLbook/intro.html

● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html

● Learning deep architectures for AI

● Practical recommendations for gradient-based training of deep architectures

● Quick’n’dirty introduction to deep learning: Advances in Deep Learning

● A fast learning algorithm for deep belief nets

● Greedy Layer-Wise Training of Deep Networks

● Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion

● Contractive autoencoders: Explicit invariance during feature extraction

● Why does unsupervised pre-training help deep learning?

● An Analysis of Single Layer Networks in Unsupervised Feature Learning

● The importance of Encoding Versus Training With Sparse Coding and Vector Quantization

● Representation Learning: A Review and New Perspectives

● Deep Learning of Representations: Looking Forward

● Measuring Invariances in Deep Networks

● Neural networks course at USherbrooke [youtube]

Feedforward nets

● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html

● “Improving Neural Nets with Dropout” by Nitish Srivastava

● “Deep Sparse Rectifier Neural Networks”

● “What is the best multi-stage architecture for object recognition?”

● “Maxout Networks”

MCMC

● Iain Murray’s MLSS slides

● Radford Neal’s Review Paper (old but still very comprehensive)

● Better Mixing via Deep Representations

Restricted Boltzmann Machines

● Unsupervised learning of distributions of binary vectors using 2-layer networks

● A practical guide to training restricted Boltzmann machines

● Training restricted Boltzmann machines using approximations to the likelihood gradient

● Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machine

● How to Center Binary Restricted Boltzmann Machines

● Enhanced Gradient for Training Restricted Boltzmann Machines

● Using fast weights to improve persistent contrastive divergence

● Training Products of Experts by Minimizing Contrastive Divergence

Boltzmann Machines

● Deep Boltzmann Machines (Salakhutdinov & Hinton)

● Multimodal Learning with Deep Boltzmann Machines

● Multi-Prediction Deep Boltzmann Machines

● A Two-stage Pretraining Algorithm for Deep Boltzmann Machines

Regularized Auto-Encoders

● The Manifold Tangent Classifier

Regularization

Stochastic Nets & GSNs

● Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation

● Learning Stochastic Feedforward Neural Networks

● Generalized Denoising Auto-Encoders as Generative Models

● Deep Generative Stochastic Networks Trainable by Backprop

Others

● Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

● What Regularized Auto-Encoders Learn from the Data Generating Distribution

● Generalized Denoising Auto-Encoders as Generative Models

● Why the logistic function?

Recurrent Nets

● Learning long-term dependencies with gradient descent is difficult

● Advances in Optimizing Recurrent Networks

● Learning recurrent neural networks with Hessian-free optimization

● On the importance of momentum and initialization in deep learning,

● Long short-term memory (Hochreiter & Schmidhuber)

● Generating Sequences With Recurrent Neural Networks

● Long Short-Term Memory in Echo State Networks: Details of a Simulation Study

● The "echo state" approach to analysing and training recurrent neural networks

● Backpropagation-Decorrelation: online recurrent learning with O(N) complexity

● New results on recurrent network training: Unifying the algorithms and accelerating convergence

● Audio Chord Recognition with Recurrent Neural Networks

● Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription

Convolutional Nets

● http://www.iro.umontreal.ca/~bengioy/DLbook/convnets.html

● Generalization and Network Design Strategies (LeCun)

● ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, NIPS 2012.

● On Random Weights and Unsupervised Feature Learning

Optimization issues with DL

● Curriculum Learning

● Evolving Culture vs Local Minima

● Knowledge Matters: Importance of Prior Information for Optimization

● Efficient Backprop

● Practical recommendations for gradient-based training of deep architectures

● Natural Gradient Works Efficiently (Amari 1998)

● Hessian Free

● Natural Gradient (TONGA)

● Revisiting Natural Gradient

NLP + DL

● Natural Language Processing (Almost) from Scratch

● DeViSE: A Deep Visual-Semantic Embedding Model

● Distributed Representations of Words and Phrases and their Compositionality

● Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection

CV+RBM

● Fields of Experts

● What makes a good model of natural images?

● Phone Recognition with the mean-covariance restricted Boltzmann machine

● Unsupervised Models of Images by Spike-and-Slab RBMs

CV + DL

● Imagenet classification with deep convolutional neural networks

● Learning to relate images

Scaling Up

● Large Scale Distributed Deep Networks

● Random search for hyper-parameter optimization

● Practical Bayesian Optimization of Machine Learning Algorithms

DL + Reinforcement learning

● Playing Atari with Deep Reinforcement Learning (paper not officially released yet!)

Graphical Models Background

● An Introduction to Graphical Models (Mike Jordan, brief course notes)

● A View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants (Neal & Hinton, important paper for the modern understanding of Expectation-Maximization)

● A Unifying Review of Linear Gaussian Models (Roweis & Ghahramani, ties together PCA, factor analysis, hidden Markov models, Gaussian mixtures, k-means, linear dynamical systems)

● An Introduction to Variational Methods for Graphical Models (Jordan et al., mean-field, etc.)

Writing

● Writing a great research paper (video of the presentation)

Software documentation

● Python, Theano, Pylearn2, Linux (bash) (at least the first 5 sections), git (first 5 sections), github/contributing to it (Theano doc), vim tutorial or emacs tutorial

Software lists of built-in commands/functions

● Bash commands

● List of Built-in Python Functions

● vim commands

Other Software stuff to know about:

● screen

● ssh

● ipython

● matplotlib
