https://medium.com/towards-data-science/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9

https://stackoverflow.com/questions/20027598/why-should-weights-of-neural-networks-be-initialized-to-random-numbers/40525812?noredirect=1#comment80759413_40525812

https://www.quora.com/If-one-initializes-a-set-of-weights-in-a-Neural-Network-to-zero-is-it-true-that-in-future-iterations-they-will-not-be-updated-by-gradient-descent-and-backpropagation
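
The two threads above explain why all-zero weight initialization stalls learning. As a quick, self-contained illustration (my own sketch, not code from either thread): with every weight at zero, the hidden activations and therefore the weight gradients are exactly zero, so gradient descent never moves the weights.

```python
import numpy as np

# Tiny 2-layer network (3 inputs -> 4 tanh hidden units -> 1 output), all weights zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                # toy batch
y = rng.normal(size=(8, 1))

W1, b1 = np.zeros((3, 4)), np.zeros(4)
W2, b2 = np.zeros((4, 1)), np.zeros(1)

# Forward pass: with zero weights, h and pred are all zeros.
h = np.tanh(X @ W1 + b1)
pred = h @ W2 + b2

# Backward pass for mean squared error.
dpred = 2 * (pred - y) / len(X)
dW2 = h.T @ dpred                          # zero, because h is zero
dh = dpred @ W2.T                          # zero, because W2 is zero
dW1 = X.T @ (dh * (1 - h ** 2))            # zero as well

print(np.abs(dW1).max(), np.abs(dW2).max())  # 0.0 0.0 -> the weights would never update
```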

[Intuitive Explanation] What Is Regularization?

https://charlesliuyx.github.io/2017/10/03/%E3%80%90%E7%9B%B4%E8%A7%82%E8%AF%A6%E8%A7%A3%E3%80%91%E4%BB%80%E4%B9%88%E6%98%AF%E6%AD%A3%E5%88%99%E5%8C%96/
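
The mechanical part of the regularization story is small enough to show inline; a hedged sketch (names and setup are mine, not from the article) of how an L2 penalty changes a least-squares loss and its gradient:

```python
import numpy as np

def ridge_loss_and_grad(w, X, y, lam=0.1):
    """Mean squared error plus an L2 (weight-decay) penalty lam * ||w||^2."""
    n = len(X)
    err = X @ w - y
    loss = (err @ err) / n + lam * (w @ w)
    # The penalty only adds 2 * lam * w to the usual gradient,
    # which is why L2 regularization is also called weight decay.
    grad = 2 * (X.T @ err) / n + 2 * lam * w
    return loss, grad
```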

Hung-yi Lee / Understanding Deep Learning in One Day (一天搞懂深度學習)

https://www.slideshare.net/tw_dsconf/ss-62245351?qid=108adce3-2c3d-4758-a830-95d0a57e46bc&v=&b=&from_search=3

gradient descent

http://www.deeplearningbook.org/contents/numerical.html

http://cs231n.github.io/neural-networks-3/

https://arxiv.org/pdf/1609.04747.pdf

http://www.deeplearningbook.org/contents/optimization.html

https://www.analyticsvidhya.com/blog/2017/03/introduction-to-gradient-descent-algorithm-along-its-variants/

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/05/25-must-know-terms-concepts-for-beginners-in-deep-learning/
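
The gradient descent references above cover the variants (SGD, momentum, Adam, ...) in detail; the core update w <- w - lr * grad, stripped to a runnable minimum on a least-squares problem (the setup is my own toy example):

```python
import numpy as np

# Vanilla batch gradient descent on mean squared error ||Xw - y||^2 / n.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.1                                   # learning rate (step size)
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the loss at the current w
    w -= lr * grad                         # the gradient descent update
print(w)                                   # close to true_w after convergence
```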

softmax

https://www.quora.com/What-is-the-intuition-behind-SoftMax-function/answer/Sebastian-Raschka-1

https://blog.manash.me/implementing-l2-constrained-softmax-loss-function-on-a-convolutional-neural-network-using-1bb7c0aab7b1

https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/

http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/

https://www.quora.com/What-is-the-role-of-the-activation-function-in-a-neural-network-How-does-this-function-in-a-human-neural-network-system
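
For the softmax links above, the whole function fits in a few lines; a sketch with the standard max-subtraction trick to keep the exponentials from overflowing (code mine, not taken from the posts):

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis, shifted by the max for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)   # does not change the result, prevents overflow
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np.array([1.0, 2.0, 3.0])))            # probabilities that sum to 1
print(softmax(np.array([1000.0, 1001.0, 1002.0])))   # same probabilities, no overflow
```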

Important

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

How do you explain convolution in an intuitive, accessible way?

https://www.zhihu.com/question/22298352?rf=21686447

An intuitive explanation of how convolutional neural networks work?

https://www.zhihu.com/question/39022858

https://mlnotebook.github.io/post/

https://zhuanlan.zhihu.com/p/28478034

http://timdettmers.com/2015/03/26/convolution-deep-learning/

https://stats.stackexchange.com/questions/116362/what-does-the-convolution-step-in-a-convolutional-neural-network-do

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.quora.com/Is-ReLU-a-linear-piece-wise-linear-or-non-linear-activation-function
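
To make the convolution step discussed in the links above concrete, here is a naive 2D cross-correlation (what deep learning libraries call "convolution") written directly in NumPy; a sketch for intuition, not an efficient implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image and take dot products."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
vertical_edge = np.array([[1.0, 0.0, -1.0]] * 3)   # a simple hand-made edge filter
print(conv2d(image, vertical_edge))                # each output cell is one filter response
```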

=========

transfer learning

https://www.quora.com/Why-is-deep-learning-so-easy

===============

What is a simple explanation of how artificial neural networks work?

How can I learn Deep Learning quickly?

https://www.quora.com/How-can-I-learn-Deep-Learning-quickly

https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

https://www.quora.com/Why-do-neural-networks-need-more-than-one-hidden-layer

Yoshua Bengio

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf

http://videolectures.net/deeplearning2015_bengio_theoretical_motivations/

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network?rq=1

Universal Approximation Theorem

https://pdfs.semanticscholar.org/f22f/6972e66bdd2e769fa64b0df0a13063c0c101.pdf

http://www.cs.cmu.edu/~epxing/Class/10715/reading/Kornick_et_al.pdf
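
For reference, an informal statement of the theorem these papers establish (my paraphrase in standard notation, not quoted from either PDF):

```latex
% Universal approximation (informal): a single hidden layer with enough units can
% approximate any continuous function on a compact set to arbitrary accuracy.
\[
\forall f \in C(K),\ \forall \varepsilon > 0\ \ \exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N}:\quad
\sup_{x \in K}\,\Bigl|\, f(x) - \sum_{i=1}^{N} v_i\,\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon ,
\]
% where K is a compact subset of R^d and sigma is a suitable (e.g. non-polynomial) activation.
```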

"Deep Learning" Book Reading Series, Chapter 4: Numerical Computation | Session Summary

Nonlinear Classifiers

https://www.quora.com/In-deep-learning-can-good-results-be-obtained-when-you-use-a-linear-function-in-between-the-hidden-layers

https://www.quora.com/Why-do-neural-networks-need-an-activation-function

https://stackoverflow.com/questions/9782071/why-must-a-nonlinear-activation-function-be-used-in-a-backpropagation-neural-net

http://ai.stanford.edu/~quocle/tutorial1.pdf

http://cs231n.github.io/neural-networks-1/

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

https://www.quora.com/What-is-the-ability-of-a-single-neuron-with-a-non-linear-activation-function-Can-it-only-classify-the-input-space-in-two-classes
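
A tiny demonstration of the point these Q&A threads keep making (code and numbers are mine): without an activation function, stacked linear layers collapse into a single linear map, so the extra depth buys nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))

# Two linear layers with no nonlinearity are exactly one linear layer W2 @ W1.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))        # True

# Inserting a ReLU between them breaks the collapse.
relu = lambda z: np.maximum(z, 0)
print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))    # generally False
```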

NN, CNN

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/06/architecture-of-convolutional-neural-networks-simplified-demystified/

[CV] An Intuitive Understanding of "Convolution": From the Fourier Transform to Filters

https://zhuanlan.zhihu.com/p/28478034

How do you explain convolution in an intuitive, accessible way?

https://www.zhihu.com/question/22298352?rf=21686447

http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf

https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

https://mlnotebook.github.io/post/CNN1/

http://bamos.github.io/2016/08/09/deep-completion/

https://www.analyticsvidhya.com/blog/2016/04/deep-learning-computer-vision-introduction-convolution-neural-networks/

https://www.analyticsvidhya.com/blog/2016/03/introduction-deep-learning-fundamentals-neural-networks/

https://www.analyticsvidhya.com/blog/2017/05/gpus-necessary-for-deep-learning/

Applied Deep Learning - Part 1: Artificial Neural Networks

https://medium.com/towards-data-science/applied-deep-learning-part-1-artificial-neural-networks-d7834f67a4f6
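
As a runnable companion to the ANN articles above, a minimal tf.keras classifier on toy data might look like this (layer sizes and data are my assumptions, not the article's code):

```python
import numpy as np
import tensorflow as tf

# Toy binary classification data: two noisy 2-D blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(1.0, 1.0, (200, 2))]).astype("float32")
y = np.array([0] * 200 + [1] * 200, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(2,)),  # hidden layer with nonlinearity
    tf.keras.layers.Dense(1, activation="sigmoid"),                  # probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))    # [loss, accuracy] on the training data
```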

Paper

Dropout (Hinton)

https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer Learning

Paper by Yoshua Bengio (another deep learning pioneer).
Paper by Ali Sharif Razavian.
Paper by Jeff Donahue.
Paper and subsequent paper by Dario Garcia-Gasulla.
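
Going back to the JMLR dropout paper linked above, here is a minimal "inverted dropout" sketch; this is the scale-at-training-time variant most frameworks use, written as my own illustration rather than the paper's exact formulation:

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout: zero each unit with probability p_drop, rescale the survivors
    by 1 / (1 - p_drop) so the expected activation is unchanged at test time."""
    if not training or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((2, 8))
print(dropout(h, p_drop=0.5))   # roughly half the units zeroed, survivors scaled to 2.0
```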

overfitting

https://medium.com/towards-data-science/deep-learning-overfitting-846bf5b35e24

Courses from top universities

CS231n

http://www.jianshu.com/p/182baeb82c71

https://www.coursera.org/learn/neural-networks

Paid video courses

https://www.udemy.com/deeplearning/?siteID=mDjthAvMbf0-ZE2EvHFczLauDLzv0OQAKg&LSNPUBID=mDjthAvMbf0

Paper

The Power of Depth for Feedforward Neural Networks

https://arxiv.org/pdf/1512.03965.pdf?platform=hootsuite

Deep Residual Learning for Image Recognition

https://arxiv.org/pdf/1512.03385v1.pdf

Speed/accuracy trade-offs for modern convolutional object detectors

https://arxiv.org/pdf/1611.10012.pdf

Playing Atari with Deep Reinforcement Learning

https://arxiv.org/pdf/1312.5602v1.pdf

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer learning

https://databricks.com/blog/2017/06/06/databricks-vision-simplify-large-scale-deep-learning.html
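
A recipe common to many of these transfer learning links is: take a network pretrained on a large dataset, freeze it, and train a small new head for your task. A hedged tf.keras sketch (the choice of MobileNetV2 and the 5-class head are my assumptions):

```python
import tensorflow as tf

# Pretrained ImageNet backbone with its original classification head removed.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False                      # freeze the transferred features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # new head for a 5-class target task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(target_dataset, epochs=5)  # 'target_dataset' is a placeholder for your own data;
#                                      # only the new head's weights are trained.
```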

TensorFlow Object Detection API

https://github.com/tensorflow/models/tree/477ed41e7e4e8a8443bc633846eb01e2182dc68a/object_detection

https://opensource.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

Supercharge your Computer Vision models with the TensorFlow Object Detection API

https://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

How to build a video object recognition system with the TensorFlow API

https://www.jiqizhixin.com/articles/2017-07-14-5

How well does Google's open-sourced TensorFlow Object Detection API perform, and what impact will it have on the industry?

https://www.zhihu.com/question/61173908

https://stackoverflow.com/questions/42364513/how-to-recognise-multiple-objects-in-the-same-image

Training the TensorFlow Object Detection API on your own dataset

https://zhuanlan.zhihu.com/p/27469690

https://github.com/tensorflow/models/tree/master/research/object_detection/data

https://medium.com/towards-data-science/building-a-toy-detector-with-tensorflow-object-detection-api-63c0fdf2ac95

https://medium.com/towards-data-science/building-a-real-time-object-recognition-app-with-tensorflow-and-opencv-b7a2b4ebdc32

https://medium.com/towards-data-science/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9

https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset

https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine

https://medium.com/ilenze-com/object-detection-using-deep-learning-for-advanced-users-part-1-183bbbb08b19
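
For orientation only, the inference pattern used with the TF1-era frozen graphs that these Object Detection API tutorials export looks roughly like the sketch below; the file name and tensor names follow the API's usual export convention, but treat them as assumptions and defer to the official tutorials above for the real workflow:

```python
import numpy as np
import tensorflow.compat.v1 as tf   # the linked tutorials target TensorFlow 1.x graphs

# Load a frozen detection graph exported by the Object Detection API.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:   # path is a placeholder
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    with tf.Session(graph=graph) as sess:
        image = np.zeros((1, 300, 300, 3), dtype=np.uint8)     # stand-in for a real image batch
        boxes, scores, classes = sess.run(
            ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
            feed_dict={"image_tensor:0": image})
        print(boxes.shape, scores.shape, classes.shape)
```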

Neuroscience

https://www.quora.com/What-are-the-parts-of-the-neuron-and-their-function
