Why deep learning works
[Intuitive explanation] What is regularization?
Hung-yi Lee (李宏毅) / "Understand Deep Learning in One Day" (一天搞懂深度學習)
gradient descent
http://www.deeplearningbook.org/contents/numerical.html
http://cs231n.github.io/neural-networks-3/
https://arxiv.org/pdf/1609.04747.pdf
http://www.deeplearningbook.org/contents/optimization.html
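As a hands-on companion to the references above, a minimal sketch of batch gradient descent on a least-squares loss; the data, learning rate, and iteration count are arbitrary illustrative choices, not taken from any of the linked sources:

```python
import numpy as np

# Synthetic linear-regression problem: minimize (1/n) * ||X @ w - y||^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
lr = 0.1
for _ in range(1000):
    grad = (2.0 / len(X)) * X.T @ (X @ w - y)  # gradient of the mean squared error
    w -= lr * grad

print(w)  # converges toward w_true
```

The loop is the whole algorithm; everything the survey paper above (Ruder, arXiv:1609.04747) discusses are variations on how `grad` is computed (mini-batches) and how the step is taken (momentum, Adam, etc.).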
https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2
https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/
softmax
https://www.quora.com/What-is-the-intuition-behind-SoftMax-function/answer/Sebastian-Raschka-1
https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/
http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
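Alongside the softmax links above, a numerically stable implementation; the max-shift trick is standard, the rest is an illustrative sketch:

```python
import numpy as np

def softmax(z):
    """Stable softmax: subtracting max(z) leaves the result unchanged
    (the shift cancels in the ratio) but prevents exp() overflow."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax([1.0, 2.0, 3.0])
print(p, p.sum())                  # probabilities summing to 1
print(softmax([1000.0, 1001.0]))   # no overflow despite huge logits
```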
https://www.quora.com/What-is-the-role-of-the-activation-function-in-a-neural-network-How-does-this-function-in-a-human-neural-network-system
Important
http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf
https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1
How do you explain convolution in plain terms?
https://www.zhihu.com/question/22298352?rf=21686447
An intuitive explanation of how convolutional neural networks work?
https://www.zhihu.com/question/39022858
https://mlnotebook.github.io/post/
http://timdettmers.com/2015/03/26/convolution-deep-learning/
https://www.quora.com/Is-ReLU-a-linear-piece-wise-linear-or-non-linear-activation-function
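To make the convolution articles above concrete, here is a naive "valid"-mode 2D convolution followed by ReLU; it illustrates both the sliding-window operation and why the Quora answer above calls ReLU piecewise linear rather than linear. Toy data, written for clarity rather than speed:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2D convolution (cross-correlation, no kernel
    flip, as implemented in CNN layers)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(0.0, x)

img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.]])
edge = np.array([[-1., 1.]])   # responds to left-to-right intensity increases
feat = relu(conv2d_valid(img, edge))
print(feat)                     # fires only where the 0 -> 1 step occurs

# ReLU is piecewise linear but not linear: f(a + b) != f(a) + f(b) in general.
a, b = 1.0, -2.0
print(relu(np.array(a + b)), relu(np.array(a)) + relu(np.array(b)))
```

That failure of additivity is what lets stacks of conv + ReLU layers represent nonlinear functions even though each piece is affine.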
=========
transfer learning
https://www.quora.com/Why-is-deep-learning-so-easy
===============
What is a simple explanation of how artificial neural networks work?
How can I learn Deep Learning quickly?
https://www.quora.com/How-can-I-learn-Deep-Learning-quickly
https://www.quora.com/Why-do-neural-networks-need-more-than-one-hidden-layer
Yoshua Bengio
https://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf
http://videolectures.net/deeplearning2015_bengio_theoretical_motivations/
Universal Approximation Theorem
https://pdfs.semanticscholar.org/f22f/6972e66bdd2e769fa64b0df0a13063c0c101.pdf
http://www.cs.cmu.edu/~epxing/Class/10715/reading/Kornick_et_al.pdf
"Deep Learning" book reading series, Chapter 4: Numerical Computation (talk summary)
Nonlinear Classifiers
https://www.quora.com/Why-do-neural-networks-need-an-activation-function
http://ai.stanford.edu/~quocle/tutorial1.pdf
http://cs231n.github.io/neural-networks-1/
[CV] An intuitive take on "convolution": from the Fourier transform to filters
https://zhuanlan.zhihu.com/p/28478034
http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
https://mlnotebook.github.io/post/CNN1/
http://bamos.github.io/2016/08/09/deep-completion/
https://www.analyticsvidhya.com/blog/2017/05/gpus-necessary-for-deep-learning/
Applied Deep Learning - Part 1: Artificial Neural Networks
Paper
Dropout (Hinton)
https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
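The mechanism from Hinton's dropout paper above, sketched as "inverted dropout", the variant most libraries use, where the rescaling happens at train time so test-time code needs no change:

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop and scale
    survivors by 1/(1-p_drop) so E[output] == input; identity at test time."""
    if not train:
        return h
    mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones(10000)
out = dropout(h, 0.5, rng)
print(out.mean())  # close to 1.0: rescaling preserves the expected activation
```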
Neural Network with Unbounded Activation Functions is Universal Approximator
https://arxiv.org/pdf/1505.03654.pdf
Transfer Learning
Paper by Yoshua Bengio (another deep learning pioneer).
Paper by Ali Sharif Razavian.
Paper by Jeff Donahue.
Paper and subsequent paper by Dario Garcia-Gasulla.
overfitting
https://medium.com/towards-data-science/deep-learning-overfitting-846bf5b35e24
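The overfitting article above is easy to reproduce on synthetic data: give a model enough capacity to memorize a small noisy training set and the train/test gap appears immediately. The degree, noise level, and sample sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.normal(size=10)

# A degree-9 polynomial through 10 points can hit every noisy label exactly.
coef = np.polyfit(x_train, y_train, 9)

x_test = np.linspace(0.05, 0.95, 50)
train_mse = np.mean((np.polyval(coef, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coef, x_test) - np.sin(2 * np.pi * x_test)) ** 2)
print(train_mse, test_mse)  # near-zero train error, much larger test error
```

Regularization (the topic of the first link in this list) and dropout (above) are two standard ways of closing exactly this gap.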
University courses
CS231n
http://www.jianshu.com/p/182baeb82c71
https://www.coursera.org/learn/neural-networks
Paid video courses
https://www.udemy.com/deeplearning/?siteID=mDjthAvMbf0-ZE2EvHFczLauDLzv0OQAKg&LSNPUBID=mDjthAvMbf0
Paper
The Power of Depth for Feedforward Neural Networks
https://arxiv.org/pdf/1512.03965.pdf?platform=hootsuite
Deep Residual Learning for Image Recognition
https://arxiv.org/pdf/1512.03385v1.pdf
Speed/accuracy trade-offs for modern convolutional object detectors
https://arxiv.org/pdf/1611.10012.pdf
Playing Atari with Deep Reinforcement Learning
https://arxiv.org/pdf/1312.5602v1.pdf
Transfer learning
https://databricks.com/blog/2017/06/06/databricks-vision-simplify-large-scale-deep-learning.html
TensorFlow Object Detection API
https://github.com/tensorflow/models/tree/477ed41e7e4e8a8443bc633846eb01e2182dc68a/object_detection
https://opensource.googleblog.com/2017/06/supercharge-your-computer-vision-models.html
Supercharge your Computer Vision models with the TensorFlow Object Detection API
https://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html
How to build a video object recognition system with the TensorFlow API
https://www.jiqizhixin.com/articles/2017-07-14-5
How good is Google's open-sourced TensorFlow Object Detection API, and what impact will it have on the industry?
https://www.zhihu.com/question/61173908
https://stackoverflow.com/questions/42364513/how-to-recognise-multiple-objects-in-the-same-image
Training on your own dataset with the TensorFlow Object Detection API
https://zhuanlan.zhihu.com/p/27469690
https://github.com/tensorflow/models/tree/master/research/object_detection/data
https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset
Neuroscience
https://www.quora.com/What-are-the-parts-of-the-neuron-and-their-function