Neural Networks and Deep Learning notes (1)
I have read Neural Networks and Deep Learning on and off several times now, and every pass brings a different takeaway.
Papers in the DL field come out at a dizzying pace, with many new ideas appearing every day. I believe that reading the classic books and papers closely can surface the problems that remain open, and with them a different perspective.
PS: this blog mainly excerpts and briefly summarizes the key content of the book.
Summary
- Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data.
- Deep learning, a powerful set of techniques for learning in neural networks.
CHAPTER 1 Using neural nets to recognize handwritten digits
The neural network uses the examples to automatically infer rules for recognizing handwritten digits.
The exact form of the activation function isn't so important - what really matters is the shape of the function when plotted.
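A minimal sketch of that point (not from the book's code): the sigmoid's smooth "S" shape, compared with a hard perceptron-style step, is what lets small weight changes produce small output changes.

```python
import numpy as np

def sigmoid(z):
    """Smooth 'S'-shaped activation: small input changes give small output changes."""
    return 1.0 / (1.0 + np.exp(-z))

def step(z):
    """Perceptron-style hard threshold, shown for comparison."""
    return (z > 0).astype(float)

zs = np.linspace(-6, 6, 7)
print(sigmoid(zs))  # varies smoothly from ~0 to ~1
print(step(zs))     # jumps abruptly at z = 0
```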
4. The architecture of neural networks
While the design of the input and output layers of a neural network is often straightforward, there can be quite an art to the design of the hidden layers. Researchers have developed many design heuristics for the hidden layers, which help people get the behaviour they want out of their nets.
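A minimal sketch in the spirit of the book's network.py: the input and output sizes are fixed by the task (784 pixels in, 10 digits out), while the hidden-layer width (here 30) is exactly the kind of design choice the heuristics address.

```python
import numpy as np

class Network(object):
    def __init__(self, sizes):
        # sizes, e.g. [784, 30, 10]: 784 input pixels, 30 hidden neurons, 10 output digits
        self.num_layers = len(sizes)
        self.sizes = sizes
        # one bias vector per non-input layer, one weight matrix per adjacent layer pair
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

net = Network([784, 30, 10])  # only the hidden width is really up for debate
```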
Learning with gradient descent
- The aim of our training algorithm will be to minimize the cost C as a function of the weights and biases. We’ll do that using an algorithm known as gradient descent.
- Why introduce the quadratic cost? It’s a smooth function of the weights and biases in the network and it turns out to be easy to figure out how to make small changes in the weights and biases so as to get an improvement in the cost.
- The MSE cost function isn't the only cost function used in neural networks.
- Mini-batch: SGD works by randomly picking out a small number m of training inputs; epoch: keep choosing fresh mini-batches and training until we've exhausted the training inputs (see the sketch below).
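A minimal sketch of one epoch of mini-batch SGD; update_mini_batch is a placeholder for the book's per-batch gradient step, passed in as a callable here.

```python
import random

def sgd_epoch(training_data, mini_batch_size, eta, update_mini_batch):
    """One epoch: shuffle, split into mini-batches of size m, update on each."""
    random.shuffle(training_data)
    mini_batches = [training_data[k:k + mini_batch_size]
                    for k in range(0, len(training_data), mini_batch_size)]
    for mini_batch in mini_batches:
        # each call takes one gradient step estimated from just m examples
        update_mini_batch(mini_batch, eta)
```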
Thinking about choosing hyper-parameters
"If we were coming to this problem for the first time then there wouldn't be much in the output to guide us on what to do. We might worry not only about the learning rate, but about every other aspect of our neural network. We might wonder if we've initialized the weights and biases in a way that makes it hard for the network to learn? Or maybe we don't have enough training data to get meaningful learning? Perhaps we haven't run for enough epochs? Or maybe it's impossible for a neural network with this architecture to learn to recognize handwritten digits? Maybe the learning rate is too low? Or, maybe, the learning rate is too high?
When you're coming to a problem for the first time, you're not always sure.
The lesson to take away from this is that debugging a neural network is not trivial, and, just as for ordinary programming, there is an art to it. You need to learn that art of debugging in order to get good results from neural networks. More generally, we need to develop heuristics for choosing good hyper-parameters and a good architecture."
Inspiration from face detection:
“The end result is a network which breaks down a very complicated question - does this image show a face or not - into very simple questions answerable at the level of single pixels. It does this through a series of many layers, with early layers answering very simple and specific questions about the input image, and later layers building up a hierarchy of ever more complex and abstract concepts. Networks with this kind of many-layer structure - two or more hidden layers - are called deep neural networks.”
CHAPTER 2 How the backpropagation algorithm works
Backpropagation(BP): a fast algorithm for computing the gradient of the cost function.
For backpropagation to work we need to make two main assumptions about the form of the cost function.
- Since what BP lets us do is compute the partial derivatives for a single training example, we need the cost function to be expressible as an average over the costs of individual training examples.
- It can be written as a function of the outputs from the neural network, since the desired output y is fixed rather than something the network learns.
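For instance, the quadratic cost from Chapter 1 satisfies both assumptions: it is an average of per-example costs, and each per-example cost depends on the network only through its output activations.

```latex
C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2
  = \frac{1}{n} \sum_x C_x,
\qquad
C_x = \tfrac{1}{2} \, \| y(x) - a^L(x) \|^2
```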
The four fundamental equations behind backpropagation
What’s clever about BP is that it enables us to simultaneously compute all the partial derivatives using just one forward pass through the network, followed by one backward pass through the network.
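For reference, the four fundamental equations (BP1-BP4) in the book's notation, where δ^l is the error in layer l and ⊙ is the elementwise (Hadamard) product:

```latex
\begin{aligned}
\delta^L &= \nabla_a C \odot \sigma'(z^L) && \text{(BP1)} \\
\delta^l &= \big( (w^{l+1})^T \delta^{l+1} \big) \odot \sigma'(z^l) && \text{(BP2)} \\
\frac{\partial C}{\partial b^l_j} &= \delta^l_j && \text{(BP3)} \\
\frac{\partial C}{\partial w^l_{jk}} &= a^{l-1}_k \, \delta^l_j && \text{(BP4)}
\end{aligned}
```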
What does BP actually do, and how could someone ever have discovered it?
A small perturbation to a weight will cause a change in the activation of its neuron, then in the next layer's activations, and so on all the way through to a change in the final layer, and then in the cost function.
BP is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost. (To be continued…)
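That perturbation picture also suggests the slow alternative BP avoids: estimating each derivative by actually nudging one weight at a time. A hedged sketch (cost_fn and the weight array w are assumed placeholders, not the book's API):

```python
import numpy as np

def numerical_gradient(cost_fn, w, eps=1e-5):
    """Estimate dC/dw_j by perturbing each weight in turn:
    (C(w + eps*e_j) - C(w)) / eps.
    This needs one extra forward pass per weight, which is why BP's
    single backward pass is such a win."""
    base = cost_fn(w)
    grad = np.zeros_like(w)
    for j in range(w.size):
        w_perturbed = w.copy()
        w_perturbed.flat[j] += eps       # nudge one weight
        grad.flat[j] = (cost_fn(w_perturbed) - base) / eps
    return grad
```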