In the previous post I went through a basic 1-layer Neural Network with a sigmoid activation function, including:

  • How do we get the sigmoid function from a binary classification problem?

  • An NN is still an optimization problem, so what is the target to optimize? - the cost function

  • How does the model learn? - gradient descent

  • What is the workflow of an NN? - forward/backward propagation

Now let's go deeper into a 2-layer Neural Network, from which you can extend to as many hidden layers as you want. Let's also try to vectorize everything.

1. The architecture of a 2-layer shallow NN

Below is the architecture of a 2-layer NN: an input layer, one hidden layer and one output layer. The input layer is not counted when naming the network.

(1) Forward propagation

In each neuron, two operations happen after it takes in the input from the previous layer:

  1. a linear transformation of the input
  2. a non-linear activation function applied afterwards

The output is then passed to the next layer as input.

Doing the above computation layer by layer, from the input layer to the output layer, is forward propagation. It maps each input \(x \in R^n\) to an output \(\hat{y}\).

For each training sample, forward propagation is defined as follows:

\(x \in R^{n*1}\) denotes the input data. In the picture n = 4.

\((w^{[1]} \in R^{k*n},b^{[1]}\in R^{k*1})\) are the parameters of the first hidden layer. Here k = 3.

\((w^{[2]} \in R^{1*k},b^{[2]}\in R^{1*1})\) are the parameters of the output layer. The output is a binary variable with 1 dimension.

\((z^{[1]} \in R^{k*1},z^{[2]}\in R^{1*1})\) are the intermediate outputs after the linear transformation in the hidden and output layers.

\((a^{[1]} \in R^{k*1},a^{[2]}\in R^{1*1})\) are the outputs of each layer. To make the notation more general, we can use \(a^{[0]} \in R^n\) to denote \(x\).

*Here we use \(g(x)\) as the activation function for the hidden layer, and the sigmoid \(\sigma(x)\) for the output layer. We will discuss the available choices of activation function \(g(x)\) in a following post. What happens in forward propagation is the following:

\([1]\) \(z^{[1]} = {w^{[1]}} a^{[0]} + b^{[1]}\)
\([2]\) \(a^{[1]} = g(z^{[1]})\)
\([3]\) \(z^{[2]} = {w^{[2]}} a^{[1]} + b^{[2]}\)
\([4]\) \(a^{[2]} = \sigma(z^{[2]} )\)
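
To make these four steps concrete, below is a minimal NumPy sketch of forward propagation for a single sample. It assumes tanh as the hidden activation \(g\), uses the dimensions n = 4 and k = 3 from the picture, and fills the parameters with random placeholder values rather than a trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, k = 4, 3                        # input dimension and hidden units, as in the picture
rng = np.random.default_rng(0)

# parameters: w1 is (k, n), b1 is (k, 1); w2 is (1, k), b2 is (1, 1)
w1, b1 = rng.standard_normal((k, n)) * 0.01, np.zeros((k, 1))
w2, b2 = rng.standard_normal((1, k)) * 0.01, np.zeros((1, 1))

a0 = rng.standard_normal((n, 1))   # one training sample x, shape (n, 1)

z1 = w1 @ a0 + b1                  # [1] linear transformation in the hidden layer, (k, 1)
a1 = np.tanh(z1)                   # [2] non-linear activation g, here tanh
z2 = w2 @ a1 + b2                  # [3] linear transformation in the output layer, (1, 1)
a2 = sigmoid(z2)                   # [4] sigmoid output, the predicted probability
```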

(2) Backward propagation

After forward propagation is done for a training sample \(x\), we have a prediction \(\hat{y}\). Comparing \(\hat{y}\) with the true label \(y\), we then use the error between the prediction and the real value to update the parameters via gradient descent.

Backward propagation passes the gradients from the output layer back to the input layer using the chain rule, as shown below. The derivation is in the previous post.

\[ \frac{\partial L(a,y)}{\partial w} =
\frac{\partial L(a,y)}{\partial a} \cdot
\frac{\partial a}{\partial z} \cdot
\frac{\partial z}{\partial w}\]

\([4]\) \(dz^{[2]} = a^{[2]} - y\)
\([3]\) \(dw^{[2]} = dz^{[2]} a^{[1]T}\)
\([3]\) \(db^{[2]} = dz^{[2]}\)
\([2]\) \(dz^{[1]} = da^{[1]} * g^{[1]'}(z^{[1]}) = w^{[2]T} dz^{[2]} * g^{[1]'}(z^{[1]})\)
\([1]\) \(dw^{[1]} = dz^{[1]} a^{[0]T}\)
\([1]\) \(db^{[1]} = dz^{[1]}\)
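
Here is a matching single-sample sketch of the backward steps, again assuming tanh for \(g\) (so \(g'(z) = 1 - a^2\)). It repeats the forward pass so the snippet stands on its own, and the learning rate is an arbitrary choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, k, lr = 4, 3, 0.01
rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((k, n)) * 0.01, np.zeros((k, 1))
w2, b2 = rng.standard_normal((1, k)) * 0.01, np.zeros((1, 1))
a0, y = rng.standard_normal((n, 1)), 1.0   # one sample and its (made-up) label

# forward pass, as in the sketch above
z1 = w1 @ a0 + b1
a1 = np.tanh(z1)
z2 = w2 @ a1 + b2
a2 = sigmoid(z2)

# backward pass, following steps [4] down to [1]
dz2 = a2 - y                           # [4]
dw2 = dz2 @ a1.T                       # [3] gradient of w2, (1, k)
db2 = dz2                              # [3] gradient of b2, (1, 1)
dz1 = (w2.T @ dz2) * (1 - a1 ** 2)     # [2] tanh'(z1) = 1 - a1**2
dw1 = dz1 @ a0.T                       # [1] gradient of w1, (k, n)
db1 = dz1                              # [1] gradient of b1, (k, 1)

# one gradient-descent update with an assumed learning rate
w1 -= lr * dw1; b1 -= lr * db1
w2 -= lr * dw2; b2 -= lr * db2
```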

2. Vectorize and Generalize your NN

Let's derive the vectorized representation of the above forward and backward propagation. The point of vectorization is to speed up the computation by processing all training samples at once. We will talk about this again with batch gradient descent.

\(w^{[1]},b^{[1]}, w^{[2]}, b^{[2]}\) stay the same. In general, \(w^{[i]}\) has dimension \((h_{i},h_{i-1})\) and \(b^{[i]}\) has dimension \((h_{i},1)\), where \(h_i\) is the number of units in layer \(i\).

\(Z^{[1]} \in R^{k*m}, Z^{[2]} \in R^{1*m}, A^{[0]} \in R^{n*m}, A^{[1]} \in R^{k*m}, A^{[2]}\in R^{1*m}\), where \(A^{[0]}\) is the input matrix and m is the number of training samples; each column is one training sample.
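
As a sketch of this general shape rule, a hypothetical helper `init_params` could build \(w^{[i]}\) of shape \((h_i, h_{i-1})\) and \(b^{[i]}\) of shape \((h_i, 1)\) for any list of layer sizes, with the m training samples stacked column-wise into \(A^{[0]}\):

```python
import numpy as np

def init_params(layer_sizes, seed=0):
    """Hypothetical helper: w[i] has shape (h_i, h_{i-1}), b[i] has shape (h_i, 1)."""
    rng = np.random.default_rng(seed)
    params = {}
    for i in range(1, len(layer_sizes)):
        params[f"w{i}"] = rng.standard_normal((layer_sizes[i], layer_sizes[i - 1])) * 0.01
        params[f"b{i}"] = np.zeros((layer_sizes[i], 1))
    return params

n, k, m = 4, 3, 5                  # input dim, hidden units, number of training samples
params = init_params([n, k, 1])    # shapes: w1 (3, 4), b1 (3, 1), w2 (1, 3), b2 (1, 1)
A0 = np.random.default_rng(1).standard_normal((n, m))   # each column is one sample
```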

(1) Forward propagation

Following the above logic, the vectorized representation is:

\([1]\) \(Z^{[1]} = {w^{[1]}} A^{[0]} + b^{[1]}\)
\([2]\) \(A^{[1]} = g(Z^{[1]})\)
\([3]\) \(Z^{[2]} = {w^{[2]}} A^{[1]} + b^{[2]}\)
\([4]\) \(A^{[2]} = \sigma(Z^{[2]} )\)

Have you noticed that the dimensions above do not match exactly?
\({w^{[1]}} A^{[0]}\) has dimension \((k,m)\), while \(b^{[1]}\) has dimension \((k,1)\).
However, Python (NumPy) takes care of this for you with broadcasting. Basically, it replicates the array along the missing dimension: here \(b^{[1]}\) is replicated m times to become \((k,m)\).
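
A quick NumPy check of this broadcasting behaviour, with shapes chosen to match k = 3 and m = 5:

```python
import numpy as np

WA = np.zeros((3, 5))              # stands in for w1 @ A0, shape (k, m)
b1 = np.arange(3).reshape(3, 1)    # shape (k, 1)

Z1 = WA + b1                       # b1 is broadcast (replicated) across the m columns
print(Z1.shape)                    # (3, 5): every column now equals b1
```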

(2) Backward propagation

Similarly, the vectorized backward propagation is:
\([4]\) \(dZ^{[2]} = A^{[2]} - Y\)
\([3]\) \(dw^{[2]} =\frac{1}{m} dZ^{[2]} A^{[1]T}\)
\([3]\) \(db^{[2]} = \frac{1}{m} \sum{dZ^{[2]}}\)
\([2]\) \(dZ^{[1]} = dA^{[1]} * g^{[1]'}(Z^{[1]}) = w^{[2]T} dZ^{[2]} * g^{[1]'}(Z^{[1]})\)
\([1]\) \(dw^{[1]} = \frac{1}{m} dZ^{[1]} A^{[0]T}\)
\([1]\) \(db^{[1]} = \frac{1}{m} \sum{dZ^{[1]} }\)
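
Putting the vectorized equations together, a minimal sketch of one pass over all m samples could look like the following. It again assumes tanh for \(g\) and uses random placeholder parameters and made-up labels purely for illustration:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

n, k, m = 4, 3, 5                                   # input dim, hidden units, samples
rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((k, n)) * 0.01, np.zeros((k, 1))
w2, b2 = rng.standard_normal((1, k)) * 0.01, np.zeros((1, 1))
A0 = rng.standard_normal((n, m))                    # each column is one sample
Y = rng.integers(0, 2, size=(1, m)).astype(float)   # made-up binary labels, (1, m)

# vectorized forward propagation
Z1 = w1 @ A0 + b1                # (k, m), b1 broadcast over the m columns
A1 = np.tanh(Z1)                 # (k, m)
Z2 = w2 @ A1 + b2                # (1, m)
A2 = sigmoid(Z2)                 # (1, m), one prediction per column

# vectorized backward propagation
dZ2 = A2 - Y                                     # (1, m)
dw2 = (dZ2 @ A1.T) / m                           # (1, k)
db2 = np.sum(dZ2, axis=1, keepdims=True) / m     # (1, 1)
dZ1 = (w2.T @ dZ2) * (1 - A1 ** 2)               # (k, m), tanh'(Z1) = 1 - A1**2
dw1 = (dZ1 @ A0.T) / m                           # (k, n)
db1 = np.sum(dZ1, axis=1, keepdims=True) / m     # (k, 1)
```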

In the next post, I will talk about some other details of NNs, like hyperparameters and activation functions.

To be continued.


Reference

  1. Ian Goodfellow, Yoshua Bengio, Aaron Courville, "Deep Learning"
  2. Deeplearning.ai https://www.deeplearning.ai/
