Vectorization

Vectorization refers to a powerful way to speed up your algorithms. Numerical computing and parallel computing researchers have put decades of work into making certain numerical operations (such as matrix-matrix multiplication, matrix-matrix addition, matrix-vector multiplication) fast. The idea of vectorization is that we would like to express our learning algorithms in terms of these highly optimized operations.

More generally, a good rule-of-thumb for coding Matlab/Octave is:

Whenever possible, avoid using explicit for-loops in your code.

A large part of vectorizing our Matlab/Octave code will focus on getting rid of for loops, since this lets Matlab/Octave extract more parallelism from your code, while also incurring less computational overhead from the interpreter.

Use vector operations wherever possible; don't break a vector into scalars and then loop over them.
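
As a toy illustration of this rule (a sketch; here a and b are assumed to be column vectors of the same length), compare computing an inner product with a loop versus with a single matrix operation:

% Unvectorized: accumulate the inner product one term at a time
s = 0;
for i=1:length(a),
  s = s + a(i)*b(i);
end;
% Vectorized: a single call into an optimized linear-algebra routine
s = a'*b;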

Logistic Regression Vectorization Example

Consider training a logistic regression model using batch gradient ascent. Suppose our hypothesis is

$h_\theta(x) = \frac{1}{1 + \exp(-\theta^T x)}$,

where we let $x_0 = 1$, so that $x \in \Re^{n+1}$ and $\theta \in \Re^{n+1}$, and $\theta_0$ is our intercept term. We have a training set $\{(x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)})\}$ of m examples, and the batch gradient ascent update rule is $\theta := \theta + \alpha \nabla_\theta \ell(\theta)$, where $\ell(\theta)$ is the log likelihood and $\nabla_\theta \ell(\theta)$ is its derivative.

We thus need to compute the gradient:

$\nabla_\theta \ell(\theta) = \sum_{i=1}^{m} x^{(i)} \left(y^{(i)} - h_\theta(x^{(i)})\right)$.

Suppose that the Matlab/Octave variable x is a matrix containing the training inputs, so that x(:,i) is the i-th training example $x^{(i)}$ and x(j,i) is $x^{(i)}_j$. Further, suppose the Matlab/Octave variable y is a row vector of the labels in the training set, so that the variable y(i) is $y^{(i)} \in \{0,1\}$.

Here's a truly horrible, extremely slow implementation of the gradient computation:

% Implementation 1
grad = zeros(n+1,1);
for i=1:m,
  h = sigmoid(theta'*x(:,i));
  temp = y(i) - h;
  for j=1:n+1,
    grad(j) = grad(j) + temp * x(j,i);
  end;
end;

The two nested for-loops make this very slow. Here's a more typical implementation, which partially vectorizes the algorithm and gets better performance:

% Implementation 2
grad = zeros(n+1,1);
for i=1:m,
  grad = grad + (y(i) - sigmoid(theta'*x(:,i))) * x(:,i);
end;
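
In fact, we can eliminate the remaining for-loop as well. A minimal sketch, assuming sigmoid applies element-wise to its argument (here a 1-by-m row vector):

% Implementation 3 (a sketch): fully vectorized, no for-loops.
% (y - sigmoid(theta'*x)) is a 1-by-m row vector of errors; multiplying
% by x sums the contributions of all m examples in one matrix operation.
grad = x * (y - sigmoid(theta'*x))';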

Neural Network Vectorization

Forward propagation

Consider a 3-layer neural network (with one input, one hidden, and one output layer), and suppose x is a column vector containing a single training example $x^{(i)} \in \Re^{n}$. Then the forward propagation step is given by:

$z^{(2)} = W^{(1)} x + b^{(1)}$
$a^{(2)} = f(z^{(2)})$
$z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}$
$h_{W,b}(x) = f(z^{(3)})$

This is a fairly efficient implementation for a single example. If we have m examples, then we would wrap a for loop around this.

% Unvectorized implementation
for i=1:m,
z2 = W1 * x(:,i) + b1;
a2 = f(z2);
z3 = W2 * a2 + b2;
h(:,i) = f(z3);
end;

For many algorithms, we will represent intermediate stages of computation via vectors. For example, z2, a2, and z3 here are all column vectors that are used to compute the activations of the hidden and output layers. In order to take better advantage of parallelism and efficient matrix operations, we would like to have our algorithm operate simultaneously on many training examples. Let us temporarily ignore b1 and b2 (say, set them to zero for now). We can then implement the following:

% Vectorized implementation (ignoring b1, b2)
z2 = W1 * x;
a2 = f(z2);
z3 = W2 * a2;
h = f(z3);

In this implementation, z2, a2, and z3 are all matrices, with one column per training example.

A common design pattern in vectorizing across training examples is that whereas previously we had a column vector (such as z2) per training example, we can often instead try to compute a matrix so that all of these column vectors are stacked together to form a matrix. Concretely, in this example, a2 becomes a s2 by m matrix (where s2 is the number of units in layer 2 of the network, and m is the number of training examples). And, the i-th column of a2 contains the activations of the hidden units (layer 2 of the network) when the i-th training example x(:,i) is input to the network.

% Inefficient, unvectorized implementation of the activation function
function output = unvectorized_f(z)
output = zeros(size(z));
for i=1:size(z,1),
  for j=1:size(z,2),
    output(i,j) = 1/(1+exp(-z(i,j)));
  end;
end;
end

% Efficient, vectorized implementation of the activation function
function output = vectorized_f(z)
output = 1./(1+exp(-z));   % "./" is Matlab/Octave's element-wise division operator
end
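
To see the difference concretely, here is a quick timing comparison (a sketch; the exact speedup depends on your machine and your Matlab/Octave version):

% Compare the two implementations on a moderately large matrix
z = randn(100, 10000);
tic; u = unvectorized_f(z); toc   % nested loops: slow
tic; v = vectorized_f(z); toc     % element-wise operations: fast
max(max(abs(u - v)))              % should be 0: both give identical results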

Finally, our vectorized implementation of forward propagation above had ignored b1 and b2. To incorporate those back in, we will use Matlab/Octave's built-in repmat function. We have:

% Vectorized implementation of forward propagation
z2 = W1 * x + repmat(b1,1,m);
a2 = f(z2);
z3 = W2 * a2 + repmat(b2,1,m);
h = f(z3);

repmat !! tiles a matrix by replicating it !!
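
As an alternative to repmat, Matlab/Octave's built-in bsxfun can add b1 to every column of W1 * x without materializing the tiled matrix; a sketch:

% Equivalent to z2 = W1 * x + repmat(b1,1,m), but without
% allocating the s2-by-m tiled copy of b1
z2 = bsxfun(@plus, W1 * x, b1);
z3 = bsxfun(@plus, W2 * a2, b2);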

Backpropagation

We are in a supervised learning setting, so that we have a training set of m training examples. (For the autoencoder, we simply set y(i) = x(i), but our derivation here will consider this more general setting.)

From the backpropagation algorithm, we had that for a single training example (x, y), we can compute the derivatives as

$\delta^{(3)} = -(y - a^{(3)}) \bullet f'(z^{(3)})$
$\delta^{(2)} = \left((W^{(2)})^T \delta^{(3)}\right) \bullet f'(z^{(2)})$
$\nabla_{W^{(2)}} J(W,b;x,y) = \delta^{(3)} (a^{(2)})^T$
$\nabla_{W^{(1)}} J(W,b;x,y) = \delta^{(2)} (a^{(1)})^T$

Here, $\bullet$ denotes the element-wise product. For simplicity, our description here will ignore the derivatives with respect to $b^{(l)}$, though your implementation of backpropagation will have to compute those derivatives too.

% Unvectorized implementation of backpropagation
gradW1 = zeros(size(W1));
gradW2 = zeros(size(W2));
for i=1:m,
  delta3 = -(y(:,i) - h(:,i)) .* fprime(z3(:,i));
  delta2 = (W2'*delta3) .* fprime(z2(:,i));
  gradW2 = gradW2 + delta3*a2(:,i)';
  gradW1 = gradW1 + delta2*a1(:,i)';
end;

This implementation has a for loop. We would like to come up with an implementation that simultaneously performs backpropagation on all the examples, and eliminates this for loop.

To do so, we will replace the vectors delta3 and delta2 with matrices, where each column corresponds to one training example. We will also implement a function fprime(z) that takes as input a matrix z, and applies $f'(\cdot)$ element-wise.
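
A minimal sketch of the result, assuming z2, z3, a2, and h are the matrices produced by the vectorized forward pass (one column per example), a1 = x, and fprime operates element-wise on matrices:

% Vectorized backpropagation over all m examples at once
delta3 = -(y - h) .* fprime(z3);      % one column of deltas per example
delta2 = (W2'*delta3) .* fprime(z2);
gradW2 = delta3*a2';                  % matrix product sums over examples
gradW1 = delta2*a1';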

Sparse autoencoder

When performing backpropagation on a single training example, we had taken into account the sparsity penalty by computing the following:

$\delta^{(2)} = \left( (W^{(2)})^T \delta^{(3)} + \beta \left( -\frac{\rho}{\hat\rho} + \frac{1-\rho}{1-\hat\rho} \right) \right) \bullet f'(z^{(2)})$
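
To vectorize this across examples, note that the sparsity term is the same for every example, so we can compute it once and tile it with repmat. A sketch, where sparsityParam stands for the target activation $\rho$, beta for the penalty weight $\beta$, and a2 is the s2-by-m matrix of hidden activations (the variable names here are illustrative):

% rhohat: average activation of each hidden unit over the m examples
rhohat = (1/m) * sum(a2, 2);
sparsity_delta = beta * (-sparsityParam./rhohat + (1-sparsityParam)./(1-rhohat));
% Add the (identical) sparsity term to every column of W2'*delta3
delta2 = (W2'*delta3 + repmat(sparsity_delta, 1, m)) .* fprime(z2);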

In other words, don't use a loop to update the parameters one example at a time; instead, organize the examples into matrices and apply matrix operations, which is far more efficient.
