Exercise:Sparse Autoencoder

Link to the exercise: Exercise:Sparse Autoencoder

Notes:

1. The pixel values of the training samples must be normalized.

The activation function of the output layer is the logistic function, whose range is (0,1).

If the pixels of the training samples are not normalized, the autoencoder will be unable to reconstruct them.

2. In the training stage, the vectorized implementation is about ten times faster than the for-loop implementation.

3. The final image array takes the transpose of the weight matrix W1 and displays each column as one image.

Column i is in fact the image x_i that maximally activates the i-th hidden unit, multiplied by a constant factor C (where C is the L2 norm of row i of W1, i.e. the square root of the sum of squares of its elements).

The derivation can be found in: Visualizing a Trained Autoencoder
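
As a quick illustration of this scaling, here is a minimal sketch (assuming W1 is the learned hiddenSize x visibleSize weight matrix, and display_network.m from the exercise starter code is on the path):

% Each row of W1, divided by its L2 norm, is the input that maximally
% activates the corresponding hidden unit (subject to ||x||_2 <= 1).
X = bsxfun(@rdivide, W1, sqrt(sum(W1 .^ 2, 2)));
display_network(X');   % each column of X' is shown as one patch image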

My implementation:

sampleIMAGES.m

function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

load IMAGES;        % load images from disk

patchsize = 8;      % we'll use 8x8 patches
numpatches = 10000;

% Initialize patches with zeros. Your code will fill in this matrix--one
% column per patch, 10000 columns.
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data
%  from IMAGES.
%
%  IMAGES is a 3D array containing 10 images
%  For instance, IMAGES(:,:,6) is a 512x512 array containing the 6th image,
%  and you can type "imagesc(IMAGES(:,:,6)), colormap gray;" to visualize
%  it. (The contrast on these images look a bit off because they have
%  been preprocessed using "whitening." See the lecture notes for
%  more details.) As a second example, IMAGES(21:30,21:30,1) is an image
%  patch corresponding to the pixels in the block (21,21) to (30,30) of
%  Image 1

for i = 1:numpatches
    % generate random row & col number in [1, 512-patchsize+1 = 505]
    % generate random IMAGES id in [1, 10]
    row = round(1 + rand(1,1)*504);
    col = round(1 + rand(1,1)*504);
    pid = round(1 + rand(1,1)*9);
    patches(:, i) = reshape(IMAGES(row:row+patchsize-1, col:col+patchsize-1, pid), patchsize*patchsize, 1);
end

%% ---------------------------------------------------------------
% For the autoencoder to work well we need to normalize the data
% Specifically, since the output of the network is bounded between [0,1]
% (due to the sigmoid activation function), we have to make sure
% the range of pixel values is also bounded between [0,1]
patches = normalizeData(patches);

end

%% ---------------------------------------------------------------
function patches = normalizeData(patches)
% Squash data to [0.1, 0.9] since we use sigmoid as the activation
% function in the output layer

% Remove DC (mean of images).
patches = bsxfun(@minus, patches, mean(patches));

% Truncate to +/-3 standard deviations and scale to -1 to 1
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;

% Rescale from [-1,1] to [0.1,0.9]
patches = (patches + 1) * 0.4 + 0.1;

end
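
A quick way to sanity-check the sampling (a sketch along the lines of the starter code's train.m; display_network.m is assumed to be on the path):

patches = sampleIMAGES();
% show 200 randomly chosen patches; after normalizeData, all values
% should lie in [0.1, 0.9]
display_network(patches(:, randi(size(patches, 2), 200, 1)));
fprintf('min = %f, max = %f\n', min(patches(:)), max(patches(:)));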

computeNumericalGradient.m

function numgrad = computeNumericalGradient(J, theta)
% numgrad = computeNumericalGradient(J, theta)
% theta: a vector of parameters (column vector)
% J: a function that outputs a real number. Calling y = J(theta) will return the
% function value at theta.

% Initialize numgrad with zeros
numgrad = zeros(size(theta));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions:
% Implement numerical gradient checking, and return the result in numgrad.
% (See Section 2.3 of the lecture notes.)
% You should write code so that numgrad(i) is (the numerical approximation to) the
% partial derivative of J with respect to the i-th input argument, evaluated at theta.
% I.e., numgrad(i) should be (approximately) the partial derivative of J with
% respect to theta(i).
%
% Hint: You will probably want to compute the elements of numgrad one at a time.

N = size(theta, 1);
EPSILON = 1e-4;
Identity = eye(N);
for i = 1:N
    % central difference: (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)
    numgrad(i) = (J(theta + EPSILON * Identity(:, i)) - J(theta - EPSILON * Identity(:, i))) / (2 * EPSILON);
end

%% ---------------------------------------------------------------
end
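
To see that the central-difference approximation works, it can be checked against a simple function with a known gradient (this mirrors checkNumericalGradient.m from the starter code, which uses h(x) = x1^2 + 3*x1*x2 at the point (4, 10)):

J = @(x) x(1)^2 + 3 * x(1) * x(2);
theta = [4; 10];
numgrad = computeNumericalGradient(J, theta);
grad = [2 * theta(1) + 3 * theta(2); 3 * theta(1)];  % analytic gradient
% relative difference; should be very small (on the order of 1e-9 or less)
disp(norm(numgrad - grad) / norm(numgrad + grad));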

sparseAutoencoderCost.m

function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                notes by the greek letter rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
% and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc. Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1. I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j). Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = sigmoid(z3);

% squared reconstruction error
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);

% weight decay
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;

% sparsity penalty: KL divergence between sparsityParam and the
% average hidden activation rho
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% backpropagation
% delta3 is a visibleSize * numCases matrix
delta3 = -(data - a3) .* sigmoiddiff(z3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1 - sparsityParam) ./ (1 - rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
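
For reference, a minimal sketch of how this cost function is plugged into minFunc during training (the hyperparameter values are the ones suggested by the exercise; initializeParameters.m and minFunc come with the starter code):

visibleSize = 8 * 8;     % 8x8 input patches
hiddenSize = 25;
lambda = 0.0001;         % weight decay parameter
sparsityParam = 0.01;    % desired average activation rho
beta = 3;                % weight of the sparsity penalty

theta = initializeParameters(hiddenSize, visibleSize);
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                           lambda, sparsityParam, beta, patches), theta, options);

% visualize the learned features: each column of W1' is one hidden unit's filter
W1 = reshape(opttheta(1:hiddenSize * visibleSize), hiddenSize, visibleSize);
display_network(W1');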

Final training result:
