First, you will train your sparse autoencoder on an "unlabeled" training dataset of handwritten digits. This produces features that are pen-stroke-like. We then extract these learned features from a labeled dataset of handwritten digits. These features will then be used as inputs to the softmax classifier that you wrote in the previous exercise.

Concretely, for each example x in the labeled training dataset, we forward propagate it through the autoencoder to obtain the activations a of the hidden units. We now represent the example using a (the "replacement" representation), and use this as the new feature representation with which to train the softmax classifier.
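As a minimal sketch of this step (assuming W1 and b1 are the trained first-layer weights and bias, in the notation of the lecture notes), the replacement representation of one example x is just the hidden-layer activation:

a = sigmoid(W1 * x + b1); % a is hiddenSize x 1 and replaces x as the feature vector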

Finally, we also extract the same features from the test data to obtain predictions.

In this exercise, our goal is to distinguish between the digits from 0 to 4. We will use the digits 5 to 9 as our "unlabeled" dataset with which to learn the features; we will then use a labeled dataset with the digits 0 to 4 with which to train the softmax classifier.
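For concreteness, here is a sketch of how the labels are used to simulate the two sets (assuming mnistLabels as returned by loadMNISTLabels; the full script below does exactly this):

labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4); % digits 0-4: labeled set
unlabeledSet = find(mnistLabels >= 5);                    % digits 5-9: "unlabeled" set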

Step 1: Generate the input and test data sets
Step 2: Train the sparse autoencoder

Use the unlabeled data (the digits from 5 to 9) to train a sparse autoencoder.

When training is complete, you should get a visualization of pen strokes like the image shown below:

Informally, the features learned by the sparse autoencoder should correspond to penstrokes.

Step 3: Extracting features

After the sparse autoencoder is trained, you will use it to extract features from the handwritten digit images.

Step 4: Training and testing the logistic regression model

Use your code from the softmax exercise (softmaxTrain.m) to train a softmax classifier using the training set features (trainFeatures) and labels (trainLabels).

Step 5: Classifying on the test set

Finally, complete the code to make predictions on the test set (testFeatures) and see how your learned features perform! If you've done all the steps correctly, you should get an accuracy of about 98%.

Code

%% CS294A/CS294W Self-taught Learning Exercise

%  Instructions
% ------------
%
% This file contains code that helps you get started on the
% self-taught learning exercise. You will need to complete the code in feedForwardAutoencoder.m.
% You will also need to have implemented sparseAutoencoderCost.m and
% softmaxCost.m from previous exercises.
%
%% ======================================================================
%  STEP 0: Here we provide the relevant parameter values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

inputSize  = 28 * 28;
numLabels  = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek letter rho, which looks
                     % like a lower-case "p", in the lecture notes.)
lambda = 3e-3;       % weight decay parameter
beta = 3;            % weight of sparsity penalty term
maxIter = 400;
%% ======================================================================
%  STEP 1: Load data from the MNIST database
%
%  This loads our training and test data from the MNIST database files.
%  We have sorted the data for you in this so that you will not have to
%  change it.

% Load MNIST database files
mnistData   = loadMNISTImages('train-images.idx3-ubyte');
mnistLabels = loadMNISTLabels('train-labels.idx1-ubyte');

% Simulate a labeled and unlabeled set
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);
unlabeledSet = unlabeledSet(1:end/3);        % added line: keep only a third of the unlabeled examples

numTest  = round(numel(labeledSet)/2);       % use half of the labeled samples for training
numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:2*numTrain);

unlabeledData = mnistData(:, unlabeledSet);  % why do these two adjacent statements cause an error?
% pack;
trainData   = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1;    % Shift Labels to the Range 1-5
% mnistData2 = mnistData;
testData   = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1;      % Shift Labels to the Range 1-5

% Output some statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));

%% ======================================================================
%  STEP 2: Train the sparse autoencoder
%  This trains the sparse autoencoder on the unlabeled training images.

%  Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on the unlabeled
%  training images

opttheta = theta;
addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = maxIter;
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);
%% -----------------------------------------------------

% Visualize the learned weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

%%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%
%  You need to complete the code in feedForwardAutoencoder.m so that the
%  following commands will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);

testFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                      testData);

%%======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;

%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier.

%  Use lambda = 1e-4 for the weight regularization for softmax
lambda = 1e-4;
inputSize = hiddenSize;
numClasses = numel(unique(trainLabels)); % unique returns the distinct elements of a vector, sorted

%  You need to compute softmaxModel using softmaxTrain on trainFeatures
%  and trainLabels
options.maxIter = 100;
softmaxModel = softmaxTrain(inputSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
%% -----------------------------------------------------

%%======================================================================
%% STEP 5: Testing

%% ----------------- YOUR CODE HERE ----------------------
%  Compute predictions on the test set (testFeatures) using softmaxPredict
%  and softmaxModel

[pred] = softmaxPredict(softmaxModel, testFeatures);
%% -----------------------------------------------------

% Classification Score
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));

% (Note that we shift the labels by 1, so that the digit 0 now corresponds to
%  label 1.)
%
% Accuracy is the proportion of correctly classified images.
% The result for our implementation was:
%
% Accuracy: 98.3%
function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (784 here)
% hiddenSize: the number of hidden units (200 here)
% data: our matrix containing the training data as columns, so that
%       data(:,i) is the i-th training example.

% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that
% this follows the notation convention of the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the sparse autoencoder.

activation = sigmoid(W1*data + repmat(b1, [1, size(data,2)]));

%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
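As a quick sanity check of feedForwardAutoencoder (a hypothetical smoke test with made-up dimensions; initializeParameters is the same initializer used in the script above), the function should map a visibleSize x m batch to a hiddenSize x m feature matrix:

% hypothetical smoke test: random parameters, random data
visibleSize = 784; hiddenSize = 200; m = 10;
theta = initializeParameters(hiddenSize, visibleSize);
X = rand(visibleSize, m);                  % m fake "images" as columns
F = feedForwardAutoencoder(theta, hiddenSize, visibleSize, X);
assert(isequal(size(F), [hiddenSize, m])); % one feature column per example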
