First, you will train a sparse autoencoder on an "unlabeled" training dataset of handwritten digits. This produces features that are penstroke-like. We then use the trained autoencoder to extract these learned features from a labeled dataset of handwritten digits, and use those features as inputs to the softmax classifier that you wrote in the previous exercise.

Concretely, for each example x in the labeled training dataset, we forward propagate it through the trained autoencoder to obtain the activations a of the hidden units. We then represent the example by a (the "replacement" representation) and use this as the new feature representation with which to train the softmax classifier.
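As a minimal sketch of this feature extraction (using the UFLDL convention that W1 and b1 are the autoencoder's input-to-hidden weights and biases, and that examples are stored as columns), the replacement representation is simply the sigmoid hidden-layer activation:

% Replacement representation: a = sigmoid(W1*x + b1), one column per example
trainFeatures = 1 ./ (1 + exp(-bsxfun(@plus, W1 * trainData, b1)));
testFeatures  = 1 ./ (1 + exp(-bsxfun(@plus, W1 * testData,  b1)));

This is exactly what feedForwardAutoencoder.m computes in the code below.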

Finally, we extract the same features from the test data (using the same learned weights, without retraining) and run the classifier on them to obtain predictions.

In this exercise, our goal is to distinguish between the digits 0 to 4. We will use the digits 5 to 9 as our "unlabeled" dataset from which to learn the features; we will then train the softmax classifier on a labeled dataset of the digits 0 to 4.
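For concreteness, both sets can be simulated from the standard MNIST training labels, as in Step 1 of the full code below:

labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4); % digits 0-4, labels kept
unlabeledSet = find(mnistLabels >= 5);                    % digits 5-9, labels discarded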

Step 1: Generate the input and test data sets
Step 2: Train the sparse autoencoder

Use the unlabeled data (the digits 5 to 9) to train a sparse autoencoder.

When training is complete, you should get a visualization of penstroke-like features (the learned weights of the hidden units).

Informally, the features learned by the sparse autoencoder should correspond to penstrokes.

Step 3: Extracting features

After the sparse autoencoder is trained, you will use it to extract features from the handwritten digit images.

Step 4: Training and testing the softmax classifier

Use your code from the softmax exercise (softmaxTrain.m) to train a softmax classifier using the training set features (trainFeatures) and labels (trainLabels).

Step 5: Classifying on the test set

Finally, complete the code to make predictions on the test set (testFeatures) and see how your learned features perform! If you've done all the steps correctly, you should get an accuracy of about 98%.

Code

%% CS294A/CS294W Self-taught Learning Exercise

%  Instructions
% ------------
%
% This file contains code that helps you get started on the
% self-taught learning exercise. You will need to complete code in feedForwardAutoencoder.m
% You will also need to have implemented sparseAutoencoderCost.m and
% softmaxCost.m from previous exercises.
%
%% ======================================================================
% STEP 0: Here we provide the relevant parameter values that will
% allow your sparse autoencoder to get good filters; you do not need to
% change the parameters below.

inputSize  = 28 * 28;
numLabels  = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek alphabet rho, which looks
                     % like a lower-case "p", in the lecture notes).
lambda = 3e-3;       % weight decay parameter
beta = 3;            % weight of sparsity penalty term
maxIter = 400;       % maximum number of optimization iterations

%% ======================================================================
% STEP 1: Load data from the MNIST database
%
% This loads our training and test data from the MNIST database files.
% We have sorted the data for you in this so that you will not have to
% change it.

% Load MNIST database files
mnistData   = loadMNISTImages('train-images.idx3-ubyte');
mnistLabels = loadMNISTLabels('train-labels.idx1-ubyte');

% Set Unlabeled Set (All Images)

% Simulate a Labeled and Unlabeled set
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);
unlabeledSet = unlabeledSet(1:end/3); % an added line: keep only part of the unlabeled set to save memory

numTest  = round(numel(labeledSet)/2); % use half of the labeled samples for training
numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:2*numTrain);

unlabeledData = mnistData(:, unlabeledSet); % why do these two lines fail when they run together?
% pack;

trainData   = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1; % Shift Labels to the Range 1-5
% mnistData2 = mnistData;

testData   = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1; % Shift Labels to the Range 1-5

% Output Some Statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));

%% ======================================================================
% STEP 2: Train the sparse autoencoder
% This trains the sparse autoencoder on the unlabeled training
% images.

% Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
% Find opttheta by running the sparse autoencoder on
% unlabeledTrainingImages

opttheta = theta;
addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);
%% -----------------------------------------------------

% Visualize weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

%% ======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%
% You need to complete the code in feedForwardAutoencoder.m so that the
% following command will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);
testFeatures  = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       testData);

%% ======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;
%% ----------------- YOUR CODE HERE ----------------------
% Use softmaxTrain.m from the previous exercise to train a multi-class
% classifier.

% Use lambda = 1e-4 for the weight regularization for softmax
lambda = 1e-4;
inputSize  = hiddenSize;
numClasses = numel(unique(trainLabels)); % unique finds the distinct elements of a vector and sorts them

% You need to compute softmaxModel using softmaxTrain on trainFeatures and
% trainLabels
options.maxIter = 100;
softmaxModel = softmaxTrain(inputSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
%% -----------------------------------------------------

%% ======================================================================
%% STEP 5: Testing

%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel

[pred] = softmaxPredict(softmaxModel, testFeatures);
%% -----------------------------------------------------

% Classification Score
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));

% (note that we shift the labels by 1, so that the digit 0 now corresponds to
% label 1)
%
% Accuracy is the proportion of correctly classified images
% The result for our implementation was:
%
% Accuracy: 98.3%
%
function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (here 28*28 = 784)
% hiddenSize: the number of hidden units (here 200)
% data: Our matrix containing the training data as columns. So, data(:,i) is the i-th training example.

% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.
% Each column of data is one example; repmat broadcasts b1 across all examples.
activation = sigmoid(W1*data + repmat(b1, [1, size(data, 2)]));
%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
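As an optional sanity check (a sketch; it assumes the workspace variables from the script above), the extracted features should have one column per example and, because of the sigmoid, lie strictly between 0 and 1:

% Features are hiddenSize x numExamples with entries in (0,1)
assert(isequal(size(trainFeatures), [hiddenSize, size(trainData, 2)]));
assert(all(trainFeatures(:) > 0 & trainFeatures(:) < 1));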
