Exercise: Self-Taught Learning
First, you will train your sparse autoencoder on an "unlabeled" training dataset of handwritten digits. This produces features that are pen-stroke-like. We then use the trained autoencoder to extract these learned features from a labeled dataset of handwritten digits, and use them as inputs to the softmax classifier that you wrote in the previous exercise.
Concretely, for each example $x_l$ in the labeled training dataset, we forward propagate the example to obtain the activations of the hidden units, $a^{(2)}$. We then represent the example using $a^{(2)}$ (the "replacement" representation), and use this as the new feature representation with which to train the softmax classifier.
Finally, we also extract the same features from the test data to obtain predictions.
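Informally, the "replacement" representation is just one feed-forward pass through the trained hidden layer. As a minimal sketch (assuming opttheta is the parameter vector returned by training, unpacked the same way as in feedForwardAutoencoder.m in the code below):

W1 = reshape(opttheta(1:hiddenSize*inputSize), hiddenSize, inputSize);
b1 = opttheta(2*hiddenSize*inputSize+1 : 2*hiddenSize*inputSize+hiddenSize);
a2 = sigmoid(W1*x + b1); % a2 is the hiddenSize-by-1 replacement representation of example x

Here x is a single inputSize-by-1 example; the script below applies the same computation to a whole matrix of examples at once.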
In this exercise, our goal is to distinguish between the digits 0 to 4. We will use the digits 5 to 9 as our "unlabeled" dataset from which to learn the features; we will then use a labeled dataset of the digits 0 to 4 to train the softmax classifier.
Step 1: Generate the input and test data sets
Step 2: Train the sparse autoencoder
Use the unlabeled data (the digits from 5 to 9) to train a sparse autoencoder.
When training is complete, the learned weights should visualize as pen strokes.
Informally, the features learned by the sparse autoencoder should correspond to pen strokes.
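For reference, the sparsity penalty that shapes these features (from the earlier sparse-autoencoder exercise; the notation follows the lecture notes) is

$$\beta \sum_{j=1}^{\text{hiddenSize}} \left[ \rho \log\frac{\rho}{\hat\rho_j} + (1-\rho)\log\frac{1-\rho}{1-\hat\rho_j} \right],$$

where $\rho$ is sparsityParam (0.1 here) and $\hat\rho_j$ is the average activation of hidden unit $j$ over the unlabeled examples; this pressure toward sparse activations is what pushes the hidden units toward localized, pen-stroke-like filters.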
Step 3: Extracting features
After the sparse autoencoder is trained, you will use it to extract features from the handwritten digit images.
Step 4: Training and testing the softmax model
Use your code from the softmax exercise (softmaxTrain.m) to train a softmax classifier using the training set features (trainFeatures) and labels (trainLabels).
Step 5: Classifying on the test set
Finally, complete the code to make predictions on the test set (testFeatures) and see how your learned features perform! If you've done all the steps correctly, you should get an accuracy of about 98%.
Code
%% CS294A/CS294W Self-taught Learning Exercise
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  self-taught learning exercise. You will need to complete code in
%  feedForwardAutoencoder.m. You will also need to have implemented
%  sparseAutoencoderCost.m and softmaxCost.m from previous exercises.
%
%% ======================================================================
%  STEP 0: Here we provide the relevant parameter values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

inputSize  = 28 * 28;
numLabels  = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek letter rho, which looks
                     % like a lower-case "p", in the lecture notes).
lambda = 3e-3;       % weight decay parameter
beta = 3;            % weight of the sparsity penalty term
maxIter = 400;

%% ======================================================================
%  STEP 1: Load data from the MNIST database
%
%  This loads our training and test data from the MNIST database files.
%  We have sorted the data for you in this so that you will not have to
%  change it.

% Load MNIST database files
mnistData   = loadMNISTImages('train-images.idx3-ubyte');
mnistLabels = loadMNISTLabels('train-labels.idx1-ubyte');

% Set Unlabeled Set (All Images)
% Simulate a Labeled and Unlabeled set
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);
unlabeledSet = unlabeledSet(1:end/3); % an added line: keep only a third of the unlabeled set

numTest  = round(numel(labeledSet)/2); % use half of the labeled samples for training
numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:2*numTrain);

unlabeledData = mnistData(:, unlabeledSet); % why do these two lines fail when run back-to-back?
% pack;
trainData   = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1; % Shift labels to the range 1-5
% mnistData2 = mnistData;
testData   = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1; % Shift labels to the range 1-5

% Output some statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));

%% ======================================================================
%  STEP 2: Train the sparse autoencoder
%  This trains the sparse autoencoder on the unlabeled training
%  images.

% Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on
%  unlabeledTrainingImages
opttheta = theta;

addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = maxIter;
options.display = 'on';
[opttheta, loss] = minFunc( @(p) sparseAutoencoderCost(p, ...
                            inputSize, hiddenSize, ...
                            lambda, sparsityParam, ...
                            beta, unlabeledData), ...
                            theta, options);
%% -----------------------------------------------------

% Visualize weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

%%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%
%  You need to complete the code in feedForwardAutoencoder.m so that the
%  following command will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);
testFeatures  = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       testData);

%%======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;
%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier.

%  Use lambda = 1e-4 for the weight regularization for softmax
lambda = 1e-4;
inputSize = hiddenSize; % the softmax now sees hiddenSize-dimensional features, not raw pixels
numClasses = numel(unique(trainLabels)); % unique returns the distinct elements of a vector, sorted

%  You need to compute softmaxModel using softmaxTrain on trainFeatures and
%  trainLabels
options.maxIter = 100;
softmaxModel = softmaxTrain(inputSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
%% -----------------------------------------------------

%%======================================================================
%% STEP 5: Testing

%% ----------------- YOUR CODE HERE ----------------------
%  Compute Predictions on the test set (testFeatures) using softmaxPredict
%  and softmaxModel
[pred] = softmaxPredict(softmaxModel, testFeatures);
%% -----------------------------------------------------

% Classification Score
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));
% (note that we shift the labels by 1, so that the digit 0 now corresponds to
%  label 1)
%
% Accuracy is the proportion of correctly classified images
% The results for our implementation were:
%
% Accuracy: 98.3%
%
function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (probably 784)
% hiddenSize: the number of hidden units (probably 200)
% data: Our matrix containing the training data as columns. So, data(:,i) is the i-th training example.

% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.
activation = sigmoid(W1*data + repmat(b1, [1, size(data, 2)]));
%-------------------------------------------------------------------

end
%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
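As a quick sanity check on the extracted features (a sketch; it uses only variables already defined in the script above), the replacement representation should be a hiddenSize-by-numExamples matrix of sigmoid activations:

assert(isequal(size(trainFeatures), [hiddenSize, size(trainData, 2)]));
assert(all(trainFeatures(:) > 0 & trainFeatures(:) < 1)); % sigmoid outputs lie strictly in (0, 1)

If either assertion fails, the unpacking of W1 and b1 in feedForwardAutoencoder.m is the first place to look.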