PCA

Given a set of two-dimensional data, one sample per column, 45 sample points in total:

-6.7644914e-01  -6.3089308e-01  -4.8915202e-01 ...

-4.4722050e-01  -7.4778067e-01  -3.9074344e-01 ...

This can be written in the following matrix form:

$$X = \begin{bmatrix} -0.67644914 & -0.63089308 & -0.48915202 & \cdots \\ -0.44722050 & -0.74778067 & -0.39074344 & \cdots \end{bmatrix}$$

In this example each x(i) is a 2-dimensional vector; the whole data set X is a 2*m matrix whose columns are the individual data points, and its transpose X' is an m*2 matrix:

$$X = \begin{bmatrix} x^{(1)} & x^{(2)} & \cdots & x^{(m)} \end{bmatrix}, \qquad X^\top = \begin{bmatrix} (x^{(1)})^\top \\ \vdots \\ (x^{(m)})^\top \end{bmatrix}$$

Assume the data above have already been mean-normalized (variance normalization is omitted here). The covariance matrix Σ of the data is then the 2*2 matrix

$$\Sigma = \frac{1}{m} X X^\top$$

Diagonalizing this symmetric matrix gives

$$\Sigma = U \Lambda U^\top$$

with U orthogonal and Λ diagonal.

This is the 2-dimensional case; for n dimensions, the diagonalization yields a new basis of n vectors

$$U = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}$$

with transpose

$$U^\top = \begin{bmatrix} u_1^\top \\ u_2^\top \\ \vdots \\ u_n^\top \end{bmatrix}$$

The projection of the original data onto U is simply U^T * X:

$$x_{\mathrm{rot}} = U^\top x, \qquad X_{\mathrm{rot}} = U^\top X$$

For 2-D data, U^T is a 2*2 matrix, so U^T * X is a new 2*m matrix X_rot: the original data expressed in the new basis, in which the individual dimensions are uncorrelated. Taking k < n and projecting the data onto only the first k basis vectors u_1, …, u_k completes the dimensionality reduction (a sketch follows below). (Figure: scatter plot of the rotated data xRot.)
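As a minimal MATLAB sketch of the rotation step (assuming x already holds the zero-mean 2*m data, as in the exercise code further below):

[n, m] = size(x);          % n = 2 dimensions, m = 45 samples
sigma = (1.0/m) * x * x';  % 2x2 covariance matrix
[u, s, v] = svd(sigma);    % columns of u form the eigenbasis
xRot = u' * x;             % the data expressed in the new basis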

The basis-transformed data can also be mapped back. Suppose we have the low-dimensional "compressed" representation x̃ of an original point x; conversely, given x̃, how do we recover x? Since the basis is U, rotating back only requires x = U x_rot. Further, x̃ can be viewed as the approximation obtained by setting the last n−k elements of x_rot to 0, so given x̃ we can approximate x_rot by appending n−k zeros at its end; finally, left-multiplying by U approximately recovers the original data x. Concretely:

$$\hat{x} = U \begin{bmatrix} \tilde{x} \\ 0 \end{bmatrix} = \sum_{i=1}^{k} u_i \tilde{x}_i \approx x$$

(Figure: scatter plot of the recovered data xHat.)
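A minimal sketch of the reduce-and-recover step (assuming u, n, m as above, and k = 1):

k = 1;
xTilde = u(:,1:k)' * x;               % k x m compressed representation
xHat   = u * [xTilde; zeros(n-k, m)]; % pad with zeros and rotate back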

Now for whitening. Whitening first applies the basis change to the data (without reducing dimension), and then divides each dimension i of the transformed data by its standard deviation √λ_i, so that the data end up with normalized mean and variance: x_PCAwhite,i = x_rot,i / √λ_i. As an aside, a passage worth quoting:

Apart from the number of layers and the number of hidden units per layer, there isn't much to tune. For the other hyperparameters, papers from the last couple of years mostly use the same settings: train for a few dozen to a few hundred epochs; SGD with a mini-batch size anywhere from a few dozen to a few hundred; learning rate 0.1, shrunk by hand as needed; weight decay 0.005; momentum 0.9; dropout plus ReLU; weights initialized from a Gaussian, biases initialized to 0. Finally, remember to normalize both the input features and the prediction targets. Do all of this and your neural network should produce basically reasonable results; otherwise, reflect on your karma.

For ZCA whitening, simply left-multiply the PCA-whitened data by the eigenvector matrix U: x_ZCAwhite = U * x_PCAwhite.
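Both whitening variants in one minimal sketch (assuming u, s, x as above; epsilon is a small regularizer that guards against dividing by a near-zero eigenvalue):

epsilon = 1e-5;
xPCAWhite = diag(1 ./ sqrt(diag(s) + epsilon)) * u' * x; % rescale each rotated dimension
xZCAWhite = u * xPCAWhite;                               % rotate back to the original axes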

The full MATLAB code:

close all

%%================================================================
%% Step 0: Load data
% We have provided the code to load data from pcaData.txt into x.
% x is a 2 * 45 matrix, where the kth column x(:,k) corresponds to
% the kth data point. You do not need to change the code below.

x = load('pcaData.txt','-ascii');
figure(1);
scatter(x(1, :), x(2, :));
title('Raw data');

%%================================================================
%% Step 1a: Implement PCA to obtain U
% Implement PCA to obtain the rotation matrix U, which is the eigenbasis
% of sigma.

% -------------------- YOUR CODE HERE --------------------
u = zeros(size(x, 1)); % You need to compute this
[n m] = size(x);
p = mean(x,2);          % per-row mean; p is a 2-dimensional column vector
%x = x-repmat(p,1,m);   % preprocessing: make each dimension zero-mean
sigma = (1.0/m)*x*x';   % covariance matrix
[u s v] = svd(sigma);   % SVD yields the eigenvalues and eigenvectors
% --------------------------------------------------------
hold on
plot([0 u(1,1)], [0 u(2,1)]); % first eigenvector
plot([0 u(1,2)], [0 u(2,2)]); % second eigenvector
scatter(x(1, :), x(2, :));
hold off

%%================================================================
%% Step 1b: Compute xRot, the projection on to the eigenbasis
% Now, compute xRot by projecting the data on to the basis defined
% by U. Visualize the points by performing a scatter plot.

% -------------------- YOUR CODE HERE --------------------
xRot = zeros(size(x)); % initialize the rotated data
xRot = u'*x;           % change of basis
% --------------------------------------------------------
% Visualise the rotated data. The two dimensions should now be
% uncorrelated.
figure(2);
scatter(xRot(1, :), xRot(2, :));
title('xRot');

%%================================================================
%% Step 2: Reduce the number of dimensions from 2 to 1.
% Compute xRot again (this time projecting to 1 dimension).
% Then, compute xHat by projecting the xRot back onto the original axes
% to see the effect of dimension reduction.

% -------------------- YOUR CODE HERE --------------------
% Recover the original data from the projected data.
k = 1; % Use k = 1 and project the data onto the first eigenbasis
xHat = zeros(size(x)); % the recovered data
% [u(:,1),zeros(n,1)]'*x projects the data onto the first k (= 1) new
% basis vectors; the remaining dimensions are set to 0.
% Rotating back with u recovers the approximation: xHat = u * xRot.
xHat = u*([u(:,1),zeros(n,1)]'*x); % n is the data dimension
% --------------------------------------------------------
figure(3);
scatter(xHat(1, :), xHat(2, :));
title('xHat');

%%================================================================
%% Step 3: PCA Whitening
% Compute xPCAWhite and plot the results.

epsilon = 1e-5;
% -------------------- YOUR CODE HERE --------------------
xPCAWhite = zeros(size(x)); % You need to compute this
% s is diagonal, and diag(s) returns the column vector of its diagonal
% entries. diag(1./sqrt(diag(s)+epsilon)) builds a diagonal matrix whose
% diagonal entries are 1./sqrt(diag(s)+epsilon).
% The rotated data are xRot = u'*x, so the line below divides each
% dimension of xRot by its (regularized) standard deviation.
xPCAWhite = diag(1./sqrt(diag(s)+epsilon))*u'*x;
% --------------------------------------------------------
figure(4);
scatter(xPCAWhite(1, :), xPCAWhite(2, :));
title('xPCAWhite');

%%================================================================
%% Step 4: ZCA Whitening
% Compute xZCAWhite and plot the results.

% -------------------- YOUR CODE HERE --------------------
xZCAWhite = zeros(size(x)); % You need to compute this
xZCAWhite = u*diag(1./sqrt(diag(s)+epsilon))*u'*x;
% --------------------------------------------------------
figure(5);
scatter(xZCAWhite(1, :), xZCAWhite(2, :));
title('xZCAWhite');

%% Congratulations! When you have reached this point, you are done!
% You can now move onto the next PCA exercise. :)

A small experiment with PCA, whitening, and ZCA, adapted from http://deeplearning.stanford.edu/wiki/index.php/Exercise:PCA_and_Whitening

%%================================================================
%% Step 0a: Load data
%  Randomly sample 10000 image patches into the matrix x.
%  x is a 144 * 10000 matrix; the kth column x(:, k) is the kth patch.
 
x = sampleIMAGESRAW();
figure('name','Raw images');
randsel = randi(size(x,2),200,1); % A random selection of samples for visualization
display_network(x(:,randsel));
 
%%================================================================
%% Step 0b: Zero-mean the data (per patch)
%  You can make use of the mean and repmat/bsxfun functions.
[n m] = size(x);
p = mean(x,1);         % 1 x m row vector holding the mean of each patch
x = x - repmat(p,n,1); % subtract each patch's mean from all of its pixels
%%================================================================
%% Step 1a: Implement PCA to obtain xRot
%  Implement PCA to obtain xRot, the matrix in which the data is expressed
%  with respect to the eigenbasis of sigma, which is the matrix U.
 
xRot = zeros(size(x)); % the data expressed in the new basis
sigma = (1.0/m)*x*x';
[u s v] = svd(sigma);
xRot = u'*x;
 
%%================================================================
%% Step 1b: Check your implementation of PCA
%  In the new basis U the covariance of the data is diagonal: only the
%  entries on the main diagonal are non-zero.
%  Write code to compute the covariance matrix, covar.
%  When visualised as an image, you should see a straight line across the
%  diagonal (non-zero entries) against a blue background (zero entries).
 
% -------------------- YOUR CODE HERE --------------------
covar = zeros(size(x, 1)); % You need to compute this
covar = (1./m)*xRot*xRot'; % the rotated data still have zero mean, so this is the covariance
 
% Visualise the covariance matrix. You should see a line across the
% diagonal against a blue background.
figure('name','Visualisation of covariance matrix');
imagesc(covar);
 
%%================================================================
%% Step 2: Find k, the number of components to retain
%  Write code to determine k, the number of components to retain in order
%  to retain at least 99% of the variance.
%  i.e. keep enough components to retain a 99% variance ratio.
 
% -------------------- YOUR CODE HERE --------------------
k = 0; % Set k accordingly
lambd = diag(s); % column vector of the eigenvalues on the diagonal
% Loop to find the smallest k that retains 99% of the variance
for k = 1:n
    if sum(lambd(1:k))/sum(lambd) >= 0.99
        break;
    end
end
% An alternative way to compute k:
% cumsum(lambd) is the running sum of the eigenvalues, so
% (cumsum(lambd)/sum(lambd)) <= 0.99 is a 0/1 vector whose entries are 1
% while the retained-variance ratio is still below 99%.
% k = length(lambd((cumsum(lambd)/sum(lambd)) <= 0.99)) + 1;
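% Optional sanity check (a small addition, not part of the original
% exercise): report the variance actually retained by the chosen k.
fprintf('k = %d retains %.2f%% of the variance\n', ...
        k, 100*sum(lambd(1:k))/sum(lambd));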
 
%%================================================================
%% Step 3: Implement PCA with dimension reduction
%  Now that you have found k, you can reduce the dimension of the data by
%  discarding the remaining dimensions. In this way, you can represent the
%  data in k dimensions instead of the original 144, which will save you
%  computational time when running learning algorithms on the reduced
%  representation.
%
%  Following the dimension reduction, invert the PCA transformation to produce
%  the matrix xHat, the dimension-reduced data with respect to the original basis.
%  Visualise the data and compare it to the raw data. You will observe that
%  there is little loss due to throwing away the principal components that
%  correspond to dimensions with low variation.
 
% -------------------- YOUR CODE HERE --------------------
xHat = zeros(size(x));  % You need to compute this
% u(:,1:k)'*x projects x onto the first k basis vectors, giving the
% k x m compressed representation; pad dimensions k+1..n with zeros,
% then left-multiply by u to rotate back to the original basis,
% which yields the dimension-reduced data in the original coordinates.
 
xHat = u*[u(:,1:k)'*x;zeros(n-k,m)];
 
% Visualise the data, and compare it to the raw data
% You should observe that the raw and processed data are of comparable quality.
% For comparison, you may wish to generate a PCA reduced image which
% retains only 90% of the variance.
 
figure('name',['PCA processed images ',sprintf('(%d / %d dimensions)', k, size(x, 1)),'']);
display_network(xHat(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));
 
%%================================================================
%% Step 4a: Implement PCA with whitening and regularisation
%  Implement PCA with whitening and regularisation to produce the matrix
%  xPCAWhite.
 
epsilon = 0.1;
xPCAWhite = zeros(size(x));
 
% PCA whitening: rotate with u', then divide each dimension i of the
% rotated data by sqrt(lambda_i + epsilon)
xPCAWhite = diag(1./sqrt(diag(s) + epsilon))* u' * x;
figure('name','PCA whitened images');
display_network(xPCAWhite(:,randsel));
%%================================================================
%% Step 4b: Check your implementation of PCA whitening
% Check your implementation of PCA whitening with and without regularisation.
% PCA whitening without regularisation results in a covariance matrix
% that is equal to the identity matrix. PCA whitening with regularisation
% results in a covariance matrix with diagonal entries starting close to
% 1 and gradually becoming smaller. We will verify these properties here.
% Write code to compute the covariance matrix, covar.
% Without regularisation (set epsilon to 0 or close to 0),
% when visualised as an image, you should see a red line across the
% diagonal (one entries) against a blue background (zero entries).
% With regularisation, you should see a red line that slowly turns
% blue across the diagonal, corresponding to the one entries slowly
% becoming smaller.
% -------------------- YOUR CODE HERE --------------------
covar = (1./m)*xPCAWhite*xPCAWhite';

% Visualise the covariance matrix. You should see a red line across the
% diagonal against a blue background.
figure('name','Visualisation of covariance matrix');
imagesc(covar);
%%================================================================
%% Step 5: Implement ZCA whitening
%  Now implement ZCA whitening to produce the matrix xZCAWhite.
%  Visualise the data and compare it to the raw data. You should observe
%  that whitening results in, among other things, enhanced edges.
xZCAWhite = zeros(size(x));
% ZCA whitening: rotate the PCA-whitened data back to the original basis
xZCAWhite = u*xPCAWhite;
% Visualise the data, and compare it to the raw data.
% You should observe that the whitened images have enhanced edges.
figure('name','ZCA whitened images');
display_network(xZCAWhite(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));

References:

http://www.cnblogs.com/tornadomeet/archive/2013/03/21/2973231.html

UFLDL

