Generalized Low Rank Approximations of Matrices
JIEPING YE* (jieping@cs.umn.edu)
Department of Computer Science & Engineering, University of Minnesota-Twin Cities, Minneapolis, MN 55455, USA
Published online: 12 August 2005
Abstract. The problem of computing low rank approximations of matrices is considered. The novel aspect of our approach is that the low rank approximations are computed on a collection of matrices. We formulate this as an optimization problem, which aims to minimize the reconstruction (approximation) error. To the best of our knowledge, the optimization problem proposed in this paper does not admit a closed form solution. We thus derive an iterative algorithm, namely GLRAM, which stands for the Generalized Low Rank Approximations of Matrices. GLRAM reduces the reconstruction error sequentially, and the resulting approximation is thus improved during successive iterations. Experimental results show that the algorithm converges rapidly.
We have conducted extensive experiments on image data to evaluate the effectiveness of the proposed algorithm and to compare the computed low rank approximations with those obtained from traditional Singular Value Decomposition (SVD) based methods. The comparison is based on the reconstruction error, misclassification error rate, and computation time. Results show that GLRAM is competitive with SVD for classification, while it has a much lower computation cost. However, GLRAM results in a larger reconstruction error than SVD. To further reduce the reconstruction error, we study the combination of GLRAM and SVD, namely GLRAM + SVD, where SVD is preceded by GLRAM. Results show that, when using the same number of reduced dimensions, GLRAM + SVD achieves a significant reduction of the reconstruction error as compared to GLRAM, while keeping the computation cost low.
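To make the iteration concrete, here is a minimal Python/NumPy sketch of the GLRAM updates as commonly described: given matrices A_1, ..., A_n of size r x c, alternately fix the right transformation R and take L from the top eigenvectors of sum_i A_i R R^T A_i^T, then fix L and take R from the top eigenvectors of sum_i A_i^T L L^T A_i; the reduced representations are M_i = L^T A_i R. This is not code from the paper; the function name, argument names, initialization, and fixed iteration count are illustrative assumptions.

```python
import numpy as np

def glram(matrices, l1, l2, n_iter=20, seed=0):
    # Illustrative GLRAM sketch: find L (r x l1) and R (c x l2) with
    # orthonormal columns, and M_i = L^T A_i R, so as to reduce
    # sum_i ||A_i - L M_i R^T||_F^2.
    rng = np.random.default_rng(seed)
    c = matrices[0].shape[1]
    # Initialize R with orthonormal columns via a reduced QR factorization.
    R, _ = np.linalg.qr(rng.standard_normal((c, l2)))
    for _ in range(n_iter):
        # Fix R: columns of L are the top-l1 eigenvectors of sum_i A_i R R^T A_i^T.
        S_L = sum(A @ R @ R.T @ A.T for A in matrices)
        w, V = np.linalg.eigh(S_L)              # eigenvalues in ascending order
        L = V[:, np.argsort(w)[::-1][:l1]]
        # Fix L: columns of R are the top-l2 eigenvectors of sum_i A_i^T L L^T A_i.
        S_R = sum(A.T @ L @ L.T @ A for A in matrices)
        w, V = np.linalg.eigh(S_R)
        R = V[:, np.argsort(w)[::-1][:l2]]
    M = [L.T @ A @ R for A in matrices]         # reduced (l1 x l2) representations
    return L, R, M

# Usage example on a small synthetic collection of "images".
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A_list = [rng.standard_normal((32, 32)) for _ in range(10)]
    L, R, M = glram(A_list, l1=5, l2=5)
    err = sum(np.linalg.norm(A - L @ Mi @ R.T, "fro") ** 2
              for A, Mi in zip(A_list, M))
    print("reconstruction error:", err)
```

In the GLRAM + SVD combination mentioned in the abstract, GLRAM is applied first as a preprocessing step; one natural realization is to vectorize the small reduced matrices M_i and then apply a standard SVD/PCA to them, which is cheap because the M_i are much smaller than the original A_i and further lowers the reconstruction error for the same number of reduced dimensions.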