Comparison

|                           | LSA             | pLSA                         |
|---------------------------|-----------------|------------------------------|
| 1. Theoretical background | Linear Algebra  | Probability and Statistics   |
| 2. Objective function     | Frobenius norm  | Likelihood function          |
| 3. Handles polysemy       | No              | Yes                          |
| 4. Folding-in             | Straightforward | Complicated                  |

1. LSA stems from linear algebra: it is essentially a (truncated) Singular Value Decomposition of the term-document matrix. pLSA, on the other hand, has a strong probabilistic grounding as a latent variable model.
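To make this concrete, here is a minimal NumPy sketch of LSA as a truncated SVD; the term-document counts are made-up toy values, and the choice of two latent concepts is arbitrary.

```python
# A minimal LSA sketch: truncated SVD of a toy term-document count matrix.
import numpy as np

# Rows = terms, columns = documents (toy counts, purely illustrative).
X = np.array([[2, 0, 1, 0],
              [1, 3, 0, 0],
              [0, 1, 0, 2],
              [0, 0, 2, 1]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2  # number of latent "concepts" to keep
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Rank-k approximation of X (the least squares optimum in Frobenius norm).
X_k = U_k @ np.diag(s_k) @ Vt_k
# One k-dimensional vector per document in the latent space.
doc_embeddings = (np.diag(s_k) @ Vt_k).T
```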

2. SVD is a least squares method: it finds the low-rank matrix approximation that minimizes the Frobenius norm of the difference from the original matrix. Moreover, as is well known in machine learning, the least squares solution corresponds to the maximum likelihood solution when experimental errors are Gaussian. LSA therefore makes an implicit assumption of Gaussian noise on the term counts. The objective function maximized in pLSA, by contrast, is the likelihood function of multinomial sampling.
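For contrast, below is a bare-bones sketch of the EM updates that maximize the pLSA multinomial likelihood, in the asymmetric formulation P(w|d) = Σ_z P(w|z) P(z|d); the toy counts, random initialization, and fixed iteration count are illustrative choices, not a reference implementation.

```python
# A bare-bones pLSA EM sketch: P(w|d) = sum_z P(w|z) P(z|d).
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents).
X = np.array([[2, 0, 1, 0],
              [1, 3, 0, 0],
              [0, 1, 0, 2],
              [0, 0, 2, 1]], dtype=float)
n_terms, n_docs, k = X.shape[0], X.shape[1], 2

rng = np.random.default_rng(0)
p_w_z = rng.random((n_terms, k)); p_w_z /= p_w_z.sum(axis=0)  # P(w|z)
p_z_d = rng.random((k, n_docs));  p_z_d /= p_z_d.sum(axis=0)  # P(z|d)

for _ in range(100):
    # E-step: responsibilities P(z|d,w), shape (terms, docs, topics).
    joint = p_w_z[:, None, :] * p_z_d.T[None, :, :]
    p_z_dw = joint / joint.sum(axis=2, keepdims=True)
    # M-step: expected counts n(d,w) * P(z|d,w), then renormalize.
    weighted = X[:, :, None] * p_z_dw
    p_w_z = weighted.sum(axis=1)      # sum over documents -> (terms, topics)
    p_w_z /= p_w_z.sum(axis=0)
    p_z_d = weighted.sum(axis=0).T    # sum over terms -> (topics, docs)
    p_z_d /= p_z_d.sum(axis=0)
```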

The values in the concept-term matrix found by LSA are not normalized and may even be negative. The values found by pLSA, on the other hand, are probabilities, which means they are interpretable and can be combined with other models.

Note: SVD is equivalent to PCA (Principal Component Analysis) when the data is centered, i.e. has zero mean.
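A quick numerical sanity check of this note (random data, purely illustrative): on centered data, the right singular vectors coincide, up to sign, with the eigenvectors of the sample covariance matrix that PCA computes.

```python
# Checking the SVD/PCA equivalence on centered data.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))           # 100 samples, 5 features
Xc = X - X.mean(axis=0)                 # center each feature (zero mean)

# PCA: eigenvectors of the sample covariance matrix.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
eigvecs = eigvecs[:, ::-1]              # largest variance first

# SVD of the centered data: right singular vectors = principal axes.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# The directions agree up to sign.
print(np.allclose(np.abs(Vt), np.abs(eigvecs.T), atol=1e-8))  # True
```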

3. Both LSA and pLSA can handle synonymy, but LSA cannot handle polysemy: each word is represented by a single point in the latent space, so all of its senses collapse into one representation.

4. LSA and pLSA analyze a corpus of documents in order to find a new low-dimensional representation of it. To be comparable, new documents that were not originally in the corpus must be projected into the lower-dimensional space too. This is called "folding-in". Folded-in documents do not contribute to learning the factored representation, so it is necessary to rebuild the model using all the documents from time to time.

In LSA, folding-in is as easy as a matrix-vector product. In pLSA, it requires several iterations of the EM algorithm, with the previously learned parameters held fixed.
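Here is a sketch of both fold-in procedures on the same toy model as above; the pLSA part uses a randomly initialized stand-in for a trained P(w|z), so treat it as an illustration of the mechanics rather than a finished implementation.

```python
# Folding a new document into an already-trained model.
import numpy as np

# Toy "trained" LSA model (same matrix as the earlier sketches).
X = np.array([[2, 0, 1, 0],
              [1, 3, 0, 0],
              [0, 1, 0, 2],
              [0, 0, 2, 1]], dtype=float)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, s_k = U[:, :k], s[:k]

# LSA fold-in: a single matrix-vector product, d = inv(Sigma_k) U_k^T q.
q = np.array([1, 0, 2, 0], dtype=float)   # new document's term counts
d_lsa = np.diag(1.0 / s_k) @ U_k.T @ q    # k-dim concept coordinates

# pLSA fold-in: keep P(w|z) fixed, run EM over P(z|q) only.
rng = np.random.default_rng(0)
p_w_z = rng.random((4, k)); p_w_z /= p_w_z.sum(axis=0)  # stand-in for trained P(w|z)
p_z_q = np.full(k, 1.0 / k)               # initialize P(z|q) uniformly
for _ in range(50):
    joint = p_w_z * p_z_q                                # (terms, topics)
    p_z_wq = joint / joint.sum(axis=1, keepdims=True)    # E-step: P(z|q,w)
    p_z_q = (q[:, None] * p_z_wq).sum(axis=0)            # M-step, new doc only
    p_z_q /= p_z_q.sum()
```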
