Notes on Probabilistic Latent Semantic Analysis (PLSA)
Reposted from: http://www.hongliangjie.com/2010/01/04/notes-on-probabilistic-latent-semantic-analysis-plsa/
I highly recommend that you read the more detailed version at http://arxiv.org/abs/1212.3900.
Formulation of PLSA
There are two ways to formulate PLSA. They are equivalent, but may lead to different inference processes:

$$P(d, w) = P(d) \sum_{z} P(w|z)\,P(z|d) \qquad (1)$$

$$P(d, w) = \sum_{z} P(z)\,P(d|z)\,P(w|z) \qquad (2)$$

Let's see why these two equations are equivalent, by using Bayes' rule: since $P(z)\,P(d|z) = P(d)\,P(z|d)$, we have

$$\sum_{z} P(z)\,P(d|z)\,P(w|z) = \sum_{z} P(d)\,P(z|d)\,P(w|z) = P(d) \sum_{z} P(w|z)\,P(z|d).$$

The whole data set is generated as (we assume that all words are generated independently):

$$P(\mathcal{D}) = \prod_{d} \prod_{w} P(d, w)^{n(d, w)}$$

where $n(d, w)$ is the number of times word $w$ occurs in document $d$. The log-likelihood of the whole data set for (1) and (2) is:

$$\mathcal{L}_1 = \sum_{d} \sum_{w} n(d, w) \log \Big[ P(d) \sum_{z} P(w|z)\,P(z|d) \Big] \qquad (3)$$

$$\mathcal{L}_2 = \sum_{d} \sum_{w} n(d, w) \log \Big[ \sum_{z} P(z)\,P(d|z)\,P(w|z) \Big] \qquad (4)$$
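As a quick illustration (a minimal sketch of my own, not from the original notes), the log-likelihood (3) can be computed directly from a document-word count matrix. The names and shapes here are my assumptions: `n` is the count matrix with shape (D, W), `p_d` is $P(d)$ with shape (D,), `p_w_z` is $P(w|z)$ with shape (W, K), and `p_z_d` is $P(z|d)$ with shape (K, D):

```python
import numpy as np

def log_likelihood_f1(n, p_d, p_w_z, p_z_d):
    """L1 = sum_{d,w} n(d,w) * log[ P(d) * sum_z P(w|z) P(z|d) ] (sketch)."""
    mix = p_w_z @ p_z_d                         # (W, D): sum_z P(w|z) P(z|d)
    log_p = np.log(p_d)[None, :] + np.log(mix)  # (W, D): log P(d, w)
    return float(np.sum(n.T * log_p))           # weight each term by n(d, w)
```

Formulation 2 would differ only in how the mixture inside the logarithm is assembled.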
EM
For $\mathcal{L}_1$ or $\mathcal{L}_2$, direct optimization is hard due to the log of a sum. Therefore, an algorithm called Expectation-Maximization (EM) is usually employed. Before we introduce anything about EM, please note that EM is only guaranteed to find a local optimum (although it may turn out to be a global one).
First, let us see how EM works in general. As shown for PLSA, we usually want to estimate the likelihood of the data, namely $P(X|\theta)$, given the parameters $\theta$. The easiest way is to obtain a maximum likelihood estimator by maximizing $P(X|\theta)$ directly. However, sometimes we also want to include hidden variables, which are usually useful for our task. Therefore, what we really want to maximize is the complete likelihood $P(X|\theta) = \sum_{z} P(X, z|\theta)$. Now our attention turns to this complete likelihood. Again, directly maximizing this likelihood is usually difficult. What we would like to do instead is to obtain a lower bound of the likelihood and maximize that lower bound.
We need Jensen's inequality to help us obtain this lower bound. For any convex function $f$, Jensen's inequality states that:

$$\lambda f(x) + (1 - \lambda) f(y) \;\geq\; f\big(\lambda x + (1 - \lambda) y\big), \qquad \lambda \in [0, 1].$$

Thus, it is not difficult to show that:

$$\mathbb{E}\big[f(x)\big] \;\geq\; f\big(\mathbb{E}[x]\big)$$

and for concave functions (like the logarithm), the inequality flips:

$$\mathbb{E}\big[f(x)\big] \;\leq\; f\big(\mathbb{E}[x]\big)$$
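As a quick numeric sanity check (my own sketch, not part of the original notes), the concave version for the logarithm can be verified by simulation:

```python
import numpy as np

# Concave Jensen: E[log x] <= log E[x] for any positive random variable.
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=100_000)  # samples of a positive variable
print(np.mean(np.log(x)))                 # E[log x]
print(np.log(np.mean(x)))                 # log E[x], always the larger one
```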
Back to our complete likelihood: we can obtain the following lower bound by using the concave version of Jensen's inequality, with any distribution $q(z)$ over the hidden variable:

$$\log \sum_{z} P(X, z|\theta) = \log \sum_{z} q(z)\,\frac{P(X, z|\theta)}{q(z)} \;\geq\; \sum_{z} q(z) \log \frac{P(X, z|\theta)}{q(z)}$$

Therefore, we have obtained a lower bound of the complete likelihood, and we want to make it as tight as possible. EM is an algorithm that maximizes this lower bound in an iterative fashion. Usually, EM first fixes the current $\theta$ value and maximizes the bound over $q(z)$, and then uses the new $q(z)$ to obtain a new guess of $\theta$, which is essentially a two-stage maximization process. The first stage can be shown as follows:

$$\sum_{z} q(z) \log \frac{P(X, z|\theta)}{q(z)} = \sum_{z} q(z) \log \frac{P(z|X, \theta)\,P(X|\theta)}{q(z)} = \log P(X|\theta) - \mathrm{KL}\big(q(z)\,\|\,P(z|X, \theta)\big)$$

The first term is the same for all $q(z)$. Therefore, in order to maximize the whole expression, we need to minimize the KL divergence between $q(z)$ and $P(z|X, \theta)$, which eventually leads to the optimal solution $q(z) = P(z|X, \theta)$. So, usually in the E-step, we use the current guess of $\theta$ to calculate the posterior distribution of the hidden variable as the new $q(z)$. The M-step is problem-dependent; we will see how to do it in the later discussions.
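The tightness claim can be checked numerically. Below is a toy sketch of my own (the joint values are made up for illustration): a single observation $X$ with a binary hidden variable $z$, where the bound meets $\log P(X|\theta)$ exactly at $q(z) = P(z|X, \theta)$:

```python
import numpy as np

p_xz = np.array([0.12, 0.08])   # made-up joint: P(X, z=0), P(X, z=1)
log_px = np.log(p_xz.sum())     # log P(X), the quantity being bounded

def lower_bound(q):
    # sum_z q(z) * log( P(X, z) / q(z) )
    return float(np.sum(q * (np.log(p_xz) - np.log(q))))

posterior = p_xz / p_xz.sum()             # P(z | X) = [0.6, 0.4]
print(log_px, lower_bound(posterior))     # equal: the bound is tight
print(lower_bound(np.array([0.5, 0.5])))  # strictly smaller for any other q
```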
Another explanation of EM is in terms of optimizing a so-called Q-function. We devise the data generation process as $P(X, Z|\theta) = P(X|Z, \theta)\,P(Z|\theta)$. Therefore, the complete likelihood is modified as:

$$L_c(\theta) = \log P(X, Z|\theta) = \log P(X|Z, \theta) + \log P(Z|\theta)$$

Think about how to maximize $L(\theta) = \log P(X|\theta)$. Instead of maximizing it directly, we can use the identity

$$\log P(X|\theta) = \log P(X, Z|\theta) - \log P(Z|X, \theta),$$

which holds for any value of $Z$. Now take the expectation of this equation with respect to the posterior $P(Z|X, \theta^{(n)})$ under the current estimate $\theta^{(n)}$ (the left-hand side does not depend on $Z$), and we have:

$$\log P(X|\theta) = \sum_{Z} P(Z|X, \theta^{(n)}) \log P(X, Z|\theta) \;-\; \sum_{Z} P(Z|X, \theta^{(n)}) \log P(Z|X, \theta^{(n)}) \;+\; \sum_{Z} P(Z|X, \theta^{(n)}) \log \frac{P(Z|X, \theta^{(n)})}{P(Z|X, \theta)}$$

The last term is always non-negative, since it can be recognized as the KL-divergence of $P(Z|X, \theta^{(n)})$ and $P(Z|X, \theta)$. Therefore, we obtain a lower bound of the likelihood:

$$\log P(X|\theta) \;\geq\; \sum_{Z} P(Z|X, \theta^{(n)}) \log P(X, Z|\theta) \;-\; \sum_{Z} P(Z|X, \theta^{(n)}) \log P(Z|X, \theta^{(n)})$$

The second term does not contain the variable $\theta$ and can be treated as a constant, so the lower bound is essentially the first term, which is sometimes called the "Q-function":

$$Q(\theta; \theta^{(n)}) = \sum_{Z} P(Z|X, \theta^{(n)}) \log P(X, Z|\theta)$$
EM of Formulation 1
In the case of Formulation 1, let us introduce hidden variables $R(z, d, w)$ to indicate which hidden topic $z$ is selected to generate $w$ in $d$ (so $\sum_{z} R(z, d, w) = 1$). Therefore, the complete likelihood can be formulated as:

$$L_c = \sum_{d} \sum_{w} n(d, w) \sum_{z} R(z, d, w) \Big[ \log P(d) + \log P(w|z) + \log P(z|d) \Big]$$

From the equation above, we can write the Q-function for the complete likelihood by replacing $R(z, d, w)$ with its posterior expectation $P(z|d, w)$:

$$Q = \sum_{d} \sum_{w} n(d, w) \sum_{z} P(z|d, w) \Big[ \log P(d) + \log P(w|z) + \log P(z|d) \Big]$$
For the E-step, simply using Bayes' rule, we can obtain:

$$P(z|d, w) = \frac{P(w|z)\,P(z|d)}{\sum_{z'} P(w|z')\,P(z'|d)}$$
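In code, this E-step is just an elementwise product followed by normalization over topics. A minimal numpy sketch of my own, assuming (as before) that `p_w_z` holds $P(w|z)$ with shape (W, K) and `p_z_d` holds $P(z|d)$ with shape (K, D):

```python
import numpy as np

def e_step(p_w_z, p_z_d):
    """Posterior P(z|d,w) for Formulation 1, returned with shape (D, W, K)."""
    joint = p_w_z[None, :, :] * p_z_d.T[:, None, :]  # P(w|z) * P(z|d)
    return joint / joint.sum(axis=2, keepdims=True)  # normalize over z
```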
For the M-step, we need to maximize the Q-function subject to the constraints $\sum_{w} P(w|z) = 1$ and $\sum_{z} P(z|d) = 1$, which we incorporate with Lagrange multipliers:

$$H = Q + \sum_{z} \alpha_z \Big( 1 - \sum_{w} P(w|z) \Big) + \sum_{d} \beta_d \Big( 1 - \sum_{z} P(z|d) \Big)$$

and take all derivatives:

$$\frac{\partial H}{\partial P(w|z)} = \frac{\sum_{d} n(d, w)\,P(z|d, w)}{P(w|z)} - \alpha_z = 0$$

$$\frac{\partial H}{\partial P(z|d)} = \frac{\sum_{w} n(d, w)\,P(z|d, w)}{P(z|d)} - \beta_d = 0$$

Therefore, we can easily obtain:

$$P(w|z) = \frac{\sum_{d} n(d, w)\,P(z|d, w)}{\sum_{w'} \sum_{d} n(d, w')\,P(z|d, w')}$$

$$P(z|d) = \frac{\sum_{w} n(d, w)\,P(z|d, w)}{\sum_{w} n(d, w)}$$
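These two updates, combined with the E-step above, give the full EM loop. A compact sketch under the same assumed shapes (`n` is the (D, W) count matrix; `K` is the number of topics):

```python
def m_step(n, post):
    """M-step of Formulation 1; post is P(z|d,w) with shape (D, W, K)."""
    weighted = n[:, :, None] * post            # n(d,w) * P(z|d,w)
    p_w_z = weighted.sum(axis=0)               # numerator of P(w|z), (W, K)
    p_w_z /= p_w_z.sum(axis=0, keepdims=True)  # normalize over words
    p_z_d = weighted.sum(axis=1)               # numerator of P(z|d), (D, K)
    p_z_d /= n.sum(axis=1, keepdims=True)      # divide by n(d)
    return p_w_z, p_z_d.T                      # return P(z|d) as (K, D)

def plsa_f1(n, K, iters=100, seed=0):
    """Run EM for Formulation 1 from a random initialization (sketch)."""
    rng = np.random.default_rng(seed)
    D, W = n.shape
    p_w_z = rng.random((W, K)); p_w_z /= p_w_z.sum(axis=0, keepdims=True)
    p_z_d = rng.random((K, D)); p_z_d /= p_z_d.sum(axis=0, keepdims=True)
    for _ in range(iters):
        p_w_z, p_z_d = m_step(n, e_step(p_w_z, p_z_d))
    return p_w_z, p_z_d
```

In practice one would also monitor the log-likelihood (3) between iterations; EM guarantees it never decreases.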
EM of Formulation 2
Using a similar method, we introduce hidden variables $R(z, d, w)$ to indicate which $z$ is selected to generate $d$ and $w$, and we have the following complete likelihood:

$$L_c = \sum_{d} \sum_{w} n(d, w) \sum_{z} R(z, d, w) \Big[ \log P(z) + \log P(d|z) + \log P(w|z) \Big]$$

Therefore, the Q-function would be:

$$Q = \sum_{d} \sum_{w} n(d, w) \sum_{z} P(z|d, w) \Big[ \log P(z) + \log P(d|z) + \log P(w|z) \Big]$$
For the E-step, again simply using Bayes' rule, we can obtain:

$$P(z|d, w) = \frac{P(z)\,P(d|z)\,P(w|z)}{\sum_{z'} P(z')\,P(d|z')\,P(w|z')}$$
For the M-step, we maximize the constrained version of the Q-function:

$$H = Q + \alpha \Big( 1 - \sum_{z} P(z) \Big) + \sum_{z} \beta_z \Big( 1 - \sum_{d} P(d|z) \Big) + \sum_{z} \gamma_z \Big( 1 - \sum_{w} P(w|z) \Big)$$

and take all derivatives:

$$\frac{\partial H}{\partial P(z)} = \frac{\sum_{d} \sum_{w} n(d, w)\,P(z|d, w)}{P(z)} - \alpha = 0$$

$$\frac{\partial H}{\partial P(d|z)} = \frac{\sum_{w} n(d, w)\,P(z|d, w)}{P(d|z)} - \beta_z = 0$$

$$\frac{\partial H}{\partial P(w|z)} = \frac{\sum_{d} n(d, w)\,P(z|d, w)}{P(w|z)} - \gamma_z = 0$$

Therefore, we can easily obtain:

$$P(z) = \frac{\sum_{d} \sum_{w} n(d, w)\,P(z|d, w)}{\sum_{d} \sum_{w} n(d, w)}$$

$$P(d|z) = \frac{\sum_{w} n(d, w)\,P(z|d, w)}{\sum_{d'} \sum_{w} n(d', w)\,P(z|d', w)}$$

$$P(w|z) = \frac{\sum_{d} n(d, w)\,P(z|d, w)}{\sum_{d} \sum_{w'} n(d, w')\,P(z|d, w')}$$
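A corresponding sketch of my own for Formulation 2 (same assumed shapes as before, plus `p_z` for $P(z)$ with shape (K,) and `p_d_z` for $P(d|z)$ with shape (D, K)); since all three updates share the same weighted counts, both steps fit in one function:

```python
import numpy as np

def em_step_f2(n, p_z, p_d_z, p_w_z):
    """One EM iteration for Formulation 2 (sketch)."""
    # E-step: P(z|d,w) proportional to P(z) P(d|z) P(w|z), shape (D, W, K)
    joint = p_z[None, None, :] * p_d_z[:, None, :] * p_w_z[None, :, :]
    post = joint / joint.sum(axis=2, keepdims=True)
    # M-step: the three closed-form updates derived above
    weighted = n[:, :, None] * post          # n(d,w) * P(z|d,w)
    totals = weighted.sum(axis=(0, 1))       # sum_{d,w} n(d,w) P(z|d,w), (K,)
    p_z = totals / n.sum()
    p_d_z = weighted.sum(axis=1) / totals    # (D, K), columns sum to 1
    p_w_z = weighted.sum(axis=0) / totals    # (W, K), columns sum to 1
    return p_z, p_d_z, p_w_z
```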