Study notes for Latent Dirichlet Allocation
1. Topic Models
- Topic models are based upon the idea that documents are mixtures of topics, where a topic is a probability distribution over words. A topic model is a generative model for documents: it specifies a simple probabilistic procedure by which documents can be generated (Steyvers and Griffiths, 2007). Two general steps are taken to make a new document:
- Step 1, for each document, one chooses a distribution over topics.
- Step 2, to generate each word in that document, one chooses a topic at random according to that distribution (the same word may belong to several topics with different probabilities), and then draws a word from the chosen topic by probabilistic sampling, e.g. as illustrated in Figures 1 and 2.
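The two steps above can be sketched with a toy sampler. The topics, their word distributions, and the document's topic mixture below are all invented for illustration:

```python
import random

# Hypothetical topics: each is a probability distribution over words.
topics = {
    "food":    {"broccoli": 0.5, "banana": 0.3, "breakfast": 0.2},
    "animals": {"kitten": 0.5, "hamster": 0.3, "cute": 0.2},
}

def generate_document(topic_mixture, n_words, rng=random.Random(0)):
    """Step 1: topic_mixture is the document's distribution over topics.
    Step 2: for each word, sample a topic, then sample a word from it."""
    words = []
    for _ in range(n_words):
        topic = rng.choices(list(topic_mixture),
                            weights=list(topic_mixture.values()))[0]
        word_dist = topics[topic]
        word = rng.choices(list(word_dist),
                           weights=list(word_dist.values()))[0]
        words.append(word)
    return words

print(generate_document({"food": 0.6, "animals": 0.4}, 5))
```

Every generated word comes from whichever topic was sampled for that position, which is exactly the two-step procedure described above.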
- When fitting a generative model, the goal is to find the best set of latent variables that can explain the observed data (i.e., observed words in documents), assuming that the model actually generated the data.
- Many different generative models have been proposed that share the assumption that a document is a mixture of topics, but they make slightly different statistical assumptions.
- The number of topics will affect the interpretability of the results. A solution with too few topics will generally result in very broad topics, whereas a solution with too many topics will result in uninterpretable topics that pick out idiosyncratic word combinations. One way to set it is to choose the number of topics that leads to the best generalization performance on held-out documents.
- Notations:
- P(z) is the probability distribution over topics z in a particular document
- P(w|z) is the probability distribution over words w given topic z.
- The model specifies the distribution over words within a document as:

  $$P(w_i) = \sum_{j=1}^{T} P(w_i \mid z_i = j)\, P(z_i = j)$$

  where $T$ is the number of topics, $P(z_i = j)$ is the probability that the j-th topic is chosen/sampled for the i-th word token, and $P(w_i \mid z_i = j)$ is the probability of word $w_i$ under topic j.
- $\phi^{(j)} = P(w \mid z = j)$ is the multinomial distribution over words for topic j.
- $\theta^{(d)} = P(z)$ is the multinomial distribution over topics for document d.
- D is the number of documents, each document d consists of $N_d$ words, and $N = \sum_{d} N_d$ is the total number of word tokens.
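The mixture formula above can be checked numerically: mixing the topic-word distributions $\phi$ through a document's topic weights $\theta$ yields a valid distribution over the vocabulary. All numbers below are made up:

```python
# P(w_i) = sum_j P(w_i | z_i = j) * P(z_i = j), written with plain lists.
# phi[j][w] is the probability of word w under topic j; theta[j] is the
# document's weight on topic j.
phi = [
    [0.7, 0.2, 0.1, 0.0],   # topic 0 over a 4-word vocabulary
    [0.0, 0.1, 0.4, 0.5],   # topic 1
]
theta = [0.25, 0.75]        # the document's distribution over the 2 topics

def word_distribution(phi, theta):
    n_words = len(phi[0])
    return [sum(theta[j] * phi[j][w] for j in range(len(phi)))
            for w in range(n_words)]

p_w = word_distribution(phi, theta)
print(p_w)                  # a distribution over the vocabulary, sums to 1
```

Each entry of `p_w` is one term-by-term application of the sum over topics, so the result always sums to one when `phi`'s rows and `theta` do.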
2. The LDA Model
- Latent Dirichlet Allocation (LDA) is a generative model (typically depicted as a graphical model) that allows sets of observations to be explained by unobserved latent variables which explain why some parts of the data are similar.
- Different from PLSA, the topic distribution in LDA is assumed to have a Dirichlet prior.
- Specifically, each document has a Dirichlet prior distribution of topics, and each topic has a Dirichlet prior distribution of words.
- In practice, this assumption results in more reasonable mixtures of topics in a document.
- However, PLSA is equivalent to the LDA model under a uniform Dirichlet prior distribution.
Toy Example
- I like to eat broccoli and bananas.
- I ate a banana and spinach smoothie for breakfast.
- Chinchillas and kittens are cute.
- My sister adopted a kitten yesterday.
- Look at this cute hamster munching on a piece of broccoli.
- Sentences 1 and 2: 100% Topic A
- Sentences 3 and 4: 100% Topic B
- Sentence 5: 60% Topic A, 40% Topic B
- Topic A: 30% broccoli, 15% bananas, 10% breakfast, 10% munching, … (at which point, you could interpret topic A to be about food)
- Topic B: 20% chinchillas, 20% kittens, 20% cute, 15% hamster, … (at which point, you could interpret topic B to be about cute animals)
Dirichlet Prior
- Blei et al. (2003) extend the PLSI model by introducing a Dirichlet prior α on the per-document topic distributions θ; later, Griffiths and Steyvers (2003) enrich it by also placing a Dirichlet prior β on the per-topic word distributions φ.
- Good choices for the hyperparameters α and β depend on the number of topics and the vocabulary size. From previous research, α = 50/T and β = 0.01 have been found to work well with many different text collections (Steyvers and Griffiths, 2007).
- Conjugate prior: if the posterior distribution is in the same family as the prior distribution, the prior and posterior are called conjugate distributions, and the prior is called the conjugate prior for the likelihood.
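The Dirichlet is the conjugate prior of the multinomial, which is why it is the natural choice in LDA: starting from Dir(α) and observing topic counts n, the posterior is simply Dir(α + n). A minimal sketch with invented counts:

```python
# Dirichlet-multinomial conjugacy: the posterior over topic proportions
# is again a Dirichlet whose parameters are the prior pseudo-counts
# plus the observed counts.
def dirichlet_posterior(alpha, counts):
    return [a + n for a, n in zip(alpha, counts)]

alpha = [0.1, 0.1, 0.1]   # symmetric Dirichlet prior over 3 topics
counts = [4, 1, 0]        # observed topic assignments in one document
post = dirichlet_posterior(alpha, counts)

# Posterior mean = smoothed topic proportions for the document.
total = sum(post)
print([a / total for a in post])
```

The posterior mean shows how the prior acts as smoothing: topic 2 was never observed, yet it retains a small nonzero proportion.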
Graphical Model
- The graphical model is represented by plate notation as shown as follows, where the shaded and unshaded variables indicate observed and latent (i.e., unobserved) variables respectively.
where arrows indicate conditional dependencies between variables while plates (the boxes in the figure) refer to repetitions of sampling steps with the variable in the lower right corner referring to the number of samples. For example, the inner plate over z and w illustrates the repeated sampling of topics and words until Nd words have been generated for document d.
- The topic models can be interpreted as matrix factorization, illustrated as follows together with the LSA interpretation. The word-document co-occurrence matrix is split into two matrices: a topic matrix and a document matrix. Note that the constraints in LDA are that the feature values (topic distributions) are non-negative and must sum to one. The LSA decomposition (i.e., SVD factorization) does not impose such constraints.
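The factorization view can be checked numerically: a word-topic matrix times a topic-document matrix reconstructs the word-document probabilities, and because every column of both factors is non-negative and sums to one, so is every column of the product. All numbers below are invented:

```python
# Word-document probabilities ~= Phi (words x topics) @ Theta (topics x docs).
# LDA's constraints: columns of Phi and Theta are non-negative and sum to 1.
Phi = [
    [0.6, 0.0],
    [0.4, 0.1],
    [0.0, 0.9],
]                       # 3 words x 2 topics
Theta = [
    [1.0, 0.3],
    [0.0, 0.7],
]                       # 2 topics x 2 documents

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

WD = matmul(Phi, Theta)          # word-document matrix
for col in range(2):
    print(sum(row[col] for row in WD))   # each column sums to 1
```

An SVD of the same matrix would generally produce factors with negative entries, which is exactly the constraint LDA adds.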
Gibbs Sampling
- The challenge is to efficiently estimate the posterior distributions of the topic-word distributions φ and the document-topic distributions θ, given the large number of word tokens in the document collections.
- Gibbs sampling (a.k.a alternating conditional sampling) is a specific form of Markov chain Monte Carlo, simulating a high-dimensional distribution by sampling on lower-dimensional subsets of variables where each subset is conditioned on the value of all others. The sampling is done sequentially and proceeds until the sampled values approximate the target distribution.
- Markov chain Monte Carlo (MCMC) refers to a set of approximate iterative techniques designed to sample values from complex (often high-dimensional) distributions.
- The procedure is as follows (assuming K topics):
- Go through each document, and randomly assign each word in the document to one of the K topics.
- Notice that this random assignment already gives you both topic representations of all the documents, and word distributions of all the topics (albeit not very good ones).
- So to improve on them, for each document d ...
- Go through each word w in d, and for each topic t, compute two things:
- p(t|d) = the proportion of words in document d that are currently assigned to topic t.
- p(w|t) = the proportion of assignments to topic t over all documents that come from this word w.
- Reassign w to a new topic, choosing topic t with probability proportional to p(t|d) × p(w|t); this product is (up to normalization) the probability that topic t generated word w in document d.
- After repeating the previous step a large number of times, you’ll eventually reach a roughly steady state where your assignments are pretty good. So use these assignments to estimate the topic mixtures of each document (by counting the proportion of words assigned to each topic within that document) and the words associated to each topic (by counting the proportion of words assigned to each topic overall).
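The procedure above can be sketched as a collapsed Gibbs sampler. This version maintains the usual count matrices and includes the Dirichlet smoothing terms α and β from the previous section; it works on word ids rather than strings, and is an illustrative sketch, not a production sampler:

```python
import random

def gibbs_lda(docs, K, n_iters=200, alpha=0.1, beta=0.01, seed=0):
    """docs: list of documents, each a list of word ids in range(V)."""
    rng = random.Random(seed)
    V = 1 + max(w for doc in docs for w in doc)
    # Random initial topic assignment for every word token.
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    # Count matrices: document-topic, topic-word, and topic totals.
    nd = [[0] * K for _ in docs]
    nw = [[0] * V for _ in range(K)]
    nt = [0] * K
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            nd[d][t] += 1; nw[t][w] += 1; nt[t] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                # Remove the current assignment before resampling.
                nd[d][t] -= 1; nw[t][w] -= 1; nt[t] -= 1
                # Weight for each topic k: p(t=k|d) * p(w|t=k), smoothed.
                weights = [(nd[d][k] + alpha) *
                           (nw[k][w] + beta) / (nt[k] + V * beta)
                           for k in range(K)]
                t = rng.choices(range(K), weights=weights)[0]
                z[d][i] = t
                nd[d][t] += 1; nw[t][w] += 1; nt[t] += 1
    return z, nd, nw

docs = [[0, 0, 1, 1], [2, 3, 3, 2], [0, 1, 3]]
z, nd, nw = gibbs_lda(docs, K=2)
print(nd)   # document-topic counts after sampling
```

After enough iterations, θ and φ are estimated by normalizing `nd` and `nw` with the α and β smoothing terms, which is the counting step described in the last bullet above.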
Generative Process
- Decide the number of words N that a document will have, according to a Poisson distribution.
- Draw a topic distribution θ ~ Dir(α), where Dir(α) is a symmetric Dirichlet distribution with scaling parameter α.
- For each word in the document:
  - Draw a specific topic z ~ Multinomial(θ).
  - Draw a word w from P(w | z), the word distribution of the picked topic.
Assuming this generative model for a collection of documents, LDA then tries to backtrack from the documents to find a set of topics that are likely to have generated the collection.
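The generative process can be written down directly in standard-library Python (a Dirichlet draw is a normalized vector of gamma draws, and the Poisson draw uses Knuth's algorithm); the topic-word distributions and hyperparameters below are illustrative:

```python
import math
import random

def sample_dirichlet(alpha, rng):
    # A Dirichlet sample is a vector of Gamma(a_i, 1) draws, normalized.
    gammas = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(gammas)
    return [g / s for g in gammas]

def sample_poisson(lam, rng):
    # Knuth's algorithm; fine for small lambda.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def generate_doc(phi, alpha, mean_length, rng):
    """phi: K x V topic-word distributions; alpha: length-K Dirichlet prior."""
    n = sample_poisson(mean_length, rng)       # 1. document length
    theta = sample_dirichlet(alpha, rng)       # 2. topic distribution
    doc = []
    for _ in range(n):                         # 3. for each word:
        z = rng.choices(range(len(phi)), weights=theta)[0]      # a topic
        w = rng.choices(range(len(phi[0])), weights=phi[z])[0]  # a word
        doc.append(w)
    return doc

rng = random.Random(1)
phi = [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]]
print(generate_doc(phi, alpha=[0.5, 0.5], mean_length=5, rng=rng))
```

Inference reverses this: given only the generated word ids, LDA searches for the `phi` and per-document `theta` most likely to have produced them.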
- For example, generating a 5-word document D might proceed as follows:
- Pick 5 to be the number of words in D.
- Decide that D will be 1/2 about food and 1/2 about cute animals.
- Pick the first word to come from the food topic, which then gives you the word “broccoli”.
- Pick the second word to come from the cute animals topic, which gives you “panda”.
- Pick the third word to come from the cute animals topic, giving you “adorable”.
- Pick the fourth word to come from the food topic, giving you “cherries”.
- Pick the fifth word to come from the food topic, giving you “eating”. So the document generated under the LDA model will be “broccoli panda adorable cherries eating” (note that LDA is a bag-of-words model).
3. Computing Similarity
- The derived topic probability distributions can be used to compute document or word similarity.
- Two documents are similar to the extent that the same topics appear in these documents.
- Two words are similar to the extent that they appear in the same topic.
- My understanding: if we look at the matrix representation, it is easy to compute both document and word similarities according to topic distributions.
Document Similarity
- Given the topic distributions $\theta^{(d_1)}$ and $\theta^{(d_2)}$ of two documents, the document similarity is measured as the similarity of the two topic distributions.
- A standard function to measure the difference or divergence between two distributions p and q is the Kullback-Leibler (KL) divergence: $D_{KL}(p \,\|\, q) = \sum_{j=1}^{T} p_j \log_2 \frac{p_j}{q_j}$
- The KL divergence is asymmetric, and in many applications it is convenient to apply a symmetrized measure based on it: $D_{sKL}(p, q) = \frac{1}{2}\left[D_{KL}(p \,\|\, q) + D_{KL}(q \,\|\, p)\right]$
- Another option is the Jensen-Shannon (JS) divergence, which measures the divergence of p and q from their average distribution: $D_{JS}(p, q) = \frac{1}{2}\left[D_{KL}\!\left(p \,\middle\|\, \tfrac{p+q}{2}\right) + D_{KL}\!\left(q \,\middle\|\, \tfrac{p+q}{2}\right)\right]$
- If we treat the topic distributions as vectors, then other measures such as Euclidean distance or cosine similarity can also be applied.
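The divergences above are a few lines each for topic distributions represented as lists (log base 2, with the convention 0 · log 0 = 0):

```python
import math

def kl(p, q):
    # D_KL(p || q) = sum_j p_j * log2(p_j / q_j); p_j = 0 terms contribute 0.
    return sum(pj * math.log2(pj / qj) for pj, qj in zip(p, q) if pj > 0)

def symmetric_kl(p, q):
    return 0.5 * (kl(p, q) + kl(q, p))

def js(p, q):
    # Divergence of p and q from their average distribution.
    m = [(pj + qj) / 2 for pj, qj in zip(p, q)]
    return 0.5 * (kl(p, m) + kl(q, m))

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(symmetric_kl(p, q), js(p, q))
```

One practical difference: plain KL blows up when q assigns zero probability to a word that p supports, whereas JS compares against the average distribution and so stays finite (and, with log base 2, bounded by 1).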
Word Similarity
- Word similarity can be measured by the extent to which two words share the same topics: one word can appear in many topics, so word similarity can be regarded as the overlap of the words' topic distributions.
- Given the conditional topic distributions $P(z \mid w_1)$ and $P(z \mid w_2)$ of two words, either the symmetrized KL or the JS divergence can be used to compute word similarity.
References
- Edwin Chen, Introduction to Latent Dirichlet Allocation.
- Blei et al., 2003, Latent Dirichlet Allocation, Journal of Machine Learning Research.
- Griffiths and Steyvers, 2003, Prediction and semantic association, In Neural information processing systems.
- Steyvers and Griffiths, 2007, Probabilistic Topic Models, Handbook of latent semantic analysis.