"""
=======================================================================================
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
======================================================================================= This is an example of applying Non-negative Matrix Factorization
and Latent Dirichlet Allocation on a corpus of documents and
extract additive models of the topic structure of the corpus.
The output is a list of topics, each represented as a list of terms
(weights are not shown). The default parameters (n_samples / n_features / n_topics) should make
the example runnable in a couple of tens of seconds. You can try to
increase the dimensions of the problem, but be aware that the time
complexity is polynomial in NMF. In LDA, the time complexity is
proportional to (n_samples * iterations).
""" # Author: Olivier Grisel <olivier.grisel@ensta.org>
# Lars Buitinck <L.J.Buitinck@uva.nl>
# Chyi-Kwei Yau <chyikwei.yau@gmail.com>
# License: BSD 3 clause from __future__ import print_function
from time import time from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups n_samples = 2000
n_features = 1000
n_topics = 10
n_top_words = 20 def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
        print()


# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics
# to filter out useless terms early on: the posts are stripped of headers,
# footers and quoted replies, and common English words, words occurring in
# only one document or in at least 95% of the documents are removed.
print("Loading dataset...")
t0 = time()
dataset = fetch_20newsgroups(shuffle=True, random_state=1,
                             remove=('headers', 'footers', 'quotes'))
data_samples = dataset.data[:n_samples]
print("done in %0.3fs." % (time() - t0)) # Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
                                   max_features=n_features,
                                   stop_words='english')
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0)) # Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
                                max_features=n_features,
                                stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0)) # Fit the NMF model
print("Fitting the NMF model with tf-idf features,"
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_topics, random_state=1,
          alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))

print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
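
# A minimal follow-up sketch (an addition, not part of the original example):
# NMF factorizes tfidf ~= W * H, where H is model.components_. transform()
# returns W, the per-document topic weights, so each row shows how strongly
# a document loads on each of the n_topics topics.
doc_topic_weights = nmf.transform(tfidf)
print("document-topic matrix shape: %s" % (doc_topic_weights.shape,))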

print("Fitting LDA models with tf features, "
      "n_samples=%d and n_features=%d..."
      % (n_samples, n_features))
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0)) print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
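
# A minimal follow-up sketch (an addition, not part of the original example):
# transform() returns the per-document topic distribution inferred by LDA,
# and perplexity() gives a rough goodness-of-fit score (lower is better).
doc_topic_dist = lda.transform(tf)
print("document-topic distribution shape: %s" % (doc_topic_dist.shape,))
print("perplexity on training data: %0.1f" % lda.perplexity(tf))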
