Natural Language 14: Stemming words with NLTK
Python machine learning - breast cancer cell mining (video course recorded by the blogger): https://study.163.com/course/introduction.htm?courseId=1005269003&utm_campaign=commission&utm_source=cp-400000000398149&utm_medium=share
# -*- coding: utf-8 -*-
"""
Created on Sun Nov 13 09:14:13 2016

@author: daxiong
"""
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize,word_tokenize
# Create a Porter stemmer instance
ps=PorterStemmer()
'''
ps.stem('emancipation')
Out[17]: 'emancip'

ps.stem('love')
Out[18]: 'love'

ps.stem('loved')
Out[19]: 'love'

ps.stem('loving')
Out[20]: 'love'
'''

example_words = ['python', 'pythoner', 'pythoning', 'pythoned', 'pythonly']
example_text = "Five score years ago, a great American, in whose symbolic shadow we stand today, signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of their captivity."
# Split the text into sentences
list_sentences = sent_tokenize(example_text)
# Split the text into words
list_words = word_tokenize(example_text)
# Stem each of the example words
list_stemWords = [ps.stem(w) for w in example_words]
# ['python', 'python', 'python', 'python', 'pythonli']
# Stem every word of the example text
list_stemWords1 = [ps.stem(w) for w in list_words]
Stemming words with NLTK
The idea of stemming is a sort of normalization: many
variations of a word carry the same meaning, apart from tense.
The reason we stem is to shorten the lookup and to normalize sentences.
Consider:
I was taking a ride in the car.
I was riding in the car.
These two sentences mean the same thing. "In the car" is the same. "I was" is
the same. The "-ing" denotes a clear past tense in both cases, so is it
truly necessary to differentiate between ride and riding, in the case of
just trying to figure out the meaning of what this past-tense activity
was?
No, not really.
This is just one minor example, but imagine every word in the English
language, every possible tense and affix you can put on a word. Having
individual dictionary entries per version would be highly redundant and
inefficient, especially since, once we convert to numbers, the "value"
is going to be identical.
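To make this concrete, here is a minimal sketch (not part of the original script) showing that the Porter stemmer collapses both surface forms onto a single stem, so a lookup table only needs one entry for them:

from nltk.stem import PorterStemmer

ps = PorterStemmer()
# "ride" and "riding" should both reduce to the same stem, "ride"
print(ps.stem("ride"))
print(ps.stem("riding"))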
One of the most popular stemming algorithms is the Porter stemmer, which has been around since 1979.
First, we're going to grab and define our stemmer:
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

ps = PorterStemmer()
Now, let's choose some words with a similar stem, like:
example_words = ["python","pythoner","pythoning","pythoned","pythonly"]
Next, we can easily stem by doing something like:
for w in example_words:
print(ps.stem(w))
Our output:
python
python
python
python
pythonli
Now let's try stemming a typical sentence, rather than some words:
new_text = "It is important to by very pythonly while you are pythoning with python. All pythoners have pythoned poorly at least once."
words = word_tokenize(new_text)

for w in words:
print(ps.stem(w))
Now our result is:
It
is
import
to
by
veri
pythonli
while
you
are
python
with
python
.
All
python
have
python
poorli
at
least
onc
.
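If you would rather collect the stems than print them, the same loop can be written as a list comprehension, just as in the script at the top of this post:

stemmed_words = [ps.stem(w) for w in words]
print(stemmed_words)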
Next up, we're going to discuss something a bit more advanced from the NLTK module, Part of Speech tagging, where we can use the NLTK module to identify the parts of speech for each word in a sentence.
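As a quick preview (a minimal sketch, not from this tutorial), NLTK's pos_tag function tags each token of a tokenized sentence with its part of speech; the tagger model may need to be downloaded once:

import nltk
from nltk.tokenize import word_tokenize

# nltk.download('averaged_perceptron_tagger')  # run once if the tagger data is missing
tokens = word_tokenize("They refuse to permit us to obtain the refuse permit.")
print(nltk.pos_tag(tokens))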