After installing pip:

sudo pip install -U pyyaml nltk

In a Python shell, open the interactive downloader and fetch the NLTK data:

import nltk
nltk.download()

Then wait while the data downloads.
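
Instead of the interactive window, individual data packages can also be fetched by id (a sketch; exact package ids differ between NLTK versions):

import nltk
nltk.download('punkt')              # tokenizer models behind word_tokenize
nltk.download('stopwords')          # the English stopword list used in step 7
nltk.download('maxent_ne_chunker')  # the chunker behind nltk.chunk.ne_chunk in step 3
nltk.download('words')              # word list the chunker relies on
nltk.download('maxent_treebank_pos_tagger')  # tagger for pos_tag; newer NLTK ships 'averaged_perceptron_tagger'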

The download server could not be reached from my network at the time, so I went through Green VPN:

http://www.evergreenvpn.com/ubuntu-pptp-vpn-setting/

Using NLTK (reference links):

http://www.cnblogs.com/yuxc/archive/2011/08/29/2157415.html

http://blog.csdn.net/huyoo/article/details/12188573

http://www.52nlp.cn/tag/nltk

1. Tokenization with nltk.word_tokenize

Note that word_tokenize does not segment Chinese: in the mixed string below, the Chinese characters stay glued to "Like" as a single token.

>>> import nltk
>>> s = u"我们都Like the book"
>>> m = [word for word in nltk.tokenize.word_tokenize(s)]
>>> for word in m:
...     print word
...
我们都Like
the
book

>>> tokens = nltk.word_tokenize(s)
>>> tokens
[u'\u6211\u4eec\u90fdLike', u'the', u'book']
>>> for word in tokens:
...     print word
...
我们都Like
the
book

2. Part-of-speech tagging

>>> tagged = nltk.pos_tag(tokens)
>>> for word in tagged:
...     print word
...
(u'\u6211\u4eec\u90fdLike', 'IN')
(u'the', 'DT')
(u'book', 'NN')
>>> 
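
The glued Chinese/English token is unknown to the tagger, so it gets a wrong guess ('IN', preposition), while 'the'/'DT' and 'book'/'NN' are correct. To look up what a Penn Treebank tag means, NLTK can print the tag definitions (requires the 'tagsets' data package; a sketch, output abbreviated):

>>> nltk.help.upenn_tagset('NN')
NN: noun, common, singular or mass
    ...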

3. Named-entity chunking (nltk.chunk.ne_chunk)

>>> entities= nltk.chunk.ne_chunk(tagged)
>>> entities
Tree('S', [(u'\u6211\u4eec\u90fdLike', 'IN'), (u'the', 'DT'), (u'book', 'NN')])
>>> 
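
The sample sentence contains no named entities, so ne_chunk returns a flat tree. With a sentence that does contain entities, they show up as labeled subtrees (a sketch; the sentence is my own example, and labels such as PERSON or GPE depend on the NLTK version and its models, so the output is not reproduced here):

>>> s2 = 'John studies at Stanford University in California'
>>> tree = nltk.chunk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(s2)))
>>> # entities appear as subtrees, e.g. Tree('PERSON', [('John', 'NNP')])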

---------------------------------------------------------------------------------------------------------------------------------------------------------

4. Converting to lowercase (built into Python)

>>> s
'We all like the book'
>>> slower = s.lower()
>>> slower
'we all like the book'
>>> 

5. Tokenizing English on whitespace with .split() (built into Python)

>>> slower
'we all like the book'
>>> ssplit = slower.split()
>>> ssplit
['we', 'all', 'like', 'the', 'book']
>>> 

6. Separating punctuation from words

(In the transcript, s originally contained a curly quotation mark, displayed as the UTF-8 bytes '\xe2\x80\x98', in place of the apostrophe, so it is first reassigned to a plain-ASCII sentence.)

>>> s
'we all like the book,it\xe2\x80\x98s so interesting.'
>>> s = 'we all like the book, it is so interesting.'
>>> wordtoken = nltk.tokenize.word_tokenize(s)
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> wordtoken = nltk.word_tokenize(s)
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> wordsplit = s.split()
>>> wordsplit
['we', 'all', 'like', 'the', 'book,', 'it', 'is', 'so', 'interesting.']
>>> 

7. Removing stopwords (NLTK ships with 127 English stopwords)

>>> wordEngStop = nltk.corpus.stopwords.words('english')
>>> wordEngStop
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves', u'you', u'your', u'yours', u'yourself', u'yourselves', u'he', u'him', u'his', u'himself', u'she', u'her', u'hers', u'herself', u'it', u'its', u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what', u'which', u'who', u'whom', u'this', u'that', u'these', u'those', u'am', u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has', u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the', u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of', u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into', u'through', u'during', u'before', u'after', u'above', u'below', u'to', u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under', u'again', u'further', u'then', u'once', u'here', u'there', u'when', u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few', u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not', u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't', u'can', u'will', u'just', u'don', u'should', u'now']
>>> len(wordEngStop)
127
>>> s
'we all like the book, it is so interesting.'
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> for word in wordtoken:
...     if word not in wordEngStop:
...             print word
...
like
book
,
interesting
.
>>> 
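
Membership tests against the 127-element list scan it linearly; for larger texts, converting the stopword list to a set and filtering with a comprehension gives the same result faster (same tokens as the loop above):

>>> wordEngStopSet = set(nltk.corpus.stopwords.words('english'))
>>> [word for word in wordtoken if word not in wordEngStopSet]
['like', 'book', ',', 'interesting', '.']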

8. Removing punctuation

>>> english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '!', '@', '#', '%', '$', '*']
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> for word in wordtoken:
...     if word not in english_punctuations:
...             print word
...
we
all
like
the
book
it
is
so
interesting
>>>
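
Steps 6 and 8 can also be combined: NLTK's RegexpTokenizer keeps only runs of word characters, so punctuation never enters the token list at all (a sketch; the r'\w+' pattern is my choice of what counts as a word):

>>> from nltk.tokenize import RegexpTokenizer
>>> RegexpTokenizer(r'\w+').tokenize('we all like the book, it is so interesting.')
['we', 'all', 'like', 'the', 'book', 'it', 'is', 'so', 'interesting']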

9. Stemming

"We now stem these English words (Stemming). NLTK provides several stemmer interfaces to choose from (see http://nltk.org/api/nltk.stem.html), including well-known English stemmers such as the Lancaster Stemmer and the Porter Stemmer. Here we use the LancasterStemmer." (quoted, translated, from 我爱自然语言处理: http://www.52nlp.cn/%E5%A6%82%E4%BD%95%E8%AE%A1%E7%AE%97%E4%B8%A4%E4%B8%AA%E6%96%87%E6%A1%A3%E7%9A%84%E7%9B%B8%E4%BC%BC%E5%BA%A6%E4%B8%89)

http://lutaf.com/212.htm (mainstream approaches to stemming)

http://blog.sina.com.cn/s/blog_6d65717d0100z4hu.html

>>> from nltk.stem.lancaster import LancasterStemmer
>>> st = LancasterStemmer()
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> st.stem(wordtoken)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/stem/lancaster.py", line 195, in stem
AttributeError: 'list' object has no attribute 'lower'
>>> for word in wordtoken:
...     print st.stem(word)
...
we
al
lik
the
book
,
it
is
so
interest
.
>>>

The two stemmers each have strengths and weaknesses; here is the Porter stemmer on the same tokens:

>>> from nltk.stem import PorterStemmer
>>> wordtoken
['we', 'all', 'like', 'the', 'book', ',', 'it', 'is', 'so', 'interesting', '.']
>>> PorterStemmer().stem(wordtoken)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/stem/porter.py", line 632, in stem
AttributeError: 'list' object has no attribute 'lower'
>>> PorterStemmer().stem('all')
u'all'
>>> for word in wordtoken:
...     print PorterStemmer().stem(word)
...
we
all
like
the
book
,
it
is
so
interest
.
>>> PorterStemmer().stem("better")
u'better'
>>> PorterStemmer().stem("supplies")
u'suppli'
>>> st.stem('supplies')
u'supply'
>>> 
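
Pulling the examples above into a single loop makes the comparison easier to scan (these stems are the same ones already shown in the transcripts):

>>> for w in ['supplies', 'interesting', 'all']:
...     print '%-12s porter: %-10s lancaster: %s' % (w, PorterStemmer().stem(w), st.stem(w))
...
supplies     porter: suppli     lancaster: supply
interesting  porter: interest   lancaster: interest
all          porter: all        lancaster: al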
Putting the steps together, the script below lowercases, tokenizes, removes punctuation and stopwords, and stems each line of a file (the paths are from my machine; uncomment the Porter line to switch stemmers):

# -*- coding: utf-8 -*-
import nltk

wordEngStop = nltk.corpus.stopwords.words('english')   # 127 English stopwords
english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '!', '@', '#', '%', '$', '*', '=', 'abstract=', '{', '}']
porterStem = nltk.stem.PorterStemmer()
lancasterStem = nltk.stem.lancaster.LancasterStemmer()

fin = open('/home/xdj/myOutput.txt', 'r')
fout = open('/home/xdj/myOutputLancasterStemmer.txt', 'w')
for eachLine in fin:
    eachLine = eachLine.lower().decode('utf-8', 'ignore')  # lowercase, decode bytes to unicode
    tokens = nltk.word_tokenize(eachLine)                  # tokenize, separating punctuation from words
    wordLine = ''
    for word in tokens:
        if word not in english_punctuations:               # drop punctuation
            if word not in wordEngStop:                    # drop stopwords
                #word = porterStem.stem(word)
                word = lancasterStem.stem(word)
                wordLine += word + ' '
    fout.write(wordLine.encode('utf-8') + '\n')
fin.close()
fout.close()
