Training Chinese Word Vectors with DL4J

1 Preprocessing

Preprocessing a Chinese corpus mainly involves word segmentation, stop-word removal, and whatever extra filtering rules the actual scenario calls for.

package ai.mole.test;

import org.ansj.domain.Term;
import org.ansj.splitWord.analysis.ToAnalysis;
import org.nlpcn.commons.lang.tire.domain.Forest;
import org.nlpcn.commons.lang.tire.library.Library;
import org.nlpcn.commons.lang.util.IOUtil; // assumed location of IOUtil in the nlp-lang library bundled with ansj

import java.io.*;
import java.util.LinkedList;
import java.util.List;
import java.util.regex.Pattern;

public class Preprocess {

    private static final Pattern NUMERIC_PATTERN = Pattern.compile("^[.\\d]+$");
    private static final Pattern ENGLISH_WORD_PATTERN = Pattern.compile("^[a-z]+$");

    public static void main(String[] args) {
        String inPath1 = "D:\\MyData\\XUGP3\\Desktop\\测试分词\\test1.txt";
        String inPath2 = "D:\\MyData\\XUGP3\\Desktop\\测试分词\\stop_words.txt";
        String outPath = "D:\\MyData\\XUGP3\\Desktop\\测试分词\\result1.txt";
        String encoding = "utf-8";

        PrintWriter writer = null;
        Forest forest = null;

        try {
            writer = new PrintWriter(new OutputStreamWriter(new FileOutputStream(outPath), encoding));
            // Load the ansj user dictionary from the classpath
            forest = Library.makeForest(Preprocess.class.getResourceAsStream("/library/userLibrary.dic"));

            List<String> lineList = IOUtil.readLines(new FileInputStream(inPath1), encoding);
            List<String> stopWordList = IOUtil.readLines(new FileInputStream(inPath2), encoding);

            for (String line : lineList) {
                String[] cols = line.split("\\t", -1);
                if (cols.length < 2) {
                    continue;
                }
                String text = cols[0].trim().toLowerCase() + " " + cols[1].trim().toLowerCase();

                // Word segmentation with the user dictionary
                List<Term> termList = ToAnalysis.parse(text, forest).getTerms();
                List<String> wordList = new LinkedList<>();
                for (Term term : termList) {
                    String word = term.getName();
                    // Drop single characters, stop words, numbers, and pure-English tokens
                    if (word.length() < 2) {
                        continue;
                    }
                    if (stopWordList.contains(word)) {
                        continue;
                    }
                    if (isNumeric(word)) {
                        continue;
                    }
                    if (isEnglishWord(word)) {
                        continue;
                    }
                    wordList.add(word);
                }
                // Keep only lines that still contain more than five tokens
                if (wordList.size() > 5) {
                    String outStr = listToLine(wordList);
                    writer.println(outStr);
                }
            }
        } catch (FileNotFoundException e) {
            System.out.println("The file does not exist or the path is not correct!!!");
            System.exit(-1);
        } catch (UnsupportedEncodingException e) {
            System.out.println("The current character set is not supported!!!");
        } catch (IOException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (writer != null) {
                writer.close();
            }
        }
    }

    private static boolean isNumeric(String text) {
        return NUMERIC_PATTERN.matcher(text).matches();
    }

    private static boolean isEnglishWord(String text) {
        return ENGLISH_WORD_PATTERN.matcher(text).matches();
    }

    private static String listToLine(List<String> list) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < list.size(); i++) {
            sb.append(list.get(i));
            if (i != list.size() - 1) {
                sb.append(" ");
            }
        }
        return sb.toString();
    }
}
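One implementation detail worth noting: stopWordList.contains(word) performs a linear scan of a List for every token. For a stop-word list of any real size, copying the list into a java.util.HashSet first makes each lookup constant-time. A minimal sketch of the swap, reusing the variable names from the code above (add imports for java.util.HashSet and java.util.Set):

// Build the set once, right after the stop words are read
Set<String> stopWordSet = new HashSet<>(stopWordList);

// ... then, inside the token loop, replace the List lookup with:
if (stopWordSet.contains(word)) {
    continue;
}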

2 Training

The training code is very simple; you can follow the official DL4J tutorial directly. For the principles behind word2vec, see 皮提果's blog posts.

package ai.mole.test;

import org.deeplearning4j.models.embeddings.loader.WordVectorSerializer;
import org.deeplearning4j.models.word2vec.Word2Vec;
import org.deeplearning4j.text.sentenceiterator.BasicLineIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.File;
import java.io.IOException;
import java.util.Collection;

public class TrainWord2VecModel {

    private static Logger log = LoggerFactory.getLogger(TrainWord2VecModel.class);

    public static void main(String[] args) throws IOException {
        String corpusPath = "/data/analyze/xgp/words.txt";
        String vectorsPath = "/data/analyze/xgp/word_vectors.txt";

        log.info("Start Training...");
        long st = System.currentTimeMillis();

        log.info("Load & vectorize sentences...");
        // Each line of the corpus is one pre-segmented, space-separated sentence
        SentenceIterator iter = new BasicLineIterator(new File(corpusPath));
        // The corpus is already tokenized, so the default whitespace tokenizer is enough
        TokenizerFactory t = new DefaultTokenizerFactory();
        // t.setTokenPreProcessor(new CommonPreprocessor());

        log.info("Building model...");
        Word2Vec vec = new Word2Vec.Builder()
                .minWordFrequency(50)   // ignore words seen fewer than 50 times
                .iterations(1)          // iterations per minibatch
                .epochs(100)            // passes over the whole corpus
                .layerSize(500)         // dimensionality of the word vectors
                .seed(42)
                .windowSize(5)          // context window size
                .iterate(iter)
                .tokenizerFactory(t)
                .build();

        log.info("Fitting word2vec model...");
        vec.fit();

        log.info("Writing word vectors to text file...");
        // WordVectorSerializer.writeWord2VecModel(vec, vectorsPath);
        WordVectorSerializer.writeWordVectors(vec, vectorsPath);

        log.info("Closest words:");
        Collection<String> bydWordList = vec.wordsNearest("比亚迪", 10);
        Collection<String> changanWordList = vec.wordsNearest("长安", 10);
        log.info("10 words closest to '比亚迪': {}", bydWordList);
        log.info("10 words closest to '长安': {}", changanWordList);

        long et = System.currentTimeMillis();
        log.info("Training is completed, and the time taken is " + (et - st) + " ms.");
    }
}
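A note on the two serialization calls above: writeWordVectors emits a plain text table (one word and its vector per line), which is easy to inspect and is what section 3 reads back; the commented-out writeWord2VecModel instead saves the complete model, including the state needed to continue training later. A hedged sketch of the full-model round trip (the zip path is hypothetical):

// Save the complete model rather than just the lookup table
WordVectorSerializer.writeWord2VecModel(vec, "/data/analyze/xgp/full_model.zip");

// Later: restore it and query as usual
Word2Vec restored = WordVectorSerializer.readWord2VecModel("/data/analyze/xgp/full_model.zip");
System.out.println(restored.wordsNearest("比亚迪", 10));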

3 Usage

Using the trained word vectors is also very simple: just call the static readWord2VecModel method of the WordVectorSerializer class, passing the path of the trained vector file as input.

Word2Vec word2Vec = WordVectorSerializer.readWord2VecModel("D:\\MyData\\XUGP3\\Desktop\\测试分词\\vectors.txt");
Collection<String> bydWordList = word2Vec.wordsNearest("比亚迪", 10);
Collection<String> changanWordList = word2Vec.wordsNearest("长安", 10);
System.out.println(bydWordList);
System.out.println(changanWordList);
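Besides wordsNearest, the loaded model supports a few other common queries. A short sketch using the same word2Vec instance as above: similarity returns the cosine similarity of two in-vocabulary words, and getWordVector returns the raw embedding:

// Cosine similarity between two words (NaN if either is out of vocabulary)
double sim = word2Vec.similarity("比亚迪", "长安");
System.out.println("similarity = " + sim);

// Raw embedding; its length equals the layerSize used at training time (500 here)
double[] vector = word2Vec.getWordVector("比亚迪");
System.out.println("vector length = " + vector.length);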

Appendix - Maven dependencies

<dependencies>
    <dependency>
        <groupId>org.apdplat</groupId>
        <artifactId>word</artifactId>
        <version>1.3</version>
    </dependency>

    <!-- ND4J backend. You need one in every DL4J project. Normally define artifactId as either "nd4j-native-platform" or "nd4j-cuda-7.5-platform" -->
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>${nd4j.backend}</artifactId>
        <version>${nd4j.version}</version>
    </dependency>

    <!-- Core DL4J functionality -->
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-core</artifactId>
        <version>${dl4j.version}</version>
    </dependency>

    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-nlp</artifactId>
        <version>${dl4j.version}</version>
    </dependency>

    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-zoo</artifactId>
        <version>${dl4j.version}</version>
    </dependency>

    <!-- deeplearning4j-ui is used for visualization: see http://deeplearning4j.org/visualization -->
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-ui_${scala.binary.version}</artifactId>
        <version>${dl4j.version}</version>
    </dependency>

    <!-- ParallelWrapper & ParallelInference live here -->
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-parallel-wrapper_${scala.binary.version}</artifactId>
        <version>${dl4j.version}</version>
    </dependency>

    <!-- Next 2: used for MapFileConversion Example. Note you need *both* together -->
    <dependency>
        <groupId>org.datavec</groupId>
        <artifactId>datavec-hadoop</artifactId>
        <version>${datavec.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <!-- Arbiter - used for hyperparameter optimization (grid/random search) -->
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>arbiter-deeplearning4j</artifactId>
        <version>${arbiter.version}</version>
    </dependency>

    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>arbiter-ui_2.11</artifactId>
        <version>${arbiter.version}</version>
    </dependency>

    <!-- datavec-data-codec: used only in video example for loading video data -->
    <dependency>
        <groupId>org.datavec</groupId>
        <artifactId>datavec-data-codec</artifactId>
        <version>${datavec.version}</version>
    </dependency>
</dependencies>
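The dependency list above references Maven properties that are never defined in the snippet. A minimal <properties> block might look like the following; every version number here is an assumption modeled on the dl4j-examples POM of the same era, so align them with the DL4J release you actually use:

<properties>
    <!-- All versions below are assumptions; adjust to your DL4J release -->
    <nd4j.backend>nd4j-native-platform</nd4j.backend>
    <nd4j.version>0.9.1</nd4j.version>
    <dl4j.version>0.9.1</dl4j.version>
    <datavec.version>0.9.1</datavec.version>
    <arbiter.version>0.9.1</arbiter.version>
    <scala.binary.version>2.11</scala.binary.version>
    <hadoop.version>2.2.0</hadoop.version>
</properties>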
