In the era of big data, no single machine can process these workloads on its own, whether the limit is CPU power or memory capacity. We have to go distributed, pooling the resources of many machines to get the work done, covering both offline batch processing and online real-time processing.

Since the last meeting covered the development of language models, from rule-based systems up to NNLM, the goal of this chapter is hands-on practice: with the theory understood, implement an n-gram language model ourselves using the MapReduce paradigm.
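
The code below only stores raw counts together with a frequency threshold and a top-K cut-off; it never normalizes anything. For reference, the classic maximum-likelihood n-gram estimate that such counts would feed into is

    P(w_n | w_1 ... w_{n-1}) ≈ count(w_1 ... w_n) / count(w_1 ... w_{n-1})

The first MapReduce job computes the phrase counts, and the second job keeps, for each prefix, only the top-K most frequent following words.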

First, the package dependencies are managed with Maven.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>com.dingheng</groupId>
    <artifactId>nragmMR</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.12</version>
        </dependency>
    </dependencies>
</project>

Now straight to the code:

1. First the Driver, which serves as the entry point of the program.

  

package com.dingheng;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class Driver {

    public static void main(String[] args) throws ClassNotFoundException, IOException, InterruptedException {

        // args: inputDir, outputDir, numOfGram, threshold, topK
        String inputDir = args[0];
        String outputDir = args[1];
        String numOfGram = args[2];
        String threshold = args[3];
        String topK = args[4];

        // first MapReduce job: count the n-grams
        Configuration configurationNGram = new Configuration();
        // treat "." as the record delimiter so each sentence becomes one input record
        configurationNGram.set("textinputformat.record.delimiter", ".");
        configurationNGram.set("numOfGram", numOfGram);

        Job jobNGram = Job.getInstance(configurationNGram);
        jobNGram.setJobName("NGram");
        jobNGram.setJarByClass(Driver.class);

        jobNGram.setMapperClass(NGram.NGramMapper.class);
        jobNGram.setReducerClass(NGram.NGramReducer.class);

        jobNGram.setOutputKeyClass(Text.class);
        jobNGram.setOutputValueClass(IntWritable.class);
        jobNGram.setMapOutputValueClass(IntWritable.class);

        jobNGram.setInputFormatClass(TextInputFormat.class);
        jobNGram.setOutputFormatClass(TextOutputFormat.class);

        TextInputFormat.addInputPath(jobNGram, new Path(inputDir));
        TextOutputFormat.setOutputPath(jobNGram, new Path(outputDir));
        jobNGram.waitForCompletion(true);

        // second MapReduce job: build the language model and write it to MySQL
        Configuration configurationLanguage = new Configuration();
        configurationLanguage.set("threshold", threshold);
        configurationLanguage.set("topK", topK);

        DBConfiguration.configureDB(configurationLanguage,
                "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/test",
                "root",
                "123456");

        Job jobLanguage = Job.getInstance(configurationLanguage);
        jobLanguage.setJobName("LanguageModel");
        jobLanguage.setJarByClass(Driver.class);

        jobLanguage.setMapperClass(LanguageModel.Map.class);
        jobLanguage.setReducerClass(LanguageModel.Reduce.class);

        jobLanguage.setMapOutputKeyClass(Text.class);
        jobLanguage.setMapOutputValueClass(Text.class);
        jobLanguage.setOutputKeyClass(DBOutputWritable.class);
        jobLanguage.setOutputValueClass(NullWritable.class);

        jobLanguage.setInputFormatClass(TextInputFormat.class);
        jobLanguage.setOutputFormatClass(DBOutputFormat.class);

        DBOutputFormat.setOutput(
                jobLanguage,
                "output",
                new String[] { "starting_phrase", "following_word", "count" });

        // job two reads job one's output directory
        TextInputFormat.setInputPaths(jobLanguage, new Path(outputDir));

        jobLanguage.waitForCompletion(true);
    }
}
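
To run the whole thing, package the project with mvn clean package and submit the jar with hadoop jar, passing the five arguments in the order main reads them (inputDir, outputDir, numOfGram, threshold, topK). The paths and values below are only placeholders, and the MySQL connector jar also has to be available to the tasks at runtime:

    hadoop jar target/nragmMR-1.0-SNAPSHOT.jar com.dingheng.Driver /data/corpus /data/ngram_out 5 20 10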


2. Next, the custom class: a DBOutputWritable that defines how an output row is written to the database.

 

package com.dingheng;

import org.apache.hadoop.mapreduce.lib.db.DBWritable;

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DBOutputWritable implements DBWritable {

    private String starting_phrase;
    private String following_word;
    private int count;

    public DBOutputWritable(String starting_phrase, String following_word, int count) {
        this.starting_phrase = starting_phrase;
        this.following_word = following_word;
        this.count = count;
    }

    public void write(PreparedStatement arg0) throws SQLException {
        arg0.setString(1, starting_phrase);
        arg0.setString(2, following_word);
        arg0.setInt(3, count);
    }

    public void readFields(ResultSet arg0) throws SQLException {
        this.starting_phrase = arg0.getString(1);
        this.following_word = arg0.getString(2);
        this.count = arg0.getInt(3);
    }
}
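
DBOutputFormat only inserts rows, so the output table has to exist in MySQL before the second job runs. Below is a minimal one-off sketch for creating it; the class name CreateOutputTable and the column sizes are my own choices, while the connection settings, table name and column names mirror what the Driver passes to DBConfiguration.configureDB and DBOutputFormat.setOutput:

package com.dingheng;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// One-off helper: creates the table that DBOutputFormat writes into.
// The connection settings mirror the ones hard-coded in Driver.
public class CreateOutputTable {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "123456");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS output (" +
                    "starting_phrase VARCHAR(250), " +
                    "following_word VARCHAR(250), " +
                    "`count` INT)");
        }
    }
}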


3. Finally, the mappers and reducers. Two chained MapReduce jobs are used, each written in its own source file.

package com.dingheng;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class NGram {

    public static class NGramMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        int numOfGram;

        @Override
        public void setup(Context context) {
            Configuration conf = context.getConfiguration();
            numOfGram = conf.getInt("numOfGram", 5);
        }

        @Override
        public void map(LongWritable key,
                        Text value,
                        Context context) throws IOException, InterruptedException {
            /*
            input: one sentence, e.g. "I love data", n = 3
            output:
                I love      -> 1
                love data   -> 1
                I love data -> 1
            */
            String line = value.toString().trim().toLowerCase().replaceAll("[^a-z]", " ");
            String[] words = line.split("\\s+");

            if (words.length < 2) {
                return;
            }

            StringBuilder sb;
            for (int i = 0; i < words.length; i++) {
                sb = new StringBuilder();
                sb.append(words[i]);
                // emit every n-gram of length 2..numOfGram that starts at position i
                for (int j = 1; i + j < words.length && j < numOfGram; j++) {
                    sb.append(" ");
                    sb.append(words[i + j]);
                    context.write(new Text(sb.toString()), new IntWritable(1));
                }
            }
        }
    }

    public static class NGramReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key,
                           Iterable<IntWritable> values,
                           Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum = sum + value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}

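Between the two jobs nothing special happens: the first job writes plain text with TextOutputFormat, one tab-separated line per n-gram. For a corpus that contained only the sentence "I love big data" (with numOfGram at least 4), its output would look like:

    big data	1
    i love	1
    i love big	1
    i love big data	1
    love big	1
    love big data	1

The Driver then points the second job's TextInputFormat at this output directory, which is why the mapper below splits each line on "\t" and treats the last field as the count.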

package com.dingheng;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.util.*;

public class LanguageModel {

    public static class Map extends Mapper<LongWritable, Text, Text, Text> {

        // input:  I love big data\t10
        // output: key: "I love big"   value: "data=10"

        int threshold;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            Configuration configuration = context.getConfiguration();
            threshold = configuration.getInt("threshold", 20);
        }

        @Override
        public void map(LongWritable key,
                        Text value,
                        Context context) throws IOException, InterruptedException {

            if ((value == null) || (value.toString().trim().length() == 0)) {
                return;
            }

            String line = value.toString().trim();
            String[] wordsPlusCount = line.split("\t");

            // check the format before parsing the count, otherwise a malformed
            // line without a tab would throw a NumberFormatException
            if (wordsPlusCount.length < 2) {
                return;
            }

            String[] words = wordsPlusCount[0].split("\\s+");
            int count = Integer.valueOf(wordsPlusCount[wordsPlusCount.length - 1]);
            if (count < threshold) {
                return;
            }

            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < words.length - 1; i++) {
                sb.append(words[i]);
                sb.append(" ");
            }

            String outputKey = sb.toString().trim();
            String outputValue = words[words.length - 1];
            if (!(outputKey.length() < 1)) {
                context.write(new Text(outputKey), new Text(outputValue + "=" + count));
            }
        }
    }

    public static class Reduce extends Reducer<Text, Text, DBOutputWritable, NullWritable> {

        int topK;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            Configuration configuration = context.getConfiguration();
            topK = configuration.getInt("topK", 5);
        }

        @Override
        public void reduce(Text key,
                           Iterable<Text> values,
                           Context context) throws IOException, InterruptedException {
            // key:   I love big
            // value: <data=10, girl=100, boy=1000 ...>
            // group the following words by count, largest count first
            TreeMap<Integer, List<String>> tm =
                    new TreeMap<Integer, List<String>>(Collections.<Integer>reverseOrder());
            // e.g. <1000, <boy>>, <100, <girl>>, <10, <data, baby...>>

            for (Text val : values) {
                // val: data=10
                String value = val.toString().trim();
                String word = value.split("=")[0].trim();
                int count = Integer.parseInt(value.split("=")[1].trim());
                if (tm.containsKey(count)) {
                    tm.get(count).add(word);
                } else {
                    List<String> list = new ArrayList<String>();
                    list.add(word);
                    tm.put(count, list);
                }
            }

            // write out the topK most frequent following words for this phrase
            Iterator<Integer> iter = tm.keySet().iterator();
            for (int j = 0; iter.hasNext() && j < topK; ) {
                int keyCount = iter.next();
                List<String> words = tm.get(keyCount);
                for (String curWord : words) {
                    context.write(new DBOutputWritable(key.toString(), curWord, keyCount), NullWritable.get());
                    j++;
                }
            }
        }
    }
}

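After both jobs finish, the model sits in the test.output table as (starting_phrase, following_word, count) rows. Below is a minimal sketch of reading the suggestions back, again reusing the Driver's connection settings; the QueryModel class is only illustrative and not part of the project:

package com.dingheng;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative lookup: print the most frequent following words for a phrase.
public class QueryModel {

    public static void main(String[] args) throws Exception {
        String phrase = args.length > 0 ? args[0] : "i love";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "123456");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT following_word, `count` FROM output " +
                     "WHERE starting_phrase = ? ORDER BY `count` DESC")) {
            ps.setString(1, phrase);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(phrase + " -> " + rs.getString(1) + " (" + rs.getInt(2) + ")");
                }
            }
        }
    }
}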
