In this post we'll see how to count the top-n items of a dataset; we'll again use the flatland book we used in a previous post: in that example we used the WordCount program to count the occurrences of every single word forming the book; now we want to find which are the top-n words used in the book.

Let's start with the mapper:

public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // replaces punctuation and special characters with spaces before tokenizing
            String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
            StringTokenizer itr = new StringTokenizer(cleanLine);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken().trim());
                // emits every word with a count of 1
                context.write(word, one);
            }
        }
    }

The mapper is really straightforward: the TopNMapper class defines an IntWritable set to 1 and a Text object; its map() method, like in the previous post, splits every line of the book into single words and sends every word to the reducers with a value of 1.
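As a quick illustration (not part of the original program), the cleaning regex turns punctuation into spaces, so a line such as "Upward, not Northward!" ends up as plain lowercase tokens:

String line = "Upward, not Northward!";
String cleanLine = line.toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
// cleanLine is now "upward  not northward " and the tokenizer yields: "upward", "not", "northward"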

The reducer is more interesting:

public static class TopNReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private Map<Text, IntWritable> countMap = new HashMap<>();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

            // computes the number of occurrences of a single word
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }

            // puts the number of occurrences of this word into the map;
            // a copy of the key is stored because Hadoop reuses the Text instance
            countMap.put(new Text(key), new IntWritable(sum));
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {

            // sorts the words by number of occurrences and writes out the first 20
            Map<Text, IntWritable> sortedMap = sortByValues(countMap);

            int counter = 0;
            for (Text key : sortedMap.keySet()) {
                if (counter++ == 20) {
                    break;
                }
                context.write(key, sortedMap.get(key));
            }
        }
    }

We override two methods: reduce() and cleanup(). Let's examine the reduce() method. 
As we've seen in the mapper's code, the keys the reducer receives are the single words contained in the book; at the beginning of the method, we compute the sum of all the values received from the mappers for this key, which is the number of occurrences of this word inside the book; then we put the word and the number of occurrences into a HashMap. Note that we're not putting into the map the Text object that contains the word directly, because that instance is reused many times by Hadoop for performance reasons; instead, we put a new Text object based on the received one.
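In code, the difference looks like this (the first line is the wrong version, shown only for illustration; the second is the one used in the reducer above):

// wrong: Hadoop reuses the same Text instance across calls to reduce(),
// so every entry of the map could end up referring to the wrong word
countMap.put(key, new IntWritable(sum));

// right: store a copy of the key
countMap.put(new Text(key), new IntWritable(sum));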

To output the top-n values, we have to compute the number of occurrences of every word, sort the words by the number of occurrences and then extract the first n. In the reduce() method we don't write any value to the output, because we can sort the words only after we have collected them all; the cleanup() method is called by Hadoop after the reducer has received all of its data, so we override this method to be sure that our HashMap is filled up with all the words. 
Let's look at the method: first we sort the HashMap by values (using code from this post); then we loop over the key set and output the first 20 items.
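The linked post isn't reproduced here, but a minimal sortByValues() sketch that fits this reducer could be a static helper like the following (java.util imports omitted, as in the snippets above): it sorts the entries by descending value and copies them into a LinkedHashMap, which preserves insertion order.

private static <K, V extends Comparable<V>> Map<K, V> sortByValues(Map<K, V> map) {
    List<Map.Entry<K, V>> entries = new LinkedList<>(map.entrySet());
    // sorts the entries by value, in descending order
    Collections.sort(entries, new Comparator<Map.Entry<K, V>>() {
        @Override
        public int compare(Map.Entry<K, V> e1, Map.Entry<K, V> e2) {
            return e2.getValue().compareTo(e1.getValue());
        }
    });
    // a LinkedHashMap keeps the keys in the order they were inserted
    Map<K, V> sortedMap = new LinkedHashMap<>();
    for (Map.Entry<K, V> entry : entries) {
        sortedMap.put(entry.getKey(), entry.getValue());
    }
    return sortedMap;
}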

The complete code is available on my github.
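For reference, a minimal driver for this job could look like the sketch below (the TopN class name and the argument handling are illustrative, not necessarily identical to the code on github). Note that a single reducer is needed, so that cleanup() sees the counts of all the words.

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "top n");
    job.setJarByClass(TopN.class);            // hypothetical driver class name
    job.setMapperClass(TopNMapper.class);
    job.setReducerClass(TopNReducer.class);
    job.setNumReduceTasks(1);                 // a single reducer collects all the words
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}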

The output of the reducer gives us the 20 most used words in Flatland:

the 2286
of 1634
and 1098
to 1088
a 936
i 735
in 713
that 499
is 429
you 419
my 334
it 330
as 322
by 317
not 317
or 299
but 279
with 273
for 267
be 252

Predictably, the most used words in the book are articles, conjunctions, adjectives, prepositions and personal pronouns.

This MapReduce program is not very efficient: the mappers transfer a lot of data to the reducers, since every single word of the book is emitted to the reducers together with the number "1", causing a very high network load; the phase in which the mappers send data to the reducers is called "shuffle and sort" and is explained in more detail in the free chapter of "Hadoop: The Definitive Guide" by Tom White.

In the next posts we'll see how to improve the performance of the shuffle and sort phase.

from: http://andreaiacono.blogspot.com/2014/03/mapreduce-for-top-n-items.html
