Enhanced MapReduce for Top N items
In the last post we saw how to write a MapReduce program for finding the top-n items of a dataset.
The code in the mapper emits a key-value pair for every word found, passing the word as the key and 1 as the value. Since the book has roughly 38,000 words, the amount of data transmitted from mappers to reducers is proportional to that number. A way to improve the network performance of this program is to rewrite the mapper as follows:
public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> {
        private Map<String, Integer> countMap = new HashMap<>();
        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
            StringTokenizer itr = new StringTokenizer(cleanLine);
            while (itr.hasMoreTokens()) {
                String word = itr.nextToken().trim();
                // increment the local count instead of emitting (word, 1)
                countMap.merge(word, 1, Integer::sum);
            }
        }
        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            // emit each distinct word exactly once, with its aggregated count
            for (Map.Entry<String, Integer> entry : countMap.entrySet()) {
                context.write(new Text(entry.getKey()), new IntWritable(entry.getValue()));
            }
        }
    }
As we can see, we define a HashMap that uses words as the keys and the number of occurrences as the values. Inside the loop, instead of emitting every word to the reducer, we put it into the map: if the word is already present, we increment its count; otherwise we set it to one. We also override the cleanup method, which Hadoop calls after the mapper has finished processing its input; in this method we can now emit the words to the reducers. This way we save a lot of network traffic, because each distinct word is sent to the reducers only once.
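The effect of this local aggregation can be checked outside Hadoop. The following standalone sketch (plain Java, no Hadoop dependency; the class name, method name, and one-line input are mine, invented for illustration) applies the same cleaning regex and counting logic as the mapper, and compares how many pairs would be emitted without and with the local map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class LocalAggregationDemo {

    // Same logic as the mapper: clean the line, then keep one map entry per distinct word.
    static Map<String, Integer> countWords(String text) {
        String cleanLine = text.toLowerCase()
                .replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
        Map<String, Integer> countMap = new HashMap<>();
        StringTokenizer itr = new StringTokenizer(cleanLine);
        while (itr.hasMoreTokens()) {
            countMap.merge(itr.nextToken().trim(), 1, Integer::sum);
        }
        return countMap;
    }

    public static void main(String[] args) {
        String input = "To be, or not to be: that is the question.";
        Map<String, Integer> counts = countWords(input);
        int totalTokens = counts.values().stream().mapToInt(Integer::intValue).sum();
        // Without the map, the mapper emits one (word, 1) pair per token;
        // with it, one (word, count) pair per distinct word.
        System.out.println("pairs without aggregation: " + totalTokens);  // 10
        System.out.println("pairs with aggregation:    " + counts.size()); // 8
    }
}
```

On this one-line input the saving is small (10 pairs versus 8), but on a whole book, where common words repeat thousands of times, the number of emitted pairs drops from the total token count to the distinct word count.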
The complete code of this class is available on my GitHub.
In the next post we'll see how to use combiners to leverage this approach.
from: http://andreaiacono.blogspot.com/2014/03/enhanced-mapreduce-for-top-n-items.html