Hadoop Combiners
In the last post and in the preceding one we saw how to write a MapReduce program for finding the top-n items of a data set. The difference between the two was that the first program (which we call basic) emitted to the reducers every single item read from input, while the second (which we call enhanced) made a partial computation and emitted only a subset of the input. The enhanced top-n optimizes network usage (the fewer key-value pairs emitted, the less bandwidth is needed to transmit them from mapper to reducer) and reduces the number of keys to shuffle and sort; but this comes at the cost of rewriting the mapper.
If we look at the code of the mapper of the enhanced top-n, we can see that it implements the idea behind the reducer: it uses a Map to make a partial count of the words and emits every word only once; looking at the reducer's code, we see that it implements the same idea. If we could execute the code of the reducer of the basic top-n after the mapper has run on every machine (with its subset of data), we would obtain exactly the same result as rewriting the mapper as in the enhanced version. This is exactly what Hadoop combiners do: they're executed just after the mapper on every machine to improve performance. To tell Hadoop which class to use as a combiner, we can use the Job.setCombinerClass() method.
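For example, a minimal driver sketch might look like the following (the class names TopNDriver, TopNMapper and TopNReducer are placeholders, not the actual classes from the original posts):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TopNDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "top-n with combiner");
        job.setJarByClass(TopNDriver.class);
        job.setMapperClass(TopNMapper.class);     // hypothetical mapper class
        // reuse the reducer as a combiner: it runs on each mapper's local
        // output before the shuffle phase
        job.setCombinerClass(TopNReducer.class);
        job.setReducerClass(TopNReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that a combiner's input and output key-value types must match the mapper's output types, since Hadoop may run the combiner zero, one or several times on the map output before the shuffle.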
Caution: using the reducer as a combiner works only if the function we're computing is both commutative (a + b = b + a) and associative (a + (b + c) = (a + b) + c).
Let's look at an example. Suppose we're analyzing the traffic of a website, and we have an input file with the number of visits per day, one "YYYYMMDD value" pair per line:
20140401 100
20140331 1000
20140330 1300
20140329 5100
20140328 1200
We want to find the day with the highest number of visits.
Let's say that we have two mappers; the first one receives the first three lines and the second receives the last two. If we write the mapper to emit every line, the reducer will evaluate something like this:
max(100, 1000, 1300, 5100, 1200) -> 5100
and the max is 5100.
If we use the reducer as a combiner, the reducer will evaluate something like this:
max(max(100, 1000, 1300), max(5100, 1200)) -> max(1300, 5100) -> 5100
because each of the two mappers will evaluate the max function locally on its own subset of data. In this case the result is 5100 as well, since the function we're evaluating (max) is both commutative and associative.
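A minimal sketch of this job, assuming each input line has the "YYYYMMDD value" format shown above (all class names here are illustrative, not taken from an existing code base):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Emits every line under a single constant key, so that one reduce call
// sees all the "day visits" pairs.
public class MaxVisitsMapper extends Mapper<LongWritable, Text, Text, Text> {
    private static final Text MAX_KEY = new Text("max");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        context.write(MAX_KEY, new Text(line));  // e.g. "20140329 5100"
    }
}

// Keeps only the line with the highest visit count. Its input and output
// types match the mapper's output types, so the same class can be set as
// the combiner: max(max(...), max(...)) equals max(...) over all values.
public class MaxVisitsReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String maxLine = null;
        int maxVisits = Integer.MIN_VALUE;
        for (Text value : values) {
            String line = value.toString();
            int visits = Integer.parseInt(line.split("\\s+")[1]);
            if (visits > maxVisits) {
                maxVisits = visits;
                maxLine = line;
            }
        }
        context.write(key, new Text(maxLine));
    }
}

With the sample data above, setting MaxVisitsReducer as the combiner means each mapper locally reduces its lines to a single pair ("20140330 1300" for the first, "20140329 5100" for the second), and the reducer only has to pick the winner between those two.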
Now let's say we need to compute the average number of visits per day. If we write the mapper to emit every line of the input file, the reducer will evaluate this:
mean(100, 1000, 1300, 5100, 1200) -> 1740
which is 1740.
If we use the reducer as a combiner, the reducer will evaluate something like this:
mean(mean(100, 1000, 1300), mean(5100, 1200)) -> mean(800, 3150) -> 1975
because each of the two mappers will evaluate the mean function locally. In this case the result will be 1975, which is obviously wrong: the mean of partial means is not the mean of the whole data set, because the partial means are computed over groups of different sizes (three values for the first mapper, two for the second).
So, if we're computing a commutative and associative function and we want to improve the performance of our job, we can use our reducer as a combiner; if we want to improve performance but the function we're computing is not commutative and associative, we have to rewrite the mapper or write a new combiner from scratch, as in the sketch below.
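For the mean, for instance, a correct combiner can emit partial sums and counts instead of partial means, and leave the division to the reducer. A minimal sketch, assuming the mapper emits every day's visits as a Text value in the form "visits 1" under a single constant key (again, all names are illustrative):

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Merges the "sum count" pairs of one mapper into a single "sum count"
// pair: merging sums and counts is commutative and associative, while
// averaging partial means is not.
public class MeanCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split("\\s+");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new Text(sum + " " + count));
    }
}

// Merges all the partial "sum count" pairs and only then divides.
public class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split("\\s+");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new DoubleWritable((double) sum / count));
    }
}

With the sample data, the first mapper's combiner emits "2400 3" and the second's emits "6300 2", so the reducer computes (2400 + 6300) / (3 + 2) = 1740, the correct mean.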
from: http://andreaiacono.blogspot.com/2014/03/hadoop-combiners.html