In the last post and in the preceding one we saw how to write a MapReduce program for finding the top-n items of a data set. The difference between the two was that the first program (which we call basic) emitted to the reducers every single item read from the input, while the second (which we call enhanced) performed a partial computation and emitted only a subset of the input. The enhanced top-n optimizes network transmission (the fewer key-value pairs emitted, the less network bandwidth is used to move them from mapper to reducer) and reduces the number of keys shuffled and sorted; but this comes at the cost of rewriting the mapper.

If we look at the code of the mapper of the enhanced top-n, we can see that it implements the idea behind the reducer: it uses a Map to keep a partial count of the words and emits every word only once; looking at the reducer's code, we see that it implements the same idea. If we could execute the code of the reducer of the basic top-n after the mapper has run on every machine (with its subset of data), we would obtain exactly the same result as rewriting the mapper as in the enhanced version. This is exactly what Hadoop combiners do: they're executed just after the mapper on every machine to improve performance. To tell Hadoop which class to use as a combiner, we can use the Job.setCombinerClass() method.
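
A minimal driver sketch could look like the following (TopNDriver, TopNMapper and TopNReducer are placeholder names for the classes of the basic top-n job; the rest of the driver boilerplate is omitted):

Job job = Job.getInstance(new Configuration(), "top-n with combiner");
job.setJarByClass(TopNDriver.class);
job.setMapperClass(TopNMapper.class);
job.setCombinerClass(TopNReducer.class);   // the reducer, reused as a combiner
job.setReducerClass(TopNReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);

Note that Hadoop treats the combiner as an optimization and may run it zero, one or several times per mapper, so the job must produce correct results even if the combiner never runs.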

Caution: using the reducer as a combiner works only if the function we're computing is both commutative (a + b = b + a) and associative (a + (b + c) = (a + b) + c). 
Let's look at an example. Suppose we're analyzing the traffic of a website and we have an input file with the number of visits per day, like this (YYYYMMDD value):

20140401 100
20140331 1000
20140330 1300
20140329 5100
20140328 1200

We want to find the day with the highest number of visits.
Let's say that we have two mappers: the first one receives the first three lines and the second receives the last two. If we write the mapper to emit every line, the reducer will evaluate something like this:

max(100, 1000, 1300, 5100, 1200) -> 5100

and the max is 5100. 
If we use the reducer as a combiner, the reducer will evaluate something like this:

max( max(100, 1000, 1300), max(5100, 1200)) -> max( 1300, 5100) -> 5100

because each of the two mappers will evaluate the max function locally on its subset of data. In this case the result will be 5100 as well, since the function we're evaluating (the max function) is both commutative and associative.
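
A minimal sketch of such a reducer, which can also be set as the combiner since max is commutative and associative, could look like this (assuming the mapper emits a constant key with the number of visits as an IntWritable value; the class name and types are illustrative assumptions):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxVisitsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // keep the highest number of visits seen among the values
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        context.write(key, new IntWritable(max));
    }
}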

Now let's say we need to compute the average number of visits per day. If we write the mapper to emit every line of the input file, the reducer will evaluate this:

mean(100, 1000, 1300, 5100, 1200) -> 1740

which is 1740. 
If we use the reducer as a combiner, the reducer will evaluate something like this:

mean( mean(100, 1000, 1300), mean(5100, 1200)) -> mean( 800, 3150) -> 1975

because each of the two mappers will evaluate the mean function locally on its subset of data. In this case the result will be 1975, which is obviously wrong.

So, if we're computing a commutative and associative function and we want to improve the performance of our job, we can use our reducer as a combiner; if we want to improve performance but we're computing a function that is not commutative and associative, we have to rewrite the mapper or write a new combiner from scratch, as sketched below for the average.
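
For the average, a common workaround is to make the mapper emit, for every line, a partial sum and a partial count instead of a value to be averaged; the combiner and the reducer can then aggregate the pairs exactly. A minimal sketch follows (the "sum,count" Text encoding and the class name are illustrative assumptions; the mapper is assumed to emit a constant key with values like "100,1"):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AvgCombiner extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // add up the partial sums and counts instead of averaging them
        long sum = 0;
        long count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split(",");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new Text(sum + "," + count));
    }
}

The reducer performs the same aggregation and divides only once at the end: with the two mappers of our example it would receive "2400,3" and "6300,2" and compute (2400 + 6300) / (3 + 2) = 1740, the correct mean.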

from: http://andreaiacono.blogspot.com/2014/03/hadoop-combiners.html
