Hadoop combiners are a powerful tool to speed up our computations. We already saw what a combiner is in a previous post, and we also saw another form of optimization in this post. Let's put it all together to get the broader picture. 
Combiners are an optimization that Hadoop can apply to perform a local reduction: the idea is to reduce the key-value pairs directly on the mapper side, to avoid transmitting all of them to the reducers. 
Let's get back to the Top20 example from the previous post, which finds the 20 most used words in a text. The Hadoop counters for this job are shown below:

...
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=118
Combine input records=0
Combine output records=0
Reduce input groups=4987
Reduce shuffle bytes=435261
Reduce input records=37817
Reduce output records=20
...

As we can see from these counters, without a combiner the mappers read 4239 input lines and emit 37817 key-value pairs (one for every word occurrence in the text). Since no combiner is defined, the combine input and output records are 0, and the reducers receive exactly the records emitted by the mappers: 37817.

Let's define a simple combiner:

    // requires: java.io.IOException, org.apache.hadoop.io.IntWritable,
    //           org.apache.hadoop.io.Text, org.apache.hadoop.mapreduce.Reducer
    public static class WordCountCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // computes the number of occurrences of a single word
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

As we can see, the code has the same logic as the reducer, since its goal is the same: reducing key-value pairs. 
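To use it, the combiner has to be registered on the job in the driver. Here is a minimal sketch of that wiring, assuming a standard MapReduce driver; the mapper and reducer class names are illustrative, not taken from the original post:

    // hypothetical driver excerpt: the combiner registration is the relevant line
    Job job = Job.getInstance(new Configuration(), "top20");
    job.setMapperClass(WordCountMapper.class);      // assumed mapper class name
    job.setCombinerClass(WordCountCombiner.class);  // the combiner defined above
    job.setReducerClass(WordCountReducer.class);    // assumed reducer class name
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);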
Running the job with the combiner set gives us this result:

...
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=116
Combine input records=37817
Combine output records=20

Reduce input groups=20
Reduce shuffle bytes=194
Reduce input records=20
Reduce output records=20
...

Looking at the counters, we see that the combiner now has 37817 input records: all the records emitted by the mappers went through the combiner. The combiner emitted 20 records, which is exactly the number of records received by the reducers. 
Wow, that's a great result! We avoided transmitting a lot of data: just 20 records instead of the 37817 we had without the combiner.

But there's a big disadvantage to using combiners: since they are an optimization, Hadoop does not guarantee their execution. So, what can we do to ensure a reduction at the mapper level? Simple: we can put the logic of the reducer inside the mapper!

This is exactly what we did in the mapper of this post. This pattern is called the "in-mapper combiner": the reduction starts at the mapper level, so that the number of key-value pairs sent to the reducers is minimized. A sketch of such a mapper is shown below.
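Here is a minimal sketch of what an in-mapper combiner can look like for the word-count case. This is not the exact mapper from the original post; all class and variable names, and the tokenization, are illustrative:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class InMapperCombinerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        // per-mapper word counts, accumulated in memory instead of being emitted per occurrence
        private final Map<String, Integer> counts = new HashMap<>();

        @Override
        public void map(LongWritable key, Text value, Context context) {
            // accumulate locally: no context.write() for every single word occurrence
            for (String word : value.toString().toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum);
                }
            }
        }

        @Override
        public void cleanup(Context context) throws IOException, InterruptedException {
            // emit one pair per distinct word seen by this mapper
            for (Map.Entry<String, Integer> entry : counts.entrySet()) {
                context.write(new Text(entry.getKey()), new IntWritable(entry.getValue()));
            }
        }
    }

Note that this keeps the whole per-mapper dictionary in memory until cleanup(), so for very large key spaces the map would need to be flushed periodically.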
Let's see the Hadoop output with this pattern (the in-mapper combiner alone, without the stand-alone combiner):

...
Map input records=4239
Map output records=4987
Map output bytes=61522
Input split bytes=118
Combine input records=0
Combine output records=0

Reduce input groups=4987
Reduce shuffle bytes=71502
Reduce input records=4987
Reduce output records=20
...

Compared to the mapper without any combining, this mapper emits only 4987 records (one per distinct word) instead of 37817, and those are exactly the records the reducers receive. A big reduction, even if not as big as the one obtained with the stand-alone combiner. 
And what happens if we couple the in-mapper combiner pattern with the stand-alone combiner? Well, we get the best of both:

...
Map input records=4239
Map output records=4987
Map output bytes=61522
Input split bytes=116
Combine input records=4987
Combine output records=20

Reduce input groups=20
Reduce shuffle bytes=194
Reduce input records=20
Reduce output records=20
...

In this last case we get the best performance, because the mapper emits a reduced number of records, and the combiner (if it's executed) shrinks the data to be transmitted even further. The only downside of this approach I can think of is that it takes more time to code.
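Wiring the two together is straightforward: the in-mapper combiner goes in as the mapper, and the stand-alone combiner is registered as before. A sketch, again using the illustrative class names from above:

    // hypothetical driver excerpt coupling both patterns
    job.setMapperClass(InMapperCombinerMapper.class);  // in-mapper combiner pattern
    job.setCombinerClass(WordCountCombiner.class);     // stand-alone combiner, run only if Hadoop decides to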

from: http://andreaiacono.blogspot.com/2014/05/more-about-hadoop-combiners.html
