In the last post and in the preceding one we saw how to write a MapReduce program for finding the top-n items of a data set. The difference between the two was that the first program (which we call basic) emitted to the reducers every single item read from input, while the second (which we call enhanced) performed a partial computation and emitted only a subset of the input. The enhanced top-n optimizes network transmissions (the fewer key-value pairs emitted, the less network bandwidth is used for transmitting them from mapper to reducer) and reduces the number of keys shuffled and sorted; but this comes at the cost of rewriting the mapper.

If we look at the code of the mapper of the enhanced top-n, we can see that it implements the idea behind the reducer: it uses a Map to keep a partial count of the words and emits every word only once; looking at the reducer's code, we see that it implements the same idea. If we could execute the code of the reducer of the basic top-n after the mapper has run on every machine (with its subset of data), we would obtain exactly the same result as rewriting the mapper as in the enhanced version. This is exactly what Hadoop combiners do: they're executed just after the mapper on every machine to improve performance. To tell Hadoop which class to use as a combiner, we call the Job.setCombinerClass() method.
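Here is a minimal sketch of the wiring. For illustration it uses Hadoop's stock TokenCounterMapper and IntSumReducer library classes on a plain word count rather than the top-n classes of the previous posts; summing partial counts is safe for a combiner, as we'll see below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountWithCombiner {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count with combiner");
        job.setJarByClass(WordCountWithCombiner.class);
        job.setMapperClass(TokenCounterMapper.class);
        // the same class serves as combiner and reducer: the combiner runs
        // on each mapper's local output before it is sent over the network
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}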

Caution: using the reducer as a combiner works only if the function we're computing is both commutative (a + b = b + a) and associative (a + (b + c) = (a + b) + c). 
Let's look at an example. Suppose we're analyzing the traffic of a website and we have an input file with the number of visits per day, one "YYYYMMDD value" pair per line:

20140401 100
20140331 1000
20140330 1300
20140329 5100
20140328 1200

We want to find the day with the highest number of visits. 
Let's say that we have two mappers: the first one receives the first three lines and the second receives the last two. If we write the mapper to emit every line, the reducer will evaluate something like this:

max(100, 1000, 1300, 5100, 1200) -> 5100

and the max is 5100. 
If we use the reducer as a combiner, the reducer will evaluate something like this:

max( max(100, 1000, 1300), max(5100, 1200)) -> max( 1300, 5100) -> 5100

because each of the two mappers will evaluate the max function locally. In this case the result is 5100 as well, since the function we're evaluating (max) is both commutative and associative.
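As an illustration, here is a sketch of a reducer for this max-visits job that is safe to reuse as a combiner (the class name is illustrative; it assumes the mapper emits every visit count under a single constant key, so that one reduce call sees all the values):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxVisitsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // find the max among the values this call (combiner or reducer) sees
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        // emitting only the local max is safe because
        // max(max(a, b), c) == max(a, b, c): Hadoop may run this class
        // zero, one or many times without changing the final result
        context.write(key, new IntWritable(max));
    }
}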

Let's say that now we need to compute the average number of visits per day. If we write the mapper to emit every line of the input file, the reducer will evaluate this:

mean(100, 1000, 1300, 5100, 1200) -> 1740

which is 1740. 
If we use the reducer as a combiner, the reducer will evaluate something like this:

mean( mean(100, 1000, 1300), mean(5100, 1200)) -> mean( 800, 3150) -> 1975

because each of the two mappers will evaluate the mean function locally. In this case the result will be 1975, which is obviously wrong: the mean of partial means is not the mean of the whole data set, because each partial mean throws away the number of values it was computed from.

So, if we're computing a commutative and associative function and we want to improve the performance of our job, we can use our reducer as a combiner; if we want to improve performance but the function we're computing is not commutative and associative, we have to rewrite the mapper or write a new combiner from scratch.
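For the mean, one common from-scratch combiner (a sketch, with illustrative class names) makes the intermediate function associative by carrying partial sums and counts instead of partial means: the mapper emits a "value,1" text pair for every input line, the combiner merges the pairs, and only the reducer divides. A custom Writable would be the cleaner production choice than parsing Text, but the idea is the same:

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MeanWithCombiner {

    // combiner: merges partial "sum,count" pairs; adding sums and adding
    // counts is commutative and associative, so Hadoop may run this zero,
    // one or many times without changing the final result
    public static class SumCountCombiner extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            long count = 0;
            for (Text value : values) {
                String[] parts = value.toString().split(",");
                sum += Long.parseLong(parts[0]);
                count += Long.parseLong(parts[1]);
            }
            context.write(key, new Text(sum + "," + count));
        }
    }

    // reducer: merges the remaining pairs and divides only at the very end
    public static class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            long count = 0;
            for (Text value : values) {
                String[] parts = value.toString().split(",");
                sum += Long.parseLong(parts[0]);
                count += Long.parseLong(parts[1]);
            }
            context.write(key, new DoubleWritable((double) sum / count));
        }
    }
}

With this scheme our example gives the right answer: the two combiners emit (2400,3) and (6300,2), and the reducer computes 8700 / 5 = 1740, the correct mean.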

from: http://andreaiacono.blogspot.com/2014/03/hadoop-combiners.html
