Hadoop Combiners
In the last post and in the one before it we saw how to write a MapReduce program for finding the top-n items of a data set. The difference between the two was that the first program (which we call basic) emitted every single item read from the input to the reducers, while the second (which we call enhanced) performed a partial computation and emitted only a subset of the input. The enhanced top-n optimizes network usage (the fewer key-value pairs emitted, the less network traffic from mapper to reducer) and reduces the number of keys to shuffle and sort; but this comes at the cost of rewriting the mapper.
If we look at the code of the mapper of the enhanced top-n, we can see that it implements the idea behind the reducer: it uses a Map to keep a partial count of the words and emits every word only once; looking at the reducer's code, we see that it implements the same idea. If we could execute the code of the reducer of the basic top-n after the mapper has run on every machine (on its subset of the data), we would obtain exactly the same result as rewriting the mapper as in the enhanced version. This is exactly what Hadoop combiners do: they run just after the mapper on every machine to improve performance. To tell Hadoop which class to use as a combiner, we call the Job.setCombinerClass() method.
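For reference, a minimal driver sketch could look like this (TopNMapper and TopNReducer are placeholder class names, not code from the original posts):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TopNDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "top-n");
        job.setJarByClass(TopNDriver.class);
        job.setMapperClass(TopNMapper.class);
        // the reducer class is reused as the combiner:
        job.setCombinerClass(TopNReducer.class);
        job.setReducerClass(TopNReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}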
Caution: using the reducer as a combiner works only if the function we're computing is both commutative (a + b = b + a) and associative (a + (b + c) = (a + b) + c).
Let's look at an example. Suppose we're analyzing the traffic of a website, and we have an input file with the number of visits per day, one "YYYYMMDD value" pair per line:
20140401 100
20140331 1000
20140330 1300
20140329 5100
20140328 1200
We want to find the day with the highest number of visits.
Let's say that we have two mappers; the first one receives the first three lines and the second receives the last two. If we write the mapper to emit every line, the reducer will evaluate something like this:
max(100, 1000, 1300, 5100, 1200) -> 5100
and the max is 5100.
If we use the reducer as a combiner, the reducer will evaluate something like this:
max( max(100, 1000, 1300), max(5100, 1200)) -> max( 1300, 5100) -> 5100
because each of the two mappers locally evaluates the max function. In this case the result is 5100 as well, since the function we're evaluating (max) is both commutative and associative.
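As a sketch, the mapper and reducer for this job could look like the following (the class names and the single "max" key are illustrative, not from the original post; to also report which day produced the maximum, the value would have to carry the date along, e.g. as a composite value):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Emits the visit count of every line under a single key,
// so that one reduce call sees all the values.
class MaxVisitsMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final Text KEY = new Text("max");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\\s+"); // "YYYYMMDD visits"
        context.write(KEY, new IntWritable(Integer.parseInt(fields[1])));
    }
}

// Computes the max of the values it receives; since max is commutative
// and associative, this same class can be set as the combiner.
class MaxVisitsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        context.write(key, new IntWritable(max));
    }
}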
Now let's say we need to compute the average number of visits per day. If we write the mapper to emit every line of the input file, the reducer will evaluate this:
mean(100, 1000, 1300, 5100, 1200) -> 1740
which is 1740.
If we use the reducer as a combiner, the reducer will evaluate something like this:
mean( mean(100, 1000, 1300), mean(5100, 1200)) -> mean( 800, 3150) -> 1975
because each of the two mappers locally evaluates the mean function. In this case the result is 1975, which is obviously wrong: the two partial means are computed over different numbers of values (three and two), so averaging them does not give the overall average.
So, if we're computing a commutative and associative function and we want to improve the performance of our job, we can use our reducer as a combiner; if we want to improve performance but the function we're computing is not commutative and associative, we have to rewrite the mapper or write a new combiner from scratch.
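For the average, a common workaround (not covered in the original post) is to write a new combiner that emits partial sums and counts instead of partial means; the division then happens only once, in the reducer, so the result stays exact. Here is a sketch, encoding each pair as a "sum,count" string for simplicity:

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emits "visits,1" so downstream stages can track both sum and count.
class MeanMapper extends Mapper<LongWritable, Text, Text, Text> {
    private static final Text KEY = new Text("mean");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\\s+"); // "YYYYMMDD visits"
        context.write(KEY, new Text(fields[1] + ",1"));
    }
}

// Combiner: collapses each mapper's pairs into one partial "sum,count" pair.
class MeanCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0, count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split(",");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new Text(sum + "," + count));
    }
}

// Reducer: merges the partial pairs and divides only at the end; in our
// example it computes (2400 + 6300) / (3 + 2) = 8700 / 5 = 1740, as expected.
class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0, count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split(",");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new DoubleWritable((double) sum / count));
    }
}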
from: http://andreaiacono.blogspot.com/2014/03/hadoop-combiners.html