MapReduce: number of mappers/reducers
It's the other way round: the number of mappers is decided based on the number of splits, and in reality it is the job of the InputFormat to create those splits.

To better understand this, assume you are processing data stored in MySQL using MR. Since there is no concept of blocks in this case, the theory that splits are always created based on the HDFS block fails. Right? What about split creation then? One possibility is to create splits based on ranges of rows in your MySQL table (and this is what DBInputFormat does). It is only for the InputFormats based on FileInputFormat that splits are derived from HDFS blocks. There is a fundamental difference between an MR split and an HDFS block: a block is a physical chunk of data, while a split is just a logical chunk that gets fed to a mapper.

Coming back to your question: Hadoop allows far more than 200 mappers. Having said that, it doesn't make much sense to have 200 mappers for just 500 MB of data. Always remember that when you talk about Hadoop, you are dealing with very large data, and sending just 2.5 MB to each mapper would be overkill. And yes, if there are no free CPU slots, some mappers may run only after the current mappers complete. But the MR framework is very intelligent and tries its best to avoid this kind of situation: if the machine holding the data to be processed has no free CPU slots, the data is moved to a nearby node where free slots are available and processed there.

HTH
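For file-based input, the split size (and hence the mapper count) can be nudged with the min/max split-size bounds. Below is a minimal sketch using the new mapreduce API; the job name and input path are placeholders, and the rest of the job setup is elided:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "split-size-sketch"); // placeholder job name

        FileInputFormat.addInputPath(job, new Path("/data/input")); // placeholder path

        // The InputFormat's getSplits() ultimately decides the number of map
        // tasks; for FileInputFormat subclasses these bounds steer the split
        // size (with the HDFS block size as the default upper bound).
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // >= 64 MB per split
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // <= 128 MB per split

        // ... set mapper/reducer classes and the output path, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With bounds like these, the 500 MB input from the question would yield roughly 4-8 splits rather than 200 mappers.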
How many mappers/reducers should be set when configuring Hadoop cluster?
There is no formula. It depends on how many cores and how much memory you have. In general, the number of mappers plus the number of reducers should not exceed the number of cores; keep in mind that the machine is also running the TaskTracker and DataNode daemons. One general suggestion is to have more mappers than reducers. If I were you, I would run one of my typical jobs with a reasonable amount of data and try it out.
For a normal 7200 rpm disk, 2-3 mappers is a good number. For your system, with 48 GB of memory and 16 CPU threads, I/O will likely be the bottleneck. I suggest getting multiple disks for each node and setting them up as JBOD.
Quoting from "Hadoop: The Definitive Guide", 3rd edition, page 306:
Because MapReduce jobs are normally I/O-bound, it makes sense to have more tasks than processors to get better utilization.
The amount of oversubscription depends on the CPU utilization of jobs you run, but a good rule of thumb is to have a factor of between one and two more tasks (counting both map and reduce tasks) than processors.
A processor in the quote above is equivalent to one logical core.
But this is just a rule of thumb; every use case differs, so some testing needs to be performed. Still, this number is a good starting point, as the sketch below illustrates.
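As a purely illustrative back-of-the-envelope helper (not part of any Hadoop API), the rule of thumb above can be turned into starting values for the per-node slot settings mentioned further down:

```java
// Illustrative only: derive starting per-node task-slot settings from the
// "one to two tasks per processor" rule of thumb quoted above.
public class SlotRuleOfThumb {
    public static void main(String[] args) {
        int logicalCores = 16;         // e.g. the 16-thread machine discussed above
        double oversubscription = 1.5; // pick a factor between 1 and 2

        int totalSlots = (int) (logicalCores * oversubscription); // 24 task slots
        int mapSlots = totalSlots * 2 / 3;        // favor mappers over reducers
        int reduceSlots = totalSlots - mapSlots;

        System.out.println("mapred.tasktracker.map.tasks.maximum    = " + mapSlots);
        System.out.println("mapred.tasktracker.reduce.tasks.maximum = " + reduceSlots);
    }
}
```

The 2:1 map-to-reduce split is just one common convention; adjust it after measuring your real jobs.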
The number of mappers is decided in accordance with the data locality principle described earlier. Data locality principle: Hadoop tries its best to run map tasks on nodes where the data is present locally, to optimize network and inter-node communication latency. Since the input data is split into pieces and fed to different map tasks, it is desirable to have all the data fed to a given map task available on a single node. Since HDFS only guarantees that data of size equal to its block size (64 MB) is present on one node, it is advised to make the split size equal to the HDFS block size so that the map task can take advantage of this data localization; hence, roughly 64 MB of data per mapper (for example, 512 MB of input yields 8 splits and therefore 8 mappers). If some mappers run for only a very short time, try to bring down the number of mappers and make them run longer, for a minute or so.
The number of reducers should be slightly less than the number of reduce slots in the cluster (the concept of slots comes in with a pre-configuration in the job/task tracker properties while configuring the cluster), so that all the reducers finish in one wave and make full use of the cluster resources. For example, with 10 nodes at 2 reduce slots each, about 19 reducers would finish in a single wave.
mapred.tasktracker.reduce.tasks.maximum
mapred.tasktracker.map.tasks.maximum
in mapred-site.xml
These settings apply to all jobs. To set the values for a specific job, you can use
mapred.reduce.tasks
mapred.map.tasks
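A hedged sketch of setting these per job from code (new API; the job name is a placeholder). Note that mapred.map.tasks is only a hint the InputFormat may ignore, while the reducer count is honored exactly:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class PerJobTaskCounts {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // A hint only: the InputFormat's getSplits() still decides the
        // actual number of map tasks.
        conf.setInt("mapred.map.tasks", 20);

        Job job = new Job(conf, "per-job-task-counts"); // placeholder job name
        // The reducer count is taken literally.
        job.setNumReduceTasks(7);

        // ... configure input/output paths and classes, then job.waitForCompletion(true);
    }
}
```

Jobs that go through Tool/GenericOptionsParser can also take these values on the command line, e.g. -D mapred.reduce.tasks=7.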
Liyin Tang added a comment - 13/Nov/10 01:16
I just finished converting common join into map join based on the file size. There are three flags to control this optimization.
1) set hive.auto.convert.join = true; This means the optimization is enabled. By default this flag is disabled right now, in order not to break any existing test cases. I also added test cases auto_join0.q - auto_join25.q, which cover this optimization code.
2) set hive.hashtable.max.memory.usage = 0.9; This means that if the memory usage of the local task exceeds 90% of its heap size, the local task aborts itself. The Driver will know the local work failed and won't submit the MapJoinTask (a map-only MapRedTask) to Hadoop; instead, it will submit the original CommonJoinTask to Hadoop to run.
3) set hive.smalltable.filesize = 25000000L; This means that if the total size of the small-table files is less than 25 MB, the map join task runs; otherwise, the original common join task runs.