ETL: HiveSql Tuning (Setting the Number of Map and Reduce Tasks)
Preface:
Recently our HiveSQL jobs have been running noticeably slowly. Earlier posts already covered the left join and union optimizations; this one analyzes raising or lowering the number of reduce tasks (and later, map tasks) to speed HiveSQL up.
Reference: http://www.cnblogs.com/liqiu/p/4873238.html
Analysis:
select s.id,o.order_id from sight s left join order_sight o on o.sight_id=s.id where s.id=9718 and o.create_time = '2015-10-10';
As the previous post explained, this query uses 8 mappers and 1 reducer and runs in 52 seconds. For the detailed record, see: http://www.cnblogs.com/liqiu/p/4873238.html
Increasing the Number of Reducers:
First, how Hive picks the default number of reducers. Two parameters control it: hive.exec.reducers.bytes.per.reducer, the amount of data each reduce task should process (default 1000^3 = 1G), and hive.exec.reducers.max, the maximum number of reducers per job (default 999).
In other words, if the total reduce input (i.e., the map output) is no larger than 1G, only one reduce task is launched.
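Putting the two parameters together, Hive's estimate amounts to this back-of-the-envelope formula (a sketch of the documented behavior; Hive's exact input-size accounting may differ slightly):
reducers = min(ceil(total_input_size / hive.exec.reducers.bytes.per.reducer), hive.exec.reducers.max)
At the defaults, a 2.4G input gives min(ceil(2.4G / 1G), 999) = 3.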
And since the table b2c_money_trace is about 2.4G, this job gets 3 reducers, for example:
hive> select count(1) from b2c_money_trace where operate_time = '2015-10-10' group by operate_time;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1434099279301_3623421, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3623421/
Kill Command = /home/q/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1434099279301_3623421
Hadoop job information for Stage-1: number of mappers: 20; number of reducers: 3
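Rather than hard-coding a reducer count, you can also steer this estimate by lowering the per-reducer load. A hedged example (the 500000000 value is purely illustrative, not from the original run):
-- aim for ~500M of input per reducer; 2.4G / 500M rounds up to 5 reducers
set hive.exec.reducers.bytes.per.reducer=500000000;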
Now, back to the example we started with; force the reducer count:
set mapred.reduce.tasks = 8;
The execution result:
hive> set mapred.reduce.tasks = 8;
hive> select s.id,o.order_id from sight s left join order_sight o on o.sight_id=s.id where s.id=9718 and o.create_time = '2015-10-10';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 8
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Cannot run job locally: Input Size (= 380265495) is larger than hive.exec.mode.local.auto.inputbytes.max (= 50000000)
Starting Job = job_1434099279301_3618454, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3618454/
Kill Command = /home/q/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1434099279301_3618454
Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 8
2015-10-14 15:31:55,570 Stage-1 map = 0%, reduce = 0%
2015-10-14 15:32:01,734 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 4.63 sec
2015-10-14 15:32:02,760 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 10.93 sec
2015-10-14 15:32:03,786 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 10.93 sec
2015-10-14 15:32:04,812 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:05,837 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:06,892 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:07,947 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:08,983 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:10,039 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:11,088 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:12,114 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.94 sec
2015-10-14 15:32:13,143 Stage-1 map = 75%, reduce = 19%, Cumulative CPU 24.28 sec
2015-10-14 15:32:14,170 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 27.94 sec
2015-10-14 15:32:15,197 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 27.94 sec
2015-10-14 15:32:16,224 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 28.58 sec
2015-10-14 15:32:17,250 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 28.95 sec
2015-10-14 15:32:18,277 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 37.02 sec
2015-10-14 15:32:19,305 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 48.93 sec
2015-10-14 15:32:20,332 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 49.31 sec
2015-10-14 15:32:21,359 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 57.99 sec
2015-10-14 15:32:22,385 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 61.88 sec
2015-10-14 15:32:23,411 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 71.56 sec
2015-10-14 15:32:24,435 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 71.56 sec
MapReduce Total cumulative CPU time: 1 minutes 11 seconds 560 msec
Ended Job = job_1434099279301_3618454
MapReduce Jobs Launched:
Job 0: Map: 8 Reduce: 8 Cumulative CPU: 71.56 sec HDFS Read: 380267639 HDFS Write: 330 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 11 seconds 560 msec
OK
9718 210296076
9718 210299105
9718 210295344
9718 210295277
9718 210295586
9718 210295050
9718 210301363
9718 210297733
9718 210298066
9718 210295566
9718 210298219
9718 210296438
9718 210298328
9718 210298008
9718 210299712
9718 210295239
9718 210297567
9718 210295525
9718 210294949
9718 210296318
9718 210294421
9718 210295840
Time taken: 36.978 seconds, Fetched: 22 row(s)
With 8 reducers the query finished in about 37 seconds instead of 52, a clear improvement in the reduce phase.
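One caveat worth adding (standard Hive behavior, not shown in the original run): mapred.reduce.tasks stays set for the rest of the session, so reset it once you are done experimenting:
-- -1 tells Hive to go back to estimating the reducer count from input size
set mapred.reduce.tasks = -1;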
Increasing the Number of Mappers:
Table size:
The mapper count can't be illustrated with the example above, so look at this table instead:
hive> dfs -ls -h /user/ticketdev/hive/warehouse/business_mirror.db/b2c_money_trace;
Found 4 items
-rw-r--r-- 3 ticketdev ticketdev 600.0 M 2015-10-14 02:13 /user/ticketdev/hive/warehouse/business_mirror.db/b2c_money_trace/24f19a74-ca91-4fb2-9b79-1b1235f1c6f8
-rw-r--r-- 3 ticketdev ticketdev 597.2 M 2015-10-14 02:13 /user/ticketdev/hive/warehouse/business_mirror.db/b2c_money_trace/34ca13a3-de44-402e-9548-e6b9f92fde67
-rw-r--r-- 3 ticketdev ticketdev 590.6 M 2015-10-14 02:13 /user/ticketdev/hive/warehouse/business_mirror.db/b2c_money_trace/ac249f44-60eb-4bf7-9c1a-6f643873b823
-rw-r--r-- 3 ticketdev ticketdev 606.5 M 2015-10-14 02:13 /user/ticketdev/hive/warehouse/business_mirror.db/b2c_money_trace/f587fec9-60da-4f18-8b47-406999d95fd1
That is about 2.4G in total (600.0 + 597.2 + 590.6 + 606.5 M).
Block size:
hive> set dfs.block.size;
dfs.block.size=134217728
Note: 134217728 bytes is exactly 128M (128 × 1024 × 1024).
Map count:
There are four files of about 600M each, and each block is 128M, so each file yields ceil(600/128) = 5 splits, for 5 × 4 = 20 mappers:
hive> select count(1) from b2c_money_trace;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1434099279301_3620170, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3620170/
Kill Command = /home/q/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1434099279301_3620170
Hadoop job information for Stage-1: number of mappers: 20; number of reducers: 1
Note the last log line above: the number of mappers is 20.
Now lower the split size used to carve the input into map tasks:
set mapred.max.split.size=50000000;
set mapred.min.split.size.per.node=50000000;
set mapred.min.split.size.per.rack=50000000;
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
Roughly, what these mean:
50000000 is about 50M;
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat; tells Hive to merge small files before execution (it does not actually kick in here, since these files are large);
the other three parameters tell Hive to split the input at roughly 50M boundaries.
Execution result:
hive> select count(1) from b2c_money_trace;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1434099279301_3620223, Tracking URL = http://l-hdpm4.data.cn5.qunar.com:9981/proxy/application_1434099279301_3620223/
Kill Command = /home/q/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1434099279301_3620223
Hadoop job information for Stage-1: number of mappers: 36; number of reducers: 1
Lowering the split size to ~50M makes the same 2.4G of input produce 36 mappers instead of 20; note the last log line above.
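The same knobs also work in the opposite direction: on a directory full of small files, raise the split size to get fewer mappers. A sketch under that assumption (the 256000000 value is illustrative, not from the original run):
-- pack input into ~256M splits; CombineHiveInputFormat merges small files into one split
set mapred.max.split.size=256000000;
set mapred.min.split.size.per.node=256000000;
set mapred.min.split.size.per.rack=256000000;
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;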
Conclusion:
More map and reduce tasks are not automatically better: each extra task consumes cluster resources, and the overall processing time does not necessarily improve. Tune the counts to a reasonable level based on the actual data size and cluster load.
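As a recap, here are the knobs used in this post collected in one place (the values are the ones from the experiments above; pick your own based on your data):
-- Reducer-side knobs
set hive.exec.reducers.bytes.per.reducer=1000000000;
set hive.exec.reducers.max=999;
set mapred.reduce.tasks=8;
-- Mapper-side knobs (split sizing)
set mapred.max.split.size=50000000;
set mapred.min.split.size.per.node=50000000;
set mapred.min.split.size.per.rack=50000000;
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;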
Reference:
http://lxw1234.com/archives/2015/04/15.htm