Sqoop import MySQL to Hive table: GC overhead limit exceeded
1. Scenario description
When I used Sqoop to import a MySQL table into Hive, I got the following error:
// :: WARN hcat.SqoopHCatUtilities: The Sqoop job can fail if types are not assignment compatible
// :: WARN hcat.SqoopHCatUtilities: The HCatalog field submername has type string. Expected = varchar based on database column type : VARCHAR
// :: WARN hcat.SqoopHCatUtilities: The Sqoop job can fail if types are not assignment compatible
// :: INFO mapreduce.DataDrivenImportJob: Configuring mapper for HCatalog import job
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at hadoop-namenode01/192.168.1.101:
// :: WARN conf.HiveConf: HiveConf of name hive.server2.webui.host.port does not exist
// :: INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1562229385371_50086
// :: INFO impl.YarnClientImpl: Submitted application application_1562229385371_50086
// :: INFO mapreduce.Job: The url to track the job: http://hadoop-namenode01:8088/proxy/application_1562229385371_50086/
// :: INFO mapreduce.Job: Running job: job_1562229385371_50086
// :: INFO hive.metastore: Closed a connection to metastore, current connections:
// :: INFO mapreduce.Job: Job job_1562229385371_50086 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Task Id : attempt_1562229385371_50086_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded
Why does the Sqoop import throw this exception?
The answer is that during the import the MySQL JDBC driver (not Sqoop itself) fetches the whole result set in one shot and tries to hold it all in memory. This exhausts the mapper's heap and throws the error. To overcome it, you need to tell the database driver to return the data in batches: appending "?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true" to the JDBC connection string tells it to fetch 10000 rows per batch, as the Solution 1 scripts below show.
The script I use to import is as follows:
File: sqoop_order_detail.sh
#!/bin/bash
/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect jdbc:mysql://lenmom-mysql:3306/inventory \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m
The target MySQL table has 10 billion records.
2. Solution
2.1 Solution 1
Modify the MySQL JDBC URL so that the driver streams the data instead of buffering it, by appending the following parameters:
?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true
The defaultFetchSize value can be adjusted to your situation. In my case, the whole script is:
#!/bin/bash
/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true\&defaultFetchSize=10000\&useCursorFetch=true\&useUnicode=yes\&characterEncoding=utf8 \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m
Don't forget to escape each & in the shell script, or alternatively put the whole JDBC URL in double quotes instead of escaping:
#!/bin/bash
/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect "jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true&useUnicode=yes&characterEncoding=utf8" \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m
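Before kicking off the full import against such a large table, it can be worth a quick check that MySQL accepts the modified connection string. The following is only a sketch using sqoop eval with the same host, credentials and table as above; the LIMIT query is purely illustrative.
# sanity check (sketch): run a small query through the streaming JDBC URL before the real import
sqoop eval \
--connect "jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true" \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--query "SELECT * FROM order_detail LIMIT 10"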
2.2 Solution 2
sqoop import -Dmapreduce.map.memory.mb= -Dmapreduce.map.java.opts=-Xmx1600m -Dmapreduce.task.io.sort.mb=
The above parameters need to be tuned according to the data volume for a successful Sqoop pull.
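Since the concrete numbers are left blank above, here is one illustrative invocation. The -Xmx1600m heap comes from the command above; the 2048 MB map container, the 256 MB sort buffer and the partition value are only assumptions and must be sized to your cluster. Note that the -D options have to come before the tool-specific arguments.
# illustrative values only: 2048 MB map container, 1600 MB heap inside it, 256 MB sort buffer
# the partition value 20190708 is a placeholder; use the real pt_log_d value
sqoop import \
-Dmapreduce.map.memory.mb=2048 \
-Dmapreduce.map.java.opts=-Xmx1600m \
-Dmapreduce.task.io.sort.mb=256 \
--connect jdbc:mysql://lenmom-mysql:3306/inventory \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values 20190708 \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")'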
2.3 Solution 3
Increase the number of mappers (the default is 4; it should not be greater than the number of DataNodes):
sqoop job --exec lenmom-job -- --num-mappers ;
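For example, the saved job can be re-run with an explicit mapper count. The value 8 below is an assumption; keep it no higher than the number of DataNodes, and remember that with more than one mapper Sqoop needs a primary key on the table or an explicit --split-by column.
# illustrative: re-run the saved job with 8 mappers (8 is an assumed value)
sqoop job --exec lenmom-job -- --num-mappers 8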