1. Scenario description

When I used Sqoop to import a MySQL table into Hive, I got the following error:

WARN hcat.SqoopHCatUtilities: The Sqoop job can fail if types are not assignment compatible
WARN hcat.SqoopHCatUtilities: The HCatalog field submername has type string. Expected = varchar based on database column type : VARCHAR
WARN hcat.SqoopHCatUtilities: The Sqoop job can fail if types are not assignment compatible
INFO mapreduce.DataDrivenImportJob: Configuring mapper for HCatalog import job
INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
INFO client.RMProxy: Connecting to ResourceManager at hadoop-namenode01/192.168.1.101:
WARN conf.HiveConf: HiveConf of name hive.server2.webui.host.port does not exist
INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
INFO db.DBInputFormat: Using read commited transaction isolation
INFO mapreduce.JobSubmitter: number of splits:
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1562229385371_50086
INFO impl.YarnClientImpl: Submitted application application_1562229385371_50086
INFO mapreduce.Job: The url to track the job: http://hadoop-namenode01:8088/proxy/application_1562229385371_50086/
INFO mapreduce.Job: Running job: job_1562229385371_50086
INFO hive.metastore: Closed a connection to metastore, current connections:
INFO mapreduce.Job: Job job_1562229385371_50086 running in uber mode : false
INFO mapreduce.Job: map % reduce %
INFO mapreduce.Job: Task Id : attempt_1562229385371_50086_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded

Why does the Sqoop import throw this exception?
The answer: during the import, the MySQL JDBC driver (not Sqoop itself) fetches all the rows in one shot and tries to hold the entire result set in memory. This exhausts the mapper's heap and throws the error above. To overcome it, you need to tell the driver to return the data in batches. Appending the parameters "?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true" to the JDBC connection string tells the driver to fetch 10000 rows per batch.

The script I used for the import is as follows:

File: sqoop_order_detail.sh

#!/bin/bash

/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect jdbc:mysql://lenmom-mysql:3306/inventory \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m

The source MySQL table has 10 billion records.

2. Solutions

2.1 Solution 1

Modify the MySQL JDBC URL so the driver streams rows instead of buffering the whole result set, by appending the following parameters:

?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true

The defaultFetchSize value can be adjusted to your workload. In my case, the whole script is:

#!/bin/bash

/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true\&defaultFetchSize=10000\&useCursorFetch=true\&useUnicode=yes\&characterEncoding=utf8 \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m

Don't forget to escape the & characters in the shell script; alternatively, quote the whole JDBC URL instead of escaping:

#!/bin/bash

/home/lenmom/sqoop-1.4./bin/sqoop import \
--connect "jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true&useUnicode=yes&characterEncoding=utf8&characterEncoding=utf8" \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail \
--hcatalog-partition-keys pt_log_d \
--hcatalog-partition-values \
--hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="SNAPPY")' \
-m
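
Solution 1 can also be expressed through Sqoop's own --fetch-size import argument instead of the defaultFetchSize URL parameter. A minimal sketch (the mapper count and fetch size below are example values, and with MySQL Connector/J the fetch size is only honored when useCursorFetch=true is present in the URL):

#!/bin/bash

# Sketch: let Sqoop set the JDBC fetch size itself.
# useCursorFetch=true is still required for MySQL to honor it.
sqoop import \
--connect "jdbc:mysql://lenmom-mysql:3306/inventory?useCursorFetch=true" \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--fetch-size 10000 \
-m 4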

2.2 Solution 2

sqoop import -Dmapreduce.map.memory.mb= -Dmapreduce.map.java.opts=-Xmx1600m -Dmapreduce.task.io.sort.mb=

The above parameters need to be tuned to the data volume for a successful Sqoop pull. Note that these generic -D options must appear immediately after the import command, before the tool-specific arguments.
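
For illustration, here is a hedged version of the full command with example values filled in; 2048 MB map containers, a 1600 MB heap, and a 256 MB sort buffer are assumed starting points, not recommendations:

#!/bin/bash

# Example values only -- keep the heap (-Xmx) at roughly 80% of the container size.
sqoop import \
-Dmapreduce.map.memory.mb=2048 \
-Dmapreduce.map.java.opts=-Xmx1600m \
-Dmapreduce.task.io.sort.mb=256 \
--connect "jdbc:mysql://lenmom-mysql:3306/inventory?dontTrackOpenResources=true&defaultFetchSize=10000&useCursorFetch=true" \
--username root --password root \
--driver com.mysql.jdbc.Driver \
--table order_detail \
--hcatalog-database orc \
--hcatalog-table order_detail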

2.3 Solution 3

Increase the number of mappers (the default is 4; it should not be greater than the number of DataNodes):

sqoop job --exec lenmom-job -- --num-mappers
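
For example, assuming 8 mappers and a numeric primary key column id in order_detail to split on (both values are assumptions for illustration):

# Re-run the saved job with 8 parallel mappers, splitting on the assumed id column.
sqoop job --exec lenmom-job -- --num-mappers 8 --split-by id

Keep in mind that each mapper opens its own connection to MySQL, so the mapper count must stay within the database's connection limit.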

Reference:

https://stackoverflow.com/questions/26484873/cloudera-settings-sqoop-import-gives-java-heap-space-error-and-gc-overhead-limit
