Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://slaver1:9000/user/hadoop/tb_user already exists
1. When I was first learning Sqoop, the MySQL-to-HDFS import command ran without any data showing up on HDFS. Today, while tracking down this bug, I will solve that earlier problem as well; I wrote about it before at http://www.cnblogs.com/biehongli/p/8039128.html.
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$ bin/sqoop import \
> --connect jdbc:mysql://slaver1:3306/test \
> --username root \
> --password \
> --table tb_user \
> --m
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.-cdh5.3.6
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/soft/hadoop-2.5.-cdh5.3.6
Note: /tmp/sqoop-hadoop/compile/cb147e9deb144db0034d6f38cb47ad68/tb_user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/cb147e9deb144db0034d6f38cb47ad68/tb_user.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at slaver1/192.168.19.131:
// :: WARN security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://slaver1:9000/user/hadoop/tb_user already exists
// :: ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://slaver1:9000/user/hadoop/tb_user already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:)
at org.apache.sqoop.Sqoop.run(Sqoop.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.main(Sqoop.java:)
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$
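This failure comes from Hadoop itself rather than from Sqoop: the import runs as a MapReduce job, and FileOutputFormat.checkOutputSpecs (visible at the top of the stack trace) refuses to write into an output directory that already exists. When no --target-dir is given, Sqoop defaults to /user/<current user>/<table name> on HDFS, which is exactly the hdfs://slaver1:9000/user/hadoop/tb_user path in the error. A quick way to confirm the directory is left over from an earlier run (a minimal sketch, assuming the default path taken from the error message):

[hadoop@slaver1 ~]$ hdfs dfs -test -d /user/hadoop/tb_user && hdfs dfs -ls /user/hadoop/tb_user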
2. The error says the output directory already exists: hdfs://slaver1:9000/user/hadoop/tb_user already exists. The path in the message pinpoints the problem; after deleting that path and re-running the command, the MySQL table data imported into the HDFS distributed file system successfully.
[hadoop@slaver1 ~]$ hdfs dfs -rm -r /user/hadoop/tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /user/hadoop/tb_user
[hadoop@slaver1 ~]$
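Note that the hdfs dfs -rm -r above went through the HDFS trash (see the TrashPolicyDefault line), so the data lingers until the trash is emptied; adding -skipTrash deletes it immediately. Alternatively, you can let Sqoop clean up for you: Sqoop 1.4.x releases ship a --delete-target-dir option that removes the target directory before the import starts, which makes the command safe to re-run. A sketch under my own assumptions (the explicit --target-dir and the mapper count of 1 are illustrative, since those values were elided above; -P prompts for the password instead of exposing it on the command line, which also addresses the earlier BaseSqoopTool warning):

bin/sqoop import \
--connect jdbc:mysql://slaver1:3306/test \
--username root \
-P \
--table tb_user \
--target-dir /user/hadoop/tb_user \
--delete-target-dir \
-m 1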
3. Run the import again, as shown below:
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$ bin/sqoop import \
> --connect jdbc:mysql://slaver1:3306/test \
> --username root \
> --password \
> --table tb_user \
> --m
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.-cdh5.3.6
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/soft/hadoop-2.5.-cdh5.3.6
Note: /tmp/sqoop-hadoop/compile/464d9aff412a6285fd9f6f4c6d16b4e6/tb_user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/464d9aff412a6285fd9f6f4c6d16b4e6/tb_user.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at slaver1/192.168.19.131:
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1526642793183_0001
// :: INFO impl.YarnClientImpl: Submitted application application_1526642793183_0001
// :: INFO mapreduce.Job: The url to track the job: http://slaver1:8088/proxy/application_1526642793183_0001/
// :: INFO mapreduce.Job: Running job: job_1526642793183_0001
// :: INFO mapreduce.Job: Job job_1526642793183_0001 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1526642793183_0001 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-seconds taken by all map tasks=
Total megabyte-seconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 38.0904 seconds (4.0168 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$
4. The imported data looks like this:
[hadoop@slaver1 ~]$ hdfs dfs -cat /user/hadoop/tb_user/part-m-
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
,张三,
,李四,
,王五,
,小明,
,小红,
,小别,
,,
,,
,,
,,
[hadoop@slaver1 ~]$
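As a final sanity check, you can compare the MySQL row count against the number of lines Sqoop wrote to HDFS (a rough check that assumes plain-text output with one record per line; the part file names are globbed because the exact suffix was truncated in the listing above):

[hadoop@slaver1 ~]$ mysql -h slaver1 -u root -p -e 'SELECT COUNT(*) FROM test.tb_user;'
[hadoop@slaver1 ~]$ hdfs dfs -cat /user/hadoop/tb_user/part-m-* | wc -l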