Importing MySQL data into a Hive table with Sqoop

First, create the table in MySQL:

CREATE TABLE `sqoop_test` (
  `id` int DEFAULT NULL,
  `name` varchar(255) DEFAULT NULL,
  `age` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Insert some test data:

fz
dx
test
test_add
test_add-
test_add_2
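
The INSERT statements themselves are not shown above; a minimal sketch that would produce rows like these, with the id and age values chosen arbitrarily for illustration:

INSERT INTO `sqoop_test` (`id`, `name`, `age`) VALUES
(1, 'fz', 13),
(2, 'dx', 18),
(3, 'test', 21),
(4, 'test_add', 22),
(5, 'test_add-', 23),
(6, 'test_add_2', 24);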

Create a table in Hive with the same structure as the MySQL table:

hive> create external table sqoop_test_table(id int,name string,age int)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> STORED AS TEXTFILE;
OK
Time taken: 0.083 seconds
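
Note that FIELDS TERMINATED BY ',' must match the --fields-terminated-by ',' passed to Sqoop below; if the delimiters disagree, Hive reads each line as a single field and pads the remaining columns with NULL. The table definition can be double-checked with a standard Hive command:

hive> describe formatted sqoop_test_table;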

Start the import:

sqoop import --connect jdbc:mysql://localhost:3306/sqooptest --username root --password 123qwe --table sqoop_test \
--hive-import --hive-overwrite --hive-table sqoop_test_table --fields-terminated-by ',' -m 1
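
Here --hive-import tells Sqoop to load the imported data into Hive, --hive-overwrite replaces any existing contents of the target table, and -m 1 runs the job with a single map task, which is required here because sqoop_test has no primary key for Sqoop to split on. As the log below warns, putting --password on the command line is insecure; an equivalent invocation that prompts for the password interactively:

sqoop import --connect jdbc:mysql://localhost:3306/sqooptest --username root -P --table sqoop_test \
--hive-import --hive-overwrite --hive-table sqoop_test_table --fields-terminated-by ',' -m 1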
EFdeMacBook-Pro:jarfile FengZhen$ sqoop import --connect jdbc:mysql://localhost:3306/sqooptest --username root --password 123qwe --table sqoop_test --hive-import --hive-overwrite --hive-table sqoop_test_table --fields-terminated-by ',' -m 1
Warning: /Users/FengZhen/Desktop/Hadoop/sqoop-1.4..bin__hadoop-2.0.-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /Users/FengZhen/Desktop/Hadoop/sqoop-1.4..bin__hadoop-2.0.-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/FengZhen/Desktop/Hadoop/hadoop-2.8./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/FengZhen/Desktop/Hadoop/hbase-1.3./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sqoop_test` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sqoop_test` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /Users/FengZhen/Desktop/Hadoop/hadoop-2.8.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-FengZhen/compile/241f28a04b0ece18cd4a07bd6939d50a/sqoop_test.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of sqoop_test
// :: INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1505439184374_0001
// :: INFO impl.YarnClientImpl: Submitted application application_1505439184374_0001
// :: INFO mapreduce.Job: The url to track the job: http://192.168.1.64:8088/proxy/application_1505439184374_0001/
// :: INFO mapreduce.Job: Running job: job_1505439184374_0001
// :: INFO mapreduce.Job: Job job_1505439184374_0001 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1505439184374_0001 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 22.0614 seconds (3.2636 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sqoop_test` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
// :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/Users/FengZhen/Desktop/Hadoop/hadoop-2.8./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
// :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/Users/FengZhen/Desktop/Hadoop/hbase-1.3./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
// :: INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
// :: INFO hive.HiveImport: SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO hive.HiveImport: // :: WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
// :: INFO hive.HiveImport:
// :: INFO hive.HiveImport: Logging initialized using configuration in file:/Users/FengZhen/Desktop/Hadoop/hive/apache-hive-1.2.-bin/conf/hive-log4j.properties
// :: INFO hive.HiveImport: OK
// :: INFO hive.HiveImport: Time taken: 1.502 seconds
// :: INFO hive.HiveImport: Loading data to table default.sqoop_test_table
// :: INFO hive.HiveImport: Table default.sqoop_test_table stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO hive.HiveImport: OK
// :: INFO hive.HiveImport: Time taken: 0.762 seconds
// :: INFO hive.HiveImport: Hive import complete.
// :: INFO hive.HiveImport: Export directory is contains the _SUCCESS file only, removing the directory.
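
As the log shows, --hive-import is a two-phase operation: Sqoop first runs an ordinary MapReduce import into a staging directory in HDFS (by default the table name under the user's HDFS home, which here would be /user/FengZhen/sqoop_test), then calls Hive to move the files into the warehouse, roughly equivalent to:

hive> LOAD DATA INPATH '/user/FengZhen/sqoop_test' OVERWRITE INTO TABLE sqoop_test_table;

Once the load succeeds only the _SUCCESS marker is left behind, which is why the final log line removes the now-empty export directory.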

After the import succeeds, a data file is produced in HDFS under /user/hive/warehouse/sqoop_test_table:

EFdeMacBook-Pro:jarfile FengZhen$ hadoop fs -ls /user/hive/warehouse/sqoop_test_table
Found items
-rwxr-xr-x FengZhen supergroup /user/hive/warehouse/sqoop_test_table/part-m-
EFdeMacBook-Pro:jarfile FengZhen$ hadoop fs -text /user/hive/warehouse/sqoop_test_table/part-m-
,fz,
,dx,
,test,
,test_add,
,test_add-,
,test_add_2,
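
The /user/hive/warehouse prefix comes from hive.metastore.warehouse.dir, whose default is /user/hive/warehouse; if your files land somewhere else, check the effective value from the Hive CLI:

hive> set hive.metastore.warehouse.dir;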

Query the table in Hive:

hive> select * from sqoop_test_table;
OK
fz
dx
test
test_add
test_add-
test_add_2
Time taken: 0.507 seconds, Fetched: row(s)
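
As a final sanity check, the row counts on both sides can be compared (a simple sketch):

hive> select count(*) from sqoop_test_table;
mysql> SELECT COUNT(*) FROM sqoop_test;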

Done.
