Installing Hive

1. Download hive-2.1.1 (to pair with Hadoop 2.7.3).

2. Extract it to the following directory:

/wdcloud/app/hive-2.1.1
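
A download-and-extract sketch, assuming the Apache archive URL (the mirror and exact tarball name are assumptions; adjust to your environment):

wget http://archive.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
tar -zxvf apache-hive-2.1.1-bin.tar.gz
mv apache-hive-2.1.1-bin /wdcloud/app/hive-2.1.1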

3. Configure environment variables.
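
A minimal sketch of the exports, assuming they are appended to /etc/profile or ~/.bashrc (adjust paths to your layout):

# Hive home and PATH; Hadoop variables are assumed to be configured already
export HIVE_HOME=/wdcloud/app/hive-2.1.1
export PATH=$PATH:$HIVE_HOME/bin

# reload whichever file was edited
source /etc/profile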

4. In MySQL, create the metastore database hive_metastore with latin1 encoding, and grant privileges:

grant all on hive_metastore.* to 'root'@'%' IDENTIFIED BY 'weidong' with grant option;
flush privileges;
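
The GRANT above assumes the database already exists; a sketch of creating it from the shell, using the same host and credentials as the JDBC URL configured below:

mysql -h 192.168.200.250 -u root -p -e "CREATE DATABASE hive_metastore DEFAULT CHARACTER SET latin1;"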

5. Create a new hive-site.xml with the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <!-- Hive Execution Parameters -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.200.250:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>weidong</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateTables</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/hive/warehouse</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/wdcloud/app/hive-2.1.1/logs</value>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>/wdcloud/app/hbase-1.1.6/lib</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.200.123:9083</value>
  </property>
</configuration>

6. Configure hive-env.sh.
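
The post does not show the file's contents; a minimal sketch, assuming hive-env.sh is created from the bundled template and that Hadoop is installed at /wdcloud/app/hadoop-2.7.3 (as the Sqoop log below suggests):

cd /wdcloud/app/hive-2.1.1/conf
cp hive-env.sh.template hive-env.sh

# then, inside hive-env.sh:
HADOOP_HOME=/wdcloud/app/hadoop-2.7.3
export HIVE_CONF_DIR=/wdcloud/app/hive-2.1.1/conf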

7. Enable logging:

cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

8. Add the MySQL connector JAR.
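
A minimal sketch, assuming the driver JAR (the version shown is only an example) has already been downloaded:

cp mysql-connector-java-5.1.40-bin.jar /wdcloud/app/hive-2.1.1/lib/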

9. Start the metastore service:

hive --service metastore &

The first start fails with an error.

Start it again:

This time it comes up without errors.

The required tables have also been created in the metastore database.
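
If the first failure was caused by an uninitialized metastore schema (a guess; the actual error is not shown), an alternative to relying on datanucleus.schema.autoCreateTables is to initialize the schema explicitly with Hive's schematool:

$HIVE_HOME/bin/schematool -dbType mysql -initSchema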

Debug-mode command: hive -hiveconf hive.root.logger=DEBUG,console

Connect from a client.
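
A quick way to verify the connection, assuming the client's hive-site.xml points hive.metastore.uris at thrift://192.168.200.123:9083 as configured above:

hive
hive> show databases;
hive> show tables;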

OK, Hive is now installed. Let's use Sqoop to import MySQL data into Hive:

 sqoop import --connect jdbc:mysql://192.168.200.250:3306/sqoop --table widgets_copy -m 1 --hive-import --username root -P

An error occurs.

Solution:

Run it again and the import succeeds. The console log:

[hadoop@hadoop-allinone-- conf]$ sqoop import --connect jdbc:mysql://192.168.200.250:3306/sqoop --table widgets_copy -m 1 --hive-import --username root -P
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
// :: INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `widgets_copy` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `widgets_copy` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /wdcloud/app/hadoop-2.7.
Note: /tmp/sqoop-hadoop/compile/4a89a67225918969c1c0f4c7c13168e9/widgets_copy.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/4a89a67225918969c1c0f4c7c13168e9/widgets_copy.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of widgets_copy
// :: INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/wdcloud/app/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/wdcloud/app/hbase-1.1./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at hadoop-allinone-200-123.wdcloud.locl/192.168.200.123:8032
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485230213604_0011
// :: INFO impl.YarnClientImpl: Submitted application application_1485230213604_0011
// :: INFO mapreduce.Job: The url to track the job: http://hadoop-allinone-200-123.wdcloud.locl:8088/proxy/application_1485230213604_0011/
// :: INFO mapreduce.Job: Running job: job_1485230213604_0011
// :: INFO mapreduce.Job: Job job_1485230213604_0011 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1485230213604_0011 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred 169 bytes in 31.7543 seconds (5.3221 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved 4 records.
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `widgets_copy` AS t LIMIT
// :: WARN hive.TableDefWriter: Column price had to be cast to a less precise type in Hive
// :: WARN hive.TableDefWriter: Column design_date had to be cast to a less precise type in Hive
// :: INFO hive.HiveImport: Loading uploaded data into Hive (this loads the data generated on HDFS into Hive)
// :: INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
// :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/wdcloud/app/hive-2.1./lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
// :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/wdcloud/app/hbase-1.1./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
// :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/wdcloud/app/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
// :: INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
// :: INFO hive.HiveImport: SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
// :: INFO hive.HiveImport:
// :: INFO hive.HiveImport: Logging initialized using configuration in file:/wdcloud/app/hive-2.1./conf/hive-log4j2.properties Async: true
// :: INFO hive.HiveImport: OK
// :: INFO hive.HiveImport: Time taken: 3.687 seconds
// :: INFO hive.HiveImport: Loading data to table default.widgets_copy
// :: INFO hive.HiveImport: OK
// :: INFO hive.HiveImport: Time taken: 1.92 seconds
// :: INFO hive.HiveImport: Hive import complete.
// :: INFO hive.HiveImport: Export directory is contains the _SUCCESS file only, removing the directory. (once the load into Hive succeeds, the intermediate data on HDFS is removed)

If a previous run failed, re-running the import produces an error:

ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory xxx already exists

In that case, run hadoop fs -rmr xxx (or hadoop fs -rm -r xxx on newer Hadoop) to remove the leftover output directory.

Alternatively, add this option to the command line:

--hive-overwrite : Overwrite existing data in the Hive table

This option overwrites whatever data already exists in the Hive table,

so even after a failed run, re-running the import simply overwrites it (an example follows).
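
For example, the earlier import command with --hive-overwrite added:

sqoop import --connect jdbc:mysql://192.168.200.250:3306/sqoop --table widgets_copy -m 1 --hive-import --hive-overwrite --username root -P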

View the imported data.
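
A quick way to check the result from the shell (the table landed in the default database, as the log above shows):

hive -e "SELECT * FROM default.widgets_copy;"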

Further reading: a detailed walkthrough of Sqoop 1.4.4 import and export:

http://blog.csdn.net/wangmuming/article/details/25303831

The final hive-site.xml is attached below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <!-- Hive Execution Parameters -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.200.250:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>weidong</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateTables</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/hive/warehouse</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/wdcloud/app/hive-2.1.1/logs</value>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>/wdcloud/app/hbase-1.1.6/lib</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.200.123:9083</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>

The next step is to work out how to schedule these sync jobs and run the incremental imports on a timer (see the sketch below).
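
A minimal sketch of one approach, assuming an auto-increment id column on the source table; the check column, password file, schedule, and log path are all assumptions, not from the original post. A saved Sqoop job records the last imported value between runs, so each execution pulls only new rows:

# create a saved job; Sqoop's job metastore remembers --last-value between runs
sqoop job --create widgets_sync -- import \
  --connect jdbc:mysql://192.168.200.250:3306/sqoop --table widgets_copy \
  --username root --password-file /user/hadoop/.mysql.pwd -m 1 \
  --incremental append --check-column id --last-value 0

# schedule the saved job daily at 02:00 via cron
0 2 * * * sqoop job --exec widgets_sync >> /tmp/widgets_sync.log 2>&1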
