Environment:
Hadoop 2.2.0
Hive 0.13.1
Ubuntu 14.04 LTS
java version "1.7.0_60"
Oracle 10g

***Reposting is welcome; please credit the source.***
 

http://blog.csdn.net/u010967382/article/details/38709751

Download the installation package from:
http://mirrors.cnnic.cn/apache/hive/stable/apache-hive-0.13.1-bin.tar.gz

Unpack the archive on the server to:
/home/fulong/Hive/apache-hive-0.13.1-bin
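For example, a minimal sketch assuming wget is available and /home/fulong/Hive is the install directory:
cd /home/fulong/Hive
wget http://mirrors.cnnic.cn/apache/hive/stable/apache-hive-0.13.1-bin.tar.gz
tar -xzf apache-hive-0.13.1-bin.tar.gz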

Edit your environment variables, adding the following:
export HIVE_HOME=/home/fulong/Hive/apache-hive-0.13.1-bin
export PATH=$HIVE_HOME/bin:$PATH
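One way to apply them, assuming bash is the login shell, is to append the two lines to ~/.bashrc and reload it:
echo 'export HIVE_HOME=/home/fulong/Hive/apache-hive-0.13.1-bin' >> ~/.bashrc
echo 'export PATH=$HIVE_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
which hive   # should resolve to $HIVE_HOME/bin/hive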

Go into the conf directory, then copy and rename the template configuration files:
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin/conf$ ls
hive-default.xml.template  hive-exec-log4j.properties.template
hive-env.sh.template       hive-log4j.properties.template
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin/conf$ cp hive-env.sh.template hive-env.sh
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin/conf$ cp hive-default.xml.template hive-site.xml
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin/conf$ ls
hive-default.xml.template  hive-env.sh.template                 hive-log4j.properties.template
hive-env.sh                hive-exec-log4j.properties.template  hive-site.xml

Edit the following entries in hive-env.sh, specifying the Hadoop root directory and Hive's conf and lib directories respectively:
# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/home/fulong/Hadoop/hadoop-2.2.0

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/fulong/Hive/apache-hive-0.13.1-bin/conf

# Folder containing extra ibraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/home/fulong/Hive/apache-hive-0.13.1-bin/lib

Edit the following Oracle connection parameters in hive-site.xml:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@192.168.0.138:1521:orcl</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.driver.OracleDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivefbi</value>
  <description>password to use against metastore database</description>
</property>
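These settings assume an Oracle account named hive with password hivefbi already exists. If not, a DBA can create one along these lines (a sketch only; the system password is a placeholder, and the grants should follow your site's policy):
sqlplus system/yourpassword@orcl <<EOF
CREATE USER hive IDENTIFIED BY hivefbi;
GRANT CONNECT, RESOURCE TO hive;
EOF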


Configure log4j
Create a log4j directory under $HIVE_HOME to hold the log files.
Copy and rename the template:
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin/conf$ cp hive-log4j.properties.template hive-log4j.properties

Change the log directory:
hive.log.dir=/home/fulong/Hive/apache-hive-0.13.1-bin/log4j
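The directory itself must exist before Hive starts writing logs, for example:
mkdir -p /home/fulong/Hive/apache-hive-0.13.1-bin/log4j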

Copy the Oracle JDBC jar
Copy the JDBC driver jar matching your Oracle version into $HIVE_HOME/lib.
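For example (a sketch; the jar name depends on the driver release, and Oracle 10g ships ojdbc14.jar under $ORACLE_HOME/jdbc/lib):
cp $ORACLE_HOME/jdbc/lib/ojdbc14.jar $HIVE_HOME/lib/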

Start Hive:
fulong@FBI006:~/Hive/apache-hive-0.13.1-bin$ hive
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/08/20 17:14:05 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
14/08/20 17:14:05 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead

Logging initialized using configuration in file:/home/fulong/Hive/apache-hive-0.13.1-bin/conf/hive-log4j.properties
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/fulong/Hadoop/hadoop-2.2.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
hive>

Verification
We will create a table to store the user search behavior logs downloaded from Sogou Labs.

Data download address:
http://www.sogou.com/labs/dl/q.html

First, create the table:
hive> create table searchlog (time string,id string,sword string,rank int,clickrank int,url string) row format delimited fields terminated by '\t' lines terminated by '\n' stored as textfile;

At this point an error is thrown:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : ORA-01754: a table may contain only one column of type LONG

Solution:
Open hive-metastore-0.13.1.jar in ${HIVE_HOME}/lib with an archive tool and locate the file named package.jdo. Open it and find the following content:
<field name="viewOriginalText" default-fetch-group="false">
        <column name="VIEW_ORIGINAL_TEXT" jdbc-type="LONGVARCHAR"/>
</field>
<field name="viewExpandedText" default-fetch-group="false">
        <column name="VIEW_EXPANDED_TEXT" jdbc-type="LONGVARCHAR"/>
</field>
Both the VIEW_ORIGINAL_TEXT and VIEW_EXPANDED_TEXT columns are declared as LONGVARCHAR, which maps to LONG in Oracle. That conflicts with Oracle's rule that a table may contain only one column of type LONG, hence the error.

Following the advice on the Hive website, change the jdbc-type of both columns to CLOB. The modified content looks like this:
<field name="viewOriginalText" default-fetch-group="false">
        <column name="VIEW_ORIGINAL_TEXT" jdbc-type="CLOB"/>
</field>
<field name="viewExpandedText" default-fetch-group="false">
        <column name="VIEW_EXPANDED_TEXT" jdbc-type="CLOB"/>
</field>
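Since package.jdo lives inside the jar, the edited file must be written back into the archive. A minimal sketch using unzip/zip (an assumption; any archive tool works, and the jar should be backed up first):
cd $HIVE_HOME/lib
cp hive-metastore-0.13.1.jar hive-metastore-0.13.1.jar.bak   # keep a backup
unzip hive-metastore-0.13.1.jar package.jdo                  # extract package.jdo to the current directory
# ... edit package.jdo as shown above ...
zip hive-metastore-0.13.1.jar package.jdo                    # write the edited file back into the jar
rm package.jdo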

After making the change, restart Hive.


Re-run the CREATE TABLE command; this time the table is created successfully:
hive> create table searchlog (time string,id string,sword string,rank int,clickrank int,url string) row format delimited fields terminated by '\t' lines terminated by '\n' stored as textfile;
OK
Time taken: 0.986 seconds

Load the local data into the table:
hive> load data local inpath '/home/fulong/Downloads/SogouQ.reduced' overwrite into table searchlog;
Copying data from file:/home/fulong/Downloads/SogouQ.reduced
Copying file: file:/home/fulong/Downloads/SogouQ.reduced
Loading data to table default.searchlog
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted hdfs://fulonghadoop/user/hive/warehouse/searchlog
Table default.searchlog stats: [numFiles=1, numRows=0, totalSize=152006060, rawDataSize=0]
OK
Time taken: 25.705 seconds
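To spot-check the loaded data, a few rows can be sampled (column names taken from the table definition above):
hive> select time, sword, url from searchlog limit 3;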

List all tables:
hive> show tables;
OK
searchlog
Time taken: 0.139 seconds, Fetched: 1 row(s)

Count the rows:
hive> select count(*) from searchlog;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1407233914535_0001, Tracking URL = http://FBI003:8088/proxy/application_1407233914535_0001/
Kill Command = /home/fulong/Hadoop/hadoop-2.2.0/bin/hadoop job  -kill job_1407233914535_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-20 18:03:17,667 Stage-1 map = 0%,  reduce = 0%
2014-08-20 18:04:05,426 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.46 sec
2014-08-20 18:04:27,317 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.74 sec
MapReduce Total cumulative CPU time: 4 seconds 740 msec
Ended Job = job_1407233914535_0001
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 4.74 sec   HDFS Read: 152010455 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 740 msec
OK
Time taken: 103.154 seconds, Fetched: 1 row(s)




