Connecting to HiveServer2 remotely via JDBC

In my earlier study and practice of Hive, everything was done through the CLI or `hive -e`, which only lets you run HiveQL queries and updates and is a rather clumsy, single-purpose way of working. Fortunately, Hive also provides a thin-client option: through HiveServer or HiveServer2, a client can work with the data in Hive without starting the CLI, and both let remote clients written in languages such as Java or Python submit requests to Hive and fetch the results. Both HiveServer and HiveServer2 are built on Thrift; HiveServer is sometimes called the Thrift server, while HiveServer2 is not. Why is HiveServer2 needed when HiveServer already exists? Because HiveServer cannot handle concurrent requests from more than one client. This limitation comes from the Thrift interface HiveServer uses and cannot be fixed by modifying the HiveServer code. The HiveServer code was therefore rewritten in Hive 0.11.0 as HiveServer2, which solves the problem: HiveServer2 supports multi-client concurrency and authentication, and offers better support for open-API clients such as JDBC and ODBC.

This article therefore takes HiveServer2 as the example and walks through a Java API for operating on Hive remotely.

First, the key Hive configuration properties used in this article:

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/hive/warehouse</value>  <!-- HDFS directory where Hive databases and tables are stored -->
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>  <!-- port for remote HiveServer2 connections; default is 10000 -->
  <description>Port number of HiveServer2 Thrift interface.
  Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>**.**.**.**</value>  <!-- IP address of the cluster node where Hive runs -->
  <description>Bind host on which to run the HiveServer2 Thrift interface.
  Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
</property>
<property>
  <name>hive.server2.long.polling.timeout</name>
  <value>5000</value>  <!-- default is 5000L; set to 5000 here, otherwise the program throws an error -->
  <description>Time in milliseconds that HiveServer2 will wait, before responding to asynchronous calls that use long polling</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>  <!-- Hive metastore database; a local MySQL instance is used here -->
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>  <!-- driver class for the metastore connection -->
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>  <!-- username for the metastore database -->
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>  <!-- password for the metastore database -->
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
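
The two hive.server2.thrift.* properties above are exactly what the client-side JDBC URL is built from. As a quick reference, here is a minimal sketch (HIVE_HOST is a placeholder for whatever you configured as hive.server2.thrift.bind.host, not a value taken from this setup):

HiveJdbcUrl.java

public class HiveJdbcUrl {
    public static void main(String[] args) {
        String host = "HIVE_HOST";   // placeholder for hive.server2.thrift.bind.host
        int port = 10000;            // hive.server2.thrift.port
        String database = "default"; // Hive database to connect to
        // The URL format used by the HiveServer2 JDBC driver
        String url = "jdbc:hive2://" + host + ":" + port + "/" + database;
        System.out.println(url);     // prints jdbc:hive2://HIVE_HOST:10000/default
    }
}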



Once the configuration above is in place, start the HiveServer2 service as follows.

First start the metastore. In the shell, run: hive --service metastore &  (the & runs the process in the background; the terminal appears to hang after this command, and if the & is omitted, pressing Ctrl+C to get back to the prompt will shut the service down)

The terminal will then appear to hang; at this point check the log file hive.log, which shows the following:

2016-04-26 04:44:53,956 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:main(5060)) - Starting hive metastore on port 9083
2016-04-26 04:44:54,174 WARN [main]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:44:54,326 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(494)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2016-04-26 04:44:54,412 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(245)) - ObjectStore, initialize called
2016-04-26 04:44:57,240 WARN [main]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:44:57,246 INFO [main]: metastore.ObjectStore (ObjectStore.java:getPMF(314)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2016-04-26 04:45:03,597 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(228)) - Initialized ObjectStore
2016-04-26 04:45:03,806 WARN [main]: metastore.ObjectStore (ObjectStore.java:checkSchema(6273)) - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.0
2016-04-26 04:45:04,811 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(552)) - Added admin role in metastore
2016-04-26 04:45:04,828 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(561)) - Added public role in metastore
2016-04-26 04:45:04,984 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(589)) - No user is added in admin role, since config is empty
2016-04-26 04:45:05,361 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5182)) - Starting DB backed MetaStore Server
2016-04-26 04:45:05,369 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5194)) - Started the new metaserver on port [9083]...
2016-04-26 04:45:05,369 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5196)) - Options.minWorkerThreads = 200
2016-04-26 04:45:05,370 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5198)) - Options.maxWorkerThreads = 100000
2016-04-26 04:45:05,370 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5200)) - TCP keepalive = true

This confirms that the metastore is up.

Next, start the HiveServer2 service.

In the shell, run: hive --service hiveserver2 &

As before, the terminal will appear to hang. The log file shows the following:

2016-04-26 04:53:24,212 INFO  [main]: server.HiveServer2 (HiveStringUtils.java:startupShutdownMessage(605)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting HiveServer2
STARTUP_MSG: host = master/(the IP you configured earlier)
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.13.0
STARTUP_MSG: classpath = /opt/modules/hadoop-2.2.0/etc/hadoop:/opt/modules/hadoop-2.2.0/share/hadoop/common/lib
//(... classpath entries omitted here; the log output is too long ...)
STARTUP_MSG: build = file:///Users/hbutani/svn/branch-0.13 -r Unknown; compiled by 'hbutani' on Tue Apr 15 13:55:42 PDT 2014
************************************************************/
2016-04-26 04:53:24,553 WARN [main]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:53:25,258 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(494)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2016-04-26 04:53:25,325 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(245)) - ObjectStore, initialize called
2016-04-26 04:53:28,312 WARN [main]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:53:28,313 INFO [main]: metastore.ObjectStore (ObjectStore.java:getPMF(314)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2016-04-26 04:53:31,537 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(228)) - Initialized ObjectStore
2016-04-26 04:53:32,064 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(552)) - Added admin role in metastore
2016-04-26 04:53:32,079 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(561)) - Added public role in metastore
2016-04-26 04:53:32,205 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(589)) - No user is added in admin role, since config is empty
2016-04-26 04:53:33,887 INFO [main]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.
2016-04-26 04:53:34,168 WARN [main]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:53:34,241 INFO [main]: service.CompositeService (SessionManager.java:init(70)) - HiveServer2: Async execution thread pool size: 100
2016-04-26 04:53:34,241 INFO [main]: service.CompositeService (SessionManager.java:init(72)) - HiveServer2: Async execution wait queue size: 100
2016-04-26 04:53:34,242 INFO [main]: service.CompositeService (SessionManager.java:init(74)) - HiveServer2: Async execution thread keepalive time: 10
2016-04-26 04:53:34,244 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:OperationManager is inited.
2016-04-26 04:53:34,247 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:SessionManager is inited.
2016-04-26 04:53:34,247 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:CLIService is inited.
2016-04-26 04:53:34,247 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:ThriftBinaryCLIService is inited.
2016-04-26 04:53:34,247 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:HiveServer2 is inited.
2016-04-26 04:53:34,248 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:OperationManager is started.
2016-04-26 04:53:34,248 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:SessionManager is started.
2016-04-26 04:53:34,248 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:CLIService is started.
2016-04-26 04:53:34,698 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(589)) - No user is added in admin role, since config is empty
2016-04-26 04:53:34,699 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(624)) - 0: get_databases: default
2016-04-26 04:53:34,701 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(306)) - ugi=hh ip=unknown-ip-addr cmd=get_databases: default
2016-04-26 04:53:34,725 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(494)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2016-04-26 04:53:34,728 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(245)) - ObjectStore, initialize called
2016-04-26 04:53:34,745 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(228)) - Initialized ObjectStore
2016-04-26 04:53:34,795 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:ThriftBinaryCLIService is started.
2016-04-26 04:53:34,796 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2016-04-26 04:53:34,947 WARN [Thread-5]: conf.HiveConf (HiveConf.java:initialize(1390)) - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2016-04-26 04:53:35,584 INFO [Thread-5]: thrift.ThriftCLIService (ThriftBinaryCLIService.java:run(88)) - ThriftBinaryCLIService listening on /(your IP):10000

You can also check whether HiveServer2 is listening with the following command:

[hh@master Desktop]$ netstat -nl |grep 10000
tcp 0 0 (your IP):10000 0.0.0.0:* LISTEN

This confirms that the HiveServer2 service is running!
(Note: always check the log. The command line itself does not report errors; if startup fails, the exception appears only in the log. The path of hive.log is configured in $HIVE_HOME/conf/hive-log4j.properties.)
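
If the Java client runs on a different machine, it can also be handy to verify that port 10000 is reachable from there before debugging JDBC errors. Below is a minimal sketch using plain java.net; HIVE_HOST is again a placeholder for the configured bind host, and the 3-second timeout is an arbitrary choice:

HiveServer2PortCheck.java

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick reachability check for the HiveServer2 Thrift port from the client machine.
public class HiveServer2PortCheck {
    public static void main(String[] args) {
        String host = "HIVE_HOST";   // placeholder for hive.server2.thrift.bind.host
        int port = 10000;            // hive.server2.thrift.port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
            System.out.println("HiveServer2 port is reachable: " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Cannot reach " + host + ":" + port + " - " + e.getMessage());
        }
    }
}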

Now let's write the Java API.

First, the jar files this program depends on:

hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar
$HIVE_HOME/lib/hive-exec-0.11.0.jar
$HIVE_HOME/lib/hive-jdbc-0.11.0.jar
$HIVE_HOME/lib/hive-metastore-0.11.0.jar
$HIVE_HOME/lib/hive-service-0.11.0.jar
$HIVE_HOME/lib/libfb303-0.9.0.jar
$HIVE_HOME/lib/commons-logging-1.0.4.jar
$HIVE_HOME/lib/slf4j-api-1.6.1.jar

The Java code is listed below.

JDBCToHiveUtils.java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JDBCToHiveUtils {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    // Fill in the Hive host IP, i.e. the IP configured earlier in hive-site.xml
    private static String Url = "jdbc:hive2://**.**.**.**:10000/default";
    private static Connection conn;

    public static Connection getConnnection() {
        try {
            Class.forName(driverName);
            // The user name here must have permission to operate on HDFS,
            // otherwise the program fails with a "permission denied" exception
            conn = DriverManager.getConnection(Url, "hh", "");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return conn;
    }

    public static PreparedStatement prepare(Connection conn, String sql) {
        PreparedStatement ps = null;
        try {
            ps = conn.prepareStatement(sql);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return ps;
    }
}
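
One thing the utility above does not do is release its resources; in a long-running client you would want to close the ResultSet, PreparedStatement, and Connection when a query is finished. A minimal sketch of such a helper (the class name JDBCCloseUtils is made up here and is not part of the original code):

JDBCCloseUtils.java

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Optional helper for releasing JDBC resources once a query is done.
public class JDBCCloseUtils {
    public static void close(ResultSet rs, Statement stmt, Connection conn) {
        try { if (rs != null) rs.close(); } catch (SQLException e) { e.printStackTrace(); }
        try { if (stmt != null) stmt.close(); } catch (SQLException e) { e.printStackTrace(); }
        try { if (conn != null) conn.close(); } catch (SQLException e) { e.printStackTrace(); }
    }
}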

QueryHiveUtils.java

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class QueryHiveUtils {
    private static Connection conn = JDBCToHiveUtils.getConnnection();
    private static PreparedStatement ps;
    private static ResultSet rs;

    // Print every row of the given table, columns separated by tabs
    public static void getAll(String tablename) {
        String sql = "select * from " + tablename;
        System.out.println(sql);
        try {
            ps = JDBCToHiveUtils.prepare(conn, sql);
            rs = ps.executeQuery();
            int columns = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                for (int i = 1; i <= columns; i++) {
                    System.out.print(rs.getString(i));
                    System.out.print("\t\t");
                }
                System.out.println();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

QueryHiveTest.java

public class QueryHiveTest {

    public static void main(String[] args) {
        String tablename = "test1";
        QueryHiveUtils.getAll(tablename);
    }
}

The output of running the program:

select * from test1
1 张三 男 20.0
2 李四 女 35.0
3 王五 男 null
4 赵六 null 70.0
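
The same connection handles more than SELECT: HiveQL DDL and SHOW statements can be issued through a plain Statement as well. The sketch below reuses JDBCToHiveUtils under the same assumptions as above; the table name test2 and its schema are made up purely for illustration:

DDLHiveTest.java

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: issuing DDL and SHOW statements over the same HiveServer2 connection.
public class DDLHiveTest {
    public static void main(String[] args) throws SQLException {
        Connection conn = JDBCToHiveUtils.getConnnection();
        try (Statement stmt = conn.createStatement()) {
            // Create an example table if it does not exist yet (illustrative schema)
            stmt.execute("create table if not exists test2 (id int, name string) "
                    + "row format delimited fields terminated by '\\t'");
            // List the tables in the current database
            ResultSet rs = stmt.executeQuery("show tables");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}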
