1) The file test4.txt in the local directory /home/hadoop/test has the following content (fields on each line are separated by a tab):

[hadoop@master test]$ sudo vim test4.txt

1	dajiangtai
2	hadoop
3	hive
4	hbase
5	spark
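If you would rather generate this file than type it by hand, here is a minimal sketch (class and method names are my own, purely illustrative; it writes test4.txt into the current directory rather than /home/hadoop/test):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class MakeTestFile {
    // The same key<TAB>value lines as the example data above.
    public static List<String> sampleLines() {
        return Arrays.asList(
            "1\tdajiangtai", "2\thadoop", "3\thive", "4\thbase", "5\tspark");
    }

    public static void main(String[] args) throws IOException {
        Path out = Paths.get("test4.txt"); // the post stores it at /home/hadoop/test/test4.txt
        Files.write(out, sampleLines());
        System.out.println("wrote " + out.toAbsolutePath());
    }
}
```

The tab separator matters: it must match the `fields terminated by '\t'` clause in the CREATE TABLE statement later in this post.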

2) Start hiveserver2

[hadoop@master test]$ cd ${HIVE_HOME}/bin
[hadoop@master bin]$ ll
total
-rwxr-xr-x hadoop hadoop Jan beeline
drwxr-xr-x hadoop hadoop May : ext
-rwxr-xr-x hadoop hadoop Jan hive
-rwxr-xr-x hadoop hadoop Jan hive-config.sh
-rwxr-xr-x hadoop hadoop Jan hiveserver2
-rwxr-xr-x hadoop hadoop Jan metatool
-rwxr-xr-x hadoop hadoop Jan schematool
[hadoop@master bin]$ ./hiveserver2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.6./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hive1.0.0/lib/hive-jdbc-1.0.-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
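Before running the JDBC client, it helps to confirm that the HiveServer2 port is actually accepting connections. A minimal, hypothetical sketch using only plain `java.net` (no Hive dependencies; the host and port match the JDBC URL used in the program below):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean isPortOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Covers connection refused, timeouts, and unknown hosts.
            return false;
        }
    }

    public static void main(String[] args) {
        // "master" and 10000 are the host and default HiveServer2 port
        // from the jdbc:hive2://master:10000/default URL used later.
        System.out.println(isPortOpen("master", 10000, 3000)
                ? "HiveServer2 port is reachable"
                : "HiveServer2 port is NOT reachable");
    }
}
```

You could also verify with beeline, but a raw socket check rules out network and firewall issues before any Hive-level errors come into play.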

3) Program code

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Hive {
    // Hive JDBC driver class name
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    // Connection URL for the HiveServer2 service (a new service introduced in Hive 0.11.0)
    private static String url = "jdbc:hive2://master:10000/default";
    // A user with the necessary permissions on HDFS
    private static String user = "hadoop";
    // In non-secure mode a user is specified to run the query and the password is ignored
    private static String password = "";
    private static String sql = "";
    private static ResultSet res;

    public static void main(String[] args) {
        try {
            Class.forName(driverName); // load the HiveServer2 driver
            Connection conn = DriverManager.getConnection(url, user, password); // connect to the database given by the URL
            Statement stmt = conn.createStatement();
            String tableName = "testHiveDriverTable"; // the table to create

            /** Step 1: drop the table if it already exists **/
            sql = "drop table " + tableName;
            stmt.execute(sql);

            /** Step 2: create the table **/
            sql = "create table " + tableName + " (key int, value string) row format delimited fields terminated by '\t' STORED AS TEXTFILE";
            stmt.execute(sql);

            // Run "show tables"
            sql = "show tables '" + tableName + "'";
            res = stmt.executeQuery(sql);
            if (res.next()) {
                System.out.println(res.getString(1));
            }

            // Run "describe table"
            sql = "describe " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }

            // Run "load data into table"
            // (the path is local to the node where the Hive service runs)
            String filepath = "/home/hadoop/test/test4.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            stmt.execute(sql);

            // Run "select * query"
            sql = "select * from " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getInt(1) + "\t" + res.getString(2));
            }

            // Run a regular Hive query; this one is compiled into a MapReduce job
            sql = "select count(*) from " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1));
            }

            stmt.close();
            conn.close();
            conn = null;
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        } catch (SQLException e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}
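The JDBC calls above require a live cluster, but the statement strings themselves can be checked in isolation. A small helper sketch (class and method names are my own, purely illustrative) that builds the same strings the example sends:

```java
public class HiveSqlBuilder {
    // Each method builds one of the statements the example issues, in order.
    public static String dropTable(String table) {
        return "drop table " + table;
    }

    public static String createTable(String table) {
        return "create table " + table
             + " (key int, value string)"
             + " row format delimited fields terminated by '\t' STORED AS TEXTFILE";
    }

    public static String showTables(String table) {
        return "show tables '" + table + "'";
    }

    public static String describe(String table) {
        return "describe " + table;
    }

    public static String loadData(String path, String table) {
        return "load data local inpath '" + path + "' into table " + table;
    }

    public static void main(String[] args) {
        String t = "testHiveDriverTable";
        System.out.println(dropTable(t));
        System.out.println(showTables(t));
        System.out.println(loadData("/home/hadoop/test/test4.txt", t));
    }
}
```

Separating statement construction from execution like this makes it easy to eyeball the generated HiveQL (or unit-test it) before pointing the client at a real HiveServer2.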

4) Run results (right-click --> Run As --> Run on Hadoop)

Running the program directly at this point throws an error; the fix is covered in the next post: HiveSQLException: Error while compiling statement: No privilege 'Create' found for outputs { database:default }

The run log is as follows:

(abridged: timestamps and the many repeated Thrift "writing/reading data length" DEBUG lines are omitted)

INFO  [org.apache.hive.jdbc.Utils] - Supplied authorities: master:10000
INFO  [org.apache.hive.jdbc.Utils] - Resolved authority: master:10000
INFO  [org.apache.hive.jdbc.HiveConnection] - Will try to open client transport with JDBC Uri: jdbc:hive2://master:10000/default
DEBUG [org.apache.thrift.transport.TSaslTransport] - opening transport org.apache.thrift.transport.TSaslClientTransport@3834d63f
DEBUG [org.apache.thrift.transport.TSaslClientTransport] - Sending mechanism name PLAIN and initial response
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Start message handled
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Main negotiation loop complete
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Received message with status COMPLETE
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
testhivedrivertable
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
key int
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
value string
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
1 dajiangtai
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
2 hadoop
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
3 hive
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
4 hbase
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
5 spark

Result of the "show tables" step:

testhivedrivertable

Result of the "describe table" step:

key    int
value string

Result of the "select *" step:

1	dajiangtai
2	hadoop
3	hive
4	hbase
5	spark

Alternatively, you can check the results from the cluster:
hive> show tables;
OK
copy_student1
copy_student2
copy_student3
copy_student4
employee
group_gender_agg
group_gender_sum
group_test
index_test
index_tmp
partition_test
student1
student2
test
test_view
testhivedrivertable
user
Time taken: 0.153 seconds, Fetched: row(s)
hive> desc testhivedrivertable;
OK
key int
value string
Time taken: 0.184 seconds, Fetched: row(s)
hive> select * from testhivedrivertable;
OK
1 dajiangtai
2 hadoop
3 hive
4 hbase
5 spark
Time taken: 0.346 seconds, Fetched: row(s)

That's the main content of this section. These notes come from my own learning process, and I hope they offer some useful guidance. If they helped, please leave a like; if not, thanks for bearing with me, and please point out any mistakes. Follow me to get future updates first. Thanks!

Copyright notice: this is an original post by the author; do not repost without the author's permission.

