HiveServer

Look at the files in the /home/hadoop/bigdatasoftware/apache-hive-0.13.1-bin/bin directory; among them is the hiveserver2 script.

Start hiveserver2.

Then open another terminal and check the running processes.

If a RunJar process is listed, HiveServer2 is running.
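For reference, a minimal command sequence (a sketch; the paths assume the install directory above, and hiveserver2 is left running in the background):

cd /home/hadoop/bigdatasoftware/apache-hive-0.13.1-bin
bin/hiveserver2 &

# in the second terminal: jps lists running JVMs; HiveServer2 appears as RunJar
jps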

beeline

Start beeline.
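For example, from the Hive install directory:

bin/beeline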

Connect over JDBC; !connect takes the JDBC URL, the user name, the password, and the driver class name, in that order:


!connect jdbc:hive2://hadoop-001:10000 hadoop hadoop org.apache.hive.jdbc.HiveDriver


Once connected, you can run queries and other operations.
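For example, a few statements to verify the session (dept is the table used in the JDBC sample later in this post):

show databases;
show tables;
select * from dept;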

Using a JDBC client in IDEA to access Hive

Using JDBC

You can use JDBC to access data stored in a relational database or other tabular format.

  1. Load the HiveServer2 JDBC driver. As of Hive 1.2.0, applications no longer need to explicitly load JDBC drivers using Class.forName().

    For example:


    Class.forName("org.apache.hive.jdbc.HiveDriver");

    
    
  2. Connect to the database by creating a Connection object with the JDBC driver.

    For example:


    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "<password>");

    
    

    The default <port> is 10000. In non-secure configurations, specify a <user> for the query to run as. The <password> field value is ignored in non-secure mode.


    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "");

    
    

    In Kerberos secure mode, the user information is based on the Kerberos credentials.
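    For example (a sketch; <server_principal> must match the Kerberos principal that HiveServer2 runs under):

    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>/<db>;principal=<server_principal>");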

  3. Submit SQL to the database by creating a Statement object and using its executeQuery() method.

    For example:


    Statement stmt = cnct.createStatement();
    ResultSet rset = stmt.executeQuery("SELECT foo FROM bar");

    
    
  4. Process the result set, if necessary.

These steps are illustrated in the sample code below.

The sample code:

package com.gec.demo;

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveJdbcClient {
    private static final String DRIVERNAME = "org.apache.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(DRIVERNAME);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // replace "hadoop" here with the name of the user the queries should run as
        Connection con = DriverManager.getConnection("jdbc:hive2://hadoop-001:10000/default", "hadoop", "hadoop");
        Statement stmt = con.createStatement();
        String tableName = "dept";
        // stmt.execute("drop table if exists " + tableName);
        // stmt.execute("create table " + tableName + " (key int, value string)");

        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        // String filepath = "/";
        // sql = "load data local inpath '" + filepath + "' into table " + tableName;
        // System.out.println("Running: " + sql);
        // stmt.execute(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}
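The sample above follows the original Hive wiki example and never closes its JDBC resources. On Java 7+, try-with-resources handles the cleanup automatically; a minimal sketch with the same connection settings:

try (Connection con = DriverManager.getConnection("jdbc:hive2://hadoop-001:10000/default", "hadoop", "hadoop");
     Statement stmt = con.createStatement();
     ResultSet res = stmt.executeQuery("select count(1) from dept")) {
    while (res.next()) {
        System.out.println(res.getString(1));
    }
} // con, stmt, and res are closed automatically, even if an exception is thrown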

The pom.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>BigdataStudy</artifactId>
        <groupId>com.gec.demo</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>HiveJdbcClient</artifactId>
    <name>HiveJdbcClient</name>
    <!-- FIXME change it to the project's website -->
    <url>http://www.example.com</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <hadoop.version>2.7.2</hadoop.version>
        <!--<hive.version>0.13.1</hive.version>-->
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.8.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>0.13.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>0.13.1</version>
        </dependency>
    </dependencies>
</project>
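With the pom in place, the client can be run from IDEA directly, or from the command line via the exec-maven-plugin (a sketch; it assumes Maven is installed and HiveServer2 is reachable at hadoop-001:10000):

mvn compile exec:java -Dexec.mainClass=com.gec.demo.HiveJdbcClient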
