Thrift JDBC Server Overview

The Thrift JDBC Server is built on the HiveServer2 implementation from Hive 0.12. You can interact with it using the beeline script that ships with either Spark or Hive 0.12. By default, the Thrift JDBC Server listens on port 10000.

Before using the Thrift JDBC Server, note the following:

1. Copy the hive-site.xml configuration file into the $SPARK_HOME/conf directory;

2. Add the JDBC driver jar to SPARK_CLASSPATH in $SPARK_HOME/conf/spark-env.sh:

export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/software/mysql-connector-java-5.1.27-bin.jar
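Putting the two prerequisites together, a minimal setup might look like the sketch below. The Hive installation path is an assumption for illustration; substitute the paths from your own environment.

# Sketch: copy Hive's configuration into Spark's conf directory
# (/home/hadoop/software/hive-0.12.0 is an assumed Hive install path)
cp /home/hadoop/software/hive-0.12.0/conf/hive-site.xml $SPARK_HOME/conf/

# Sketch: append the MySQL JDBC driver to SPARK_CLASSPATH in spark-env.sh
echo 'export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/software/mysql-connector-java-5.1.27-bin.jar' >> $SPARK_HOME/conf/spark-env.sh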

Thrift JDBC Server command usage help:

cd $SPARK_HOME/sbin
start-thriftserver.sh --help
Usage: ./sbin/start-thriftserver [options] [thrift server options]
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.
  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.
  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.

Thrift server options:
  --hiveconf <property=value> Use value for given property

The meaning of master here is the same as for the Spark SQL CLI.
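For example, a sketch of launching the server against a standalone cluster with an explicit driver memory setting (spark://hadoop000:7077 is an assumed master URL for this guide's example machine; replace it with your own):

cd $SPARK_HOME/sbin
# Sketch: run the Thrift JDBC Server on an assumed standalone master
start-thriftserver.sh --master spark://hadoop000:7077 --driver-memory 1G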

beeline command usage help:

cd $SPARK_HOME/bin
beeline --help
Usage: java org.apache.hive.cli.beeline.BeeLine
-u <database url> the JDBC URL to connect to
-n <username> the username to connect as
-p <password> the password to connect as
-d <driver class> the driver class to use
-e <query> query that should be executed
-f <file> script file that should be executed
--color=[true/false] control whether color is used for display
--showHeader=[true/false] show column names in query results
--headerInterval=ROWS; the interval between which headers are displayed
--fastConnect=[true/false] skip building table/column list for tab-completion
--autoCommit=[true/false] enable/disable automatic transaction commit
--verbose=[true/false] show verbose error messages and debug info
--showWarnings=[true/false] display connection warnings
--showNestedErrs=[true/false] display nested errors
--numberFormat=[pattern] format numbers using DecimalFormat pattern
--force=[true/false] continue running script even after errors
--maxWidth=MAXWIDTH the maximum width of the terminal
--maxColumnWidth=MAXCOLWIDTH the maximum width to use when displaying columns
--silent=[true/false] be more silent
--autosave=[true/false] automatically save preferences
--outputformat=[table/vertical/csv/tsv] format mode for result display
--isolation=LEVEL set the transaction isolation level
--help display this message

Starting the Thrift JDBC Server and beeline

Start the Thrift JDBC Server (the default port is 10000):

cd $SPARK_HOME/sbin
start-thriftserver.sh

To change the Thrift JDBC Server's default listening port, use --hiveconf:

start-thriftserver.sh --hiveconf hive.server2.thrift.port=<port>
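For example, to have the server listen on port 14000 instead (the port number here is an arbitrary illustration):

# Sketch: start the server on an arbitrarily chosen port
start-thriftserver.sh --hiveconf hive.server2.thrift.port=14000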

For details on HiveServer2 clients, see: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients

Start beeline:

cd $SPARK_HOME/bin
beeline -u jdbc:hive2://hadoop000:10000/default -n hadoop
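Besides the interactive shell, beeline can also execute a single statement and exit via the -e flag; a small sketch (the query is illustrative):

# Sketch: run one statement non-interactively and exit
beeline -u jdbc:hive2://hadoop000:10000/default -n hadoop -e "show tables;"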

Testing with SQL queries:

SELECT track_time, url, session_id, referer, ip, end_user_id, city_id FROM page_views WHERE city_id = -1000 LIMIT 10;
SELECT session_id, count(*) c FROM page_views GROUP BY session_id ORDER BY c DESC LIMIT 10;
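These statements can also be saved to a file and executed through beeline's -f option rather than typed interactively; a sketch (page_views_test.sql is a hypothetical file name containing the two queries above):

# Sketch: execute a saved SQL script against the Thrift JDBC Server
beeline -u jdbc:hive2://hadoop000:10000/default -n hadoop -f page_views_test.sql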
