Compiling Spark from Source and Setting Up the Environment

Note that you must use a version of Spark that does not include the Hive jars.

Compiling Spark:

git clone https://github.com/apache/spark.git spark_src
cd spark_src
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
./make-distribution.sh --name "spark-without-hive" --tgz -Phadoop-2.4 -Dhadoop.version=2.5.0-cdh5.3.1 -Pyarn -DskipTests package
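If the build succeeds, make-distribution.sh leaves a tarball in the source root, named spark-<version>-bin-<name>.tgz after the --name flag. A sketch of unpacking it into the install path used in this walkthrough (the exact file name depends on your version and flags):

tar -zxvf spark-1.3.0-bin-spark-without-hive.tgz -C /home/spark/app/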

Spark setup: see the chapter on setting up the Spark environment.

Compiling Hive from Source and Setting Up the Environment

Compiling Hive

git clone https://github.com/apache/hive.git hive_on_spark
cd hive_on_spark
git checkout spark
mvn clean install -Phadoop-2,dist -DskipTests

After the build completes, the Hive package is at /packaging/target/apache-hive-1.2.0-SNAPSHOT-bin.tar.gz under the source tree.
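A sketch of unpacking that package into the install path used in this walkthrough (adjust the target directory to taste):

tar -zxvf packaging/target/apache-hive-1.2.0-SNAPSHOT-bin.tar.gz -C /home/spark/app/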

Make sure spark.version in pom.xml matches the version of Spark you built:

<spark.version>1.3.0</spark.version>

Installing Hive: see the chapter on setting up the Hive environment.

In this walkthrough, Spark and Hive are installed at the following paths:

Spark install directory: /home/spark/app/spark-1.3.0-bin-spark-without-hive

Hive install directory: /home/spark/app/apache-hive-1.2.0-SNAPSHOT-bin

Ways to add the Spark dependency to Hive

Option 1: Set the property 'spark.home' to point to the Spark installation:

hive> set spark.home=/home/spark/app/spark-1.3.0-bin-spark-without-hive;

Option 2: Define the SPARK_HOME environment variable before starting the Hive CLI/HiveServer2:

export SPARK_HOME=/home/spark/app/spark-1.3.0-bin-spark-without-hive

Option 3: Set the spark-assembly jar on the Hive auxpath:

hive --auxpath /home/spark/app/spark-1.3.0-bin-spark-without-hive/lib/spark-assembly-*.jar

Option 4: Add the spark-assembly jar for the current user session:

hive> add jar /home/spark/app/spark-1.3.0-bin-spark-without-hive/lib/spark-assembly-*.jar;

Option 5: Link the spark-assembly jar into $HIVE_HOME/lib, as sketched below.
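A sketch of Option 5, with SPARK_HOME and HIVE_HOME pointing at the install directories listed above (the glob assumes a single assembly jar in $SPARK_HOME/lib):

ln -s $SPARK_HOME/lib/spark-assembly-*.jar $HIVE_HOME/lib/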

Errors you may encounter when starting Hive:

[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:)
at jline.TerminalFactory.get(TerminalFactory.java:)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:)
at org.apache.hadoop.hive.cli.CliDriver.getConsoleReader(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected

Fix: export HADOOP_USER_CLASSPATH_FIRST=true. Hadoop bundles an old jline in which jline.Terminal is a class, while the jline 2 that Hive ships treats it as an interface; this flag puts Hive's jars ahead of Hadoop's on the classpath.
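To make the fix stick across sessions, the export can go in $HIVE_HOME/conf/hive-env.sh, which the stock Hive launcher scripts source on startup. A sketch:

# $HIVE_HOME/conf/hive-env.sh
export HADOOP_USER_CLASSPATH_FIRST=true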

Solutions for errors in other scenarios are covered at: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

One more pitfall: the spark.eventLog.dir parameter must be set, for example:

set spark.eventLog.dir=hdfs://hadoop000:/directory;

Otherwise queries keep failing, complaining that a folder like /tmp/spark-event does not exist. This pit runs deep.
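The directory must also exist before you run a query; Spark's event logger does not create it. A sketch of creating it on HDFS, matching the example above:

hadoop fs -mkdir -p /directory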

After starting Hive, set the execution engine to Spark:

hive> set hive.execution.engine=spark;

Set Spark's run mode (the master):

hive> set spark.master=spark://hadoop000:7077;

Or, to run on YARN: set spark.master=yarn-client or yarn-cluster (Spark 1.x does not accept the plain value yarn).
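Putting the two settings together, a minimal session sketch against the standalone master used in this walkthrough (page_views is the example table queried below):

hive> set hive.execution.engine=spark;
hive> set spark.master=spark://hadoop000:7077;
hive> select count(*) from page_views;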

Configuring Spark application properties for Hive

These can be configured in spark-defaults.conf or in hive-site.xml; a hive-site.xml sketch follows the list below.

spark.master=<Spark Master URL>
spark.eventLog.enabled=true
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.executor.memory=512m              # Amount of memory to use per executor process.
spark.executor.cores=...                # Number of cores per executor.
spark.yarn.executor.memoryOverhead=...  # Off-heap memory per executor, for YARN mode.
spark.executor.instances=...            # The number of executors assigned to each application.
spark.driver.memory=...                 # The amount of memory assigned to the Remote Spark Context (RSC). We recommend 4GB.
spark.yarn.driver.memoryOverhead=...    # We recommend 400 (MB).
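In hive-site.xml, each of these becomes a <property> entry. A minimal sketch carrying over two of the values used in this walkthrough:

<property>
  <name>spark.master</name>
  <value>spark://hadoop000:7077</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>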

For details on these parameters, see: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

After executing a SQL statement, you can view job/stage information on the monitoring page:

hive (default)> select city_id, count(*) c from page_views group by city_id order by c desc limit 5;
Query ID = spark_20150309173838_444cb5b1-b72e-4fc3-87db-4162e364cb1e
Total jobs = ...
Launching Job ... out of ...
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
state = SENT
state = STARTED
...
Status: Running (Hive on Spark job[...])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
... Stage-0_0: .../... Stage-1_0: .../... Stage-2_0: .../...
state = SUCCEEDED
... Stage-0_0: .../... Finished Stage-1_0: .../... Finished Stage-2_0: .../... Finished
Status: Finished successfully in 10.07 seconds
OK
city_id c
(the five result rows were stripped when this post was extracted)
Time taken: 18.417 seconds, Fetched: 5 row(s)
