1. Introduction to Tez

Tez is an open-source computing framework from Hortonworks that supports DAG-style jobs. It can merge multiple dependent MapReduce jobs into a single job, which greatly improves performance. Tez is not aimed directly at end users; rather, it lets developers build faster, more scalable applications for those end users.

2. Building Tez

This article documents the build of Tez 0.8.5. Earlier Tez releases shipped only as source packages. The latest releases do provide pre-built tarballs, but these are usually built against a specific Hadoop version; if that version differs from yours, obscure problems can surface later. For stability it is better to build Tez against the Hadoop version you actually run, which is why we compile from source.

(1) After extracting the source, edit pom.xml in the root directory and change the Hadoop version to match your cluster.
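As a minimal sketch, assuming your cluster runs Hadoop 2.6.0-cdh5.10.0 (the version string is an assumption; substitute your own), the relevant property in the root pom.xml would look roughly like this. For a CDH build you may also need to add the Cloudera Maven repository so the CDH artifacts can be resolved.

   <properties>
     <!-- set this to the Hadoop version your cluster runs (assumed value below) -->
     <hadoop.version>2.6.0-cdh5.10.0</hadoop.version>
   </properties>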

(2) Comment out the tez-ui2 sub-module in the root pom.xml. The tez-ui2 build has many pitfalls and often fails.

<modules>
     <module>hadoop-shim</module>
     <module>tez-api</module>
     <module>tez-common</module>
     <module>tez-runtime-library</module>
     <module>tez-runtime-internals</module>
     <module>tez-mapreduce</module>
     <module>tez-examples</module>
     <module>tez-tests</module>
     <module>tez-dag</module>
     <module>tez-ext-service-tests</module>
     <!--
     <module>tez-ui</module>
     <module>tez-ui2</module>
     -->
     <module>tez-plugins</module>
     <module>tez-tools</module>
     <module>hadoop-shim-impls</module>
     <module>tez-dist</module>
     <module>docs</module>
   </modules>

(3) If you build Tez as the root user, remember to edit tez-ui/pom.xml and pass --allow-root so the nodejs-based bower install can run as root:

<execution>
             <id>Bower install</id>
             <phase>generate-sources</phase>
             <goals>
               <goal>exec</goal>
             </goals>
             <configuration>
               <workingDirectory>${webappDir}</workingDirectory>
               <executable>${node.executable}</executable>
               <arguments>
                 <argument>node_modules/bower/bin/bower</argument>
                 <argument>install</argument>
                 <argument>--allow-root</argument>
                 <argument>--remove-unnecessary-resolutions=false</argument>
               </arguments>
             </configuration>
           </execution>

(4) Note that the Linux build machine should ideally have unrestricted internet access (i.e., be able to reach sites blocked by the firewall). If it cannot, comment out tez-ui in the root pom.xml as well: both tez-ui and tez-ui2 download nodejs-related artifacts that are hosted outside the firewall, and without such access the build fails roughly 80% of the time. So if the build fails on nodejs-related steps, comment out all the tez-ui modules so they are skipped. The UI is not essential anyway; it only visualizes job plans, and Tez DAG optimization works fine without it.

(5) Can you install nodejs on the machine yourself and point Tez at the local installation, to avoid downloading from outside the firewall? In my testing, no: Tez appears to bundle its own nodejs dependency and downloads it whenever those modules are built. The cleanest workaround is to comment out everything related to tez-ui.

Once all of the above is done, run the build command:

mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true
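As an alternative to editing the pom by hand, the Hadoop version can usually be overridden on the command line, since the root pom exposes it as a Maven property (the version string below is an assumption; use your own):

mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.version=2.6.0-cdh5.10.0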

After the build succeeds, you will see output like the following:

[INFO] Building jar: /mnt/apache-tez-0.8.5-src/docs/target/tez-docs-0.8.5-tests.jar

[INFO] ------------------------------------------------------------------------

[INFO] Reactor Summary:

[INFO]

[INFO] tez ................................................ SUCCESS [01:57 min]

[INFO] hadoop-shim ........................................ SUCCESS [01:03 min]

[INFO] tez-api ............................................ SUCCESS [01:33 min]

[INFO] tez-common ......................................... SUCCESS [  4.987 s]

[INFO] tez-runtime-internals .............................. SUCCESS [  7.396 s]

[INFO] tez-runtime-library ................................ SUCCESS [ 27.988 s]

[INFO] tez-mapreduce ...................................... SUCCESS [  7.937 s]

[INFO] tez-examples ....................................... SUCCESS [  1.829 s]

[INFO] tez-dag ............................................ SUCCESS [ 34.257 s]

[INFO] tez-tests .......................................... SUCCESS [ 20.367 s]

[INFO] tez-ext-service-tests .............................. SUCCESS [  4.663 s]

[INFO] tez-plugins ........................................ SUCCESS [  0.126 s]

[INFO] tez-yarn-timeline-history .......................... SUCCESS [  2.838 s]

[INFO] tez-yarn-timeline-history-with-acls ................ SUCCESS [  1.692 s]

[INFO] tez-history-parser ................................. SUCCESS [01:31 min]

[INFO] tez-tools .......................................... SUCCESS [  0.169 s]

[INFO] tez-perf-analyzer .................................. SUCCESS [  0.090 s]

[INFO] tez-job-analyzer ................................... SUCCESS [01:19 min]

[INFO] tez-javadoc-tools .................................. SUCCESS [  0.632 s]

[INFO] hadoop-shim-impls .................................. SUCCESS [  0.203 s]

[INFO] hadoop-shim-2.6 .................................... SUCCESS [  0.688 s]

[INFO] tez-dist ........................................... SUCCESS [01:58 min]

[INFO] Tez ................................................ SUCCESS [  0.141 s]

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 11:27 min

[INFO] Finished at: 2017-10-29T21:01:55+08:00

[INFO] Final Memory: 73M/262M

[INFO] ------------------------------------------------------------------------

The build artifacts are placed under tez-dist/target:

cd /mnt/apache-tez-0.8.5-src/tez-dist/target

$ ls

archive-tmp 

maven-archiver 

tez-0.8.5 

tez-0.8.5-minimal 

tez-0.8.5-minimal.tar.gz 

tez-0.8.5.tar.gz 

tez-dist-0.8.5-tests.jar

3. Configuring Hive on Tez

Copy all of the jars under tez-0.8.5 into Hive's lib/ directory, and upload tez-0.8.5.tar.gz to a directory on HDFS:
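The jar copy might look like this minimal sketch; the source and Hive paths below are assumptions based on the directories used elsewhere in this post:

# copy the Tez jars (top level and lib/) into Hive's lib directory
cp /mnt/apache-tez-0.8.5-src/tez-dist/target/tez-0.8.5/*.jar /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/
cp /mnt/apache-tez-0.8.5-src/tez-dist/target/tez-0.8.5/lib/*.jar /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/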

$ /opt/cdh5/hadoop-2.6.0-cdh5.10.0/bin/hdfs dfs -mkdir -p /user/hadoop

$ /opt/cdh5/hadoop-2.6.0-cdh5.10.0/bin/hdfs dfs -put /home/hadoop/tez-0.8.5.tar.gz /user/hadoop

Edit the Tez configuration file etc/hadoop/tez-site.xml (in the Hadoop configuration directory):

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
         <property>
                 <name>tez.lib.uris</name>
                 <value>/user/hadoop/tez-0.8.5.tar.gz</value>
         </property>

</configuration>
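If you choose to upload tez-0.8.5-minimal.tar.gz instead of the full tarball, you will usually also need to tell Tez to use the cluster's Hadoop libraries. A sketch of the extra configuration (not needed for the full tarball used above):

         <property>
                 <name>tez.lib.uris</name>
                 <value>/user/hadoop/tez-0.8.5-minimal.tar.gz</value>
         </property>
         <property>
                 <name>tez.use.cluster.hadoop-libs</name>
                 <value>true</value>
         </property>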

Restart the Hadoop cluster. Then test in the Hive CLI:

set hive.execution.engine=tez;

select count(*) from t1;
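If you want Tez to be the default execution engine rather than setting it per session, a sketch of the corresponding hive-site.xml entry (the per-session set command above works just as well):

         <property>
                 <name>hive.execution.engine</name>
                 <value>tez</value>
         </property>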

Notes:

In this test I installed the Hive that ships with CDH 5.10.0, deployed the Tez package as above, and queries failed; the specific errors are listed in the Common Problems section below.

Deploying Tez with Hive 2.1.0 worked; the result is shown below:

-- Test result:

hive (default)> set hive.execution.engine=tez;

hive (default)> select count(*) from t1;

17/11/05 21:14:54 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive

17/11/05 21:14:54 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name

Query ID = hadoop_20171105211451_0c1df9ef-c3d2-4ec9-b52b-cd5770d7b5b7

Total jobs = 1

Launching Job 1 out of 1

17/11/05 21:14:54 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed

17/11/05 21:14:56 INFO client.RMProxy: Connecting to ResourceManager at db01/192.168.0.181:8032

17/11/05 21:14:56 INFO impl.YarnClientImpl: Submitted application application_1509317142960_0011

17/11/05 21:15:02 INFO client.RMProxy: Connecting to ResourceManager at db01/192.168.0.181:8032

Status: Running (Executing on YARN cluster with App id application_1509317142960_0011)

----------------------------------------------------------------------------------------------
         VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED 

----------------------------------------------------------------------------------------------

Map 1 .......... container     SUCCEEDED      1          1        0        0       0       0 

Reducer 2 ...... container     SUCCEEDED      1          1        0        0       0       0 

----------------------------------------------------------------------------------------------

VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 7.02 s    

----------------------------------------------------------------------------------------------

OK

c0

17/11/05 21:15:10 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir

17/11/05 21:15:10 INFO mapred.FileInputFormat: Total input paths to process : 1

3

Time taken: 18.919 seconds, Fetched: 1 row(s)

hive (default)>

4. Common Problems

1) The problem:

hive (default)> set hive.execution.engine=tez;

hive (default)> select * from t1 order by aa desc;

Query ID = hadoop_20171030053838_a83cb5bd-102f-4362-90b0-3fe3bcda9aa1

Total jobs = 1

Launching Job 1 out of 1

Status: Running (Executing on YARN cluster with App id application_1509312456681_0004)

--------------------------------------------------------------------------------
         VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED

--------------------------------------------------------------------------------

Map 1                 FAILED     -1          0        0       -1       0       0

Reducer 2             KILLED      1          0        0        1       0       0

--------------------------------------------------------------------------------

VERTICES: 00/02  [>>--------------------------] 0%    ELAPSED TIME: 0.24 s    

--------------------------------------------------------------------------------

Status: Failed

Vertex failed, vertexName=Map 1, vertexId=vertex_1509312456681_0004_1_00, diagnostics=[Vertex vertex_1509312456681_0004_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1509312456681_0004_1_00 [Map 1], java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/MRVersion
     at org.apache.hadoop.hive.shims.Hadoop23Shims.isMR2(Hadoop23Shims.java:892)
     at org.apache.hadoop.hive.shims.Hadoop23Shims.getHadoopConfNames(Hadoop23Shims.java:963)
     at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:362)
     at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:377)
     at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:302)
     at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:107)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.MRVersion
     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
     ... 17 more

]

Vertex killed, vertexName=Reducer 2, vertexId=vertex_1509312456681_0004_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1509312456681_0004_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]

DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask

Solution: the missing class org.apache.hadoop.mapred.MRVersion lives in the MR1 hadoop-core jar shipped with CDH, so copy that jar into Hive's lib directory:

cp /opt/cdh5/hadoop-2.6.0-cdh5.10.0/share/hadoop/mapreduce1/hadoop-core-2.6.0-mr1-cdh5.10.0.jar /opt/cdh5/hive-1.1.0-cdh5.10.0/lib/

2) The problem:

hive (default)> set hive.execution.engine=tez;

hive (default)> select * from t1 order by aa desc;

Query ID = hadoop_20171030054343_2707c5bd-650e-4b71-89ae-cc094beafb39

Total jobs = 1

Launching Job 1 out of 1

Status: Running (Executing on YARN cluster with App id application_1509312456681_0005)

--------------------------------------------------------------------------------
         VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED

--------------------------------------------------------------------------------

Map 1                 FAILED     -1          0        0       -1       0       0

Reducer 2             KILLED      1          0        0        1       0       0

--------------------------------------------------------------------------------

VERTICES: 00/02  [>>--------------------------] 0%    ELAPSED TIME: 0.24 s    

--------------------------------------------------------------------------------

Status: Failed

Vertex failed, vertexName=Map 1, vertexId=vertex_1509312456681_0005_1_00, diagnostics=[Vertex vertex_1509312456681_0005_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1509312456681_0005_1_00 [Map 1], java.lang.NoClassDefFoundError: com/esotericsoftware/kryo/Serializer
     at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:107)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
     at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.ClassNotFoundException: com.esotericsoftware.kryo.Serializer
     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
     ... 12 more

]

Vertex killed, vertexName=Reducer 2, vertexId=vertex_1509312456681_0005_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1509312456681_0005_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]

DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask

Solution: I have not yet found the root cause.
