Apache Tez Design
http://tez.incubator.apache.org/
http://dongxicheng.org/mapreduce-nextgen/apache-tez/
http://dongxicheng.org/mapreduce-nextgen/apache-tez-newest-progress/
Tez aims to be a general purpose execution runtime that enhances various scenarios that are not well served by classic Map-Reduce.
In the short term the major focus is to support Hive and Pig, specifically to enable performance improvements to batch and ad-hoc interactive queries.

What services will Tez provide
Tez is compatible with traditional map-reduce jobs, but its main focus is DAG-based jobs together with the corresponding APIs and primitives.
Tez provides runtime components:
- An execution environment that can handle traditional map-reduce jobs
- An execution environment that handles DAG-based jobs comprising various built-in and extendable primitives
- Cluster-side determination of input pieces
- Runtime planning such as task cardinality determination and dynamic modification to the DAG structure
Tez provides APIs to access these services:
- Traditional map-reduce functionality is accessed via Java classes written to the Job interface (org.apache.hadoop.mapred.Job and/or org.apache.hadoop.mapreduce.v2.app.job.Job), and by specifying in yarn-site that the map-reduce framework should be Tez.
- DAG-based execution is accessed via the new Tez DAG API: org.apache.tez.dag.api.*, org.apache.tez.engine.api.*.
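For the traditional map-reduce path, the sketch below shows what an unmodified MapReduce driver might look like when only the framework setting is switched to Tez. The property name "mapreduce.framework.name" and the value "yarn-tez" are assumptions for illustration; the design text only says that the configuration must declare Tez as the map-reduce framework.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapReduceJobOnTez {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed property name and value: the design only states that the
    // map-reduce framework must be pointed at Tez in the cluster configuration.
    conf.set("mapreduce.framework.name", "yarn-tez");
    Job job = Job.getInstance(conf, "word-count-on-tez");
    // ... set mapper, reducer, input and output paths exactly as for classic MR ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}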
Tez provides pre-made primitives for use with the DAG API (org.apache.tez.engine.common.*)
- Vertex Input
- Vertex Output
- Sorting
- Shuffling
- Merging
- Data transfer
Tez-YARN architecture
In the above figure Tez is represented by the red components: client-side API, an AppMaster, and multiple containers that execute child processes under the control of the AppMaster.

Three separate software stacks are involved in the execution of a Tez job, each using components from the client application, Tez, and YARN.

DAG topologies and scenarios
The following terminology is used:
Job Vertex: A “stage” in the job plan (a logical vertex).
Job Edge: The logical connection between Job Vertices (a logical edge).
Vertex: A materialized stage at runtime comprising a certain number of materialized tasks running in parallel (a physical vertex).
Edge: Represents actual data movement between tasks (a physical edge).
Task: A process performing computation within a YARN container.
Task cardinality: The number of materialized tasks in a Vertex, i.e. the Vertex's degree of parallelism.
Static plan: Planning decisions fixed before job submission.
Dynamic plan: Planning decisions made at runtime in the AppMaster process.
Tez API
The Tez API comprises the services that support applications in running DAG-style jobs. An application that makes use of Tez will need to:
1. Create a job plan (the DAG) comprising vertices, edges, and data source references
2. Create task implementations that perform computations and interact with the DAG AppMaster
3. Configure Yarn and Tez appropriately
DAG definition API
The abstract interface for defining a DAG:
public class DAG {
  DAG();
  void addVertex(Vertex);
  void addEdge(Edge);
  void addConfiguration(String, String);
  void setName(String);
  void verify();
  DAGPlan createDag();
}
public class Vertex {
  Vertex(String vertexName, String processorName, int parallelism);
  void setTaskResource(Resource);
  void setTaskLocationsHint(TaskLocationHint[]);
  void setJavaOpts(String);
  String getVertexName();
  String getProcessorName();
  int getParallelism();
  Resource getTaskResource();
  TaskLocationHint[] getTaskLocationsHint();
  String getJavaOpts();
}
public class Edge {
  Edge(Vertex inputVertex, Vertex outputVertex, EdgeProperty edgeProperty);
  String getInputVertex();
  String getOutputVertex();
  EdgeProperty getEdgeProperty();
  String getId();
}
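To make the definition API concrete, here is a minimal sketch that assembles a two-vertex DAG with the classes above. The processor class names and the EdgeProperty construction are assumptions, since neither is defined in this document.

// Build a map -> reduce style DAG with the definition API above.
// Processor names and the EdgeProperty value are hypothetical placeholders.
DAG dag = new DAG();
dag.setName("wordcount");

Vertex mapVertex = new Vertex("tokenize", "org.example.TokenizerProcessor", 10); // hypothetical processor, 10 tasks
Vertex reduceVertex = new Vertex("sum", "org.example.SumProcessor", 2);          // hypothetical processor, 2 tasks
dag.addVertex(mapVertex);
dag.addVertex(reduceVertex);

EdgeProperty shuffle = new EdgeProperty(/* data-movement properties; constructor not shown in this doc */);
dag.addEdge(new Edge(mapVertex, reduceVertex, shuffle));

dag.verify();                    // check the structure before submission
DAGPlan plan = dag.createDag();  // produce the plan handed to the DAG AppMaster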
Execution APIs
Tasks are Tez's executors and follow the input/output/processor pattern:
public interface Master {
  // a context object for task execution; currently only a stub
}

public interface Input {
  void initialize(Configuration conf, Master master);
  boolean hasNext();
  Object getNextKey();
  Iterable<Object> getNextValues();
  float getProgress();
  void close();
}

public interface Output {
  void initialize(Configuration conf, Master master);
  void write(Object key, Object value);
  OutputContext getOutputContext();
  void close();
}

public interface Partitioner {
  int getPartition(Object key, Object value, int numPartitions);
}

public interface Processor {
  void initialize(Configuration conf, Master master);
  void process(Input[] in, Output[] out);
  void close();
}

public interface Task {
  void initialize(Configuration conf, Master master);
  Input[] getInputs();
  Processor getProcessor();
  Output[] getOutputs();
  void run();
  void close();
}
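As an illustration of the input/output/processor pattern, the sketch below implements a Processor purely against the interfaces above: for every key it writes the number of associated values to its first output. Exception declarations are omitted because the interfaces above declare none.

// A minimal Processor sketch: counts the values seen for each key and writes
// (key, count) pairs to the first output. Written only against the interfaces above.
public class CountValuesProcessor implements Processor {

  public void initialize(Configuration conf, Master master) {
    // no per-task state to set up in this sketch
  }

  public void process(Input[] in, Output[] out) {
    for (Input input : in) {
      while (input.hasNext()) {
        Object key = input.getNextKey();
        int count = 0;
        for (Object ignored : input.getNextValues()) {
          count++;
        }
        out[0].write(key, count); // route everything to the first output
      }
    }
  }

  public void close() {
    // nothing to release in this sketch
  }
}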