http://tez.incubator.apache.org/

http://dongxicheng.org/mapreduce-nextgen/apache-tez/

http://dongxicheng.org/mapreduce-nextgen/apache-tez-newest-progress/

 

Tez aims to be a general purpose execution runtime that enhances various scenarios that are not well served by classic Map-Reduce.
In the short term the major focus is to support Hive and Pig, specifically to enable performance improvements to batch and ad-hoc interactive queries.

 

What services will Tez provide

Tez remains compatible with traditional map-reduce jobs, but its main focus is providing DAG-based jobs together with the corresponding APIs and primitives.

Tez provides runtime components:

  • An execution environment that can handle traditional map-reduce jobs
  • An execution environment that handles DAG-based jobs comprising various built-in and extendable primitives
  • Cluster-side determination of input pieces
  • Runtime planning such as task cardinality determination and dynamic modification to the DAG structure

Tez provides APIs to access these services:

  • Traditional map-reduce functionality is accessed via Java classes written to the Job interface: org.apache.hadoop.mapred.Job and/or org.apache.hadoop.mapreduce.v2.app.job.Job;
    and by specifying in yarn-site that the map-reduce framework should be Tez (a configuration sketch follows this list).
  • DAG-based execution is accessed via the new Tez DAG API: org.apache.tez.dag.api.*, org.apache.tez.engine.api.*.
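 
A brief sketch of that framework switch, done programmatically rather than in a config file: the property name mapreduce.framework.name and the value yarn-tez are assumptions based on common MapReduce-on-Tez setups, and the exact key and file (yarn-site vs. mapred-site) may differ between versions.

import org.apache.hadoop.conf.Configuration;

// Minimal sketch, assuming the property "mapreduce.framework.name" set to
// "yarn-tez" routes classic map-reduce jobs through the Tez runtime.
public class TezMapReduceConf {
    public static Configuration create() {
        Configuration conf = new Configuration();
        // Assumed key/value; check the Tez installation guide for your version.
        conf.set("mapreduce.framework.name", "yarn-tez");
        return conf;
    }
}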

Tez provides pre-made primitives for use with the DAG API (org.apache.tez.engine.common.*)

  • Vertex Input
  • Vertex Output
  • Sorting
  • Shuffling
  • Merging
  • Data transfer

 

Tez-YARN architecture

In the above figure, Tez is represented by the red components: a client-side API, an AppMaster, and multiple containers that execute child processes under the control of the AppMaster.

Three separate software stacks are involved in the execution of a Tez job, each using components from the client application, Tez, and YARN.

 

DAG topologies and scenarios

The following terminology is used:

Job Vertex: A “stage” in the job plan; a logical vertex, which can be thought of as a stage.
Job Edge: The logical connection between Job Vertices; a logical edge describing how stages relate.
Vertex: A materialized stage at runtime comprising a certain number of materialized tasks; a physical vertex made up of parallel tasks.
Edge: Represents actual data movement between tasks; a physical edge describing the actual data flow.
Task: A process performing computation within a YARN container; a single execution node.
Task cardinality: The number of materialized tasks in a Vertex; the Vertex's degree of parallelism.
Static plan: Planning decisions fixed before job submission.
Dynamic plan: Planning decisions made at runtime in the AppMaster process.

 

Tez API

The Tez API comprises several services that support applications in running DAG-style jobs. An application that makes use of Tez will need to:
1. Create a job plan (the DAG) comprising vertices, edges, and data source references
2. Create task implementations that perform computations and interact with the DAG AppMaster
3. Configure YARN and Tez appropriately

DAG definition API

The abstract interface for defining a DAG:

public class DAG {
  DAG();
  void addVertex(Vertex vertex);
  void addEdge(Edge edge);
  void addConfiguration(String key, String value);
  void setName(String name);
  void verify();
  DAGPlan createDag();
}

public class Vertex {
  Vertex(String vertexName, String processorName, int parallelism);
  void setTaskResource(Resource resource);
  void setTaskLocationsHint(TaskLocationHint[] locationHints);
  void setJavaOpts(String javaOpts);
  String getVertexName();
  String getProcessorName();
  int getParallelism();
  Resource getTaskResource();
  TaskLocationHint[] getTaskLocationsHint();
  String getJavaOpts();
}

public class Edge {
  Edge(Vertex inputVertex, Vertex outputVertex, EdgeProperty edgeProperty);
  String getInputVertex();
  String getOutputVertex();
  EdgeProperty getEdgeProperty();
  String getId();
}
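 
To make the definition API concrete, here is a minimal sketch that wires a two-stage DAG using the classes above. The processor names are placeholders, and since this document does not show how an EdgeProperty is constructed, it is taken as a parameter rather than built here.

// Minimal sketch: build a map-style vertex and a reduce-style vertex and
// connect them. Processor names and the EdgeProperty are illustrative only.
public class DagExample {
    public static DAGPlan buildPlan(EdgeProperty shuffleEdge) {
        DAG dag = new DAG();
        dag.setName("two-stage-example");

        // Vertex(vertexName, processorName, parallelism): parallelism is the
        // task cardinality of the vertex.
        Vertex map = new Vertex("tokenizer", "example.MapProcessor", 4);
        Vertex reduce = new Vertex("summation", "example.ReduceProcessor", 2);

        dag.addVertex(map);
        dag.addVertex(reduce);
        dag.addEdge(new Edge(map, reduce, shuffleEdge));

        dag.verify();           // validate the plan before submission
        return dag.createDag(); // produce the serializable job plan
    }
}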

Execution APIs

A Task is the unit of execution in Tez and follows the input, output, processor pattern:

// A context object for task execution; currently only a stub.
public interface Master {
}

public interface Input {
  void initialize(Configuration conf, Master master);
  boolean hasNext();
  Object getNextKey();
  Iterable<Object> getNextValues();
  float getProgress();
  void close();
}

public interface Output {
  void initialize(Configuration conf, Master master);
  void write(Object key, Object value);
  OutputContext getOutputContext();
  void close();
}

public interface Partitioner {
  int getPartition(Object key, Object value, int numPartitions);
}

public interface Processor {
  void initialize(Configuration conf, Master master);
  void process(Input[] in, Output[] out);
  void close();
}

public interface Task {
  void initialize(Configuration conf, Master master);
  Input[] getInputs();
  Processor getProcessor();
  Output[] getOutputs();
  void run();
  void close();
}
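 
As an illustration of the execution contract, the following is a minimal sketch of a Processor that simply forwards every key/value pair from its inputs to its first output, written against the interfaces listed above; it assumes the Task wiring has already initialized the inputs and outputs.

import org.apache.hadoop.conf.Configuration;

// Minimal pass-through Processor sketch against the interfaces above.
public class PassThroughProcessor implements Processor {

    @Override
    public void initialize(Configuration conf, Master master) {
        // No setup needed for a pass-through processor.
    }

    @Override
    public void process(Input[] in, Output[] out) {
        for (Input input : in) {
            while (input.hasNext()) {
                Object key = input.getNextKey();
                for (Object value : input.getNextValues()) {
                    out[0].write(key, value); // forward unchanged
                }
            }
        }
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}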
 
