[Hive - LanguageManual] Explain
EXPLAIN Syntax
Hive provides an EXPLAIN command that shows the execution plan for a query. The syntax for this statement is as follows:
EXPLAIN [EXTENDED|DEPENDENCY|AUTHORIZATION] query
AUTHORIZATION is supported as of Hive 0.14.0 via HIVE-5961.
The use of EXTENDED in the EXPLAIN statement produces extra information about the operators in the plan. This is typically physical information like file names.
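For example, running EXPLAIN EXTENDED on a simple query over the src table used later on this page (a minimal sketch; the exact extra detail printed depends on the Hive version):

EXPLAIN EXTENDED SELECT src.key, src.value FROM src;

In addition to the plan itself, the output for such a query would typically include physical details such as the input paths and serde information for src.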
A Hive query gets converted into a sequence of stages (more precisely, a directed acyclic graph of stages). These stages may be map/reduce stages, or they may be stages that perform metastore or file system operations such as move and rename. The EXPLAIN output consists of three parts:
- The Abstract Syntax Tree for the query
- The dependencies between the different stages of the plan
- The description of each of the stages
The description of a stage shows a sequence of operators along with the metadata associated with them. The metadata may include things like the filter expression for the FilterOperator, the select expressions for the SelectOperator, or the output file name for the FileSinkOperator.
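As a hypothetical illustration, a query that both filters and projects, such as:

EXPLAIN SELECT value FROM src WHERE key = '100';

would be expected to show, in its stage description, a Filter Operator carrying the key = '100' predicate, a Select Operator carrying the value expression, and a File Sink Operator naming the (temporary) output location.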
As an example, consider the following EXPLAIN query:
EXPLAIN
FROM src INSERT OVERWRITE TABLE dest_g1 SELECT src.key, sum(substr(src.value,4)) GROUP BY src.key;
The output of this statement contains the following parts:
The Abstract Syntax Tree
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_TAB dest_g1)) (TOK_SELECT (TOK_SELEXPR (TOK_COLREF src key)) (TOK_SELEXPR (TOK_FUNCTION sum (TOK_FUNCTION substr (TOK_COLREF src value) 4)))) (TOK_GROUPBY (TOK_COLREF src key))))

The Dependency Graph
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 depends on stages: Stage-2

This shows that Stage-1 is the root stage, Stage-2 is executed after Stage-1 is done and Stage-0 is executed after Stage-2 is done.
The plans of each Stage
STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        src
            Reduce Output Operator
              key expressions:
                    expr: key
                    type: string
              sort order: +
              Map-reduce partition columns:
                    expr: rand()
                    type: double
              tag: -1
              value expressions:
                    expr: substr(value, 4)
                    type: string
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: sum(UDFToDouble(VALUE.0))
          keys:
                expr: KEY.0
                type: string
          mode: partial1
          File Output Operator
            compressed: false
            table:
                input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                output format: org.apache.hadoop.mapred.SequenceFileOutputFormat
                name: binary_table

  Stage: Stage-2
    Map Reduce
      Alias -> Map Operator Tree:
        /tmp/hive-zshao/67494501/106593589.10001
            Reduce Output Operator
              key expressions:
                    expr: 0
                    type: string
              sort order: +
              Map-reduce partition columns:
                    expr: 0
                    type: string
              tag: -1
              value expressions:
                    expr: 1
                    type: double
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: sum(VALUE.0)
          keys:
                expr: KEY.0
                type: string
          mode: final
          Select Operator
            expressions:
                  expr: 0
                  type: string
                  expr: 1
                  type: double
            Select Operator
              expressions:
                    expr: UDFToInteger(0)
                    type: int
                    expr: 1
                    type: double
              File Output Operator
                compressed: false
                table:
                    input format: org.apache.hadoop.mapred.TextInputFormat
                    output format: org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
                    serde: org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe
                    name: dest_g1

  Stage: Stage-0
    Move Operator
      tables:
            replace: true
            table:
                input format: org.apache.hadoop.mapred.TextInputFormat
                output format: org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
                serde: org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe
                name: dest_g1

In this example there are 2 map/reduce stages (Stage-1 and Stage-2) and 1 file system related stage (Stage-0). Stage-0 basically moves the results from a temporary directory to the directory corresponding to the table dest_g1.
A map/reduce stage itself consists of two parts:
- A mapping from table alias to Map Operator Tree - This mapping tells the mappers which operator tree to call in order to process the rows from a particular table or from the result of a previous map/reduce stage. In Stage-1 in the above example, the rows from the src table are processed by the operator tree rooted at a Reduce Output Operator. Similarly, in Stage-2 the rows of the result of Stage-1 are processed by another operator tree rooted at another Reduce Output Operator. Each of these Reduce Output Operators partitions the data to the reducers according to the criteria shown in the metadata.
- A Reduce Operator Tree - This is the operator tree which processes all the rows on the reducer side of the map/reduce job. In Stage-1, for example, the Reduce Operator Tree carries out a partial aggregation, whereas the Reduce Operator Tree in Stage-2 computes the final aggregation from the partial aggregates computed in Stage-1.
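To experiment with the example above, the two tables could be declared roughly as follows (a sketch only; the column types are inferred from the plan, which casts the key with UDFToInteger and sums a double value):

-- Hypothetical DDL inferred from the example plan; not part of the original setup.
CREATE TABLE src (key STRING, value STRING);
CREATE TABLE dest_g1 (key INT, value DOUBLE);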
The use of DEPENDENCY in the EXPLAIN statement produces extra information about the inputs in the plan. It shows various attributes for the inputs. For example, for a query like:
EXPLAIN DEPENDENCY SELECT key, count(1) FROM srcpart WHERE ds IS NOT NULL GROUP BY key
the following output is produced:
{"input_partitions":[{"partitionName":"default<at:var at:name="srcpart" />ds=2008-04-08/hr=11"},{"partitionName":"default<at:var at:name="srcpart" />ds=2008-04-08/hr=12"},{"partitionName":"default<at:var at:name="srcpart" />ds=2008-04-09/hr=11"},{"partitionName":"default<at:var at:name="srcpart" />ds=2008-04-09/hr=12"}],"input_tables":[{"tablename":"default@srcpart","tabletype":"MANAGED_TABLE"}]} |
The inputs contain both the tables and the partitions. Note that the table is present even if none of the partitions is accessed in the query.
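For instance, restricting the same query to a partition value that (presumably) does not exist should still list srcpart in input_tables while leaving input_partitions empty (a hypothetical illustration):

EXPLAIN DEPENDENCY SELECT key, count(1) FROM srcpart WHERE ds = '2099-01-01' GROUP BY key;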
The dependencies show the parents in case a table is accessed via a view. Consider the following queries:
CREATE VIEW V1 AS SELECT key, value from src;
EXPLAIN DEPENDENCY SELECT * FROM V1;
The following output is produced:
{"input_partitions":[],"input_tables":[{"tablename":"default@v1","tabletype":"VIRTUAL_VIEW"},{"tablename":"default@src","tabletype":"MANAGED_TABLE","tableParents":"[default@v1]"}]} |
As above, the inputs contain the view V1 and the table 'src' that the view V1 refers to.
All the parents are shown if a table is being accessed via multiple parents.
CREATE VIEW V2 AS SELECT ds, key, value FROM srcpart WHERE ds IS NOT NULL;
CREATE VIEW V4 AS SELECT src1.key, src2.value as value1, src3.value as value2 FROM V1 src1 JOIN V2 src2 on src1.key = src2.key JOIN src src3 ON src2.key = src3.key;
EXPLAIN DEPENDENCY SELECT * FROM V4;
The following output is produced.
{"input_partitions":[{"partitionParents":"[default@v2]","partitionName":"default<at:var at:name="srcpart" />ds=2008-04-08/hr=11"},{"partitionParents":"[default@v2]","partitionName":"default<at:var at:name="srcpart" />ds=2008-04-08/hr=12"},{"partitionParents":"[default@v2]","partitionName":"default<at:var at:name="srcpart" />ds=2008-04-09/hr=11"},{"partitionParents":"[default@v2]","partitionName":"default<at:var at:name="srcpart" />ds=2008-04-09/hr=12"}],"input_tables":[{"tablename":"default@v4","tabletype":"VIRTUAL_VIEW"},{"tablename":"default@v2","tabletype":"VIRTUAL_VIEW","tableParents":"[default@v4]"},{"tablename":"default@v1","tabletype":"VIRTUAL_VIEW","tableParents":"[default@v4]"},{"tablename":"default@src","tabletype":"MANAGED_TABLE","tableParents":"[default@v4, default@v1]"},{"tablename":"default@srcpart","tabletype":"MANAGED_TABLE","tableParents":"[default@v2]"}]} |
As can be seen, src is being accessed via parents v1 and v4.
The use of AUTHORIZATION in the EXPLAIN statement shows all the entities that need to be authorized to execute the query, as well as any authorization failures. For example, for a query like:
EXPLAIN AUTHORIZATION SELECT * FROM src JOIN srcpart;
the following output is produced:
INPUTS:
  default@srcpart
  default@src
  default@srcpart@ds=2008-04-08/hr=11
  default@srcpart@ds=2008-04-08/hr=12
  default@srcpart@ds=2008-04-09/hr=11
  default@srcpart@ds=2008-04-09/hr=12
OUTPUTS:
  hdfs://localhost:9000/tmp/.../-mr-10000
CURRENT_USER:
  navis
OPERATION:
  QUERY
AUTHORIZATION_FAILURES:
  Permission denied: Principal [name=navis, type=USER] does not have following privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, name=default.src], [SELECT] on Object [type=TABLE_OR_VIEW, name=default.srcpart]]
With the FORMATTED keyword, the output is returned in JSON format.
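For example, the query above can be rerun with both keywords (a sketch; this assumes your Hive version accepts FORMATTED together with AUTHORIZATION):

EXPLAIN FORMATTED AUTHORIZATION SELECT * FROM src JOIN srcpart;

which produces output like the following: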
"OUTPUTS":["hdfs://localhost:9000/tmp/.../-mr-10000"],"INPUTS":["default@srcpart","default@src","default@srcpart@ds=2008-04-08/hr=11","default@srcpart@ds=2008-04-08/hr=12","default@srcpart@ds=2008-04-09/hr=11","default@srcpart@ds=2008-04-09/hr=12"],"OPERATION":"QUERY","CURRENT_USER":"navis","AUTHORIZATION_FAILURES":["Permission denied: Principal [name=navis, type=USER] does not have following privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, name=default.src], [SELECT] on Object [type=TABLE_OR_VIEW, name=default.srcpart]]"]} |