Capturing standard output & exceptions in Oozie with capture-output
This works for ordinary java-actions and shell-actions, as long as the standard output follows the "k1=v1" format. Let's test it with test.py:
##test.py
#!/opt/anaconda3/bin/python
import re
import os
import sys
import traceback

if __name__ == '__main__':
    try:
        print("k1=v1")
        print(aaa)  ## a deliberately introduced error
    except Exception as e:
        print(traceback.format_exc())
    exit(0)  ## Important: capture-output is discarded when the script exits abnormally.
             ## To capture the exception info, always exit normally, then route the
             ## failure through a decision node.
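The pattern above — always exit 0 and encode failures as k=v lines — can be factored into a small helper. This is an illustrative sketch of my own, not part of the original post; the function names `capture_lines` and `job` are hypothetical:

```python
import traceback

def capture_lines(main):
    """Run main() and return the k=v lines to print for Oozie's
    <capture-output/>. It never re-raises: the wrapper script should
    still finish with exit(0), because a non-zero exit code makes
    Oozie discard the captured output entirely."""
    try:
        main()
        return ["status=ok"]
    except Exception:
        # capture-output parses stdout like java.util.Properties
        # (one key=value per line), so flatten the traceback onto one line.
        err = traceback.format_exc().strip().replace("\n", " | ")
        return ["status=error", "error_msg=" + err]

def job():
    raise NameError("name 'aaa' is not defined")  # deliberate error

for line in capture_lines(job):
    print(line)
# ...then exit(0) and branch on status in a decision node, e.g.
# ${wf:actionData('python-node')['status'] eq 'error'}
```

This keeps the decision node's EL expression simple: it only ever inspects a well-formed key instead of scraping a raw traceback.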
#workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="adaf4df46a6597914b9ff6cd80eff542c6a">
<start to="python-node"/>
<action name="python-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property>
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
</property>
</configuration>
<exec>model.py</exec>
<file>model.py</file>
<capture-output/>
</shell>
<ok to="python-node1"/>
<error to="fail"/>
</action>
<action name="python-node1">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property>
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
</property>
</configuration>
<exec>echo</exec>
<argument>k1=${wf:actionData("python-node")["k1"]}</argument>
<capture-output/>
</shell>
<ok to="check-output"/>
<error to="fail"/>
</action>
<decision name="check-output">
<switch>
<case to="end">
${wf:actionData('python-node1')['k1'] eq 'Hello Oozie'}
</case>
<default to="fail"/>
</switch>
</decision>
<kill name="fail">
<message>Python action failed, error message[${wf:actionData('python-node')['k1']}]</message>
<!--message>Python action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message-->
</kill>
<end name="end"/>
</workflow-app>
#job.properties
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=108412.server.bigdata.com.cn:8032
nameNode=hdfs://108474.server.bigdata.com.cn:8020
user.name=root
queueName=test
# Do not add hive to the sharelib setting; it causes errors
#oozie.action.sharelib.for.spark=spark,hive    (for spark-action)
#oozie.action.sharelib.for.sqoop=sqoop,hbase
oozie.wf.application.path=${nameNode}/user/lyy/oozie/test
Place the above test.py (referenced as model.py in the workflow, so rename it accordingly) and workflow.xml in the HDFS directory /user/lyy/oozie/test, then submit with the following command:
oozie job -oozie http://10.8.4.46:11000/oozie -config job.properties -run
Additionally, if the code writes to standard output in a format other than "k=v", the EL function wf:actionData cannot retrieve it, but capture-output still captures it. The output is stored in the data column of the Oozie metadata table oozie.WF_ACTIONS; that column is of type mediumblob and cannot be read directly. The data can instead be retrieved as JSON through the REST API, for example:
http://108446.server.bigdata.com.cn:11000/oozie/v1/job/0000106-181129152008300-oozie-oozi-W
{
"appPath":"hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f",
"acl":null,
"status":"KILLED",
"createdTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"conf":"<configuration>
<property>
<name>user.name</name>
<value>root</value>
</property>
<property>
<name>oozie.use.system.libpath</name>
<value>True</value>
</property>
<property>
<name>mapreduce.job.user.name</name>
<value>root</value>
</property>
<property>
<name>security_enabled</name>
<value>False</value>
</property>
<property>
<name>queueName</name>
<value>ada.spark</value>
</property>
<property>
<name>nameNode</name>
<value>hdfs://108474.server.bigdata.com.cn:8020</value>
</property>
<property>
<name>dryrun</name>
<value>False</value>
</property>
<property>
<name>jobTracker</name>
<value>108412.server.bigdata.com.cn:8032</value>
</property>
<property>
<name>oozie.wf.application.path</name>
<value>hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f</value>
</property>
</configuration>",
"lastModTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"run":0,
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":null,
"appName":"adaf4df46a6597914b9ff6cd80eff542c6a",
"id":"0000106-181129152008300-oozie-oozi-W",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"parentId":null,
"toString":"Workflow id[0000106-181129152008300-oozie-oozi-W] status[KILLED]",
"group":null,
"consoleUrl":"http://108446.server.bigdata.com.cn:11000/oozie?job=0000106-181129152008300-oozie-oozi-W",
"user":"root",
"actions":[
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"python-node",
"externalStatus":"OK",
"cred":"null",
"conf":"",
"type":":START:",
"endTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@:start:",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":":start:",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[:start:] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:50:24 CST 2018
File="./model.py", line 12, in <module>
Traceback=(most recent call last)\:
print(aaa)=
NameError=name 'aaa' is not defined ####this is the error stack trace
k1=v1 ##this is the standard-output line
",
"transition":"python-node1",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>model.py</exec>
<file>model.py</file>
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"externalId":"job_1542533868365_0510",
"id":"0000106-181129152008300-oozie-oozi-W@python-node",
"startTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0510/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:51:16 CST 2018
k1=v1
",
"transition":"check-output",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>echo</exec>
<argument>k1=v1</argument> ##the normal k1=v1 standard output was passed through to the python-node1 node
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"externalId":"job_1542533868365_0511",
"id":"0000106-181129152008300-oozie-oozi-W@python-node1",
"startTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node1",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node1] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0511/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"fail",
"externalStatus":"fail",
"cred":"null",
"conf":"<switch xmlns="uri:oozie:workflow:0.4">
<case to="end">false</case>
<default to="fail" />
</switch>",
"type":"switch",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@check-output",
"startTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"check-output",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[check-output] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":"Python action failed, error message[v1]",
"status":"OK",
"stats":null,
"data":null,
"transition":null,
"externalStatus":"OK",
"cred":"null",
"conf":"Python action failed, error message[${wf:actionData('python-node')['k1']}]",
"type":":KILL:",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@fail",
"startTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"fail",
"errorCode":"E0729",
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[fail] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
}
]
}
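The data field shown above can also be decoded programmatically. The sketch below is my own illustration, not part of the original post: it parses an action's data field, which is stored in java.util.Properties format (one key=value per line, with # lines as comments/timestamps), and fetches a job's JSON from the REST endpoint used above. Note that Properties escaping (e.g. the \: visible in the traceback above) is only handled crudely here.

```python
import json
from urllib.request import urlopen

def parse_action_data(data):
    """Decode an Oozie action's 'data' field (java.util.Properties
    style: one key=value per line; '#' lines are comments)."""
    props = {}
    for line in (data or "").splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            # undo the most common Properties escape
            props[key.strip()] = value.strip().replace("\\:", ":")
    return props

def job_outputs(oozie_url, job_id):
    """Fetch a workflow's JSON and map action name -> captured output."""
    with urlopen("%s/v1/job/%s" % (oozie_url, job_id)) as resp:
        job = json.load(resp)
    return {a["name"]: parse_action_data(a["data"]) for a in job["actions"]}

# Hypothetical usage against the server from this article:
# outputs = job_outputs("http://108446.server.bigdata.com.cn:11000/oozie",
#                       "0000106-181129152008300-oozie-oozi-W")
# outputs["python-node1"] would then contain {'k1': 'v1'}
```

This avoids querying the mediumblob column in the metadata database directly.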
Two workflow.xml variants for submitting Spark jobs through Oozie:
#shell-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapreduce.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>spark2-submit</exec>
<argument>--master</argument>
<argument>yarn</argument>
<argument>--deploy-mode</argument>
<argument>cluster</argument>
<argument>--queue</argument>
<argument>ada.spark</argument>
<argument>--name</argument>
<argument>testYarn</argument>
<argument>--conf</argument>
<argument>spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
<argument>--conf</argument>
<argument>spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
<argument>--jars</argument>
<argument>hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar</argument>
<argument>--files</argument>
<argument>/etc/hive/conf/hive-site.xml</argument>
<argument>--class</argument>
<argument>testYarn.test.Ttest</argument>
<argument>hdfs://10.8.18.74:8020/user/lyy/App/testYarn.test.jar</argument>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
#spark-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="spark-action">
<start to="spark-node"/>
<action name="spark-node">
<spark xmlns="uri:oozie:spark-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapreduce.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<master>yarn</master>
<mode>cluster</mode>
<name>Spark-Action</name>
<class>testYarn.test.Ttest</class>
<jar>${nameNode}/user/lyy/App/testYarn.test.jar</jar>
<spark-opts>--conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8 --conf spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8 --jars hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.yarn.queue=ada.spark --files /etc/hive/conf/hive-site.xml</spark-opts>
</spark>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Spark action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
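Both variants ultimately run the same spark2-submit: the shell-action passes each flag as a separate <argument>, while the spark-action folds them into <spark-opts>. One pitfall either way: the --jars list must be comma-separated with no spaces. A hypothetical helper (my own sketch, not from the original post) that assembles the equivalent argv:

```python
def build_spark_submit(master, deploy_mode, name, main_class, app_jar,
                       confs=(), jars=(), files=()):
    """Assemble the spark2-submit argv that the shell-action executes;
    the spark-action's <spark-opts> carries the same flags."""
    argv = ["spark2-submit", "--master", master,
            "--deploy-mode", deploy_mode, "--name", name]
    for c in confs:
        argv += ["--conf", c]
    if jars:
        argv += ["--jars", ",".join(jars)]    # comma-separated, no spaces
    if files:
        argv += ["--files", ",".join(files)]
    argv += ["--class", main_class, app_jar]
    return argv

cmd = build_spark_submit(
    "yarn", "cluster", "testYarn", "testYarn.test.Ttest",
    "hdfs://10.8.18.74:8020/user/lyy/App/testYarn.test.jar",
    confs=["spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8",
           "spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8"],
    files=["/etc/hive/conf/hive-site.xml"])
```

Building the command this way makes it easy to keep the two workflow variants in sync.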