This works for ordinary java-actions and shell-actions alike: as long as the standard output follows the "k1=v1" format, it can be captured.

Let's test with the following test.py:

#test.py
#! /opt/anaconda3/bin/python
import re
import os
import sys
import traceback

if __name__ == '__main__':
    try:
        print("k1=v1")
        print(aaa)  # a deliberately introduced error
    except Exception as e:
        print(traceback.format_exc())
    # Note: capture-output is discarded when the script exits abnormally,
    # so to get at the exception message you must exit normally and then
    # handle the failure in a decision node.
    exit(0)
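Building on the exit(0) caveat above: since the decision node can only see captured key=value pairs, a convenient pattern is to always exit 0 and publish an explicit status key (plus a flattened error message) for the decision node to branch on. A minimal sketch — the `status`/`error` key names are my own convention, not anything Oozie requires:

```python
import traceback

def run_and_capture(job):
    """Run job(); return key=value pairs for Oozie's capture-output to pick up."""
    try:
        job()
        return {"status": "OK"}
    except Exception:
        # Flatten the traceback to a single line; replace '=' so the message
        # cannot be mistaken for extra key=value pairs by the capture parser.
        err = traceback.format_exc().strip().replace("\n", " | ").replace("=", ":")
        return {"status": "FAILED", "error": err}

def failing_job():
    print(aaa)  # deliberate NameError, as in test.py

# Print the pairs on stdout for <capture-output/>; in the real shell action
# the script would then call exit(0) so the captured output survives, and the
# decision node branches on ${wf:actionData('node')['status']}.
for k, v in run_and_capture(failing_job).items():
    print("%s=%s" % (k, v))
```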

#workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="adaf4df46a6597914b9ff6cd80eff542c6a">
  <start to="python-node"/>
  <action name="python-node">
    <shell xmlns="uri:oozie:shell-action:0.2">
      <job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
      <name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
      <configuration>
        <property>
          <name>oozie.launcher.mapred.job.queue.name</name>
          <value>ada.oozielauncher</value>
        </property>
      </configuration>
      <exec>model.py</exec>
      <file>model.py</file>
      <capture-output/>
    </shell>
    <ok to="python-node1"/>
    <error to="fail"/>
  </action>
  <action name="python-node1">
    <shell xmlns="uri:oozie:shell-action:0.2">
      <job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
      <name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
      <configuration>
        <property>
          <name>oozie.launcher.mapred.job.queue.name</name>
          <value>ada.oozielauncher</value>
        </property>
      </configuration>
      <exec>echo</exec>
      <argument>k1=${wf:actionData("python-node")["k1"]}</argument>
      <capture-output/>
    </shell>
    <ok to="check-output"/>
    <error to="fail"/>
  </action>
  <decision name="check-output">
    <switch>
      <case to="end">
        ${wf:actionData('python-node1')['k1'] eq 'Hello Oozie'}
      </case>
      <default to="fail"/>
    </switch>
  </decision>
  <kill name="fail">
    <message>Python action failed, error message[${wf:actionData('python-node')['k1']}]</message>
    <!--message>Python action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message-->
  </kill>
  <end name="end"/>
</workflow-app>
#job.properties
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=108412.server.bigdata.com.cn:8032
nameNode=hdfs://108474.server.bigdata.com.cn:8020
user.name=root
queueName=test
# Do not put hive into the sharelib configuration; it causes an error
#oozie.action.sharelib.for.spark=spark,hive
#oozie.action.sharelib.for.sqoop=sqoop,hbase
oozie.wf.application.path=${nameNode}/user/lyy/oozie/test

Put the test.py above (the workflow references it as model.py, so rename it accordingly) and workflow.xml into the HDFS directory /user/lyy/oozie/test, then submit with the following command:

oozie job -oozie http://10.8.4.46:11000/oozie -config job.properties -run
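The same submission can also be done through Oozie's REST API (POST to `/v1/jobs?action=start` with the job properties rendered as a `<configuration>` XML document). A hedged sketch in Python — the `build_conf`/`submit` helper names are mine; the host and path are the ones used in this article:

```python
from urllib.request import Request, urlopen
from xml.sax.saxutils import escape

def build_conf(props):
    """Render job properties as the Oozie <configuration> XML document."""
    parts = ["<configuration>"]
    for name, value in props.items():
        parts.append("<property><name>%s</name><value>%s</value></property>"
                     % (escape(name), escape(str(value))))
    parts.append("</configuration>")
    return "".join(parts)

def submit(oozie_base_url, props):
    """POST the configuration; Oozie answers with JSON like {"id": "<job-id>"}."""
    req = Request(oozie_base_url + "/v1/jobs?action=start",
                  data=build_conf(props).encode("utf-8"),
                  headers={"Content-Type": "application/xml;charset=UTF-8"})
    return urlopen(req).read()

# Example (network call, against this article's server):
#   submit("http://10.8.4.46:11000/oozie", {
#       "user.name": "root",
#       "oozie.wf.application.path":
#           "hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/test",
#   })
```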

One more point: if the code writes to standard output in a format other than "k=v", the EL function wf:actionData cannot retrieve it. capture-output still captures the stdout, however, and stores it in the `data` column of the Oozie metadata table oozie.WF_ACTIONS. That column is of type mediumblob and cannot be read directly, but the same information is available as JSON through the REST API, for example:

http://108446.server.bigdata.com.cn:11000/oozie/v1/job/0000106-181129152008300-oozie-oozi-W

{
"appPath":"hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f",
"acl":null,
"status":"KILLED",
"createdTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"conf":"<configuration>
<property>
<name>user.name</name>
<value>root</value>
</property>
<property>
<name>oozie.use.system.libpath</name>
<value>True</value>
</property>
<property>
<name>mapreduce.job.user.name</name>
<value>root</value>
</property>
<property>
<name>security_enabled</name>
<value>False</value>
</property>
<property>
<name>queueName</name>
<value>ada.spark</value>
</property>
<property>
<name>nameNode</name>
<value>hdfs://108474.server.bigdata.com.cn:8020</value>
</property>
<property>
<name>dryrun</name>
<value>False</value>
</property>
<property>
<name>jobTracker</name>
<value>108412.server.bigdata.com.cn:8032</value>
</property>
<property>
<name>oozie.wf.application.path</name>
<value>hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f</value>
</property>
</configuration>",
"lastModTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"run":0,
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":null,
"appName":"adaf4df46a6597914b9ff6cd80eff542c6a",
"id":"0000106-181129152008300-oozie-oozi-W",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"parentId":null,
"toString":"Workflow id[0000106-181129152008300-oozie-oozi-W] status[KILLED]",
"group":null,
"consoleUrl":"http://108446.server.bigdata.com.cn:11000/oozie?job=0000106-181129152008300-oozie-oozi-W",
"user":"root",
"actions":[
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"python-node",
"externalStatus":"OK",
"cred":"null",
"conf":"",
"type":":START:",
"endTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@:start:",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":":start:",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[:start:] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:50:24 CST 2018
File="./model.py", line 12, in <module>
Traceback=(most recent call last)\:
print(aaa)=
NameError=name 'aaa' is not defined ##this is the error stack trace, mangled into key=value form
k1=v1 ##this is the normal standard-output line
",
"transition":"python-node1",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>model.py</exec>
<file>model.py</file>
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"externalId":"job_1542533868365_0510",
"id":"0000106-181129152008300-oozie-oozi-W@python-node",
"startTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0510/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:51:16 CST 2018
k1=v1
",
"transition":"check-output",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>echo</exec>
<argument>k1=v1</argument> ##here the normal "k1=v1" stdout has been passed through to the python-node1 action
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"externalId":"job_1542533868365_0511",
"id":"0000106-181129152008300-oozie-oozi-W@python-node1",
"startTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node1",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node1] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0511/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"fail",
"externalStatus":"fail",
"cred":"null",
"conf":"<switch xmlns="uri:oozie:workflow:0.4">
<case to="end">false</case>
<default to="fail" />
</switch>",
"type":"switch",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@check-output",
"startTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"check-output",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[check-output] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":"Python action failed, error message[v1]",
"status":"OK",
"stats":null,
"data":null,
"transition":null,
"externalStatus":"OK",
"cred":"null",
"conf":"Python action failed, error message[${wf:actionData('python-node')['k1']}]",
"type":":KILL:",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@fail",
"startTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"fail",
"errorCode":"E0729",
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[fail] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
}
]
}
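As the dump shows, each action's `data` field is java.util.Properties text: `#` comment lines (a timestamp header) followed by key=value lines — which is exactly why non-`k=v` stdout gets captured yet stays invisible to wf:actionData. A sketch of pulling and parsing it over the REST API shown above (the `/v1/job/<id>` URL layout is the one used in this article; the function names are my own):

```python
import json
from urllib.request import urlopen

def parse_action_data(blob):
    """Parse one action's captured-output blob (java.util.Properties text)
    into a dict -- the same view that wf:actionData exposes in the workflow."""
    props = {}
    for line in (blob or "").splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and the '#'-prefixed timestamp header
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props

def action_outputs(job_info):
    """Map action name -> parsed capture-output for one job-info payload."""
    return {a["name"]: parse_action_data(a.get("data"))
            for a in job_info.get("actions", [])}

def fetch_job_info(base_url, job_id):
    """GET the JSON shown above from the Oozie REST API (network call)."""
    return json.load(urlopen("%s/v1/job/%s" % (base_url, job_id)))

# Example (against this article's server):
#   info = fetch_job_info("http://108446.server.bigdata.com.cn:11000/oozie",
#                         "0000106-181129152008300-oozie-oozi-W")
#   print(action_outputs(info))
```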

Two ways to submit a Spark job through Oozie — the workflow.xml for each:

#shell-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
  <start to="shell-node"/>
  <action name="shell-node">
    <shell xmlns="uri:oozie:shell-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapreduce.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <exec>spark2-submit</exec>
      <argument>--master</argument>
      <argument>yarn</argument>
      <argument>--deploy-mode</argument>
      <argument>cluster</argument>
      <argument>--queue</argument>
      <argument>ada.spark</argument>
      <argument>--name</argument>
      <argument>testYarn</argument>
      <argument>--conf</argument>
      <argument>spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
      <argument>--conf</argument>
      <argument>spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
      <argument>--jars</argument>
      <argument>hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar</argument>
      <argument>--files</argument>
      <argument>/etc/hive/conf/hive-site.xml</argument>
      <argument>--class</argument>
      <argument>testYarn.test.Ttest</argument>
      <argument>hdfs://10.8.18.74:8020/user/lyy/App/testYarn.test.jar</argument>
      <capture-output/>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>

#spark-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="spark-action">
  <start to="spark-node"/>
  <action name="spark-node">
    <spark xmlns="uri:oozie:spark-action:0.1">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapreduce.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <master>yarn</master>
      <mode>cluster</mode>
      <name>Spark-Action</name>
      <class>testYarn.test.Ttest</class>
      <jar>${nameNode}/user/lyy/App/testYarn.test.jar</jar>
      <spark-opts>--conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8 --conf spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8 --jars hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.yarn.queue=ada.spark --files /etc/hive/conf/hive-site.xml</spark-opts>
    </spark>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Spark action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
