The anti-spam RD team has an HQL job that aborts with a java.io.IOException: Broken pipe. The HQL invokes a Python script, and neither the HQL nor the script has been changed recently: the job still ran fine on Oct 1, but every run since Oct 4 has failed with the same error, always in the reduce phase of stage-2. The error shown on the gateway looks like this:

2014-10-10 15:05:32,724 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_201406171104_4019895 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

The job error from the JobTracker page:

2014-10-10 15:00:29,614 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"1000390355","reducesinkkey1":"14"},"value":{"_col0":"1000390355","_col1":25,"_col2":"Infinity","_col3":"14","_col4":17},"alias":0}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:268)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)
at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1061)
at org.apache.hadoop.mapred.Child.main(Child.java:253)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"1000390355","reducesinkkey1":"14"},"value":{"_col0":"1000390355","_col1":25,"_col2":"Infinity","_col3":"14","_col4":17},"alias":0}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:256)
... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Broken pipe
at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:348)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
... 7 more
Caused by: java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:260)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.hive.ql.exec.TextRecordWriter.write(TextRecordWriter.java:43)
at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:331)
... 15 more

stderr logs:

Traceback (most recent call last):
File "/data10/hadoop/local/taskTracker/liangjun/jobcache/job_201406171104_4019895/attempt_201406171104_4019895_r_000000_0/work/./pranalysis.py", line 86, in <module>
pranalysis(cols[0],pr,cols[1],cols[4],prnum)
File "/data10/hadoop/local/taskTracker/liangjun/jobcache/job_201406171104_4019895/attempt_201406171104_4019895_r_000000_0/work/./pranalysis.py", line 60, in pranalysis
print '%s\t%d\t%d\t%d'%(uid,v[14]-20,type,rank)
TypeError: %d format: a number is required, not float

From the job's error messages, the initial guess is that something in the data produced after Oct 1 is broken, causing the Python script to exit mid-run. When the script dies, its end of the pipe is closed, but ExecReducer.reduce() has no way to know that the channel it feeds the script through has been shut down by the failure; it keeps writing rows into it, and that is when java.io.IOException: Broken pipe appears.
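
The mechanism behind the exception is easy to reproduce outside Hive. The following minimal sketch (my own example, not taken from the job; the child command is arbitrary) starts a child process that dies immediately and then keeps writing to its stdin, which is exactly the situation ScriptOperator is in once pranalysis.py has crashed:

#!/usr/bin/python
# Minimal Broken pipe repro (assumed example, not from the job): keep writing
# to a child process whose read end of the pipe is already closed.
import subprocess

# the child plays the role of a transform script that crashes right away
child = subprocess.Popen(['python', '-c', 'import sys; sys.exit(1)'],
                         stdin=subprocess.PIPE)
child.wait()                               # child has exited; the pipe's read end is closed

try:
    for i in range(100000):
        child.stdin.write('row %d\n' % i)  # like ScriptOperator feeding table rows
        child.stdin.flush()
except IOError, e:
    print 'caught: %s' % e                 # IOError: [Errno 32] Broken pipe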

The analysis process follows:

1. The HQL and the Python script

The HQL is:

add file /usr/home/wbdata_anti/shell/sass_offline/pranalysis.py;
select transform(BS.*) using 'pranalysis.py' as uid,prvalue,trend,prlevel
from
(
select B1.uid,B1.flws,B1.pr,iter,B2.alivefans from tmp_anti_user_pagerank1 B1
join
mds_anti_user_flwpr B2
on B1.uid=B2.uid
where iter>'00' and iter<='14' and dt='lowrlfans20141001'
distribute by uid sort by uid,iter
)BS;

The Python script is:

#!/usr/bin/python
#coding=utf-8
import sys,time
import re,math
from optparse import OptionParser
import ConfigParser

reload(sys)
sys.setdefaultencoding('utf-8')

parser = OptionParser(usage="usage:%prog [optinos] filepath")
parser.add_option("-i", "--iter", action = "store", type = 'string', dest = "iter", default = '14',
                  help="how many iterators")
(options, args) = parser.parse_args()

def pranalysis(uid,prs,flw,fans,prnum):
    tasc=tdesc=0
    try:
        v=[float(pr)*100000000000 for pr in prs]
        fans=int(fans)
        interval=fans/100
    except:
        #rst=sys.exc_info()
        #sys.excepthook(rst[0],rst[1],rst[2])
        return
    for i in range(1,prnum-1):
        if i==1:
            if v[i+1]-v[i]>interval and v>fans: tasc += 1
            elif v[i]-v[i+1]>interval and v[i+1]<fans: tdesc += 1
            continue
        if v[i+1]-v[i]>interval: tasc += 1
        elif v[i]-v[i+1]>interval: tdesc += 1

    # rank indicates the ratio between pr and fans; a higher rank (bigger
    # number) means the user is more likely a negative user
    rate=v[prnum-1]/fans
    rank=4
    if rate>3.0: rank=0
    elif rate>2.0: rank=1
    elif rate>1.3: rank=2
    elif rate>0.7: rank=3
    elif rate>0.5: rank=4
    elif rate>0.3: rank=5
    elif rate>0.2: rank=6
    else: rank=7

    # 0 for stable trend, 1 for round trend, 2 for positive user, 3 for negative user
    type=0
    if tasc>0 and tdesc>0:
        type=1
    elif tasc>0:
        type=2
    elif tdesc>0:
        type=3
    else: # tdesc=0 and tasc=0
        type=0
    #if fans<60:
    #    type=0
    # line 60 in the job's copy of the script: per the stderr log, the
    # TypeError is raised here when v[14]-20 is a float %d cannot format
    print '%s\t%d\t%d\t%d'%(uid,v[14]-20,type,rank)

#format: sorted by uid, iter
#uid follow pr iter fans
#1642909335 919 0.00070398898 04 68399779
prnum=int(options.iter)+1
pr=[0]*prnum
idx=1
lastiter='00'
lastuid=''
for line in sys.stdin:
    line=line.rstrip('\n')
    cols=line.split('\t')
    if len(cols)<5: continue
    if cols[3]>options.iter or cols[3]=='00': continue
    if cols[3]<=lastiter:
        print '%s\t%d\t%d\t%d'%(lastuid,2,0,7)
        pr=[0]*prnum
        idx=1
    lastiter=cols[3]
    lastuid=cols[0]
    pr[idx]=cols[2]
    idx+=1
    if cols[3]==options.iter:
        pranalysis(cols[0],pr,cols[1],cols[4],prnum)
        pr=[0]*prnum
        lastiter='00'
        idx=1
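
To rule out environment issues, the script can also be exercised locally, outside Hadoop. Below is a small harness sketch: the sample row is the one from the script's own comment, with iter set to '14' so the branch calling pranalysis() is reached; the script path and interpreter name are assumptions.

#!/usr/bin/python
# Local harness sketch: feed pranalysis.py one row over a pipe, the same way
# ScriptOperator does. Paths and the sample row's iter value are assumptions.
import subprocess

row = '\t'.join(['1642909335', '919', '0.00070398898', '14', '68399779'])
p = subprocess.Popen(['python', 'pranalysis.py', '-i', '14'],
                     stdin=subprocess.PIPE)
p.communicate(row + '\n')                  # write the row, close stdin, wait
print 'exit status: %d' % p.returncode

Feeding it rows sampled from the post-Oct-1 partitions (for example, ones whose pr column is the literal "Infinity" seen in the failing row above) should reproduce the crash directly.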

2. The execution plan of the stage-2 reduce phase:

Reduce Operator Tree:
  Extract
    Select Operator
      expressions:
            expr: _col0
            type: string
            expr: _col1
            type: bigint
            expr: _col2
            type: string
            expr: _col3
            type: string
            expr: _col4
            type: bigint
      outputColumnNames: _col0, _col1, _col2, _col3, _col4
      Transform Operator
        command: pranalysis.py
        output info:
            input format: org.apache.hadoop.mapred.TextInputFormat
            output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
        File Output Operator
          compressed: false
          GlobalTableId: 0
          table:
              input format: org.apache.hadoop.mapred.TextInputFormat
              output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

The execution plan shows that the stage-2 reduce phase is actually very simple: it pipes the rows produced by the map phase through pranalysis.py, turning 5 columns into 4. The script's output line must match a fixed format:

print '%s\t%d\t%d\t%d'%(uid,v[14]-20,type,rank)

Combining what the execution plan tells us with the job's stderr logs:

Traceback (most recent call last):
File "/data10/hadoop/local/taskTracker/liangjun/jobcache/job_201406171104_4019895/attempt_201406171104_4019895_r_000000_0/work/./pranalysis.py", line 86, in <module>
pranalysis(cols[0],pr,cols[1],cols[4],prnum)
File "/data10/hadoop/local/taskTracker/liangjun/jobcache/job_201406171104_4019895/attempt_201406171104_4019895_r_000000_0/work/./pranalysis.py", line 60, in pranalysis
print '%s\t%d\t%d\t%d'%(uid,v[14]-20,type,rank)
TypeError: %d format: a number is required, not float

we can see that the HQL did fail while running the Python script, and because of bad data: one of the values the script computes ends up as a float that the %d conversion cannot format. Note the "Infinity" in _col2 (the pr column) of the failing row shown earlier; float('Infinity') sails through the conversion in pranalysis() as inf, but cannot be printed with %d. The script exits on the TypeError and its end of the pipe is closed, yet ExecReducer.reduce() does not know that the channel it writes the script's input through is gone and keeps writing rows into it, which surfaces as the java.io.IOException: Broken pipe.
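
The failing value is easy to demonstrate in isolation. A hedged sketch follows; the exact exception type varies with the Python version (the cluster's interpreter raised the TypeError above), and the guard at the end is a suggestion, not the team's actual fix:

#!/usr/bin/python
# Sketch of the suspected bad value and a defensive guard (the guard is an
# assumption, not the team's actual fix).
import math

v = float('Infinity') * 100000000000      # the conversion pranalysis() applies; still inf
try:
    print '%d' % (v - 20)                 # %d cannot render infinity
except (TypeError, OverflowError), e:     # exception type depends on the interpreter
    print 'format failed: %s' % e

# validate before printing: skip or clamp non-finite values instead of dying
if math.isinf(v) or math.isnan(v):
    v = 0.0                               # or skip the row entirely
print '%d' % int(v - 20)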

References:

http://fgh2011.iteye.com/blog/1684544

http://blog.csdn.net/churylin/article/details/11969925
