The SQL the user ran: select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3;

The following tests were run:

1. beeline -u jdbc:hive2://0.0.0.0:10000 -e "select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3;"

Error:

INFO  : Starting Job = job_1494385775332_0822, Tracking URL = http://hadoop1.tj2.yiducloud.cn:8088/proxy/application_1494385775332_0822/
INFO : Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1494385775332_0822
ERROR : Ended Job = job_1494385775332_0822 with exception 'java.io.IOException(Job status not available )'
java.io.IOException: Job status not available
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:)
at org.apache.hadoop.mapreduce.Job.getJobState(Job.java:)
at org.apache.hadoop.mapred.JobClient$NetworkedJob.getJobState(JobClient.java:)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:)
at org.apache.hive.service.cli.operation.SQLOperation.access$(SQLOperation.java:)
at org.apache.hive.service.cli.operation.SQLOperation$$.run(SQLOperation.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hive.service.cli.operation.SQLOperation$.run(SQLOperation.java:)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
at java.util.concurrent.FutureTask.run(FutureTask.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=)

Error reported on YARN:

Diagnostics: 
Staging dir does not exist /mr-history/anonymous/.staging
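The missing directory matches the submitting user: without -n, the job is submitted as the anonymous user, so the framework expects /mr-history/anonymous/.staging to exist. A quick check of the staging root and of where it is configured (a sketch; /etc/hadoop/conf is an assumed, distribution-dependent config path):

# Does the staging root exist at all, and does the per-user directory from the Diagnostics exist?
hdfs dfs -ls /mr-history
hdfs dfs -ls /mr-history/anonymous/.staging
# Which staging root is configured for MapReduce on this host?
grep -A1 yarn.app.mapreduce.am.staging-dir /etc/hadoop/conf/mapred-site.xml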

2. beeline -u jdbc:hive2://0.0.0.0:10000 -n rd -e "select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3;"

Error stack:

INFO  : Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1494385775332_0823
INFO : Hadoop job information for Stage-: number of mappers: ; number of reducers:
INFO : -- ::, Stage- map = %, reduce = %
ERROR : Ended Job = job_1494385775332_0823 with exception 'java.io.IOException(java.io.IOException: Unknown Job job_1494385775332_0823
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBServiceImpl.java:)
at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$.callBlockingMethod(MRClientProtocol.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)
)'
java.io.IOException: java.io.IOException: Unknown Job job_1494385775332_0823
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBServiceImpl.java:)
at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$.callBlockingMethod(MRClientProtocol.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:)
at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:)
at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:)
at org.apache.hadoop.mapred.JobClient$NetworkedJob.getTaskCompletionEvents(JobClient.java:)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.computeReducerTimeStatsPerJob(HadoopJobExecHelper.java:)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:)
at org.apache.hive.service.cli.operation.SQLOperation.access$(SQLOperation.java:)
at org.apache.hive.service.cli.operation.SQLOperation$$.run(SQLOperation.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hive.service.cli.operation.SQLOperation$.run(SQLOperation.java:)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
at java.util.concurrent.FutureTask.run(FutureTask.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.io.IOException: Unknown Job job_1494385775332_0823
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBServiceImpl.java:)
at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$.callBlockingMethod(MRClientProtocol.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.unwrapAndThrowException(MRClientProtocolPBClientImpl.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBClientImpl.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:)
... more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unknown Job job_1494385775332_0823
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBServiceImpl.java:)
at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$.callBlockingMethod(MRClientProtocol.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy47.getTaskAttemptCompletionEvents(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBClientImpl.java:)
... more
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=)

Error reported on YARN:

Diagnostics: 
Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://tijmu/mr-history/rd/.staging/job_1494385775332_0823/job.splitmetainfo    
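Here the ApplicationMaster failed during init because job.splitmetainfo was not found under the staging directory it was told to use. A hedged way to inspect this (the hdfs://tijmu prefix and job id come from the diagnostics above; whether anything exists there depends on the cluster):

# Staging directory the AM looked in, and the job directory that should hold job.splitmetainfo
hdfs dfs -ls hdfs://tijmu/mr-history/rd/.staging
hdfs dfs -ls hdfs://tijmu/mr-history/rd/.staging/job_1494385775332_0823
# If the job's files were written under a different staging root, a mismatch in
# yarn.app.mapreduce.am.staging-dir between the submitting client and the server side is the likely cause.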

3. beeline -u jdbc:hive2://0.0.0.0:10000 -n work -e "xxx": same error as in test 1.

4. hive -e "select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3;" finishes normally, which suggests the problem lies on the HiveServer2 side rather than in the cluster itself.

5. Suspicion: the HDFS configuration of the 代谢病 project was modified during permission debugging.

6. Solution: searching on the Diagnostics message from test 2 turned up this article: https://community.hortonworks.com/questions/17489/job-init-fail-job-splitmetainfo-file-does-not-exis.html

Comparing the value of this property (yarn.app.mapreduce.am.staging-dir) between the Hive CLI and HiveServer2 showed that they were indeed different; after adopting the Hive CLI's value in HiveServer2 (set yarn.app.mapreduce.am.staging-dir=/user/history;), the HQL executed normally.
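For reference, a minimal way to reproduce the comparison and the session-level fix (the value /user/history is the one reported above; the JDBC URL and user come from the earlier tests):

# Value seen by the Hive CLI (local client-side configuration)
hive -e "set yarn.app.mapreduce.am.staging-dir;"

# Value seen through HiveServer2 (server-side configuration)
beeline -u jdbc:hive2://0.0.0.0:10000 -n rd -e "set yarn.app.mapreduce.am.staging-dir;"

# Session-level fix: adopt the Hive CLI value before running the query
beeline -u jdbc:hive2://0.0.0.0:10000 -n rd -e "set yarn.app.mapreduce.am.staging-dir=/user/history; select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3;"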

On another cluster, this property on HiveServer2 is also set to /mr-history, HDFS likewise contains the directory /mr-history/rd/.staging with the same permissions, yet execution there is fine, so the issue is presumably still related to point 5.
Spot checks of earlier clusters show the value is /user everywhere. The key point is that the two places must be configured with the same value.
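To avoid relying on a per-session set, the same value can be written into the configuration that HiveServer2 reads and the server restarted (a sketch; the config paths below are assumptions and depend on your distribution):

# Find where the property is currently defined on the HiveServer2 host
grep -r -A1 yarn.app.mapreduce.am.staging-dir /etc/hive/conf /etc/hadoop/conf
# Edit the matching <value> so that the Hive client and HiveServer2 agree
# (/user/history in this case), then restart HiveServer2 so the new value takes effect.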
