hadoop job -kill vs. yarn application -kill (job stuck, job submitted repeatedly, or a MapReduce task hanging at "running job")
Problem details


Solution
[hadoop@master ~]$ hadoop job -kill job_1493782088693_0001
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO impl.YarnClientImpl: Killed application application_1493782088693_0001
Killed job job_1493782088693_0001
[hadoop@master ~]$ hadoop job -kill job_1493782088693_0002
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO impl.YarnClientImpl: Killed application application_1493782088693_0001
Killed job job_1493782088693_0002
[hadoop@master ~]$ hadoop job -kill job_1493782088693_0003
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO impl.YarnClientImpl: Killed application application_1493782088693_0001
Killed job job_1493782088693_0003
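
As the DEPRECATED warning above says, the same operations go through the mapred front end on Hadoop 2.x; a minimal sketch, reusing a job ID from this session:

# non-deprecated equivalents of the kill and list commands used in this post
mapred job -kill job_1493782088693_0001
mapred job -list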
Sometimes killing jobs this way does not actually help, and you have to follow up with yarn application -kill:
[hadoop@master ~]$ yarn application -kill application_1493782088693_0001
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application application_1493782088693_0001 has already finished
[hadoop@master ~]$ yarn application -kill application_1493782088693_0002
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application application_1493782088693_0002 has already finished
[hadoop@master ~]$ yarn application -kill application_1493782088693_0003
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application application_1493782088693_0003 has already finished
[hadoop@master ~]$
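
When several stuck applications pile up, they can also be listed and killed in one pass; a minimal sketch that relies only on standard YARN CLI flags:

# kill every application the ResourceManager still reports as RUNNING
for app in $(yarn application -list -appStates RUNNING 2>/dev/null | awk '/^application_/{print $1}'); do
    yarn application -kill "$app"
done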

[hadoop@master ~]$ hadoop job -list
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
Total jobs:
JobId State StartTime UserName Queue Priority UsedContainers RsvdContainers UsedMem RsvdMem NeededMem AM info
[hadoop@master ~]$
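
The same "nothing left running" state can be confirmed from the YARN side; a small sketch (the -appStates filter is part of the standard yarn application CLI):

# list anything still in a live state on the ResourceManager; an empty table means all clear
yarn application -list -appStates NEW,SUBMITTED,ACCEPTED,RUNNING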
At the same time, pay attention to whether a daemon process has disappeared.

It may also be that a daemon on slave1 or slave2 has silently died. Be aware that this is a very easy problem to miss; a quick check across the nodes is sketched below.
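
A minimal sketch, assuming passwordless SSH between the nodes (which a start-all.sh setup normally has) and the hostnames used throughout this post:

# run jps on every node and look for missing daemons such as NodeManager or DataNode
for host in master slave1 slave2; do
    echo "== $host =="
    ssh "$host" jps
done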
Stop the cluster, then start it again.
If the problem still occurs, then adjust the relevant YARN configuration parameters.


For an explanation of the configuration parameters involved here, see
Hadoop YARN配置参数剖析(2)—权限与日志聚集相关参数 (Hadoop YARN configuration parameters explained, part 2: permission and log-aggregation parameters)
Note that the change has to be made on master, slave1, and slave2; then run [hadoop@master hadoop-2.6.0]$ sbin/stop-all.sh
followed by [hadoop@master hadoop-2.6.0]$ sbin/start-all.sh, and you are done. A sketch of the whole sequence follows.
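
A sketch of pushing the same change to every node and restarting; it assumes the parameters live in etc/hadoop/yarn-site.xml and that the ~/hadoop-2.6.0 layout and passwordless SSH match the prompts above:

# copy the edited yarn-site.xml from master to both slaves, then restart the cluster
cd ~/hadoop-2.6.0
for host in slave1 slave2; do
    scp etc/hadoop/yarn-site.xml "$host":hadoop-2.6.0/etc/hadoop/
done
sbin/stop-all.sh
sbin/start-all.sh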



Success!