Original post: http://zh.hortonworks.com/blog/apache-hadoop-yarn-resourcemanager/

ResourceManager (RM) is the master that arbitrates all the available cluster resources and thus helps manage the distributed applications running on the YARN system. It works together with the per-node NodeManagers (NMs) and the per-application ApplicationMasters (AMs).

  1. NodeManagers take instructions from the ResourceManager and manage resources available on a single node.
  2. ApplicationMasters are responsible for negotiating resources with the ResourceManager and for working with the NodeManagers to start the containers.

ResourceManager Components

The ResourceManager has the following components (see the figure in the original post):

  1. Components interfacing RM to the clients:

    • ClientService: The client interface to the ResourceManager. This component handles all the RPC interfaces to the RM from clients, including operations such as application submission, application termination, obtaining queue information, cluster statistics, etc. (a minimal client-side sketch follows this list).
    • AdminService: To make sure that admin requests don’t get starved by normal users’ requests and to give operators’ commands higher priority, all the admin operations, such as refreshing the node list and the queues’ configuration, are served via this separate interface.
  2. Components connecting RM to the nodes:
    • ResourceTrackerService: This is the component that responds to RPCs from all the nodes. It is responsible for registering new nodes, rejecting requests from any invalid/decommissioned nodes, obtaining node heartbeats, and forwarding them to the YarnScheduler. It works closely with the NMLivelinessMonitor and NodesListManager described below.
    • NMLivelinessMonitor: To keep track of live nodes and specifically note down dead nodes, this component tracks each node’s last heartbeat time. Any node that doesn’t heartbeat within a configured interval of time, by default 10 minutes, is deemed dead and is expired by the RM. All the containers currently running on an expired node are marked as dead, and no new containers are scheduled on such a node (the relevant configuration keys appear in the sketch after this list).
    • NodesListManager: A collection of valid and excluded nodes. It is responsible for reading the host configuration files specified via yarn.resourcemanager.nodes.include-path and yarn.resourcemanager.nodes.exclude-path and seeding the initial list of nodes based on those files. It also keeps track of nodes that are decommissioned as time progresses.
  3. Components interacting with the per-application AMs:
    • ApplicationMasterService: This is the component that responds to RPCs from all the AMs. It is responsible for registering new AMs, handling termination/unregister requests from any finishing AMs, obtaining container allocation and deallocation requests from all running AMs, and forwarding them to the YarnScheduler. It works closely with the AMLivelinessMonitor described below (an AM-side sketch follows this list).
    • AMLivelinessMonitor: To help manage the list of live AMs and dead/non-responding AMs, this component keeps track of each AM and its last heartbeat time. Any AM that doesn’t heartbeat within a configured interval of time, by default 10 minutes, is deemed dead and is expired by the RM. All the containers currently running/allocated to an expired AM are marked as dead. The RM then schedules a new attempt of that AM in a new container, allowing up to a maximum of 4 such attempts by default.
  4. The core of the ResourceManager – the scheduler and related components:
    • ApplicationsManager: Responsible for maintaining a collection of submitted applications. It also keeps a cache of completed applications so as to serve users’ requests via the web UI or command line long after the applications in question have finished.
    • ApplicationACLsManager: The RM needs to gate the user-facing APIs, like the client and admin requests, so they are accessible only to authorized users. This component maintains the ACL list per application and enforces it whenever a request such as killing an application or viewing an application’s status is received.
    • ApplicationMasterLauncher: Maintains a thread pool to launch AMs of newly submitted applications as well as applications whose previous AM attempts exited for some reason. It is also responsible for cleaning up the AM when an application has finished normally or has been forcefully terminated.
    • YarnScheduler: The Scheduler is responsible for allocating resources to the various running applications subject to constraints of capacities, queues, etc. It performs its scheduling function based on the resource requirements of the applications, such as memory, CPU, disk, and network. Currently, only memory is supported, and support for CPU is close to completion.
    • ContainerAllocationExpirer: This component is in charge of ensuring that all allocated containers are used by AMs and subsequently launched on the corresponding NMs. AMs run as untrusted user code and can potentially hold on to allocations without using them, which can cause cluster under-utilization. To address this, the ContainerAllocationExpirer maintains the list of allocated containers that are still not used on the corresponding NMs. For any container, if the corresponding NM doesn’t report to the RM that the container has started running within a configured interval of time, by default 10 minutes, the container is deemed dead and is expired by the RM.
  5. TokenSecretManagers (for security): The ResourceManager has a collection of SecretManagers which are charged with managing the tokens and secret keys that are used to authenticate/authorize requests on various RPC interfaces. A future post on YARN security will cover a more detailed description of the tokens, secret keys, and secret managers, but a brief summary follows:
    • ApplicationTokenSecretManager: To prevent arbitrary processes from sending scheduling requests to the RM, the RM uses per-application tokens called ApplicationTokens. This component saves each token locally in memory until the application finishes and uses it to authenticate any request coming from a valid AM process.
    • ContainerTokenSecretManager: SecretManager for ContainerTokens, which are special tokens issued by the RM to an AM for a container on a specific node. ContainerTokens are used by AMs to create a connection to the corresponding NM where the container is allocated. This component is RM-specific, keeps track of the underlying master and secret keys, and rolls the keys every so often.
    • RMDelegationTokenSecretManager: A ResourceManager-specific delegation-token secret manager. It is responsible for generating delegation tokens for clients, which can be passed on to unauthenticated processes that wish to be able to talk to the RM.
  6. DelegationTokenRenewer: In secure mode, the RM is Kerberos-authenticated and so provides the service of renewing file-system tokens on behalf of applications. This component renews tokens of submitted applications as long as the application runs and until the tokens can no longer be renewed.
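
As a concrete illustration of the ClientService interface described above, here is a minimal client-side sketch using the YarnClient convenience library that wraps these RPCs. This is a sketch rather than the post's own code: the queue name is illustrative, error handling is omitted, and the status/termination calls are shown as comments because they apply only to an already-submitted application.

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RmClientSketch {
        public static void main(String[] args) throws Exception {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(new YarnConfiguration());
            yarnClient.start();

            // Application submission starts by asking the RM for a new application id.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationId appId = app.getNewApplicationResponse().getApplicationId();

            // Queue information and cluster statistics come over the same interface.
            System.out.println(yarnClient.getQueueInfo("default"));
            System.out.println(yarnClient.getYarnClusterMetrics());

            // For an already-submitted application, status and termination go
            // through this same interface:
            //   yarnClient.getApplicationReport(appId);
            //   yarnClient.killApplication(appId);

            yarnClient.stop();
        }
    }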
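
The liveliness and node-list behaviors above are all driven by ordinary YARN configuration properties. The sketch below reads the relevant keys through the YarnConfiguration API; the property names are the standard ones from yarn-default.xml, while the fallback values mirror the 10-minute and 4-attempt defaults the post describes (check yarn-default.xml for your version's actual defaults).

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RmLivenessSettings {
        public static void main(String[] args) {
            YarnConfiguration conf = new YarnConfiguration();

            // NMLivelinessMonitor: a node silent for this long is expired.
            long nmExpiryMs = conf.getLong(
                "yarn.nm.liveness-monitor.expiry-interval-ms", 600000L);

            // AMLivelinessMonitor: an AM silent for this long is expired.
            long amExpiryMs = conf.getLong(
                "yarn.am.liveness-monitor.expiry-interval-ms", 600000L);

            // ContainerAllocationExpirer: unused allocations expire after this long.
            long allocExpiryMs = conf.getLong(
                "yarn.resourcemanager.rm.container-allocation.expiry-interval-ms", 600000L);

            // Maximum AM attempts before the application is failed.
            int maxAmAttempts = conf.getInt("yarn.resourcemanager.am.max-attempts", 4);

            // NodesListManager: host include/exclude files (empty means "no file").
            String includePath = conf.get("yarn.resourcemanager.nodes.include-path", "");
            String excludePath = conf.get("yarn.resourcemanager.nodes.exclude-path", "");

            System.out.printf("NM expiry: %d ms, AM expiry: %d ms, alloc expiry: %d ms%n",
                    nmExpiryMs, amExpiryMs, allocExpiryMs);
            System.out.printf("max AM attempts: %d, include: %s, exclude: %s%n",
                    maxAmAttempts, includePath, excludePath);
        }
    }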
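
On the other side of ApplicationMasterService, an AM's registration, container requests, and heartbeats look roughly like the sketch below, written against the AMRMClient convenience library. The host name, resource sizes, and priority are made-up values, and a real run only works from inside an AM container launched by the RM.

    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AmSketch {
        public static void main(String[] args) throws Exception {
            AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
            rm.init(new YarnConfiguration());
            rm.start();

            // Registration with the ApplicationMasterService (hypothetical host).
            rm.registerApplicationMaster("am-host.example.com", 0, "");

            // One container request: 1024 MB, 1 vcore, no locality constraint.
            rm.addContainerRequest(new ContainerRequest(
                    Resource.newInstance(1024, 1), null, null, Priority.newInstance(0)));

            // allocate() doubles as the heartbeat that AMLivelinessMonitor tracks.
            AllocateResponse response = rm.allocate(0.0f);
            System.out.println("Allocated: " + response.getAllocatedContainers());

            // Unregistering tells the RM the AM finished rather than expired.
            rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
            rm.stop();
        }
    }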

Conclusion

In YARN, the ResourceManager is primarily limited to scheduling, i.e., only arbitrating available resources in the system among the competing applications, and does not concern itself with per-application state management. Because of this clear separation of responsibilities, coupled with the modularity described above and the powerful scheduler API discussed in the previous post, the RM is able to address the most important design requirements: scalability and support for alternate programming paradigms.

To allow for different policy constraints, the scheduler described above is pluggable in the RM and allows for different algorithms (see the configuration sketch below). In a future post of this series, we will dig deeper into various features of the CapacityScheduler, which schedules containers based on capacity guarantees and queues.
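
As a minimal sketch of that pluggability, the scheduler implementation is chosen by class name through configuration; the CapacityScheduler class below is the stock one shipped with YARN, and the actual default for your version comes from yarn-default.xml.

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class SchedulerChoice {
        public static void main(String[] args) {
            YarnConfiguration conf = new YarnConfiguration();
            // The pluggable scheduler is selected by class name.
            String scheduler = conf.get(
                    "yarn.resourcemanager.scheduler.class",
                    "org.apache.hadoop.yarn.server.resourcemanager"
                            + ".scheduler.capacity.CapacityScheduler");
            System.out.println("Configured scheduler: " + scheduler);
        }
    }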

The next post will dive into details of the NodeManager, the component responsible for managing the containers’ life cycle and much more.
