Apache Hadoop YARN – NodeManager (Reposted)
Original article: http://zh.hortonworks.com/blog/apache-hadoop-yarn-nodemanager/
The NodeManager (NM) is YARN’s per-node agent, and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers’ life-cycle management, monitoring the resource usage (memory, CPU) of individual containers, tracking node health, managing logs, and running auxiliary services that may be exploited by different YARN applications.
NodeManager Components

- NodeStatusUpdater
On startup, this component registers with the RM and sends information about the resources available on the node. Subsequent NM–RM communication provides updates on container statuses – new containers running on the node, completed containers, and so on.
In addition, the RM may signal the NodeStatusUpdater to kill already-running containers, as sketched below.
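The overall register-then-heartbeat flow can be pictured as follows. This is a purely illustrative sketch – every name in it is a hypothetical stand-in, not a class from YARN’s internal ResourceTracker protocol:

```java
import java.util.Collections;
import java.util.List;

// Illustrative only: all names below are hypothetical stand-ins for the
// NM -> RM protocol, not YARN's actual internal classes.
public class NodeStatusUpdaterSketch {

    static class NodeStatus {            // what the NM reports on each beat
        List<String> containerStatuses = Collections.emptyList();
        boolean healthy = true;
    }

    static class HeartbeatResponse {     // what the RM sends back
        List<String> containersToKill = Collections.emptyList();
    }

    public static void main(String[] args) throws InterruptedException {
        register();  // one-time: advertise the node's total memory and vcores
        while (true) {
            HeartbeatResponse resp = heartbeat(collectStatus());
            // The RM piggybacks commands on the response, e.g. containers to kill.
            for (String containerId : resp.containersToKill) {
                kill(containerId);
            }
            Thread.sleep(1000);  // heartbeat interval
        }
    }

    static void register() { /* NM -> RM registration */ }
    static NodeStatus collectStatus() { return new NodeStatus(); }
    static HeartbeatResponse heartbeat(NodeStatus s) { return new HeartbeatResponse(); }
    static void kill(String containerId) { /* stop the container process */ }
}
```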
- ContainerManager
This is the core of the NodeManager. It is composed of the following sub-components, each of which performs a subset of the functionality that is needed to manage containers running on the node.
- RPC server: ContainerManager accepts requests from Application Masters (AMs) to start new containers, or to stop running ones. It works with ContainerTokenSecretManager (described below) to authorize all requests. All the operations performed on containers running on this node are written to an audit-log which can be post-processed by security tools.
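From the AM’s side, these start/stop requests usually go through the NMClient library rather than hand-rolled RPC. A minimal sketch, assuming an RM-allocated Container and a prepared ContainerLaunchContext are already in hand:

```java
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class StartContainerSketch {
    // `container` comes from an RM allocation; `ctx` is a prepared launch context.
    public static void launchAndStop(Container container, ContainerLaunchContext ctx)
            throws Exception {
        NMClient nmClient = NMClient.createNMClient();
        nmClient.init(new YarnConfiguration());
        nmClient.start();
        // Start request; the NM authorizes it against the container token.
        nmClient.startContainer(container, ctx);
        // ... later, ask the NM to stop it.
        nmClient.stopContainer(container.getId(), container.getNodeId());
        nmClient.stop();
    }
}
```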
- ResourceLocalizationService: Responsible for securely downloading and organizing the various file resources needed by containers. It tries its best to distribute the files across all the available disks. It also enforces access-control restrictions on the downloaded files and puts appropriate usage limits on them.
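From the requester’s perspective, each file to be localized is described by a LocalResource record; the service uses the visibility, size, and timestamp to download, verify, and share the file appropriately. A sketch using the Hadoop 2.x API (the HDFS path is an example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class LocalResourceSketch {
    // Describe one HDFS file for localization; the path is hypothetical.
    public static LocalResource describe(Configuration conf) throws Exception {
        Path path = new Path("hdfs:///apps/myapp/job.jar");
        FileStatus stat = FileSystem.get(conf).getFileStatus(path);
        return LocalResource.newInstance(
            ConverterUtils.getYarnUrlFromPath(path),    // where to fetch from
            LocalResourceType.FILE,                     // FILE, ARCHIVE, or PATTERN
            LocalResourceVisibility.APPLICATION,        // shared only within this app
            stat.getLen(), stat.getModificationTime()); // used to verify the download
    }
}
```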
- ContainersLauncher: Maintains a pool of threads to prepare and launch containers as quickly as possible. It also cleans up a container’s processes when the RM or an AM requests it.
- AuxServices: The NM provides a framework for extending its functionality by configuring auxiliary services. This allows per-node custom services that specific frameworks may require, while still sandboxing them from the rest of the NM. These services have to be configured before the NM starts. Auxiliary services are notified when an application’s first container starts on the node, and when the application is considered to be complete.
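In Hadoop 2.x, such a service is written by extending the AuxiliaryService base class; a skeleton sketch (the service id `my_aux` is a made-up example):

```java
import java.nio.ByteBuffer;
import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext;
import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext;
import org.apache.hadoop.yarn.server.api.AuxiliaryService;

// Skeleton of a per-node auxiliary service; "my_aux" is an example id.
public class MyAuxService extends AuxiliaryService {

    public MyAuxService() {
        super("my_aux");  // the service id that AMs refer to
    }

    @Override
    public void initializeApplication(ApplicationInitializationContext ctx) {
        // Called when an application's first container starts on this node.
    }

    @Override
    public void stopApplication(ApplicationTerminationContext ctx) {
        // Called when the application is considered complete on this node.
    }

    @Override
    public ByteBuffer getMetaData() {
        // Returned to the AM at container start, e.g. a port number.
        return ByteBuffer.allocate(0);
    }
}
```

It is then registered in yarn-site.xml by listing `my_aux` under `yarn.nodemanager.aux-services` and pointing `yarn.nodemanager.aux-services.my_aux.class` at the implementation class.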
- ContainersMonitor: After a container is launched, this component starts observing its resource utilization while the container is running. To enforce isolation and fair sharing of resources like memory, each container is allocated some amount of such a resource by the RM. The ContainersMonitor monitors each container’s usage continuously and if a container exceeds its allocation, it signals the container to be killed. This is done to prevent any runaway container from adversely affecting other well-behaved containers running on the same node.
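The enforcement thresholds are driven by NM configuration. The following sketch shows the relevant Hadoop 2.x keys being set programmatically (they normally live in yarn-site.xml; the values shown are the common defaults):

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ContainersMonitorConfigSketch {
    public static YarnConfiguration monitoringConf() {
        YarnConfiguration conf = new YarnConfiguration();
        // Kill containers whose physical memory exceeds their allocation.
        conf.setBoolean("yarn.nodemanager.pmem-check-enabled", true);
        // Also police virtual memory, allowing vmem up to ratio * pmem.
        conf.setBoolean("yarn.nodemanager.vmem-check-enabled", true);
        conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 2.1f);
        return conf;
    }
}
```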
- LogHandler: A pluggable component with the option of either keeping the containers’ logs on the local disks or zipping them together and uploading them onto a file-system.
- ContainerExecutor
Interacts with the underlying operating system to securely place files and directories needed by containers and subsequently to launch and clean up processes corresponding to containers in a secure manner.
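Which executor implementation is used is itself configurable. On secure clusters, the LinuxContainerExecutor launches container processes as the submitting user through a setuid helper binary, whereas the default executor runs everything as the NM’s own user. A sketch of the Hadoop 2.x keys involved:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ContainerExecutorConfigSketch {
    public static YarnConfiguration secureExecutorConf() {
        YarnConfiguration conf = new YarnConfiguration();
        // Default is DefaultContainerExecutor (runs containers as the NM user).
        conf.set("yarn.nodemanager.container-executor.class",
                 "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
        // Group that owns the setuid container-executor binary.
        conf.set("yarn.nodemanager.linux-container-executor.group", "hadoop");
        return conf;
    }
}
```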
- NodeHealthCheckerService
Checks the health of the node by periodically running a configured script. It also monitors the health of the disks specifically, by creating temporary files on them every so often. Any change in the health of the system is reported to the NodeStatusUpdater (described above), which in turn passes the information on to the RM.
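The health script and its cadence are standard NM settings; a sketch of the Hadoop 2.x keys, shown programmatically (the script path is an example – by convention the script marks the node unhealthy by printing a line beginning with ERROR):

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NodeHealthConfigSketch {
    public static YarnConfiguration healthCheckConf() {
        YarnConfiguration conf = new YarnConfiguration();
        // Example path; the script signals "unhealthy" via ERROR output lines.
        conf.set("yarn.nodemanager.health-checker.script.path",
                 "/etc/hadoop/node-health.sh");
        conf.setLong("yarn.nodemanager.health-checker.interval-ms", 600_000L);       // every 10 min
        conf.setLong("yarn.nodemanager.health-checker.script.timeout-ms", 1_200_000L);
        return conf;
    }
}
```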
- Security
- ApplicationACLsManager: The NM needs to gate user-facing APIs, such as the display of container logs on the web UI, so that they are accessible only to authorized users. This component maintains the ACL list for each application and enforces the ACLs whenever such a request is received.
- ContainerTokenSecretManager: Verifies various incoming requests to ensure that all incoming operations are indeed properly authorized by the RM.
- WebServer
Exposes the list of applications and containers running on the node at a given point in time, node-health information, and the logs produced by the containers.
Spotlight on Key Functionality
- Container Launch
To facilitate container launch, the NM expects to receive detailed information about a container’s runtime as part of the container-specification. This includes the container’s command line, environment variables, a list of (file) resources required by the container, and any security tokens.
On receiving a container-launch request, the NM first verifies the request – if security is enabled – to authorize the user, validate the resource assignment, and so on. The NM then performs the following steps to launch the container (a minimal launch-context sketch follows the steps):
- A local copy of all the specified resources is created (Distributed Cache).
- Isolated work directories are created for the container, and the local resources are made available in these directories.
- The launch environment and command line are used to start the actual container.
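Seen from the requester’s side, all three ingredients travel in a single ContainerLaunchContext. A minimal sketch using the Hadoop 2.x records API (the environment, command, and class names are examples):

```java
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;

public class LaunchContextSketch {
    // Assemble the container-specification the NM consumes at launch time.
    public static ContainerLaunchContext build(Map<String, LocalResource> localResources,
                                               ByteBuffer tokens) {
        Map<String, String> env = Collections.singletonMap("JAVA_HOME", "/usr/java/default");
        return ContainerLaunchContext.newInstance(
            localResources,                                  // files the NM localizes (step 1)
            env,                                             // launch environment (step 3)
            Collections.singletonList(                       // command line (step 3)
                "$JAVA_HOME/bin/java -Xmx512m my.example.Worker "
                + "1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"),
            null,                                            // serviceData for aux services
            tokens,                                          // security tokens
            null);                                           // application ACLs
    }
}
```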
- Log Aggregation
Handling user logs has been one of the big pain points for Hadoop installations in the past. Instead of truncating user logs and leaving them on individual nodes, as the TaskTracker did, the NM addresses the log-management issue by providing the option to move these logs securely onto a file system (FS), e.g. HDFS, after the application completes.
Logs for all the containers belonging to a single application that ran on this NM are aggregated and written out to a single (possibly compressed) log file at a configured location in the FS. Users have access to these logs via the YARN command-line tools, the web UI, or directly from the FS.
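Log aggregation is switched on cluster-wide and pointed at a remote directory. A sketch of the standard Hadoop 2.x keys (the values shown are the common defaults):

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LogAggregationConfigSketch {
    public static YarnConfiguration logAggregationConf() {
        YarnConfiguration conf = new YarnConfiguration();
        conf.setBoolean("yarn.log-aggregation-enable", true);     // off by default
        // FS directory the per-application aggregated logs are written under.
        conf.set("yarn.nodemanager.remote-app-log-dir", "/tmp/logs");
        conf.set("yarn.nodemanager.remote-app-log-dir-suffix", "logs");
        return conf;
    }
}
```

Once aggregated, a user can fetch the combined logs with `yarn logs -applicationId <appId>` instead of visiting each node.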
- How MapReduce shuffle takes advantage of NM’s Auxiliary-services
The shuffle functionality required to run a MapReduce (MR) application is implemented as an auxiliary service. This service starts up a Netty web server and knows how to handle MR-specific shuffle requests from Reduce tasks. The MR AM specifies the service id for the shuffle service, along with any security tokens that may be required. The NM provides the AM with the port on which the shuffle service is running, which is passed on to the Reduce tasks.
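Concretely, enabling this in Hadoop 2.x means registering the service under the id the MR framework expects and naming its implementation class – normally in yarn-site.xml, shown here programmatically:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ShuffleServiceConfigSketch {
    public static YarnConfiguration shuffleConf() {
        YarnConfiguration conf = new YarnConfiguration();
        // Register the aux service under the id MR reduce tasks ask for.
        conf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle");
        conf.set("yarn.nodemanager.aux-services.mapreduce_shuffle.class",
                 "org.apache.hadoop.mapred.ShuffleHandler");
        return conf;
    }
}
```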
Conclusion
In YARN, the NodeManager is primarily limited to managing abstract containers, i.e., only the processes corresponding to a container, without concerning itself with per-application state management such as MapReduce tasks. It also does away with the notion of named slots, such as map and reduce slots. Because of this clear separation of responsibilities, coupled with the modular architecture described above, the NM can scale much more easily and its code is much more maintainable.