Original article: http://zh.hortonworks.com/blog/apache-hadoop-yarn-nodemanager/

The NodeManager (NM) is YARN's per-node agent, and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers' life-cycle management, monitoring the resource usage (memory, CPU) of individual containers, tracking node health, managing logs, and running auxiliary services which may be exploited by different YARN applications.

NodeManager Components

    1. NodeStatusUpdater

On startup, this component registers with the RM and sends information about the resources available on the node. Subsequent NM-RM communication provides updates on container statuses – new containers running on the node, completed containers, and so on.

In addition, the RM may signal the NodeStatusUpdater to kill already-running containers.
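Conceptually, the register-then-heartbeat flow looks like the sketch below. Every type here (ResourceTracker, NodeStatus, HeartbeatResponse) is a simplified, hypothetical stand-in for YARN's internal ResourceTracker protocol, not its actual API.

```java
import java.util.List;

// Hypothetical sketch of the NodeStatusUpdater flow; all types are
// illustrative stand-ins, not YARN's real internal classes.
interface NodeStatus { }                        // container statuses, node health
interface HeartbeatResponse {
  List<String> containersToKill();              // kill commands from the RM
}
interface ResourceTracker {
  void registerNode(String nodeId, long memMb, int vcores); // one-time registration
  HeartbeatResponse nodeHeartbeat(NodeStatus status);       // periodic status update
}

class NodeStatusUpdaterSketch {
  private final ResourceTracker rm;

  NodeStatusUpdaterSketch(ResourceTracker rm) { this.rm = rm; }

  void run(String nodeId) throws InterruptedException {
    // On startup: register and report the node's available resources.
    rm.registerNode(nodeId, 8192, 8);
    while (true) {
      // Each heartbeat reports new and completed container statuses...
      HeartbeatResponse resp = rm.nodeHeartbeat(collectStatus());
      // ...and the response may instruct the NM to kill running containers.
      for (String containerId : resp.containersToKill()) {
        kill(containerId);
      }
      Thread.sleep(1000);
    }
  }

  private NodeStatus collectStatus() { return new NodeStatus() { }; }
  private void kill(String containerId) { /* delegate to the ContainerManager */ }
}
```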

    2. ContainerManager

This is the core of the NodeManager. It is composed of the following sub-components, each of which performs a subset of the functionality that is needed to manage containers running on the node.

      1. RPC server: ContainerManager accepts requests from Application Masters (AMs) to start new containers, or to stop running ones. It works with ContainerTokenSecretManager (described below) to authorize all requests. All the operations performed on containers running on this node are written to an audit-log which can be post-processed by security tools.
      2. ResourceLocalizationService: Responsible for securely downloading and organizing various file resources needed by containers. It tries its best to distribute the files across all the available disks. It also enforces access control restrictions of the downloaded files and puts appropriate usage limits on them.
      3. ContainersLauncher: Maintains a pool of threads to prepare and launch containers as quickly as possible. Also cleans up the containers’ processes when such a request is sent by the RM or the ApplicationMasters (AMs).
      4. AuxServices: The NM provides a framework for extending its functionality by configuring auxiliary services. This allows per-node custom services that specific frameworks may require, while still sandboxing them from the rest of the NM. These services have to be configured before the NM starts. Auxiliary services are notified when an application's first container starts on the node, and when the application is considered to be complete (see the sketch after this list).
      5. ContainersMonitor: After a container is launched, this component starts observing its resource utilization while the container is running. To enforce isolation and fair sharing of resources like memory, each container is allocated some amount of such a resource by the RM. The ContainersMonitor monitors each container’s usage continuously and if a container exceeds its allocation, it signals the container to be killed. This is done to prevent any runaway container from adversely affecting other well-behaved containers running on the same node.
      6. LogHandler: A pluggable component with the option of either keeping the containers’ logs on the local disks or zipping them together and uploading them onto a file-system.
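As an illustration of the AuxServices hook in item 4 above, here is a minimal sketch of a custom auxiliary service built on YARN's AuxiliaryService base class; the service name and logging behavior are hypothetical. The NM instantiates it from yarn-site.xml configuration: the service name is listed under yarn.nodemanager.aux-services, and its class under yarn.nodemanager.aux-services.<name>.class.

```java
import java.nio.ByteBuffer;
import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext;
import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext;
import org.apache.hadoop.yarn.server.api.AuxiliaryService;

// Hypothetical per-node service that reacts to application lifecycle events.
public class AppLifecycleAuxService extends AuxiliaryService {

  public AppLifecycleAuxService() {
    // The name must match an entry in yarn.nodemanager.aux-services.
    super("app_lifecycle");
  }

  @Override
  public void initializeApplication(ApplicationInitializationContext ctx) {
    // Called when an application's first container starts on this node.
    System.out.println("First container on this node for " + ctx.getApplicationId());
  }

  @Override
  public void stopApplication(ApplicationTerminationContext ctx) {
    // Called when the application is considered complete.
    System.out.println("Application finished: " + ctx.getApplicationId());
  }

  @Override
  public ByteBuffer getMetaData() {
    // Opaque payload returned to the AM when a container starts; the MR
    // shuffle service, for example, returns its port this way.
    return ByteBuffer.allocate(0);
  }
}
```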
    3. ContainerExecutor

Interacts with the underlying operating system to securely place files and directories needed by containers and subsequently to launch and clean up processes corresponding to containers in a secure manner.
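Which executor implementation the NM uses is configurable. A common setup on secure clusters (a sketch of the relevant yarn-site.xml entry) swaps the default executor for LinuxContainerExecutor, which launches container processes as the submitting user via a setuid helper binary:

```xml
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
```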

    4. NodeHealthCheckerService

Checks the health of the node by frequently running a configured script. It also specifically monitors the health of the disks by periodically creating temporary files on them. Any change in the health of the system is reported to the NodeStatusUpdater (described above), which in turn passes the information on to the RM.
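A minimal sketch of the health-script configuration in yarn-site.xml is shown below; the script path is hypothetical. By convention, the NM treats any script output line beginning with the string ERROR as a report that the node is unhealthy.

```xml
<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/usr/local/bin/nm-health.sh</value> <!-- hypothetical script -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.interval-ms</name>
  <value>600000</value> <!-- run the script every 10 minutes -->
</property>
```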

    5. Security
      1. ApplicationACLsManager: The NM needs to gate user-facing APIs, such as the display of container logs on the web UI, so that they are accessible only to authorized users. This component maintains the per-application ACL lists and enforces them whenever such a request is received.
      2. ContainerTokenSecretManager: Verifies various incoming requests to ensure that all incoming operations are indeed properly authorized by the RM.
    6. WebServer

Exposes the list of applications and containers running on the node at a given point in time, node-health-related information, and the logs produced by the containers.
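Assuming the default web-app port (8042, set via yarn.nodemanager.webapp.address), the web UI and its REST counterparts are reachable at, for example:

```
http://<nm-host>:8042/node                   # NM web UI
http://<nm-host>:8042/ws/v1/node/apps        # applications on this node
http://<nm-host>:8042/ws/v1/node/containers  # containers on this node
```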

Spotlight on Key Functionality

    1. Container Launch

To facilitate container launch, the NM expects to receive detailed information about a container’s runtime as part of the container-specifications. This includes the container’s command line, environment variables, a list of (file) resources required by the container and any security tokens.
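These container-specifications correspond to YARN's ContainerLaunchContext record. Below is a minimal sketch of how an AM might assemble one; MyTask is a hypothetical main class, and the local-resource map is left empty for brevity.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;

public class LaunchContextSketch {
  static ContainerLaunchContext build() {
    // File resources to localize on the node; left empty here for brevity.
    Map<String, LocalResource> localResources = Collections.emptyMap();
    Map<String, String> env = Collections.singletonMap("CLASSPATH", "./*");
    // <LOG_DIR> is a token the NM expands to the container's log directory.
    List<String> commands = Collections.singletonList(
        "$JAVA_HOME/bin/java MyTask 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr");

    return ContainerLaunchContext.newInstance(
        localResources, // (file) resources required by the container
        env,            // environment variables
        commands,       // the container's command line
        null,           // serviceData handed to auxiliary services
        null,           // security tokens (required on secure clusters)
        null);          // application ACLs
  }
}
```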

On receiving a container-launch request, the NM first verifies it, if security is enabled: it authorizes the user, validates the resource assignment, and so on. The NM then performs the following steps to launch the container.

      1. A local copy of all the specified resources is created (Distributed Cache).
      2. Isolated work directories are created for the container, and the local resources are made available in these directories.
      3. The launch environment and command line are used to start the actual container.
    2. Log Aggregation

Handling user logs has been one of the big pain points for Hadoop installations in the past. Instead of truncating user logs and leaving them on individual nodes, as the TaskTracker does, the NM addresses log management by providing the option to move these logs securely onto a file-system (FS), e.g. HDFS, after the application completes.

Logs for all containers belonging to a single application that ran on this NM are aggregated and written out to a single (possibly compressed) log file at a configured location in the FS. Users can access these logs via the YARN command-line tools, the web UI, or directly from the FS.
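For example, once log aggregation is enabled (yarn.log-aggregation-enable set to true in yarn-site.xml), the aggregated logs of a finished application can be fetched from the command line:

```
yarn logs -applicationId <application-id>
```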

    3. How MapReduce shuffle takes advantage of the NM's auxiliary services

The shuffle functionality required to run a MapReduce (MR) application is implemented as an auxiliary service. This service starts up a Netty web server and knows how to handle MR-specific shuffle requests from Reduce tasks. The MR AM specifies the service id for the shuffle service, along with any security tokens that may be required. The NM provides the AM with the port on which the shuffle service runs, which is passed on to the Reduce tasks.
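In current Hadoop 2.x releases this service is wired up in yarn-site.xml along the following lines:

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```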

Conclusion

In YARN, the NodeManager is primarily limited to managing abstract containers, i.e. only the processes corresponding to a container, without concerning itself with per-application state management such as MapReduce tasks. It also does away with the notion of named slots, such as map and reduce slots. Because of this clear separation of responsibilities, coupled with the modular architecture described above, the NM can scale much more easily and its code is much more maintainable.
