1. Test environment

ip hostname role
10.124.147.22 hadoop1 namenode
10.124.147.23 hadoop2 namenode
10.124.147.32 hadoop3 resourcemanager
10.124.147.33 hadoop4 resourcemanager
10.110.92.161 hadoop5 datanode/journalnode
10.110.92.162 hadoop6 datanode
10.122.147.37 hadoop7 datanode

2. Required parameters in the configuration files

2.1 hdfs-site.xml parameters

[hadoop@10-124-147-22 hadoop]$ grep dfs\.host -A10 /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<!-- file listing datanodes to be excluded (decommissioned) -->
<property>
<name>dfs.hosts.exclude</name>
<value>/usr/local/hadoop/etc/hadoop/dfs_exclude</value>
</property>
<!-- file listing permitted datanodes (include list) -->
<property>
<name>dfs.hosts</name>
<value>/usr/local/hadoop/etc/hadoop/slaves</value>
</property>

2.2 yarn-site.xml parameters

[hadoop@10-124-147-22 hadoop]$ grep exclude-path -A10 /usr/local/hadoop/etc/hadoop/yarn-site.xml
<!-- file listing NodeManager hosts to be excluded -->
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/usr/local/hadoop/etc/hadoop/dfs_exclude</value>
</property>
<!-- file listing permitted NodeManager hosts (include list) -->
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/usr/local/hadoop/etc/hadoop/slaves</value>
</property>

3. Decommissioning an existing host

1. On the namenode host, add the IP of the host to be decommissioned to dfs_exclude, the file specified by the dfs.hosts.exclude parameter in hdfs-site.xml

[hadoop@10-124-147-22 hadoop]$ cat /usr/local/hadoop/etc/hadoop/dfs_exclude
10.122.147.37

2. Copy it to the other Hadoop hosts

[hadoop@10-124-147-22 hadoop]$ for i in {2,3,4,5,6,7};do scp etc/hadoop/dfs_exclude hadoop$i:/usr/local/hadoop/etc/hadoop/;done

3. Refresh the namenode

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful for hadoop1/10.124.147.22:9000
Refresh nodes successful for hadoop2/10.124.147.23:9000

4. Check the namenode status

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 1100228980736 (1.00 TB)
Present Capacity: 1087754866688 (1013.05 GB)
DFS Remaining: 1087752667136 (1013.05 GB)
DFS Used: 2199552 (2.10 MB)
DFS Used%: 0.00%
Under replicated blocks: 11
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):

Name: 10.122.147.37:50010 (hadoop7)
Hostname: hadoop7
Decommission Status : Decommission in progress
Configured Capacity: 250831044608 (233.60 GB)
DFS Used: 733184 (716 KB)
Non DFS Used: 1235771392 (1.15 GB)
DFS Remaining: 249594540032 (232.45 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 10:25:17 CST 2018

Name: 10.110.92.161:50010 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
(output for the remaining datanodes omitted)

You can see that the status of the host being removed, 10.122.147.37, has changed to "Decommission in progress", which means the cluster is moving the block replicas stored on that node to other nodes. When the status changes to "Decommissioned", the process is complete and the node has effectively been removed from the cluster.
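If your build supports the report filters (the -live/-dead/-decommissioning options should be available in 2.7.x; check hdfs dfsadmin -help report), a quick way to watch only the nodes that are still decommissioning is:

hdfs dfsadmin -report -decommissioning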

This status can also be checked on the HDFS web UI.
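With default ports that is the Datanodes tab of the namenode web UI, e.g. http://10.124.147.22:50070/dfshealth.html#tab-datanode (50070 is the Hadoop 2.x default for dfs.namenode.http-address; adjust if yours differs).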

5. Refresh the resourcemanager

[hadoop@10-124-147-32 hadoop]$ yarn rmadmin -refreshNodes

After the refresh, the Active Nodes information can be seen on the resourcemanager web UI.
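With default ports that page is http://10.124.147.32:8088/cluster/nodes (8088 is the default yarn.resourcemanager.webapp.address port).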

Or check it from the command line:

[hadoop@10-124-147-32 hadoop]$ yarn node -list
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
hadoop5:37438 RUNNING hadoop5:8042 0
hadoop6:9001 RUNNING hadoop6:8042 0

4. Adding a new host to the cluster

1. Copy the existing Hadoop configuration files to the new host and set up the Java environment there
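A minimal sketch of this step, assuming the /usr/local/hadoop layout used above and a CentOS-style package manager (the JDK package name is an assumption):

# on an existing node: copy the Hadoop installation, including etc/hadoop, to the new host
scp -r /usr/local/hadoop hadoop@10.122.147.37:/usr/local/
# on the new host: install a JDK and make sure JAVA_HOME is set (e.g. in etc/hadoop/hadoop-env.sh)
sudo yum install -y java-1.8.0-openjdk-devel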

2. On the namenode, add the new host's IP (or hostname) to the file specified by the dfs.hosts parameter

[hadoop@10-124-147-22 hadoop]$ cat /usr/local/hadoop/etc/hadoop/slaves
hadoop5
hadoop6
10.122.147.37

3. Sync the slaves file to the other hosts

[hadoop@10-124-147-22 hadoop]$ for i in {2,3,4,5,6,7};do scp etc/hadoop/slaves hadoop$i:/usr/local/hadoop/etc/hadoop/;done

4. Start the datanode and nodemanager processes on the new host

[hadoop@10-122-147-37 hadoop]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /letv/hadoop-2.7.6/logs/hadoop-hadoop-datanode-10-122-147-37.out
[hadoop@10-122-147-37 hadoop]$ jps
3068 DataNode
6143 Jps
[hadoop@10-122-147-37 hadoop]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /letv/hadoop-2.7.6/logs/yarn-hadoop-nodemanager-10-122-147-37.out
[hadoop@10-122-147-37 hadoop]$ jps
6211 NodeManager
6403 Jps
3068 DataNode

5. Refresh the namenode

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful for hadoop1/10.124.147.22:9000
Refresh nodes successful for hadoop2/10.124.147.23:9000

6. Check the HDFS status

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 1351059292160 (1.23 TB)
Present Capacity: 1337331367936 (1.22 TB)
DFS Remaining: 1337329156096 (1.22 TB)
DFS Used: 2211840 (2.11 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):

Name: 10.122.147.37:50010 (hadoop7)
Hostname: hadoop7
Decommission Status : Normal
Configured Capacity: 250831044608 (233.60 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 1240752128 (1.16 GB)
DFS Remaining: 249589555200 (232.45 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:09 CST 2018

Name: 10.110.92.161:50010 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
Configured Capacity: 550114123776 (512.33 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 11195953152 (10.43 GB)
DFS Remaining: 538917433344 (501.91 GB)
DFS Used%: 0.00%
DFS Remaining%: 97.96%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:10 CST 2018

Name: 10.110.92.162:50010 (hadoop6)
Hostname: hadoop6
Decommission Status : Normal
Configured Capacity: 550114123776 (512.33 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 1291218944 (1.20 GB)
DFS Remaining: 548822167552 (511.13 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.77%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:10 CST 2018

7. Refresh the resourcemanager

[hadoop@10-124-147-32 hadoop]$ yarn rmadmin -refreshNodes
[hadoop@10-124-147-32 hadoop]$ yarn node -list
18/07/24 18:11:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
hadoop7:3296 RUNNING hadoop7:8042
hadoop5:37438 RUNNING hadoop5:8042 0
hadoop6:9001 RUNNING hadoop6:8042 0

8. How include and exclude affect YARN and HDFS

The rule for whether a nodemanager may connect to the resourcemanager is simple: the nodemanager must appear in the include file and must not appear in the exclude file.

HDFS behaves slightly differently from YARN (in HDFS the include file is simply the one referenced by dfs.hosts); its rules are shown in the following table:

In include file? In exclude file? Result
No No Cannot connect
No Yes Cannot connect
Yes No Can connect
Yes Yes Can connect, will be decommissioned
If no include file is specified, or the include file is empty, all nodes are considered to be in the include file.
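Given these rules, once a node shows as Decommissioned and its daemons have been stopped, it is common to remove it from both files so that it no longer appears in the node lists. A sketch based on the commands used above:

# remove 10.122.147.37 from /usr/local/hadoop/etc/hadoop/slaves and dfs_exclude, sync, then refresh
for i in {2,3,4,5,6,7}; do scp etc/hadoop/slaves etc/hadoop/dfs_exclude hadoop$i:/usr/local/hadoop/etc/hadoop/; done
hdfs dfsadmin -refreshNodes    # run on a namenode host
yarn rmadmin -refreshNodes     # run on a resourcemanager host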

5. Issues encountered

When decommissioning a datanode, the node may stay in the "Decommission in progress" state indefinitely. In this test environment that happened because the replication factor had not been set: Hadoop's default replication factor is 3, and the cluster has only 3 datanodes in total, so the replicas on the decommissioning node cannot all be placed on other nodes.

Setting the replication factor to a value smaller than the number of remaining datanodes resolves this.

[hadoop@10-124-147-22 hadoop]$ grep dfs\.replication -C3 /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<!-- replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
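Note that dfs.replication only applies to files written after the change; existing files keep the replication factor they were created with. If existing blocks still need to be thinned out to fit the remaining datanodes, they can be adjusted explicitly, for example (matching the value configured above):

hdfs dfs -setrep -w 1 /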
