Hadoop Cluster: HDFS and YARN Start and Stop Commands
Suppose we have only three Linux virtual machines, with hostnames hadoop01, hadoop02, and hadoop03. The Hadoop cluster is deployed across these three machines as follows:
hadoop01: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager
hadoop02: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager
hadoop03: 1 datanode, 1 journalnode, 1 nodemanager
Below are the commands for starting and stopping HDFS and YARN.
1. Start the HDFS cluster (using Hadoop's batch startup script)
/root/apps/hadoop/sbin/start-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-dfs.sh
Starting namenodes on [hadoop01 hadoop02]
hadoop01: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop02: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
hadoop03: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
hadoop02: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
hadoop01: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
Starting ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
hadoop02: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
[root@hadoop01 ~]#
As the startup log above shows, the start-dfs.sh script uses SSH to batch-start the namenode, datanode, journalnode, and zkfc processes on the various nodes.
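For this batch startup to work, passwordless SSH from hadoop01 to every node must be configured, and the set of DataNode hosts is read from the slaves file. A quick sanity check, as a sketch assuming the default etc/hadoop layout under our install path:
# The DataNode hosts that start-dfs.sh reaches over SSH come from the
# slaves file (Hadoop 2.x; the file is named "workers" in Hadoop 3.x).
cat /root/apps/hadoop/etc/hadoop/slaves
# hadoop01
# hadoop02
# hadoop03
# Passwordless SSH must be in place for the batch scripts to work;
# this should print the remote hostname without a password prompt:
ssh hadoop02 hostname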
2. Stop the HDFS cluster (using Hadoop's batch stop script)
/root/apps/hadoop/sbin/stop-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-dfs.sh
Stopping namenodes on [hadoop01 hadoop02]
hadoop02: stopping namenode
hadoop01: stopping namenode
hadoop02: stopping datanode
hadoop03: stopping datanode
hadoop01: stopping datanode
Stopping journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: stopping journalnode
hadoop02: stopping journalnode
hadoop01: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: stopping zkfc
hadoop02: stopping zkfc
[root@hadoop01 ~]#
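After the batch stop, you can confirm every node is clean. A sketch, assuming passwordless SSH; it filters out the jps process itself and ZooKeeper's QuorumPeerMain, which these scripts do not manage:
for host in hadoop01 hadoop02 hadoop03; do
    echo "== $host =="
    # list any remaining Hadoop daemons on the remote node
    ssh "$host" "jps | grep -vE 'Jps|QuorumPeerMain'" || echo "no hadoop daemons left"
done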
3. Start individual processes
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
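Typing these commands one by one on each node gets tedious. A small convenience sketch that starts all of a node's HDFS-related daemons locally in one loop (adjust the daemon list per node; hadoop03, for example, runs no namenode or zkfc):
# Start each HDFS-related daemon on the local node in turn.
for daemon in namenode datanode journalnode zkfc; do
    /root/apps/hadoop/sbin/hadoop-daemon.sh start "$daemon"
done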
Check the processes on each of the three virtual machines after startup:
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
6879 DFSZKFailoverController
7035 Jps
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]#
[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
6541 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]#
[root@hadoop03 apps]# jps
5331 Jps
5103 DataNode
5204 JournalNode
2258 QuorumPeerMain
[root@hadoop03 apps]#
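With both NameNodes up, the ZKFCs elect one active and one standby. You can verify which is which with hdfs haadmin; in this sketch, nn1 and nn2 are the NameNode IDs assumed to be defined in hdfs-site.xml:
/root/apps/hadoop/bin/hdfs haadmin -getServiceState nn1   # e.g. prints "active"
/root/apps/hadoop/bin/hdfs haadmin -getServiceState nn2   # e.g. prints "standby"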
4. Stop individual processes
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
8486 Jps
6879 DFSZKFailoverController
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]#
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop01 ~]# jps
2002 QuorumPeerMain
8572 Jps
[root@hadoop01 ~]#
[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
7378 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop02 ~]# jps
7455 Jps
2130 QuorumPeerMain
[root@hadoop02 ~]#
[root@hadoop03 apps]# jps
5103 DataNode
5204 JournalNode
5774 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop03 apps]# jps
5818 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]#
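Symmetrically, a sketch that stops all of a node's HDFS-related daemons in the same order used in the transcripts above:
# Stop each HDFS-related daemon on the local node in turn
# (zkfc first, namenode last, mirroring the order above).
for daemon in zkfc journalnode datanode namenode; do
    /root/apps/hadoop/sbin/hadoop-daemon.sh stop "$daemon"
done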
5. Start the YARN cluster (using Hadoop's batch startup script)
/root/apps/hadoop/sbin/start-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /root/apps/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop02: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 ~]#
As the startup log above shows, the start-yarn.sh script starts only one ResourceManager process, on the local machine, while the nodemanagers on all three machines are started over SSH. The ResourceManager on hadoop02 therefore has to be started by hand.
6. Start the ResourceManager process on hadoop02
/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager
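You can also do this from hadoop01 over SSH and then check the ResourceManager HA state. A sketch; rm1 and rm2 are the RM IDs assumed to be defined in yarn-site.xml:
# Start the second ResourceManager remotely instead of logging in:
ssh hadoop02 "/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager"
# Verify which ResourceManager is active:
/root/apps/hadoop/bin/yarn rmadmin -getServiceState rm1   # e.g. "active"
/root/apps/hadoop/bin/yarn rmadmin -getServiceState rm2   # e.g. "standby"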
7. Stop YARN
/root/apps/hadoop/sbin/stop-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
hadoop01: stopping nodemanager
hadoop03: stopping nodemanager
hadoop02: stopping nodemanager
no proxyserver to stop
[root@hadoop01 ~]#
The stop log above shows that the stop-yarn.sh script stops only the local ResourceManager process, so the resourcemanager on hadoop02 has to be stopped separately.
8. Stop the resourcemanager on hadoop02
/root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager
Note: individual HDFS-related processes are started and stopped with the "hadoop-daemon.sh" script, whereas individual YARN processes are started and stopped with the "yarn-daemon.sh" script.
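Putting it all together, here is a sketch of the full start and stop sequence run from hadoop01. It assumes ZooKeeper (the QuorumPeerMain process seen in the jps output above) is already running on all three nodes and that passwordless SSH is configured:
#!/bin/bash
# Bring the whole HA cluster up from hadoop01:
/root/apps/hadoop/sbin/start-dfs.sh
/root/apps/hadoop/sbin/start-yarn.sh
ssh hadoop02 "/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager"
# ...and tear it down in reverse:
ssh hadoop02 "/root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager"
/root/apps/hadoop/sbin/stop-yarn.sh
/root/apps/hadoop/sbin/stop-dfs.sh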