Hadoop administration command: dfsadmin

The dfsadmin command is used to administer an HDFS cluster; it is intended mainly for administrators.

1. Safe mode (Safemode)

Action                                      Command
Switch the cluster in/out of safe mode      bin/hdfs dfsadmin -safemode [enter/get/leave]
List datanode status                        bin/hadoop dfsadmin -report
Add or remove datanodes                     bin/hadoop dfsadmin -refreshNodes
Print the network topology                  bin/hadoop dfsadmin -printTopology
Official documentation: http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#dfsadmin
    [hadoop@master bin]$ ./hdfs dfsadmin -safemode enter   # enter safe mode
    Safe mode is ON

    [hadoop@master bin]$ ./hdfs dfsadmin -safemode get     # get the current state
    Safe mode is ON

    [hadoop@master bin]$ ./hdfs dfsadmin -safemode leave   # leave safe mode
    Safe mode is OFF
    [hadoop@master bin]$

安全模式:On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.

In other words: on startup, the NameNode enters a special state called safe mode. In this state, replication of data blocks cannot occur. The NameNode receives Heartbeat and Blockreport messages from the datanodes; a Blockreport lists the blocks that a datanode hosts. Each block has a specified minimum number of replicas, and a block is considered safely replicated once that minimum number of replicas has checked in with the NameNode. After a configurable percentage of blocks are safely replicated (plus an additional 30 seconds), the NameNode exits safe mode. It then determines which blocks, if any, still have fewer than the specified number of replicas, and replicates those blocks to other datanodes.

From the description above:

1. Safe mode mainly verifies the block information reported by the datanodes.

2. No block replication can occur while in safe mode.

3. Hadoop maintenance work is performed in this mode.

Attempting to create a directory while in safe mode produces an error:

Cannot create directory /hxw. Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
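Since no writes or replication happen in safe mode, maintenance scripts often need to block until the NameNode has left it. `hdfs dfsadmin -safemode wait` does this natively; the loop below is only a sketch of the same idea, written as a function that takes the status command as arguments so it can be exercised without a live cluster (the 5-second polling interval is an arbitrary choice):

```shell
# Sketch: poll a safe-mode status command until it reports OFF.
# Usage: wait_safemode_off hdfs dfsadmin -safemode get
wait_safemode_off() {
  while true; do
    status=$("$@")                        # run the status command
    case "$status" in
      *"Safe mode is OFF"*) return 0 ;;   # NameNode has left safe mode
    esac
    sleep 5                               # wait before polling again
  done
}
```

On a real cluster this would be invoked as `wait_safemode_off hdfs dfsadmin -safemode get`.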

2. Cluster status report

Reports overall cluster resource usage as well as per-datanode information.

    [hadoop@master logs]$ hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Configured Capacity: 37492883456 (34.92 GB)
    Present Capacity: 22908968960 (21.34 GB)
    DFS Remaining: 21126250496 (19.68 GB)
    DFS Used: 1782718464 (1.66 GB)
    DFS Used%: 7.78%
    Under replicated blocks: 18
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0

    -------------------------------------------------
    Live datanodes (2):

    Name: 10.0.1.226:50010 (slave-2)
    Hostname: slave-2
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 7806763008 (7.27 GB)
    DFS Remaining: 10048319488 (9.36 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 53.60%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 17:09:23 CST 2018

    Name: 10.0.1.227:50010 (slave-1)
    Hostname: slave-1
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 6777151488 (6.31 GB)
    DFS Remaining: 11077931008 (10.32 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 59.09%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 17:09:24 CST 2018
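The report is plain text, so simple health checks can be scripted on top of it. The sketch below extracts the live-datanode count; the `report` variable is a canned stand-in here, and on a real cluster you would capture `hdfs dfsadmin -report` instead:

```shell
# Sketch: pull the live-datanode count out of a -report dump.
report='Live datanodes (2):'   # stand-in for: report=$(hdfs dfsadmin -report)
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9][0-9]*\)):.*/\1/p')
echo "live datanodes: $live"
```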

3. Refreshing nodes

Used when nodes are added to or removed from the cluster.

    [hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
    Refresh nodes successful

    [hadoop@slave-1 sbin]$ ./hadoop-daemon.sh stop datanode
    stopping datanode

    [hadoop@master bin]$ hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Configured Capacity: 37492883456 (34.92 GB)
    Present Capacity: 22914383872 (21.34 GB)
    DFS Remaining: 21131665408 (19.68 GB)
    DFS Used: 1782718464 (1.66 GB)
    DFS Used%: 7.78%
    Under replicated blocks: 18
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0

    -------------------------------------------------
    Live datanodes (2):

    Name: 10.0.1.226:50010 (slave-2)
    Hostname: slave-2
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 7801290752 (7.27 GB)
    DFS Remaining: 10053791744 (9.36 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 53.63%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 18:16:06 CST 2018

    Name: 10.0.1.227:50010 (slave-1)
    Hostname: slave-1
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 6777208832 (6.31 GB)
    DFS Remaining: 11077873664 (10.32 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 59.09%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 18:13:43 CST 2018

    [hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
    Refresh nodes successful

    [hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
    Refresh nodes successful
    [hadoop@master bin]$ hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Configured Capacity: 37492883456 (34.92 GB)
    Present Capacity: 22914379776 (21.34 GB)
    DFS Remaining: 21131661312 (19.68 GB)
    DFS Used: 1782718464 (1.66 GB)
    DFS Used%: 7.78%
    Under replicated blocks: 18
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0

    -------------------------------------------------
    Live datanodes (2):

    Name: 10.0.1.226:50010 (slave-2)
    Hostname: slave-2
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 7801294848 (7.27 GB)
    DFS Remaining: 10053787648 (9.36 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 53.63%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 18:18:54 CST 2018

    Name: 10.0.1.227:50010 (slave-1)
    Hostname: slave-1
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 6777208832 (6.31 GB)
    DFS Remaining: 11077873664 (10.32 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 59.09%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 18:13:43 CST 2018

Note: after stopping a datanode, refreshing and re-running the report still shows that node with status Normal; in the web UI its Last contact time simply keeps growing (560+ seconds here).

After removing the node from the NameNode's slaves file and refreshing again, the report shows:

    [hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
    Refresh nodes successful
    [hadoop@master bin]$ hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Configured Capacity: 18746441728 (17.46 GB)
    Present Capacity: 10945093632 (10.19 GB)
    DFS Remaining: 10053734400 (9.36 GB)
    DFS Used: 891359232 (850.07 MB)
    DFS Used%: 8.14%
    Under replicated blocks: 161
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0

    -------------------------------------------------
    Live datanodes (1):

    Name: 10.0.1.226:50010 (slave-2)
    Hostname: slave-2
    Decommission Status : Normal
    Configured Capacity: 18746441728 (17.46 GB)
    DFS Used: 891359232 (850.07 MB)
    Non DFS Used: 7801348096 (7.27 GB)
    DFS Remaining: 10053734400 (9.36 GB)
    DFS Used%: 4.75%
    DFS Remaining%: 53.63%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Jan 17 18:26:36 CST 2018

    Dead datanodes (1):

    Name: 10.0.1.227:50010 (slave-1)
    Hostname: slave-1
    Decommission Status : Normal
    Configured Capacity: 0 (0 B)
    DFS Used: 0 (0 B)
    Non DFS Used: 6777208832 (6.31 GB)
    DFS Remaining: 0 (0 B)
    DFS Used%: 100.00%
    DFS Remaining%: 0.00%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 0
    Last contact: Wed Jan 17 18:13:43 CST 2018
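Deleting a node from the slaves file only keeps the start scripts from launching it. The cleaner way to retire a datanode is an exclude file referenced by `dfs.hosts.exclude` in hdfs-site.xml, followed by `-refreshNodes`, which moves the node through a Decommissioning state while its blocks are re-replicated elsewhere. A sketch, with a hypothetical exclude-file path (the real path must match what hdfs-site.xml points at):

```shell
# Sketch: graceful decommission via an exclude file (path is hypothetical).
EXCLUDES="${EXCLUDES:-/tmp/dfs.exclude}"
echo "slave-1" >> "$EXCLUDES"      # hostname of the node to retire
# hdfs dfsadmin -refreshNodes      # tell the NameNode to re-read the list
# hdfs dfsadmin -report            # watch for "Decommission Status : Decommissioned"
```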

4. Network topology

    [hadoop@master ~]$ hadoop dfsadmin -printTopology
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Rack: /default-rack
    10.0.1.226:50010 (slave-2)
    10.0.1.227:50010 (slave-1)
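Both nodes land in /default-rack because no rack-awareness mapping is configured. Rack placement comes from `net.topology.script.file.name` in core-site.xml, which points at a script that prints a rack path for each datanode address. A minimal hypothetical mapping for the two nodes above (the rack names are invented):

```shell
# Sketch of a topology-script body: print one rack path per input address.
rack_of() {
  case "$1" in
    10.0.1.226|10.0.1.227) echo "/dc1/rack-1" ;;    # invented rack for our two slaves
    *)                     echo "/default-rack" ;;  # fallback for unknown hosts
  esac
}
rack_of 10.0.1.226
```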

Summary:

    [hadoop@master bin]$ ./hdfs dfsadmin -safemode enter    # enter safe mode
    [hadoop@master bin]$ ./hdfs dfsadmin -safemode get      # get the current mode
    [hadoop@master bin]$ ./hdfs dfsadmin -safemode leave    # leave safe mode
    [hadoop@master bin]$ hadoop dfsadmin -report            # current cluster status report
    [hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes      # refresh node info after adding/removing nodes
    [hadoop@master sbin]$ ./hadoop-daemon.sh stop datanode  # stop a single datanode
    [hadoop@master ~]$ hadoop dfsadmin -printTopology       # print the cluster network topology
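These commands compose naturally into small wrappers. The sketch below brackets a maintenance step with safe-mode enter/leave; `HDFS_CMD` is a hypothetical indirection so the function can be tried with a stub command when no cluster is at hand:

```shell
# Sketch: run a maintenance step with the cluster held in safe mode.
HDFS_CMD="${HDFS_CMD:-hdfs}"
with_safemode() {
  "$HDFS_CMD" dfsadmin -safemode enter   # freeze replication
  "$@"                                   # the actual maintenance work
  "$HDFS_CMD" dfsadmin -safemode leave   # resume normal operation
}
```

On a real cluster: `with_safemode some_backup_step`.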
