Hadoop admin command: dfsadmin

The dfsadmin command is used to manage an HDFS cluster; its subcommands are mainly intended for administrators.

1. Safe mode (Safemode)

Action                              Command
Switch the cluster's safe mode state    bin/hdfs dfsadmin -safemode [enter|get|leave]
Report datanode status                  bin/hadoop dfsadmin -report
Add or remove datanodes                 bin/hadoop dfsadmin -refreshNodes
Print the network topology              bin/hadoop dfsadmin -printTopology

Official documentation: http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#dfsadmin
[hadoop@master bin]$ ./hdfs dfsadmin -safemode enter   # enter safe mode
Safe mode is ON

[hadoop@master bin]$ ./hdfs dfsadmin -safemode get     # check the current state
Safe mode is ON

[hadoop@master bin]$ ./hdfs dfsadmin -safemode leave   # leave safe mode
Safe mode is OFF
[hadoop@master bin]$

Safe mode (from the official documentation): On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.

In short: when Hadoop starts up, the NameNode is in a special state called safe mode, during which block replication cannot happen. The NameNode receives Heartbeat and Blockreport messages from the DataNodes; a Blockreport lists the blocks that a DataNode holds. Each block has a minimum replica count, and a block counts as safely replicated once that minimum number of replicas has checked in with the NameNode. After a configurable percentage of blocks are safely replicated (plus an extra 30 seconds), the NameNode exits safe mode. It then finds any blocks that still have fewer than the target number of replicas and replicates them to other DataNodes.

From the description above:

1. Safemode mainly verifies the block information reported by the datanodes.

2. Block replication cannot happen while in Safemode.

3. Hadoop maintenance work is performed in this mode.

Creating a directory while in Safemode fails with:

Cannot create directory /hxw. Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
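Automation should not try to write to HDFS while the NameNode is still in safe mode. `hdfs dfsadmin -safemode wait` blocks until safe mode is off; alternatively, a script can capture the `-safemode get` output and parse the status line itself. A minimal Python sketch (the parsing logic is an assumption based on the output format shown above):

```python
def parse_safemode(output: str) -> bool:
    """Return True if safe mode is ON, based on the status line
    printed by `hdfs dfsadmin -safemode get`."""
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Safe mode is"):
            return line.endswith("ON")
    raise ValueError("no safe mode status line in output")

# Example against captured output (a real script would capture the
# command's stdout with subprocess.run):
print(parse_safemode("Safe mode is ON"))   # True
```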

2. Cluster status report

Shows overall resource usage of the cluster as well as per-datanode details.

[hadoop@master logs]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 37492883456 (34.92 GB)
Present Capacity: 22908968960 (21.34 GB)
DFS Remaining: 21126250496 (19.68 GB)
DFS Used: 1782718464 (1.66 GB)
DFS Used%: 7.78%
Under replicated blocks: 18
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.1.226:50010 (slave-2)
Hostname: slave-2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 7806763008 (7.27 GB)
DFS Remaining: 10048319488 (9.36 GB)
DFS Used%: 4.75%
DFS Remaining%: 53.60%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 17:09:23 CST 2018

Name: 10.0.1.227:50010 (slave-1)
Hostname: slave-1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 6777151488 (6.31 GB)
DFS Remaining: 11077931008 (10.32 GB)
DFS Used%: 4.75%
DFS Remaining%: 59.09%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 17:09:24 CST 2018
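Monitoring scripts can scrape per-node fields straight out of this plain-text report. A minimal Python sketch, assuming the 2.x output format shown above (field labels may differ across Hadoop versions):

```python
def parse_report(report: str):
    """Extract (hostname, dfs_used_pct) for each datanode from
    `hdfs dfsadmin -report` plain-text output."""
    nodes = []
    host = None
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            host = line.split(":", 1)[1].strip()
        elif line.startswith("DFS Used%:") and host:
            # e.g. "DFS Used%: 4.75%" -> 4.75; the cluster-level
            # "DFS Used%" line is skipped because no host precedes it
            nodes.append((host, float(line.split(":", 1)[1].strip().rstrip("%"))))
            host = None
    return nodes
```

Fed the report above, this yields [('slave-2', 4.75), ('slave-1', 4.75)], which is enough to alert on imbalanced disk usage between nodes.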

3. Refreshing nodes

Used when datanodes are added to or removed from the cluster.

[hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
Refresh nodes successful

[hadoop@slave-1 sbin]$ ./hadoop-daemon.sh stop datanode
stopping datanode

[hadoop@master bin]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 37492883456 (34.92 GB)
Present Capacity: 22914383872 (21.34 GB)
DFS Remaining: 21131665408 (19.68 GB)
DFS Used: 1782718464 (1.66 GB)
DFS Used%: 7.78%
Under replicated blocks: 18
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.1.226:50010 (slave-2)
Hostname: slave-2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 7801290752 (7.27 GB)
DFS Remaining: 10053791744 (9.36 GB)
DFS Used%: 4.75%
DFS Remaining%: 53.63%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 18:16:06 CST 2018

Name: 10.0.1.227:50010 (slave-1)
Hostname: slave-1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 6777208832 (6.31 GB)
DFS Remaining: 11077873664 (10.32 GB)
DFS Used%: 4.75%
DFS Remaining%: 59.09%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 18:13:43 CST 2018

[hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
Refresh nodes successful

[hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
Refresh nodes successful
[hadoop@master bin]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 37492883456 (34.92 GB)
Present Capacity: 22914379776 (21.34 GB)
DFS Remaining: 21131661312 (19.68 GB)
DFS Used: 1782718464 (1.66 GB)
DFS Used%: 7.78%
Under replicated blocks: 18
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.1.226:50010 (slave-2)
Hostname: slave-2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 7801294848 (7.27 GB)
DFS Remaining: 10053787648 (9.36 GB)
DFS Used%: 4.75%
DFS Remaining%: 53.63%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 18:18:54 CST 2018

Name: 10.0.1.227:50010 (slave-1)
Hostname: slave-1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 6777208832 (6.31 GB)
DFS Remaining: 11077873664 (10.32 GB)
DFS Used%: 4.75%
DFS Remaining%: 59.09%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 18:13:43 CST 2018

Note: after stopping a datanode, refreshing and re-running the report still shows that node with a Normal status; the web UI shows its Last contact time growing past 560 s.

After deleting the node from the slaves file on the NameNode and refreshing again, the report shows:
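Editing the slaves file only affects which nodes the start/stop scripts touch. For a graceful decommission (draining blocks off the node before removal), the usual mechanism is an excludes file referenced from hdfs-site.xml, then `-refreshNodes`. A sketch; the file path below is an example, adjust it to your own layout:

```xml
<!-- hdfs-site.xml: point the NameNode at an excludes file -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/hadoop/etc/hadoop/dfs.exclude</value>
</property>
```

Add the hostname (e.g. slave-1) to that file and run `./hdfs dfsadmin -refreshNodes`; the node's Decommission Status in the report then moves through "Decommission in progress" to "Decommissioned", after which the datanode process can be stopped safely.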

[hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes
Refresh nodes successful
[hadoop@master bin]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 18746441728 (17.46 GB)
Present Capacity: 10945093632 (10.19 GB)
DFS Remaining: 10053734400 (9.36 GB)
DFS Used: 891359232 (850.07 MB)
DFS Used%: 8.14%
Under replicated blocks: 161
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 10.0.1.226:50010 (slave-2)
Hostname: slave-2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 891359232 (850.07 MB)
Non DFS Used: 7801348096 (7.27 GB)
DFS Remaining: 10053734400 (9.36 GB)
DFS Used%: 4.75%
DFS Remaining%: 53.63%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 17 18:26:36 CST 2018

Dead datanodes (1):

Name: 10.0.1.227:50010 (slave-1)
Hostname: slave-1
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 6777208832 (6.31 GB)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Wed Jan 17 18:13:43 CST 2018
     
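A monitoring script can alert on dead datanodes by pulling the live/dead counts straight from the section headers of the report. A minimal Python sketch, assuming the 2.x header format ("Live datanodes (2):", "Dead datanodes (1):") shown above:

```python
import re

def node_counts(report: str):
    """Pull live/dead datanode counts from the section headers of
    `hdfs dfsadmin -report` output; missing sections count as 0."""
    counts = {"live": 0, "dead": 0}
    for key, label in (("live", "Live datanodes"), ("dead", "Dead datanodes")):
        m = re.search(rf"{label} \((\d+)\):", report)
        if m:
            counts[key] = int(m.group(1))
    return counts
```

For the report above this returns {'live': 1, 'dead': 1}, so a cron job could page when the dead count is non-zero.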

4. Network topology

[hadoop@master ~]$ hadoop dfsadmin -printTopology
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Rack: /default-rack
   10.0.1.226:50010 (slave-2)
   10.0.1.227:50010 (slave-1)
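The rack listing is easy to post-process. A small Python sketch that groups nodes by rack, assuming the plain-text format shown above ("Rack: /name" followed by indented "ip:port (hostname)" lines):

```python
def parse_topology(output: str):
    """Group datanode entries by rack from the output of
    `hdfs dfsadmin -printTopology`."""
    racks = {}
    current = None
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Rack:"):
            current = stripped.split(":", 1)[1].strip()
            racks[current] = []
        elif stripped and current:
            # lines before the first "Rack:" header are ignored
            racks[current].append(stripped)
    return racks
```

For the output above this gives {'/default-rack': ['10.0.1.226:50010 (slave-2)', '10.0.1.227:50010 (slave-1)']}; with a real rack-awareness script configured you would see one key per rack.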

Summary:

[hadoop@master bin]$ ./hdfs dfsadmin -safemode enter    # enter Safemode
[hadoop@master bin]$ ./hdfs dfsadmin -safemode get      # check the current mode
[hadoop@master bin]$ ./hdfs dfsadmin -safemode leave    # exit Safemode
[hadoop@master bin]$ hadoop dfsadmin -report            # current cluster status
[hadoop@master bin]$ ./hdfs dfsadmin -refreshNodes      # refresh node info after adding/removing nodes
[hadoop@master sbin]$ ./hadoop-daemon.sh stop datanode  # stop a single datanode
[hadoop@master ~]$ hadoop dfsadmin -printTopology       # print the cluster network topology
