Formatting HDFS
Working on Hadoop, especially on test clusters, I have managed to break my HDFS layer, sometimes beyond any repair, or at least beyond any repair I wanted to invest time in. For whatever other reason, sometimes you just want to scrap your HDFS and start anew.
Without going into too much detail, which is beyond the point of this blog post, HDFS is mainly composed of two types of elements:
- Namenode: at a high level, the namenode stores the HDFS namespace; think of it as your file system tree.
- Datanode: this is where your data is actually stored.
The Namenode: /hadoop/hdfs/namenode/current

All new edits are written to the edit log and regularly merged into an fsimage file to keep the metadata compact. An fsimage file represents the file system state after all modifications up to a specific transaction ID. The seen_txid file holds the last seen transaction ID, and the VERSION file contains the cluster and HDFS identifiers.
For a more detailed explanation: Hdfs metadata
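To make this concrete, here is roughly what the namenode metadata directory contains on a small cluster. The file names are real, but the transaction numbers and IDs below are invented for illustration; yours will differ:

```bash
$ ls /hadoop/hdfs/namenode/current
edits_0000000000000000001-0000000000000000042
edits_inprogress_0000000000000000043
fsimage_0000000000000000040
fsimage_0000000000000000040.md5
seen_txid
VERSION

$ cat /hadoop/hdfs/namenode/current/VERSION
namespaceID=1073741824
clusterID=CID-3f9c1a2b-0000-1111-2222-333344445555
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1111111111-172.16.0.10-1500000000000
layoutVersion=-63
```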
The Datanode: /hadoop/hdfs/data/current
In our example we will only focus on the VERSION file, which is very close to the namenode VERSION.
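For comparison, a datanode VERSION file looks roughly like this (the IDs are again invented); the important point is that its clusterID matches the one on the namenode:

```bash
$ cat /hadoop/hdfs/data/current/VERSION
storageID=DS-7a8b9c0d-0000-1111-2222-333344445555
clusterID=CID-3f9c1a2b-0000-1111-2222-333344445555
cTime=0
datanodeUuid=9e8d7c6b-0000-1111-2222-333344445555
storageType=DATA_NODE
layoutVersion=-57
```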
HDFS non-HA formatting
In non-HA mode everything is simple enough; the full command sketch follows the list.
- Stop the HDFS service
- Run hadoop namenode -format (as user hdfs)
- Clear the data directory on all datanodes
- Restart HDFS
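Put together, the sequence looks roughly like this. This is a sketch that assumes the data directory from the example above (/hadoop/hdfs/data); adapt the paths and the way you stop/start services to your own distribution:

```bash
# With the HDFS service stopped, format the namenode metadata as the hdfs user
sudo -u hdfs hadoop namenode -format

# On every datanode, remove the old block data so it registers with the new clusterID
rm -rf /hadoop/hdfs/data/*

# Then restart HDFS through your usual management tool (Ambari, init scripts, ...)
```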
At this point your HDFS layer is empty, and if you check the VERSION files of the namenode and the datanodes, their clusterIDs should coincide.
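A quick sanity check, still assuming the example paths used above:

```bash
# Both files should report the same clusterID
grep clusterID /hadoop/hdfs/namenode/current/VERSION
grep clusterID /hadoop/hdfs/data/current/VERSION

# All datanodes should be registered and DFS usage should be close to zero
sudo -u hdfs hdfs dfsadmin -report
```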
HDFS HA formatting
In HA things get a little more complicated. The standby and active namenodes share their edit log through storage managed by the journalnode service, and failover from standby to active is coordinated, as with many other things in Hadoop, through ZooKeeper. As you can see, a couple more pieces need to be made aware of a formatting action.
The initial steps are very close; a consolidated command sketch follows the list.
- Stop the HDFS service
- Start only the journalnodes (they need to be made aware of the formatting)
- On the first namenode (as user hdfs):
  - hadoop namenode -format
  - hdfs namenode -initializeSharedEdits -force (to reinitialise the journalnodes)
  - hdfs zkfc -formatZK -force (to force ZooKeeper to reinitialise)
  - restart that first namenode
- On the second namenode:
  - hdfs namenode -bootstrapStandby -force (to force a sync with the first namenode)
- On every datanode, clear the data directory
- Restart the HDFS service
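Put together, the HA sequence looks roughly like this. Again a sketch, using the same example data directory and assuming you start and stop the individual services with your own cluster manager:

```bash
# With HDFS stopped and only the journalnodes running, on the first namenode:
sudo -u hdfs hadoop namenode -format
sudo -u hdfs hdfs namenode -initializeSharedEdits -force
sudo -u hdfs hdfs zkfc -formatZK -force
# ...then start this namenode again before touching the standby

# On the second namenode, once the first one is back up:
sudo -u hdfs hdfs namenode -bootstrapStandby -force

# On every datanode, clear the old block data
rm -rf /hadoop/hdfs/data/*

# Finally restart the whole HDFS service
```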
This was a very simple step-by-step guide to formatting. In a later article we will cover actually repairing common errors in HDFS.