Summary of Hadoop NameNode Formatting Issues
(continuously updated)
0 Hadoop cluster environment
Three RHEL 6.4 machines: 2 NameNodes + 2 ZKFCs, plus 3 JournalNodes + zookeeper-server, forming the simplest possible HA cluster.
1) hdfs-site.xml is configured as follows:
<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Quorum Journal Manager HA:
http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
-->
<configuration>
<!-- Quorum Journal Manager HA -->
<property>
<name>dfs.nameservices</name>
<value>hacl</value>
<description>unique identifiers for each NameNode in the nameservice.</description>
</property>
<property>
<name>dfs.ha.namenodes.hacl</name>
<value>hn1,hn2</value>
<description>Configure with a list of comma-separated NameNode IDs.</description>
</property>
<property>
<name>dfs.namenode.rpc-address.hacl.hn1</name>
<value>hacl-node1.pepstack.com:8020</value>
<description>the fully-qualified RPC address for each NameNode to listen on.</description>
</property>
<property>
<name>dfs.namenode.rpc-address.hacl.hn2</name>
<value>hacl-node2.pepstack.com:8020</value>
<description>the fully-qualified RPC address for each NameNode to listen on.</description>
</property>
<property>
<name>dfs.namenode.http-address.hacl.hn1</name>
<value>hacl-node1.pepstack.com:50070</value>
<description>the fully-qualified HTTP address for each NameNode to listen on.</description>
</property>
<property>
<name>dfs.namenode.http-address.hacl.hn2</name>
<value>hacl-node2.pepstack.com:50070</value>
<description>the fully-qualified HTTP address for each NameNode to listen on.</description>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hacl-node1.pepstack.com:8485;hacl-node2.pepstack.com:8485;hacl-node3.pepstack.com:8485/hacl</value>
<description>the URI which identifies the group of JNs where the NameNodes will write or read edits.</description>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hacl/data/dfs/jn</value>
<description>the path where the JournalNode daemon will store its local state.</description>
</property>
<property>
<name>dfs.client.failover.proxy.provider.hacl</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
<description>the Java class that HDFS clients use to contact the Active NameNode.</description>
</property>
<!-- Automatic failover adds two new components to an HDFS deployment:
- a ZooKeeper quorum;
- the ZKFailoverController process (abbreviated as ZKFC).
Configuring automatic failover:
-->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
<description>a list of scripts or Java classes which will be used to fence the Active NameNode during a failover.</description>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/var/lib/hadoop-hdfs/.ssh/id_dsa</value>
<description>The sshfence option SSHes to the target node and uses fuser to kill the process
listening on the service's TCP port. In order for this fencing option to work, it must be
able to SSH to the target node without providing a passphrase. Thus, one must also configure the
dfs.ha.fencing.ssh.private-key-files option, which is a comma-separated list of SSH private key files.
To generate the key, log on to the NameNode machine:
cd /var/lib/hadoop-hdfs
su hdfs
ssh-keygen -t dsa
</description>
</property>
<!-- Optionally, one may configure a non-standard username or port to perform the SSH.
One may also configure a timeout, in milliseconds, for the SSH, after which this
fencing method will be considered to have failed. It may be configured like so:
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence([[username][:port]])</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
-->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.hacl</name>
<value>true</value>
</property>
<!-- Configurations for NameNode: -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/hacl/data/dfs/nn</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
<description>HDFS blocksize of 256MB for large file-systems.</description>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
<description>Default number of block replicas.</description>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
<description>More NameNode server threads to handle RPCs from large number of DataNodes.</description>
</property>
<!-- Configurations for DataNode: -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/hacl/data/dfs/dn</value>
<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
</configuration>
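The config above points several daemons at local directories (/hacl/data/dfs/nn, /hacl/data/dfs/jn, /hacl/data/dfs/dn), and a formatting run fails quickly if one of them is missing or owned by the wrong user. A small sketch for verifying them on each node before starting anything; the helper name `check_dir` is mine, not a Hadoop tool:

```shell
# check_dir DIR OWNER - print "OK DIR" if DIR exists and is owned by OWNER
# (OWNER in user:group form); otherwise print the problem and return non-zero.
check_dir() {
  local dir="$1" owner="$2"
  if [ ! -d "$dir" ]; then
    echo "MISSING $dir"
    return 1
  fi
  local actual
  actual=$(stat -c '%U:%G' "$dir")   # GNU stat, as on RHEL
  if [ "$actual" != "$owner" ]; then
    echo "WRONG-OWNER $dir (is $actual, want $owner)"
    return 1
  fi
  echo "OK $dir"
}

# e.g. on each node, after chown -R hdfs:hdfs /hacl/data/dfs:
# check_dir /hacl/data/dfs/nn hdfs:hdfs
# check_dir /hacl/data/dfs/jn hdfs:hdfs
```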
2) core-site.xml is configured as follows:
<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hacl</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hdfs/data/tmp</value>
<description>Remember to chown -R hdfs:hdfs this directory.</description>
</property>
<!-- Configuring automatic failover -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hacl-node1.pepstack.com:2181,hacl-node2.pepstack.com:2181,hacl-node3.pepstack.com:2181</value>
<description>This lists the host-port pairs running the ZooKeeper service.</description>
</property>
<!-- Securing access to ZooKeeper -->
</configuration>
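With two XML files in play it is easy to deploy a stale copy to one node. A quick sketch for spot-checking which value a node actually has on disk; this is crude text matching against the flat one-property-per-line layout shown above, not a real XML parser, and `get_prop` is my helper name:

```shell
# get_prop FILE PROPERTY - print the <value> that follows a given <name>
# in a Hadoop *-site.xml written in the flat style above.
get_prop() {
  local file="$1" name="$2"
  # match the <name> line, take the following line, strip the <value> tags
  grep -A1 "<name>${name}</name>" "$file" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# e.g.:
# get_prop /etc/hadoop/conf/core-site.xml fs.defaultFS
# get_prop /etc/hadoop/conf/hdfs-site.xml dfs.nameservices
```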
1. NameNode formatting procedure
1) Start all JournalNodes; all three JN daemons must come up correctly. Stop every NameNode:
# service hadoop-hdfs-journalnode start
# service hadoop-hdfs-namenode stop
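The service commands have to run on every node. A loop sketch for driving this from one admin host; it assumes passwordless root SSH to each machine, which is an assumption of this sketch, not something the cluster setup above provides:

```shell
# Start the JournalNode daemon on all three hosts:
for h in hacl-node1.pepstack.com hacl-node2.pepstack.com hacl-node3.pepstack.com; do
  ssh "root@$h" 'service hadoop-hdfs-journalnode start'
done

# Stop any running NameNode on the two NameNode hosts before formatting:
for h in hacl-node1.pepstack.com hacl-node2.pepstack.com; do
  ssh "root@$h" 'service hadoop-hdfs-namenode stop'
done
```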
2) Format the NameNode. hacl-pepstack-com is simply the cluster ID I chose; pick your own. su - hdfs -c "..." runs the format as the hdfs user.
Every directory referenced in hdfs-site.xml and core-site.xml must first be given the correct ownership:
# chown -R hdfs:hdfs /hacl/data/dfs
Then format on either NameNode, for example on hn1:
########## hn1
# su - hdfs -c "hdfs namenode -format -clusterid hacl-pepstack-com -force"
# service hadoop-hdfs-namenode start
########## hn1
The freshly formatted hn1 must be started first; then run on the other NameNode (hn2):
########## hn2
# su - hdfs -c "hdfs namenode -bootstrapStandby -force"
# service hadoop-hdfs-namenode start
########## hn2
At this point both NameNodes are formatted and running.
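With dfs.ha.automatic-failover.enabled set to true, one more one-time step is needed that the walkthrough above does not show: initializing the HA state in ZooKeeper and starting the ZKFC daemons. A sketch, using the CDH service name; verify the exact service name against your distribution:

```shell
# One-time: create the hadoop-ha znode in the ZooKeeper quorum listed in
# ha.zookeeper.quorum (run from either NameNode host, as the hdfs user):
su - hdfs -c "hdfs zkfc -formatZK"

# Then start a ZKFC on each NameNode host so automatic failover can elect
# an Active NameNode:
service hadoop-hdfs-zkfc start
```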