Hadoop Distributed Deployment (HA)

I. Cluster Planning

Role assignments (✓ marks which daemons run on each host; nn1/rm1 live on node01 and nn2/rm2 on node02, matching the HA configuration below):

Host             node01   node02   node03
JDK              ✓        ✓        ✓
Zookeeper        ✓        ✓        ✓
NameNode         ✓        ✓
JournalNode      ✓        ✓        ✓
DataNode         ✓        ✓        ✓
ResourceManager  ✓        ✓
NodeManager      ✓        ✓        ✓

II. Installation and Deployment

1. Upload hadoop-2.5.2.tar.gz to the /opt/software directory on node01, node02, and node03.

2. Extract hadoop-2.5.2.tar.gz into the /opt/module directory.

[root@node01 software]# tar -zxvf hadoop-2.5.2.tar.gz -C /opt/module/
hadoop-2.5.2/
hadoop-2.5.2/bin/
hadoop-2.5.2/bin/hadoop
hadoop-2.5.2/bin/hdfs
hadoop-2.5.2/bin/mapred
hadoop-2.5.2/bin/yarn.cmd
hadoop-2.5.2/bin/hadoop.cmd
hadoop-2.5.2/bin/hdfs.cmd
hadoop-2.5.2/bin/mapred.cmd
......
[root@node01 software]#

3. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/hadoop-env.sh.

......
# Set the JDK home
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64/
......
# Define the user each daemon runs as
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root

Note: JAVA_HOME must be set to an explicit path in hadoop-env.sh; otherwise start-dfs.sh and stop-dfs.sh will not work later.
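If you are unsure where the JDK lives on a host, one way to find it (assuming java is on the PATH) is to resolve the symlink chain and then strip the trailing bin/java (or jre/bin/java) from the result:

[root@node01 ~]# readlink -f $(which java)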

4. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/core-site.xml.
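For an HA deployment, core-site.xml needs at least the default filesystem, the Hadoop data directory, and the ZooKeeper quorum used by the failover controller. A minimal sketch consistent with the settings used elsewhere in this deployment (the nameservice1 alias from hdfs-site.xml, the node01-node03 ZooKeeper quorum, and the /data/hadoop/data directory, the last of which is an assumption based on the paths used in the initialization steps) is:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- Default filesystem points at the HA nameservice alias, not a single host -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameservice1</value>
  </property>
  <!-- Base directory for HDFS data (assumed location) -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/data</value>
  </property>
  <!-- ZooKeeper quorum used by the ZKFC for automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node01:2181,node02:2181,node03:2181</value>
  </property>
</configuration>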

5. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/hdfs-site.xml.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Replication factor (the default is 3; we use 2 here) -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Disable permission checking -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <!-- dfs.namenode.name.dir defaults to file://${hadoop.tmp.dir}/dfs/name and
       dfs.datanode.data.dir defaults to file://${hadoop.tmp.dir}/dfs/data;
       both derive from hadoop.tmp.dir and are left at their defaults here. -->
  <!-- Logical name (alias) for the nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
  </property>
  <!-- NameNode IDs within the nameservice -->
  <property>
    <name>dfs.ha.namenodes.nameservice1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- First NameNode (nn1, node01) -->
  <property>
    <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
    <value>node01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.nameservice1.nn1</name>
    <value>node01:9870</value>
  </property>
  <!-- Second NameNode (nn2, node02; active/standby is decided at runtime) -->
  <property>
    <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
    <value>node02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.nameservice1.nn2</name>
    <value>node02:9870</value>
  </property>
  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node01:8485;node02:8485;node03:8485/nameservice1</value>
  </property>
  <!-- Proxy provider that clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.nameservice1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Where each JournalNode stores its edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoop/data/journal/</value>
  </property>
  <!-- Enable automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Fence the old active NameNode over SSH -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
</configuration>

dfs.replication: number of block replicas (2 here).

dfs.permissions.enabled: permission checking; true enables permission checking in HDFS, false disables it.

dfs.nameservices: comma-separated list of nameservices (logical names for the NameNode groups).

dfs.namenode.rpc-address.nameservice1.nn1: RPC address on which nn1 handles all client requests.

dfs.namenode.http-address.nameservice1.nn1: address and base port the nn1 web UI listens on.

dfs.namenode.rpc-address.nameservice1.nn2: RPC address on which nn2 handles all client requests.

dfs.namenode.http-address.nameservice1.nn2: address and base port the nn2 web UI listens on.

dfs.namenode.shared.edits.dir: the shared-storage directory used by the NameNodes of an HA cluster. It is written by the active NameNode and read by the standby to keep the namespaces in sync. Leave it empty in a non-HA cluster.

dfs.client.failover.proxy.provider.nameservice1: class name of the failover proxy provider for this nameservice (the property name is the fixed prefix plus the nameservice ID). See the "Configuration Details" section of the HDFS High Availability documentation for more information.

dfs.journalnode.edits.dir: directory where each JournalNode stores its edit files.

dfs.ha.automatic-failover.enabled: whether automatic failover is enabled.

dfs.ha.fencing.methods: the fencing method(s) to use; sshfence logs in to the old active NameNode over SSH to fence it, which requires passwordless login.

dfs.ha.fencing.ssh.private-key-files: SSH private key file used by sshfence (/root/.ssh/id_rsa).
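Note that sshfence assumes root on each NameNode host can already SSH to the other nodes without a password using the key configured above. If that is not yet set up, a typical way to do it (shown for node01; repeat on node02) is:

[root@node01 ~]# ssh-keygen -t rsa          # accept the defaults; creates /root/.ssh/id_rsa
[root@node01 ~]# ssh-copy-id root@node01
[root@node01 ~]# ssh-copy-id root@node02
[root@node01 ~]# ssh-copy-id root@node03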

6. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/mapred-site.xml.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property> -->
  <!-- Once jobs run on YARN, cap the memory of map and reduce tasks -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx200M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx200M</value>
  </property>
</configuration>

7. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/yarn-site.xml.

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<configuration>
  <!-- Site specific YARN configuration properties -->

  <!-- Shuffle service for MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Environment variables that containers may inherit -->
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <!-- Enable ResourceManager high availability -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Name of the YARN cluster -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn1</value>
  </property>
  <!-- ResourceManager IDs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Hostname and webapp port of each ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>node01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node02</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>node02:8088</value>
  </property>
  <!-- ZooKeeper quorum that YARN depends on -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node01:2181,node02:2181,node03:2181</value>
  </property>
</configuration>

Note: if this address is configured as hadoop.zk.address instead, sbin/start-yarn.sh and sbin/stop-yarn.sh stop working, so it must be set as yarn.resourcemanager.zk-address.
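Once YARN is running (step 10 below), you can check which ResourceManager is active with yarn rmadmin, using the rm1/rm2 IDs defined above; one should report active and the other standby (which is which may vary):

[root@node01 hadoop-2.5.2]# bin/yarn rmadmin -getServiceState rm1
[root@node01 hadoop-2.5.2]# bin/yarn rmadmin -getServiceState rm2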

8. Distribute the unpacked hadoop-2.5.2 to the /opt/module directory on node02 and node03.

[root@node01 module]# scp -r -p hadoop-2.5.2/ root@node02:$PWD
[root@node01 module]# scp -r -p hadoop-2.5.2/ root@node03:$PWD

III. First-Time Startup and Initialization

1. Start ZooKeeper (run on node01, node02, and node03).

cd /opt/module/zookeeper-3.4.5/bin
./zkServer.sh restart
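To confirm each ZooKeeper server is up, and whether it is the leader or a follower, you can also run:

./zkServer.sh status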

2. Start the JournalNodes (run on node01, node02, and node03).

sbin/hadoop-daemon.sh start journalnode

Script path: /opt/module/hadoop-2.5.2/sbin/hadoop-daemon.sh

3. Format the NameNode.

[root@node01 hadoop-2.5.2]# bin/hdfs namenode -format
23/06/06 01:02:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node01/192.168.56.121
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.5.2
......

Script path: /opt/module/hadoop-2.5.2/bin/hdfs

4. Copy the formatted metadata to the /data/hadoop/data directory on node02 (the standby NameNode).

5. Start the NameNode (nn1) on node01 (sbin/hadoop-daemon.sh start namenode, the same command as in step 7 below).

6. On node02, run bin/hdfs namenode -bootstrapStandby.

Note: the command pauses for confirmation at one point; just enter Y. (This is presumably because the hadoop-root directory was already copied over from node01.)
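For reference, the invocation on node02 (the same bin/hdfs script as in step 3) looks like:

[root@node02 hadoop-2.5.2]# bin/hdfs namenode -bootstrapStandby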

7. Start the NameNode on node02 (standby).

# Start the NameNode
[root@node02 hadoop-2.5.2]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-namenode-node02.out
[root@node02 hadoop-2.5.2]# jps
2080 Jps
1795 JournalNode
2009 NameNode
1647 QuorumPeerMain
[root@node02 hadoop-2.5.2]#

8. On one of the nodes, initialize the HA state in ZooKeeper for the failover controllers, as shown below. (Note: ZooKeeper must be running when you do this.)
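The standard command for this is hdfs zkfc -formatZK, which creates the znode that the ZKFC daemons coordinate through; run it on one NameNode host, e.g. node01:

[root@node01 hadoop-2.5.2]# bin/hdfs zkfc -formatZK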

9. Start/stop the NameNodes, JournalNodes, and DataNodes together.

[root@node01 hadoop-2.5.2]# sbin/start-dfs.sh
Starting namenodes on [node01 node02]
node01: namenode running as process 4059. Stop it first.
node02: namenode running as process 2752. Stop it first.
localhost: datanode running as process 3905. Stop it first.
Starting journal nodes [node01 node02 node03]
node03: journalnode running as process 1786. Stop it first.
node01: journalnode running as process 2032. Stop it first.
node02: journalnode running as process 1795. Stop it first.
Starting ZK Failover Controllers on NN hosts [node01 node02]
node01: starting zkfc, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-zkfc-node01.out
node02: starting zkfc, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-zkfc-node02.out
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]# sbin/stop-dfs.sh
Stopping namenodes on [node01 node02]
node01: no namenode to stop
node02: stopping namenode
localhost: stopping datanode
Stopping journal nodes [node01 node02 node03]
node01: stopping journalnode
node02: stopping journalnode
node03: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [node01 node02]
node01: stopping zkfc
node02: stopping zkfc
[root@node01 hadoop-2.5.2]#
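With HDFS up, a quick sanity check is to ask which NameNode is currently active; one of nn1/nn2 should report active and the other standby (which is which may vary):

[root@node01 hadoop-2.5.2]# bin/hdfs haadmin -getServiceState nn1
[root@node01 hadoop-2.5.2]# bin/hdfs haadmin -getServiceState nn2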

10. Start/stop YARN (these scripts only start the YARN components on the current node: the ResourceManager and the NodeManager).

[root@node01 hadoop-2.5.2]# sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
[root@node01 hadoop-2.5.2]# jps
6579 DataNode
6933 DFSZKFailoverController
1943 QuorumPeerMain
6487 NameNode
6759 JournalNode
8314 Jps
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.5.2/logs/yarn-root-resourcemanager-node01.out
localhost: starting nodemanager, logging to /opt/module/hadoop-2.5.2/logs/yarn-root-nodemanager-node01.out
[root@node01 hadoop-2.5.2]# jps
6579 DataNode
6933 DFSZKFailoverController
8470 NodeManager
1943 QuorumPeerMain
6487 NameNode
6759 JournalNode
8366 ResourceManager
8590 Jps
[root@node01 hadoop-2.5.2]#

11. Scripts to start/stop HDFS and YARN together.

sbin/start-all.sh = sbin/start-dfs.sh + sbin/start-yarn.sh
sbin/stop-all.sh = sbin/stop-dfs.sh + sbin/stop-yarn.sh
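With both HDFS and YARN running, a simple smoke test is to submit the bundled example job (the jar path assumes the stock distribution layout):

[root@node01 hadoop-2.5.2]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 10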

IV. Problems Encountered

1. After running start-dfs.sh and start-yarn.sh, the NodeManager and DataNode did not come up on the other nodes.

Resolution: the hostnames of the three cluster nodes had not been configured in /opt/module/hadoop-3.3.6/etc/hadoop/workers (in Hadoop 2.x this file is named slaves). This is also why the startup output above shows localhost: lines for the DataNode and NodeManager.
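For this cluster, the file should simply list the three hostnames, one per line:

node01
node02
node03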
