Hadoop 2.x HA (QJM) Installation and Deployment Plan
I. Host and Service Plan
Host    Services
db01    namenode, datanode, journalnode, zookeeper, ZKFC
db02    namenode, datanode, journalnode, zookeeper, ZKFC
db03    datanode, journalnode, zookeeper
db04    datanode
db05    datanode
II. Environment Configuration
1. Create a hadoop user for installing the software
groupadd hadoop
useradd -g hadoop hadoop
echo "dbking588" | passwd --stdin hadoop

Configure environment variables (in the hadoop user's profile):
export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
2. Configure passwordless SSH login
--How to configure:
$ ssh-keygen -t rsa
$ ssh-copy-id db07.chavin.king
(ssh-copy-id only worked for RSA keys here; in testing it did not work with a DSA key setup)
--Verify:
[hadoop@db01 ~]$ ssh db02 date
Wed Apr 19 09:57:34 CST 2017
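To confirm passwordless login to every node in one pass, a small loop can be run from db01 (a minimal sketch, assuming all five hostnames resolve):

for host in db01 db02 db03 db04 db05; do
    # BatchMode=yes fails immediately instead of prompting for a password
    ssh -o BatchMode=yes "$host" "hostname; date" || echo "ssh to $host failed"
done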
3. Grant the hadoop user sudo privileges
chmod u+w /etc/sudoers
echo "hadoop ALL=(root) NOPASSWD: ALL" >> /etc/sudoers
chmod u-w /etc/sudoers
4. Stop the firewall and disable SELinux
sed -i '/SELINUX=enforcing/d' /etc/selinux/config
sed -i '/SELINUX=disabled/d' /etc/selinux/config
echo "SELINUX=disabled" >> /etc/selinux/config
Alternatively, as a single in-place substitution:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
service iptables stop
chkconfig iptables off
5. Set the open-file limit and maximum number of processes
cp /etc/security/limits.conf /etc/security/limits.conf.bak
echo "* soft nproc 32000" >> /etc/security/limits.conf
echo "* hard nproc 32000" >> /etc/security/limits.conf
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
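The new limits only apply to sessions opened after the change; a quick check after re-logging in as the hadoop user (a sketch) is:

$ ulimit -n     # expect 65535 (max open files)
$ ulimit -u     # expect 32000 (max user processes)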
6. Configure cluster time synchronization
--On db01 (the NTP server for the cluster):
cp /etc/ntp.conf /etc/ntp.conf.bak
cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
echo "restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpd
service ntpd restart
--On the other nodes, add a crontab entry that re-syncs against db01 every 10 minutes:
0-59/10 * * * * /opt/scripts/sync_time.sh

# cat /opt/scripts/sync_time.sh
/sbin/service ntpd stop
/usr/sbin/ntpdate db01.chavin.king
/sbin/service ntpd start
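To confirm that a node is actually tracking db01, something along these lines can be run on any node other than db01 (a sketch; assumes the ntp packages are installed):

$ ntpq -p                                 # db01 should be listed, marked with '*' once selected as the sync source
$ /usr/sbin/ntpdate -q db01.chavin.king   # query-only: shows the current offset without stepping the clock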
7. Install Java
[root@master ~]# vim /etc/profile

Append the following environment variables at the end of the file:
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH

Check that Java is installed correctly:
[root@master ~]# java -version
8. Install the Hadoop software
# cd /opt/software
# tar -zxvf hadoop-2.5.0.tar.gz -C /opt/cdh-5.3.6/
# chown -R hadoop:hadoop /opt/cdh-5.3.6/hadoop-2.5.0
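A quick sanity check of the unpacked distribution (a sketch, run as the hadoop user):

$ cd /opt/cdh-5.3.6/hadoop-2.5.0
$ bin/hadoop version      # should report Hadoop 2.5.0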
III. Edit the Hadoop Configuration Files
The files that need HA-specific configuration are listed below; everything else can be set up exactly as in a standard fully distributed Hadoop deployment:
HDFS configuration files:
etc/hadoop/hadoop-env.sh
etc/hadoop/core-site.xml
etc/hadoop/hdfs-site.xml
etc/hadoop/slaves
YARN configuration files:
etc/hadoop/yarn-env.sh
etc/hadoop/yarn-site.xml
etc/hadoop/slaves
MapReduce configuration files:
etc/hadoop/mapred-env.sh
etc/hadoop/mapred-site.xml
The contents of the HA-related configuration files are as follows:
[hadoop@db01 hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License 2.0 header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>7000</value>
    </property>
</configuration>
[hadoop@db01 hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License 2.0 header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>db01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>db02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>db01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>db02:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://db01:8485;db02:8485;db03:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop-2.5.0/data/dfs/jn</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
</configuration>
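Once hdfs-site.xml is in place, the effective HA values can be spot-checked with hdfs getconf; this only reads the local configuration, so it works before any daemon is started (a sketch):

$ bin/hdfs getconf -confKey dfs.nameservices        # ns1
$ bin/hdfs getconf -confKey dfs.ha.namenodes.ns1    # nn1,nn2
$ bin/hdfs getconf -namenodes                       # db01 db02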
[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!-- Apache License 2.0 header omitted -->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>db02</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>600000</value>
    </property>
</configuration>
[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License 2.0 header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>db01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>db01:19888</value>
    </property>
</configuration>
[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/slaves
db01
db02
db03
db04
db05
Set the Java environment variable (JAVA_HOME) in the following files:
etc/hadoop/hadoop-env.sh
etc/hadoop/yarn-env.sh
etc/hadoop/mapred-env.sh
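One way to do this without opening each file is to append an explicit JAVA_HOME line, since a later definition overrides the earlier one when the script is sourced (a sketch; the JDK path matches the one configured in section II and should be adjusted if it differs):

$ cd /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop
$ for f in hadoop-env.sh yarn-env.sh mapred-env.sh; do
      echo 'export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera' >> "$f"
  done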
Create the data directories:
/opt/cdh-5.3.6/hadoop-2.5.0/data/tmp
/opt/cdh-5.3.6/hadoop-2.5.0/data/dfs/jn
(These paths should agree with hadoop.tmp.dir and dfs.journalnode.edits.dir in the XML files above; adjust one or the other if they differ.)
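A sketch of the corresponding commands (run as the hadoop user; the journal directory is only strictly needed on the journalnode hosts db01-db03, but creating it everywhere does no harm):

$ mkdir -p /opt/cdh-5.3.6/hadoop-2.5.0/data/tmp
$ mkdir -p /opt/cdh-5.3.6/hadoop-2.5.0/data/dfs/jn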
Sync the installation directory to the other nodes:
$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 hadoop@db02:/opt/cdh-5.3.6/hadoop-2.5.0
$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 hadoop@db03:/opt/cdh-5.3.6/hadoop-2.5.0
$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 hadoop@db04:/opt/cdh-5.3.6/hadoop-2.5.0
$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 hadoop@db05:/opt/cdh-5.3.6/hadoop-2.5.0
IV. Starting the Cluster for the First Time
1. Start the journalnode service
[db01]$ sbin/hadoop-daemon.sh start journalnode
[db02]$ sbin/hadoop-daemon.sh start journalnode
[db03]$ sbin/hadoop-daemon.sh start journalnode
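Each of the three hosts should now show a JournalNode process listening on the QJM port (a quick check, sketched below):

$ jps | grep JournalNode
$ netstat -lnt | grep 8485      # the edits RPC port from dfs.namenode.shared.edits.dir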
2. Format the HDFS file system
[db01]$ bin/hdfs namenode -format
3. Start the namenode on nn1
[db01]$ sbin/hadoop-daemon.sh start namenode
4. Bootstrap nn2 with nn1's metadata (the metadata directory can also be copied over directly with cp)
[db02]$ bin/hdfs namenode -bootstrapStandby
5. Start the namenode service on nn2
[db02]$ sbin/hadoop-daemon.sh start namenode
6. Start all datanode services
[db01]$ sbin/hadoop-daemon.sh start datanode
[db02]$ sbin/hadoop-daemon.sh start datanode
[db03]$ sbin/hadoop-daemon.sh start datanode
[db04]$ sbin/hadoop-daemon.sh start datanode
[db05]$ sbin/hadoop-daemon.sh start datanode
7. Transition nn1 to the active state
[db01]$ bin/hdfs haadmin -transitionToActive nn1
[db01]$ bin/hdfs haadmin -getServiceState nn1
[db01]$ bin/hdfs haadmin -getServiceState nn2
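With nn1 active, the datanode registrations can be confirmed through the HDFS client (a sketch; all five datanodes should be reported as live):

[db01]$ bin/hdfs dfsadmin -report      # cluster summary plus one section per datanode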
At this point, the HDFS cluster has started successfully.
8. Run basic tests against the HDFS file system
Create, delete, upload, and read files, and so on.
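A minimal round-trip along those lines (a sketch using the hdfs dfs client from db01):

[db01]$ bin/hdfs dfs -mkdir -p /user/hadoop/test
[db01]$ bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/test/
[db01]$ bin/hdfs dfs -cat /user/hadoop/test/core-site.xml
[db01]$ bin/hdfs dfs -rm -r /user/hadoop/test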
V. Manually Verify NameNode Active/Standby Switchover
[db01]$ bin/hdfs haadmin -transitionToStandby nn1
[db01]$ bin/hdfs haadmin -transitionToActive nn2
[db01]$ bin/hdfs haadmin -getServiceState nn1
standby
[db01]$ bin/hdfs haadmin -getServiceState nn2
active
Run the basic HDFS functional tests again.
VI. Automatic HDFS Failover with ZooKeeper
1. Install the ZooKeeper cluster according to the service plan
Install the ZooKeeper server cluster:
$ tar -zxvf zookeeper-3.4.5.tar.gz -C /usr/local/
$ chown -R hadoop:hadoop /usr/local/zookeeper-3.4.5/
$ cd /usr/local/zookeeper-3.4.5/conf/
$ cp zoo_sample.cfg zoo.cfg
$ vi zoo.cfg
--Add the following to the file:
dataDir=/usr/local/zookeeper-3.4.5/data
server.1=db01:2888:3888
server.2=db02:2888:3888
server.3=db03:2888:3888

Configure the myid file:
$ mkdir -p /usr/local/zookeeper-3.4.5/data
$ cd /usr/local/zookeeper-3.4.5/data/
$ vi myid
Enter the server number from the list above (1 on db01).

Copy the installation to the other two nodes:
# scp -r zookeeper-3.4.5/ db02:/usr/local/
# scp -r zookeeper-3.4.5/ db03:/usr/local/
Then edit the myid file on each server accordingly (2 on db02, 3 on db03).
Start the zk server on each node:
db01$ bin/zkServer.sh start
db02$ bin/zkServer.sh start
db03$ bin/zkServer.sh start
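Once the quorum has formed, each node reports its role (a sketch; one node should show leader and the other two follower):

db01$ bin/zkServer.sh status
db02$ bin/zkServer.sh status
db03$ bin/zkServer.sh status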
2. Modify the core-site.xml and hdfs-site.xml configuration files:
Add the following to core-site.xml:
<property>
    <name>ha.zookeeper.quorum</name>
    <value>db01:2181,db02:2181,db03:2181</value>
</property>
Add the following to hdfs-site.xml:
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
The main contents of the modified core-site.xml and hdfs-site.xml files are shown below:
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>7000</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>db01:2181,db02:2181,db03:2181</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>db01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>db02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>db01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>db02:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://db01:8485;db02:8485;db03:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop-2.5.0/data/dfs/jn</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
After the configuration files have been modified, stop the HDFS cluster and sync the files to the other nodes:
[db01]$ sbin/stop-dfs.sh
[db01]$ scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml hadoop@db02:/opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/
[db01]$ scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml hadoop@db03:/opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/
[db01]$ scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml hadoop@db04:/opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/
[db01]$ scp etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml hadoop@db05:/opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/
3. Initialize the HA state in ZooKeeper
[db01]$ bin/hdfs zkfc -formatZK
The hadoop-ha znode is now visible from the zkCli client:
[zk: localhost:2181(CONNECTED) 3] ls /
[hadoop-ha, zookeeper]
4. Start the HDFS cluster
[db01]$ sbin/start-dfs.sh
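With automatic failover enabled, start-dfs.sh also starts a DFSZKFailoverController (ZKFC) on each namenode host, and one namenode is elected active automatically; a quick check (sketch):

[db01]$ jps | grep -E 'NameNode|DFSZKFailoverController'
[db01]$ bin/hdfs haadmin -getServiceState nn1
[db01]$ bin/hdfs haadmin -getServiceState nn2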
VII. Test Automatic Failover
[hadoop@db01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn1
standby
[hadoop@db01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn2
active
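The PID killed below belongs to the active namenode on db02 (nn2); it can be looked up with jps before the kill, as sketched here:

[hadoop@db02 hadoop-2.5.0]$ jps | grep NameNode      # note the NameNode PID (25121 in this run)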
[hadoop@db02 hadoop-2.5.0]$ kill -9 25121
[hadoop@db01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn1
active
[hadoop@db01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn2
17/03/12 14:24:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/12 14:24:51 INFO ipc.Client: Retrying connect to server: db02/192.168.100.232:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep
(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From db01/192.168.100.231 to db02:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
Automatic failover is working; the QJM-based Hadoop HA deployment is complete.