Compiling and Installing hadoop-2.2.0 with HA Configuration
I. Preparation
The following are required before you start:
1. CentOS 6.4, with a hadoop user created and the /etc/hosts file configured on every node in the cluster.
2. SSH set up for the hadoop user, with passwordless access between all machines in the cluster (HA fencing depends on this; a sketch follows this list).
3. The community-edition hadoop-2.2.0 source code, downloaded.
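A minimal sketch of the passwordless SSH setup for the hadoop user (node1..node3 are placeholder hostnames; substitute your own):

# run as the hadoop user on one node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# push the public key to every node in the cluster
for h in node1 node2 node3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$h
done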
II. Compiling Hadoop 2.2.0
(The software needed to compile hadoop 2.2.0 can be downloaded here: http://pan.baidu.com/s/1mgodf40)
--------------------------------------------------------------------------------------------
yum -y install lzo-devel zlib-devel gcc autoconf automake libtool gcc-c++
yum install openssl-devel
yum install ncurses-devel
--------------------------------------------------------------------------------------------
Build-tool prerequisites: Ant, Maven, ProtocolBuffer, findbugs, CMake
#Install Java
yum -y install jdk
Compile and install Protobuf:
tar -zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make
make install
Install Ant:
tar -zxvf apache-ant-1.9.2-bin.tar.gz
mv apache-ant-1.9.2/ /usr/local/ant
Install Maven:
tar -zxvf apache-maven-3.0.5-bin.tar.gz
mv apache-maven-3.0.5/ /usr/local/maven
Install findbugs:
tar -zxvf findbugs-2.0.2.tar.gz
mv findbugs-2.0.2/ /usr/local/findbugs
Compile and install CMake:
tar -zxvf cmake-2.8.6.tar.gz
cd cmake-2.8.6
./bootstrap
make
make install
-------------------------------------------------------------------------------------------- Configure the environment
#Adjust the paths below to match your own environment
vi /etc/profile

#java
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=/usr/java/jdk1.7.0_45/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

#maven
export MAVEN_HOME=/usr/local/maven
export MAVEN_OPTS="-Xms256m -Xmx512m"
export CLASSPATH=.:$CLASSPATH:$MAVEN_HOME/lib
export PATH=$PATH:$MAVEN_HOME/bin

#protobuf
export PROTOBUF_HOME=/usr/local/protobuf
export CLASSPATH=.:$CLASSPATH:$PROTOBUF_HOME/lib
export PATH=$PATH:$PROTOBUF_HOME/bin

#ant
export ANT_HOME=/usr/local/ant
export CLASSPATH=.:$CLASSPATH:$ANT_HOME/lib
export PATH=$PATH:$ANT_HOME/bin

#findbugs
export FINDBUGS_HOME=/usr/local/findbugs
export CLASSPATH=.:$CLASSPATH:$FINDBUGS_HOME/lib
export PATH=$PATH:$FINDBUGS_HOME/bin

source /etc/profile
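A quick sanity check that the toolchain is on the PATH after sourcing /etc/profile (version strings reflect the packages used in this post):

java -version     # expect 1.7.0_45
mvn -version      # expect Apache Maven 3.0.5
ant -version      # expect Apache Ant 1.9.2
protoc --version  # expect libprotoc 2.5.0
cmake --version   # expect cmake version 2.8.6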
--------------------------------------------------------------------------------------------
vi /hadoop-2.2.0/hadoop-common-project/hadoop-auth/pom.xml

Find the following dependency:

<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty</artifactId>
  <scope>test</scope>
</dependency>

and add this one immediately after it:

<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <scope>test</scope>
</dependency>

Note: without this change, the build may fail with an error like:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hadoop-auth: Compilation failure: Compilation failure:
----------------------------------------------------------------------------------------------
Rebuild:
tar -zxvf hadoop-2.2.0-src.tar.gz
cd hadoop-2.2.0-src
mvn clean package -DskipTests -Pdist,native,docs -Dtar # this takes a long while
(Note: you may run into glibc version problems; there is plenty of discussion of these issues online to consult.)
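If the build succeeds, the distribution should be produced under hadoop-dist/target; a quick check (exact layout assumed from a typical 2.2.0 build):

ls hadoop-dist/target/hadoop-2.2.0.tar.gz
file hadoop-dist/target/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0  # on x86_64, should report a 64-bit ELF shared object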
III. Installing Hadoop
Extract hadoop-2.2.0.tar.gz.
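A sketch of the extraction step, assuming the tarball from the build above and the /usr/local/hadoop prefix used in ~/.bashrc below:

tar -zxvf hadoop-2.2.0.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.2.0 /usr/local/hadoop  # matches HADOOP_HOME below
chown -R hadoop:hadoop /usr/local/hadoop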
Configure the hadoop user's ~/.bashrc as follows:
# User specific environment and startup programs
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/common/hadoop/conf
export HBASE_HOME=/usr/local/hbase
export HBASE_CONF_DIR=/usr/local/common/hbase/conf
export JAVA_HOME=/usr/java
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:/usr/local/zookeeper/bin:/data1/script
alias jps="jps -J-Djava.io.tmpdir=$HOME"
alias jstat="jstat -J-Djava.io.tmpdir=$HOME"
source ~/.bashrc
IV. Configuring Hadoop
Edit the Hadoop configuration files under $HADOOP_CONF_DIR.
#Configure hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.nameservices</name>
<value>hbaseCluster</value>
</property>
<property>
<name>dfs.ha.namenodes.hbaseCluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.hbaseCluster.nn1</name>
<value>h112191.mars.grid.sina.com.cn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.hbaseCluster.nn1</name>
<value>h112191.mars.grid.sina.com.cn:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.hbaseCluster.nn2</name>
<value>h112192.mars.grid.sina.com.cn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.hbaseCluster.nn2</name>
<value>h112192.mars.grid.sina.com.cn:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>file:///data1/hadoop/namenode_nfs</value>
<description>Shared storage for the HA edit log, typically an NFS mount point</description>
</property>
<!--
<property>
<name>dfs.namenode.rpc-address.ns2</name>
<value>h112192.mars.grid.sina.com.cn:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.ns2</name>
<value>h112192.mars.grid.sina.com.cn:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address.ns2</name>
<value>h112192.mars.grid.sina.com.cn:50090</value>
</property>
-->
<property>
<name>dfs.replication</name>
<value>3</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///data1/hadoop/namenode</value>
<final>true</final>
</property>
<property>
<name>dfs.data.dir</name>
<value>/data11/hadoop/data/datanode,/data2/hadoop/data/datanode,/data3/hadoop/data/datanode,/data4/hadoop/data/datanode,/data5/hadoop/data/datanode,/data6/hadoop/data/datanode,/data7/hadoop/data/datanode,/data8/hadoop/data/datanode,/data9/hadoop/data/datanode,/data10/hadoop/data/datanode</value>
<final>true</final>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/data1/hadoop/namesecondary</value>
<final>true</final>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
<final>true</final>
</property>
<property>
<name>dfs.hosts</name>
<value>/usr/local/common/hadoop/conf/include</value>
<final>true</final>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/usr/local/common/hadoop/conf/exclude</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>8192</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>128</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>32</value>
</property>
<property>
<name>dfs.web.ugi</name>
<value>hadoop,supergroup</value>
</property>
<property>
<name>dfs.balance.bandwidthPerSec</name>
<value>52428800</value>
</property>
<!--
Configuring automatic failover
-->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>zk4.mars.grid.sina.com.cn:2181,zk3.mars.grid.sina.com.cn:2181,zk2.mars.grid.sina.com.cn:2181,zk1.mars.grid.sina.com.cn:2181,zk5.mars.grid.sina.com.cn:2181</value>
</property>
<!--at least one fencing method is required-->
<property>
<name>dfs.client.failover.proxy.provider.hbaseCluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence(hadoop:26387)</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>10000</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/usr/home/hadoop/.ssh/id_rsa</value>
</property>
</configuration>
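Because dfs.ha.fencing.methods is set to sshfence(hadoop:26387), each namenode must be able to reach the other over SSH as user hadoop on port 26387 using the key from dfs.ha.fencing.ssh.private-key-files. A quick check, run from nn1's host (hostname and paths taken from the config above):

ssh -i /usr/home/hadoop/.ssh/id_rsa -p 26387 hadoop@h112192.mars.grid.sina.com.cn true && echo "fencing ssh OK"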
#Configure core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<!-- Config verison 1.0 -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hbaseCluster</value>
<description>The default filesystem scheme and nameservice logical name; must match the nameservice defined in hdfs-site.xml.
This replaces fs.default.name from Hadoop 1.0.</description>
</property>
<!--
<property>
<name>fs.defaultFS</name>
<value>hdfs://h112191.mars.grid.sina.com.cn:9000</value>
<final>true</final>
</property>
-->
<property>
<name>fs.trash.interval</name>
<value>30</value>
<final>true</final>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}-${hue.suffix}</value>
<final>true</final>
</property>
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
<final>true</final>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
</configuration>
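One caveat: the com.hadoop.compression.lzo.* codecs listed in io.compression.codecs come from the separate hadoop-lzo project, not from the Apache tarball, so its jar and native library must be deployed on every node. A minimal presence check (install paths below are assumptions; adjust to your layout):

ls $HADOOP_HOME/share/hadoop/common/lib/hadoop-lzo*.jar  # assumed jar location
ls $HADOOP_HOME/lib/native/libgplcompression*            # assumed native lib location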
#Configure the slaves file
hostname1
hostname2
hostname3
V. Initializing Hadoop
Make sure /etc/hosts is configured!
First, with automatic HA configured, all journalnodes must be started before anything else; on each journalnode machine run:
hadoop-daemon.sh start journalnode
Next, run hdfs namenode -format [<clusterID>] on one of the namenodes only. If the other namenode then fails to start, stop the cluster and copy the namenode directory over to it (or use hdfs namenode -bootstrapStandby, as described in the quoted documentation below).
Then format the HA state in ZooKeeper:
$hdfs zkfc -formatZK
Start the DFSZKFailoverController:
hadoop-daemon.sh start zkfc
Finally, bring up HA. Quoting the official (English) documentation:
- If you are setting up a fresh HDFS cluster, you should first run the format command (hdfs namenode -format) on one of NameNodes.
- If you have already formatted the NameNode, or are converting a non-HA-enabled cluster to be HA-enabled, you should now copy over the contents of your NameNode metadata directories to the other, unformatted NameNode by running the command "hdfs namenode -bootstrapStandby" on the unformatted NameNode. Running this command will also ensure that the shared edits directory (as configured by dfs.namenode.shared.edits.dir) contains sufficient edits transactions to be able to start both NameNodes.
- If you are converting a non-HA NameNode to be HA, you should run the command "hdfs namenode -initializeSharedEdits", which will initialize the shared edits directory with the edits data from the local NameNode edits directories.
Then start all of the processes:
$start-dfs.sh
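After start-dfs.sh, a quick look at which daemons came up on each node (jps is aliased in ~/.bashrc above):

jps  # namenode hosts should show NameNode and DFSZKFailoverController; datanode hosts should show DataNode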
VI. Testing HA
Kill the active namenode and the standby namenode becomes active; the failover took about 3 seconds.
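A minimal failover test sketch (nn1/nn2 are the namenode IDs from dfs.ha.namenodes.hbaseCluster above):

hdfs haadmin -getServiceState nn1   # expect "active"
hdfs haadmin -getServiceState nn2   # expect "standby"
# on the active namenode's host, kill the NameNode process:
kill -9 $(jps | awk '$2=="NameNode"{print $1}')
# after a few seconds the standby should have taken over:
hdfs haadmin -getServiceState nn2   # expect "active"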