Installing a Hadoop cluster with a shell script
Although the installation is automated end to end, there is still plenty of room for improvement, for example:
1. The script currently only works when run as root, and fails otherwise; a permission check should be added;
2. A few functions could be extracted to reduce code duplication;
3. Some of the checks are not very robust;
......
Limited by time and ability, this is as far as it goes for now.
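As a sketch of the first improvement, a root check could be added near the top of the script. This is a minimal sketch; the function name `require_root` is made up for illustration:

```shell
#!/bin/bash
# Hypothetical helper for the first improvement above: abort early
# unless the script is running as root (uid 0).
require_root() {
  local uid=${1:-$(id -u)}   # the uid argument exists only for testing
  if [ "$uid" -ne 0 ] ;then
    echo "No root access, please run this script as root" >&2
    return 1
  fi
}

# Near the top of installHadoop one would then call:
# require_root || exit 1
```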
The code of the installHadoop file is as follows:
#!/bin/bash
# root_password="123456"
jdk_tar=jdk-8u65-linux-i586.tar.gz
jdk_url=http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-i586.tar.gz
jdk_version=jdk1.8.0_65
java_version=1.8.0_65
jdk_install_path=/usr/local/development
hadoop_url=http://101.44.1.4/files/2250000004303235/mirrors.hust.edu.cn/apache/hadoop/common/stable1/hadoop-1.2.1.tar.gz
hadoop_version=hadoop-1.2.1
hadoop_tar=hadoop-1.2.1.tar.gz
hadoop_install_path=hadoop
hadoop_tmp_path=/home/hadoop/hadoop_tmp
hadoop_name_path=/home/hadoop/hdfs/name
hadoop_data_path=/home/hadoop/hdfs/data
user_name=hadoop
user_passwd=hadoop #su
#check for root access
#if [ $? -ne 0 ] ;then
# echo "No root access"
# exit
#fi
shFilePath=$(pwd)
#check whether jdk is installed or not
java -version &> /dev/null
if [ $? -eq 0 ] ;then
echo "{Jdk has been installed in this pc}"
java -version
else
#check whether the ~/../usr/local/development directory exists; create it if not
#first cd into the current user's home directory
#cd ~
#cd ../../usr/local/$jdk_install_path &> /dev/null
#if [ $? -ne 0 ] ;then
if [ ! -d $jdk_install_path ] ;then
echo "{Create $jdk_install_path folder to install jdk}"
mkdir $jdk_install_path
cd $jdk_install_path
echo "{Success to create $jdk_install_path folder}"
else
echo "{$jdk_install_path folder already exists}"
cd $jdk_install_path
fi
#check whether jdk has already been untarred
#ls | grep "$jdk_version" &> /dev/null
if [ ! -d $jdk_version ] ;then
#check whether the jdk tarball is already present
if [ ! -f $jdk_tar ] ;then
echo "{Download $jdk_tar}"
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" $jdk_url
fi
echo "{Untar $jdk_tar}"
tar -zxvf $jdk_tar
else
echo "{$jdk_version folder already exists in $jdk_install_path/}"
fi
#set jdk environment
echo "{set java environment}"
cd /etc/profile.d/
#name the profile fragment after the install folder, e.g. development.sh
jdk_profile=$(basename $jdk_install_path).sh
touch $jdk_profile
#echo "#!/bin/bash" > $jdk_profile
echo "export JAVA_HOME=$jdk_install_path/$jdk_version" >> $jdk_profile
echo "export JRE_HOME=\$JAVA_HOME/jre" >> $jdk_profile
echo "export CLASSPATH=.:\$JAVA_HOME/lib:\$JRE_HOME/lib:\$CLASSPATH" >> $jdk_profile
echo "PATH=\$JAVA_HOME/bin:\$JRE_HOME/bin:\$PATH" >> $jdk_profile
source $jdk_profile
#check the java version
java -version | grep "$java_version" &> /dev/null
if [ $? -eq 0 ] ;then
echo "{Success to install $jdk_version}"
fi
fi
#no passwd needed when logging in via ssh
echo "{Config ssh service and enable ssh login without a password}"
sudo yum -y install openssh-server openssh-clients
#update /etc/ssh/sshd_config
#RSAAuthentication
RSAAuthentication_lineNum=`awk '/RSAAuthentication yes/{print NR}' ~/../etc/ssh/sshd_config`
RSAAuthentication="RSAAuthentication yes"
sed -i "${RSAAuthentication_lineNum}s/^.*/${RSAAuthentication}/g" ~/../etc/ssh/sshd_config
#PubkeyAuthentication
PubkeyAuthentication_lineNum=`awk '/PubkeyAuthentication yes/{print NR}' ~/../etc/ssh/sshd_config`
PubkeyAuthentication="PubkeyAuthentication yes"
sed -i "${PubkeyAuthentication_lineNum}s/^.*/${PubkeyAuthentication}/g" ~/../etc/ssh/sshd_config
#AuthorizedKeysFile
AuthorizedKeysFile_lineNum=`awk '/AuthorizedKeysFile/{print NR}' ~/../etc/ssh/sshd_config`
AuthorizedKeysFile="AuthorizedKeysFile .ssh\/authorized_keys"
sed -i "${AuthorizedKeysFile_lineNum}s/^.*/${AuthorizedKeysFile}/g" ~/../etc/ssh/sshd_config
echo "{Your changes to sshd_config are as follows}"
sed -n "${RSAAuthentication_lineNum},${AuthorizedKeysFile_lineNum}p" ~/../etc/ssh/sshd_config
#restart sshd service
~/../sbin/service sshd restart
echo "{Finish to update sshd_config}"
#generate public key
if [ ! -d ~/.ssh ] ;then
mkdir ~/.ssh
fi
cd ~/.ssh
echo y | ssh-keygen -t rsa -P '' -f id_rsa
if [ ! -f authorized_keys ] ;then
touch authorized_keys
cat id_rsa.pub > authorized_keys
else
cat id_rsa.pub >> authorized_keys
fi
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
#Download hadoop
cd ~
cd ../home/$hadoop_install_path &> /dev/null
if [ $? -ne 0 ] ;then
echo "{Create /home/$hadoop_install_path folder to install hadoop}"
cd ../home
mkdir $hadoop_install_path
cd $hadoop_install_path
echo "{Success to create $hadoop_install_path folder}"
else
echo "{/home/$hadoop_install_path folder already exists}"
cd ~
cd ../home/$hadoop_install_path
fi
#check whether the $hadoop_version folder exists
if [ ! -d "$hadoop_version" ] ;then
#check whether the $hadoop_tar tarball exists
if [ ! -f "$hadoop_tar" ] ;then
echo "{Download $hadoop_tar}"
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" $hadoop_url
fi
echo "{Untar $hadoop_tar}"
tar -zxvf $hadoop_tar
else
echo "{$hadoop_version folder already exists in /home/$hadoop_install_path/}"
fi
#enter the config folder
cd $hadoop_version
if [ ! -d "conf" ] ;then
cd etc/hadoop/
else
cd conf
fi
#update hadoop-env.sh
java_home_line_num=`awk '/export JAVA_HOME/{print NR}' hadoop-env.sh`
JAVAHOME="export JAVA_HOME=$jdk_install_path/$jdk_version"
#-i modifies the file in place; use | as the sed delimiter so the slashes in the path need no escaping
sed -i "${java_home_line_num}s|^.*|${JAVAHOME}|" hadoop-env.sh
cat hadoop-env.sh | grep "JAVA_HOME"
echo "{Finish to update hadoop-env.sh}"
hadoop_config_path=$(pwd)
#echo $cur_path
#echo $shFilePath
#unalias cp
#cp -rf core-site.xml $curPath/
cd $shFilePath
#update core-site.xml
cat core-site.xml > $hadoop_config_path/core-site.xml
if [ ! -d $hadoop_tmp_path ] ;then
mkdir -p $hadoop_tmp_path
fi
rm -rf $hadoop_tmp_path/*
if [ ! -d $hadoop_name_path ] ;then
mkdir -p $hadoop_name_path
fi
chmod g-w $hadoop_name_path
rm -rf $hadoop_name_path/*
if [ ! -d $hadoop_data_path ] ;then
mkdir -p $hadoop_data_path
fi
chmod g-w $hadoop_data_path
rm -rf $hadoop_data_path/*
#update mapred-site.xml
cat mapred-site.xml > $hadoop_config_path/mapred-site.xml
#update hdfs-site.xml
cat hdfs-site.xml > $hadoop_config_path/hdfs-site.xml
cd $hadoop_config_path
echo "{Check core-site.xml}"
#cat core-site.xml
echo "{Check mapred-site.xml}"
#cat mapred-site.xml
echo "{Check hdfs-site.xml}"
#cat hdfs-site.xml
echo "{Finish config hadoop}"
#add a hadoop account with admin access
id $user_name &> /dev/null
if [ $? -ne 0 ] ;then
echo "{add $user_name}"
sudo useradd -mr $user_name
fi
#set passwd for hadoop account
echo $user_passwd | sudo passwd --stdin $user_name
echo "{Format hadoop}"
echo Y | ../bin/hadoop namenode -format
cd ../bin/
bash stop-all.sh
echo "{Start hadoop}"
bash start-all.sh
result=`jps | awk '{print $2}' | xargs`
expect_result="JobTracker NameNode DataNode TaskTracker Jps SecondaryNameNode"
if [ "$result" == "$expect_result" ] ;then
echo "{Congratulations!!! Success to install hadoop!}"
else
echo "{Sorry, failed to install hadoop; trying to restart hadoop!}"
bash stop-all.sh
echo "{Start hadoop}"
bash start-all.sh
result=`jps | awk '{print $2}' | xargs`
if [ "$result" == "$expect_result" ] ;then
echo "{Congratulations!!! found all java processes, success to install hadoop!}"
else
echo "{Sorry, failed to find all java processes, please check!}"
fi
fi
echo "{!!!finish!!!}"
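The final check compares the jps output as a single ordered string, which is fragile because jps prints processes in arbitrary order. An order-insensitive variant could look like this (a sketch; `all_daemons_running` is a made-up helper name, and the daemon names are taken from the script's expect_result):

```shell
#!/bin/bash
# Hypothetical order-insensitive replacement for the expect_result
# comparison: succeed only if every expected daemon name appears
# somewhere in the jps output, regardless of order.
all_daemons_running() {
  local jps_output=$1
  local daemon
  for daemon in JobTracker NameNode DataNode TaskTracker SecondaryNameNode; do
    if ! echo "$jps_output" | grep -qw "$daemon" ;then
      echo "missing: $daemon" >&2
      return 1
    fi
  done
}

# Usage in the script might look like:
# all_daemons_running "`jps | awk '{print $2}' | xargs`" && echo ok
```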
In addition, to fully automate the hadoop configuration, the core-site.xml, hdfs-site.xml and mapred-site.xml files must be placed in the same directory as the installHadoop file.
The core-site.xml file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
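Since installHadoop copies these three files from its own directory, a small pre-flight check could verify they are present before the script starts. This is a sketch; `configs_present` is a made-up helper name:

```shell
#!/bin/bash
# Hypothetical pre-flight check: verify that core-site.xml, hdfs-site.xml
# and mapred-site.xml all sit in the given directory.
configs_present() {
  local dir=${1:-.}
  local f
  for f in core-site.xml hdfs-site.xml mapred-site.xml; do
    if [ ! -f "$dir/$f" ] ;then
      echo "missing $dir/$f" >&2
      return 1
    fi
  done
}

# In installHadoop one might call:
# configs_present "$shFilePath" || exit 1
```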
A single-machine Hadoop installation is described at:
http://www.linuxidc.com/Linux/2015-04/116447.htm
A multi-machine Hadoop installation is described at:
http://blog.csdn.net/ab198604/article/details/8250461