Planning

Ansible node:
  ansible controller (image: RHEL 8.2)

Hadoop cluster managed by Ansible:
  master
  slave1
  slave2
  (image: CentOS 1810)

0. Initialization

Configure the network and set the hostnames on the Hadoop cluster nodes:

10.104.44.25   master
10.104.44.49 slave1
10.104.44.36 slave2
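
The address plan above can be turned into hosts entries with a short script. A minimal sketch that writes to a scratch file (/tmp/cluster_hosts) instead of the real /etc/hosts; on each node you would additionally run hostnamectl set-hostname with the matching name:

```shell
# Sketch: write the cluster address plan to a scratch file.
# On a real node, append these lines to /etc/hosts and run
# `hostnamectl set-hostname <name>` with the matching name.
cat > /tmp/cluster_hosts <<'EOF'
10.104.44.25 master
10.104.44.49 slave1
10.104.44.36 slave2
EOF
wc -l < /tmp/cluster_hosts   # prints 3
```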

I. Configure Ansible and set up passwordless login for the cluster's hadoop user

1. Configure /etc/hosts on the ansible controller

[root@localhost ansible]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
10.104.44.25 master
10.104.44.49 slave1
10.104.44.36 slave2

2. Install Ansible and configure the inventory

[root@localhost mnt]# ansible --version
ansible 2.9.11
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
[root@localhost ansible]# cat hosts
master
slave1
slave2
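
The inventory above is a flat list; grouping the hosts (the group names [master] and [slaves] here are hypothetical) makes later plays easier to target. A sketch, written to /tmp/hosts rather than /etc/ansible/hosts:

```shell
# Sketch: a grouped inventory; /tmp/hosts stands in for /etc/ansible/hosts.
cat > /tmp/hosts <<'EOF'
[master]
master

[slaves]
slave1
slave2
EOF
grep -c '^\[' /tmp/hosts   # prints 2 (two group headers)
```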

3. Create the hadoop user on the cluster

[root@localhost ansible]# cat user_hadoopcreate
- hosts: all
  vars:
    password: '123'
  tasks:
    - name: create
      user:
        name: hadoop
        uid: 1200
        password: "{{ password | password_hash('sha512') }}"
[root@localhost ansible]# ansible-playbook user_hadoopcreate -k
PLAY RECAP *********************************************************************
master : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
slave1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
slave2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
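
The password_hash('sha512') filter stores a crypt-style `$6$` hash rather than the plain password. Assuming openssl is installed, an equivalent hash can be produced from the shell to see the kind of value that ends up in /etc/shadow:

```shell
# Generate a sha512-crypt hash of '123', like password_hash('sha512') does.
# The salt is fixed here only so the command is reproducible.
openssl passwd -6 -salt demosalt 123
```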

4. Give the hadoop user sudo privileges

ansible all -m copy -a "dest=/etc/sudoers.d/ansible content='hadoop ALL=\(ALL\) NOPASSWD:ALL'" -k

5. Let root on the controller log in to the cluster as the hadoop user without a password

Generate a key pair on the ansible controller:

[root@localhost ansible]# ssh-keygen

Push the public key so Ansible can manage the hadoop user without a password:

[root@localhost .ssh]# ansible all -m copy -a 'src=/root/.ssh/id_rsa.pub dest=/home/hadoop/.ssh/authorized_keys owner=hadoop group=hadoop' -k

6. Edit the Ansible configuration file

Skip host key checking:

host_key_checking = False

Enable privilege escalation:

become=True
become_method=sudo
become_user=root
become_ask_pass=False

Set the remote user to hadoop:

remote_user = hadoop

The edits can be applied with sed:

sed -i 's/remote_user = root/remote_user = hadoop/' ansible.cfg.bak
sed -i 's/^#become/become/g' ansible.cfg.bak
sed -i 's/^# host_key_checking/host_key_checking/g' ansible.cfg.bak
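
The effect of the three sed edits can be checked against a minimal stand-in for ansible.cfg before touching the real file:

```shell
# Sketch: apply the same sed edits to a minimal stand-in for ansible.cfg.
cat > /tmp/ansible.cfg <<'EOF'
# host_key_checking = False
remote_user = root
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False
EOF
sed -i 's/remote_user = root/remote_user = hadoop/' /tmp/ansible.cfg
sed -i 's/^#become/become/g' /tmp/ansible.cfg
sed -i 's/^# host_key_checking/host_key_checking/g' /tmp/ansible.cfg
cat /tmp/ansible.cfg   # all six settings are now uncommented
```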

7. Sync /etc/hosts to the cluster

[root@localhost ansible]# ansible all -m copy -a 'src=/etc/hosts dest=/etc/hosts'

8. Passwordless hadoop-user login between the cluster nodes

Before running the script, download the expect packages on the controller:

yum reinstall --downloadonly --downloaddir=/etc/ansible/ -y libtcl*
yum reinstall --downloadonly --downloaddir=/etc/ansible/ -y expect

Write a script that generates a key pair and sends the public key to the other nodes:

[root@localhost ansible]# cat ask.sh

expect <<EOF
set timeout 10
spawn ssh-keygen
expect {
    "save the key" { send "\n"; exp_continue }
    "passphrase" { send "\n"; exp_continue }
    "passphrase again" { send "\n"; exp_continue }
}
spawn ssh-copy-id hadoop@slave1
expect {
    "yes/no" { send "yes\n"; exp_continue }
    "password" { send "123\n"; exp_continue }
}
spawn ssh-copy-id hadoop@slave2
expect {
    "yes/no" { send "yes\n"; exp_continue }
    "password" { send "123\n"; exp_continue }
}
spawn ssh-copy-id hadoop@master
expect {
    "yes/no" { send "yes\n"; exp_continue }
    "password" { send "123\n"; exp_continue }
}
EOF

Write a playbook that installs expect on the Hadoop nodes and runs the key-exchange script:

[root@localhost ansible]# cat key_operation.yml
- hosts: slave2
  remote_user: hadoop
  become: false
  vars:
    package: /etc/ansible/tcl-8.6.8-2.el8.x86_64.rpm
    package2: /etc/ansible/expect-5.45.4-5.el8.x86_64.rpm
  tasks:
    - name: deliver
      copy:
        src: "{{ item }}"
        dest: /home/hadoop
      loop:
        - "{{ package }}"
        - "{{ package2 }}"
    - name: rpm
      yum:
        name:
          - /home/hadoop/tcl-8.6.8-2.el8.x86_64.rpm
          - /home/hadoop/expect-5.45.4-5.el8.x86_64.rpm
        state: present
    - name: try script
      script: /etc/ansible/ask.sh

9. Collect the steps above into a first.sh script so the whole setup can be run in one go

[root@localhost first]# cat first.sh
#!/bin/bash
expect << EOF
set timeout 10
spawn ansible-playbook user_hadoopcreate.yml -k
expect {
    "password" { send "123\n"; exp_continue }
}
spawn ansible all -m file -a "path=/home/hadoop/.ssh state=directory owner=hadoop group=hadoop" -k
expect {
    "password" { send "123\n"; exp_continue }
}
spawn ansible all -m copy -a "dest=/etc/sudoers.d/ansible content='hadoop ALL=\(ALL\) NOPASSWD:ALL'" -k
expect {
    "password" { send "123\n"; exp_continue }
}
spawn ansible all -m copy -a "src=/root/.ssh/id_rsa.pub dest=/home/hadoop/.ssh/authorized_keys owner=hadoop group=hadoop" -k
expect {
    "password" { send "123\n"; exp_continue }
}
EOF
ansible all -m copy -a 'src=/etc/hosts dest=/etc/hosts'
# change configuration
sed -i 's/remote_user = root/remote_user = hadoop/' ansible.cfg
sed -i 's/^#become/become/g' ansible.cfg
sed -i 's/^# host_key_checking/host_key_checking/g' ansible.cfg
yum reinstall --downloadonly --downloaddir=/etc/ansible/ -y libtcl*
yum reinstall --downloadonly --downloaddir=/etc/ansible/ -y expect
ansible-playbook key_operation.yml

II. Basic Hadoop configuration

Upload the JDK and Hadoop tarballs to the ansible node.

Contents of the four configuration files:

[root@localhost second]# cat hdfs.config
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>

[root@localhost second]# cat yarn.config
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

[root@localhost second]# cat core.config
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/hadoop/tmp</value>
</property>
</configuration>

[root@localhost second]# cat mapred.config
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
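
Each fragment is a complete <configuration> element, so it can be sanity-checked for well-formedness before the playbook splices it into the site files. A sketch that shells out to python3's standard XML parser:

```shell
# Sketch: confirm a config fragment parses as well-formed XML.
cat > /tmp/core.config <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
EOF
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/core.config'); print('OK')"   # prints OK
```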

The Ansible playbook:

cat clucig.yml 

# disable SELinux and the firewall
- hosts: all
  tasks:
    - name: Disable SELinux
      selinux:
        state: disabled
    - name: stop firewalld
      systemd:
        state: stopped
        name: firewalld
        enabled: no
      ignore_errors: yes

# unpack the tarballs onto the target hosts
- hosts: all
  tasks:
    - name: tar
      unarchive:
        src: "/opt/{{ item }}"
        dest: /opt/
      loop:
        - jdk-8u152-linux-x64.tar.gz
        - hadoop-2.7.1.tar.gz
    - name: mv
      shell: mv /opt/jdk1.8.0_152 /opt/jdk
    - name: mv2
      shell: mv /opt/hadoop-2.7.1 /opt/hadoop

# set the environment variables
- hosts: all
  tasks:
    - name: profile
      copy:
        dest: /etc/profile.d/hadoop.sh
        content: "export JAVA_HOME=/opt/jdk\nexport HADOOP_HOME=/opt/hadoop\nexport PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH\n"
    - name: echo
      shell: echo 'export JAVA_HOME=/opt/jdk' >> /opt/hadoop/etc/hadoop/hadoop-env.sh

    # edit the configuration files
    # 1. hdfs-site.xml
    - name: delete old hdfs configuration
      shell: sed -i '/<configuration>/,/<\/configuration>/d' /opt/hadoop/etc/hadoop/hdfs-site.xml
    - name: copy hdfs fragment
      copy:
        src: hdfs.config
        dest: /opt/
    - name: add hdfs configuration
      shell: cat /opt/hdfs.config >> /opt/hadoop/etc/hadoop/hdfs-site.xml
    - name: create dfs dirs
      shell: mkdir -p /opt/hadoop/dfs/{name,data}

    # 2. core-site.xml
    - name: delete old core configuration
      shell: sed -i '/<configuration>/,/<\/configuration>/d' /opt/hadoop/etc/hadoop/core-site.xml
    - name: copy core fragment
      copy:
        src: core.config
        dest: /opt/
    - name: add core configuration
      shell: cat /opt/core.config >> /opt/hadoop/etc/hadoop/core-site.xml
    - name: create tmp dir
      shell: mkdir -p /opt/hadoop/tmp

    # 3. mapred-site.xml
    - name: copy template
      shell: cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml
    - name: delete old mapred configuration
      shell: sed -i '/<configuration>/,/<\/configuration>/d' /opt/hadoop/etc/hadoop/mapred-site.xml
    - name: copy mapred fragment
      copy:
        src: mapred.config
        dest: /opt/
    - name: add mapred configuration
      shell: cat /opt/mapred.config >> /opt/hadoop/etc/hadoop/mapred-site.xml

    # 4. yarn-site.xml
    - name: delete old yarn configuration
      shell: sed -i '/<configuration>/,/<\/configuration>/d' /opt/hadoop/etc/hadoop/yarn-site.xml
    - name: copy yarn fragment
      copy:
        src: yarn.config
        dest: /opt/
    - name: add yarn configuration
      shell: cat /opt/yarn.config >> /opt/hadoop/etc/hadoop/yarn-site.xml

    # configuration finished: masters / slaves files and ownership
    - name: masters file
      copy:
        dest: /opt/hadoop/etc/hadoop/masters
        content: "10.104.44.25\n"
    - name: slaves file
      copy:
        dest: /opt/hadoop/etc/hadoop/slaves
        content: "10.104.44.36\n10.104.44.49\n"
    - name: chown
      shell: chown -R hadoop.hadoop /opt/
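
The playbook's central trick (delete the old <configuration> block with a sed range, then append the prepared fragment with cat) can be exercised locally on a stand-in file:

```shell
# Sketch: replicate the playbook's edit on a stand-in hdfs-site.xml.
cat > /tmp/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
</configuration>
EOF
cat > /tmp/hdfs.config <<'EOF'
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
EOF
# delete the old (empty) configuration block...
sed -i '/<configuration>/,/<\/configuration>/d' /tmp/hdfs-site.xml
# ...and append the prepared fragment
cat /tmp/hdfs.config >> /tmp/hdfs-site.xml
grep -c '<configuration>' /tmp/hdfs-site.xml   # prints 1: one block remains
```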

This playbook produces all four Hadoop configuration files.

III. Start Hadoop

Configure name resolution on Windows:

c:\windows\system32\drivers\etc\hosts

Start Hadoop manually. Format the NameNode before starting the daemons for the first time:

hdfs namenode -format

start-dfs.sh

(start-all.sh can be used instead to start HDFS and YARN together.)

Upload a file:

hdfs dfs -put rhel-8.2-x86_64-dvd.iso
