Ansible Series (33): Deploying a Web Cluster Architecture in Practice (3)
1. Application environment deployment
1.1 Compiling and deploying nginx
First, create the nginx role directory structure:
[root@xuzhichao cluster-roles]# mkdir nginx/{tasks,templates,handlers,files,meta} -p
Write the nginx task file; the compile-and-install steps are as follows:
[root@xuzhichao cluster-roles]# cat nginx/tasks/install_source_nginx.yml
# Compile and install nginx
#
# 1. Create the nginx install directory
- name: Create Nginx Install Path
  file:
    name: "{{ nginx_install_directory }}/nginx"   # <== variables in YAML files should generally be quoted
    state: directory
    owner: "{{ web_user }}"
    group: "{{ web_group }}"
    mode: "0755"   # directories need the execute bit to be traversable

# 2. Copy and extract the nginx source tarball to the target host
- name: Unarchive Nginx Packages
  unarchive:
    src: "{{ nginx_filename_tar }}"
    dest: "/root"

# 3. Install the nginx build dependencies
- name: Install Dependencies For Building Nginx
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - pcre-devel
    - openssl-devel
    - zlib-devel
    - pcre
    - openssl
    - zlib
    - "@Development tools"   # <== note: package groups must be quoted

# 4. Configure nginx, specifying the install directory and build options
- name: Configure Nginx
  shell:
    cmd: "./configure --prefix={{ nginx_install_directory }}/nginx --user={{ web_user }} --group={{ web_group }} {{ nginx_configure_options }}"
    chdir: "/root/{{ nginx_version }}"
  changed_when: false

# 5. Build nginx
- name: Build Nginx
  shell:
    cmd: "make && make install"
    chdir: "/root/{{ nginx_version }}"
  changed_when: false
Write the nginx startup tasks:
[root@xuzhichao cluster-roles]# cat nginx/tasks/start_nginx.yml
# 1. Copy the nginx systemd unit file
- name: Copy Nginx Unit File
  template:
    src: nginx.service.j2
    dest: /usr/lib/systemd/system/nginx.service
  # notify: Reload Systemd

# 2. Reload systemd so the new nginx unit file takes effect
- name: Reload Systemd
  systemd:
    daemon_reload: yes

# 3. Copy the main nginx configuration file
- name: Copy Nginx Main Configure File
  template:
    src: nginx.conf.j2
    dest: "{{ nginx_install_directory }}/nginx/conf/nginx.conf"
    owner: "{{ web_user }}"
    group: "{{ web_group }}"
  notify: Restart Nginx

# 4. Check that the nginx configuration file is valid
- name: Check Nginx Configure File
  shell: "{{ nginx_install_directory }}/nginx/sbin/nginx -t"
  register: Check_Nginx_Status
  changed_when:
    - Check_Nginx_Status.stdout.find('successful')
    - false

# 5. Create the nginx sub-configuration directory
- name: Create Configure Directory
  file:
    path: "{{ nginx_install_directory }}/nginx/conf/conf.d"
    state: directory

# 6. Start the nginx service
- name: Start Nginx
  systemd:
    name: nginx
    state: started
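As an alternative to the separate `nginx -t` check task above, the `template` module's `validate` parameter can test the rendered file before it replaces the live one. This is a hedged sketch, not part of the original role; it assumes the nginx binary is already present and that the config passes a syntax check when loaded from a temporary path (`%s`):

```yaml
# Sketch: validate the rendered config before installing it.
- name: Copy Nginx Main Configure File
  template:
    src: nginx.conf.j2
    dest: "{{ nginx_install_directory }}/nginx/conf/nginx.conf"
    owner: "{{ web_user }}"
    group: "{{ web_group }}"
    validate: "{{ nginx_install_directory }}/nginx/sbin/nginx -t -c %s"
  notify: Restart Nginx
```

With `validate`, a broken template aborts the task before the old config is overwritten, so the handler never restarts nginx with an invalid file.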
The tasks main.yml file is as follows:
[root@xuzhichao cluster-roles]# cat nginx/tasks/main.yml
- include: install_source_nginx.yml
- include: start_nginx.yml
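The bare `include` directive works on the Ansible version used here, but it is deprecated in newer releases. A sketch of the same main.yml in the current static-import form:

```yaml
# Equivalent main.yml using the non-deprecated static import form
- import_tasks: install_source_nginx.yml
- import_tasks: start_nginx.yml
```

`import_tasks` is resolved at parse time, which also lets tags on the role propagate to the imported tasks.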
The handlers file is as follows:
[root@xuzhichao cluster-roles]# cat nginx/handlers/main.yml
- name: Restart Nginx
  systemd:
    name: nginx
    state: restarted
The base nginx configuration template is as follows:
[root@xuzhichao cluster-roles]# cat nginx/templates/nginx.conf.j2
user {{ web_user }};
worker_processes {{ ansible_processor_vcpus }};

events {
    worker_connections {{ worker_connections_num }};
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout {{ keepalive_timeout }};
    gzip {{ gzip_contorl }};
    include {{ nginx_install_directory }}/nginx/conf/conf.d/*.conf;
}
The nginx unit template file is as follows:
[root@xuzhichao cluster-roles]# cat nginx/templates/nginx.service.j2
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile={{ nginx_install_directory }}/nginx/logs/nginx.pid
ExecStartPre=/usr/bin/rm -f {{ nginx_install_directory }}/nginx/logs/nginx.pid
ExecStartPre={{ nginx_install_directory }}/nginx/sbin/nginx -t
ExecStart={{ nginx_install_directory }}/nginx/sbin/nginx
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat {{ nginx_install_directory }}/nginx/logs/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat {{ nginx_install_directory }}/nginx/logs/nginx.pid)"

[Install]
WantedBy=multi-user.target
The variables file is as follows:
[root@xuzhichao cluster-roles]# cat group_vars/all
# Base environment variables
web_group: nginx
web_gid: 887
web_user: nginx
web_uid: 887

# nginx-related variables
nginx_install_directory: /soft
nginx_filename_tar: nginx-1.20.1.tar.gz
nginx_version: nginx-1.20.1
nginx_configure_options: --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_dav_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --with-file-aio
gzip_contorl: "on"   # <== "on" must be quoted here, otherwise YAML renders it as True
keepalive_timeout: 65
worker_connections_num: 35566
The playbook entry file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
      tags: nginx

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx
Run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t nginx wordpress_site.yml
The overall nginx directory structure is as follows:
[root@xuzhichao cluster-roles]# tree nginx/
nginx/
├── files
│ └── nginx-1.20.1.tar.gz
├── handlers
│ └── main.yml
├── meta
├── tasks
│ ├── install_source_nginx.yml
│ ├── main.yml
│ └── start_nginx.yml
└── templates
├── nginx.conf.j2
└── nginx.service.j2

5 directories, 7 files
Check the deployment on a managed host:
[root@web01 ~]# /soft/nginx/sbin/nginx -V
nginx version: nginx/1.20.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/soft/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_dav_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --with-file-aio

[root@web01 ~]# cat /soft/nginx/conf/nginx.conf
user nginx;
worker_processes 1;
events {
worker_connections 35566;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
}

[root@web01 ~]# cat /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/soft/nginx/logs/nginx.pid
ExecStartPre=/usr/bin/rm -f /soft/nginx/logs/nginx.pid
ExecStartPre=/soft/nginx/sbin/nginx -t
ExecStart=/soft/nginx/sbin/nginx
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat /soft/nginx/logs/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat /soft/nginx/logs/nginx.pid)"

[Install]
WantedBy=multi-user.target
Outstanding issue: the nginx configure and build steps are not idempotent; every run of the playbook recompiles nginx.
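One common way to address this (a hedged sketch, not part of the original role) is the `creates` argument of the `shell` module, which skips the command once the named file exists; here the installed nginx binary serves as the marker:

```yaml
# Sketch: skip configure/build once the installed binary exists.
# Assumes the binary path below matches nginx_install_directory.
- name: Build Nginx
  shell:
    cmd: "make && make install"
    chdir: "/root/{{ nginx_version }}"
    creates: "{{ nginx_install_directory }}/nginx/sbin/nginx"
```

The same `creates` guard can be put on the Configure Nginx task, so repeated runs report "skipped" instead of recompiling.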
1.2 Compiling and deploying PHP
Create the php-fpm directory structure:
[root@xuzhichao cluster-roles]# mkdir php-fpm/{tasks,handlers,templates,files,meta} -p
Write the PHP compile-and-install tasks:
[root@xuzhichao cluster-roles]# cat php-fpm/tasks/install_source_php.yml
# Compile and install PHP
#
# 1. Create the PHP install directory
- name: Create PHP Install Path
  file:
    name: "{{ PHP_install_directory }}/php"
    state: directory

# 2. Copy and extract the PHP source tarball
- name: Unarchive PHP Packages
  unarchive:
    src: "{{ PHP_tar_packages }}"
    dest: /root

# 3. Install the PHP build dependencies
- name: Install Dependencies For Building PHP
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - libxml2
    - libxml2-devel
    - openssl
    - openssl-devel
    - curl
    - curl-devel
    - libpng
    - libpng-devel
    - freetype
    - freetype-devel
    - libmcrypt-devel
    - libzip-devel
    - pcre
    - pcre-devel
    - bzip2-devel
    - libicu-devel
    - gcc
    - gcc-c++
    - autoconf
    - libjpeg
    - libjpeg-devel
    - zlib
    - zlib-devel
    - glibc
    - glibc-devel
    - glib2
    - glib2-devel
    - ncurses
    - ncurses-devel
    - krb5-devel
    - libidn
    - libidn-devel
    - openldap
    - openldap-devel
    - nss_ldap
    - jemalloc-devel
    - cmake
    - boost-devel
    - bison
    - automake
    - libevent
    - libevent-devel
    - gd
    - gd-devel
    - libtool*
    - mcrypt
    - mhash
    - libxslt
    - libxslt-devel
    - readline
    - readline-devel
    - gmp
    - gmp-devel
    - libcurl
    - libcurl-devel
    - openjpeg-devel

# 4. Configure PHP, specifying the install directory and build options
- name: Configure PHP
  shell:
    cmd: "./configure --prefix={{ PHP_install_directory }}/php --with-fpm-user={{ web_user }} --with-fpm-group={{ web_group }} {{ PHP_configure_options }}"
    chdir: "/root/{{ PHP_version }}"
  changed_when: false
  when: not (php_path is exists)   # skip once php-fpm is installed; note "is exists" tests the path on the control node

# 5. Compile and install
- name: Build PHP
  shell:
    cmd: "make && make install"
    chdir: "/root/{{ PHP_version }}"
  changed_when: false
  when: not (php_path is exists)
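Because the `exists` test above runs on the control node rather than the managed host, a more robust guard (a sketch with hypothetical task and register names, using the `stat` module) could look like:

```yaml
# Sketch: test the path on the managed host instead of the controller.
- name: Check Whether PHP-FPM Is Already Installed
  stat:
    path: "{{ php_path }}"
  register: php_fpm_binary

- name: Build PHP
  shell:
    cmd: "make && make install"
    chdir: "/root/{{ PHP_version }}"
  when: not php_fpm_binary.stat.exists
```

`stat` gathers the file state on each target, so the build is skipped exactly on the hosts where php-fpm is already installed.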
Write the PHP startup tasks:
[root@xuzhichao cluster-roles]# cat php-fpm/tasks/start_php-fpm.yml
# 1. Copy the php-fpm systemd unit file
- name: Copy PHP-FPM Unit File
  template:
    src: php-fpm.service.j2
    dest: /usr/lib/systemd/system/php-fpm.service

# 2. Reload systemd so the new php-fpm unit file takes effect
- name: Reload Systemd
  systemd:
    daemon_reload: yes

# 3. Create the PHP log directory
- name: Create Log Path
  file:
    path: "{{ PHP_install_directory }}/php/log"
    state: directory

# 4. Copy the PHP-related configuration files
- name: Copy PHP and PHP-FPM Configure File
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - { src: "php.ini.j2", dest: "{{ PHP_install_directory }}/php/etc/php.ini" }
    - { src: "www.conf.j2", dest: "{{ PHP_install_directory }}/php/etc/php-fpm.d/www.conf" }
    - { src: "php-fpm.conf.j2", dest: "{{ PHP_install_directory }}/php/etc/php-fpm.conf" }
  notify: Restart PHP-FPM

# 5. Check the PHP configuration file
- name: Check PHP Configure File
  shell: "{{ PHP_install_directory }}/php/sbin/php-fpm -t"
  register: Check_PHP_Status
  changed_when:
    - Check_PHP_Status.stdout.find('successful')
    - false

# 6. Start PHP-FPM
- name: Start PHP-FPM
  systemd:
    name: php-fpm
    state: started
Write the php tasks main.yml file:
[root@xuzhichao cluster-roles]# cat php-fpm/tasks/main.yml
- include: install_source_php.yml
- include: start_php-fpm.yml
The php systemd unit template file is as follows:
[root@xuzhichao cluster-roles]# cat php-fpm/templates/php-fpm.service.j2
[Unit]
Description=The PHP FastCGI Process Manager
After=syslog.target network.target

[Service]
Type=forking
PIDFile={{ PHP_install_directory }}/php/var/run/php-fpm.pid
#EnvironmentFile=/etc/sysconfig/php-fpm
ExecStart={{ PHP_install_directory }}/php/sbin/php-fpm
ExecReload=/bin/kill -USR2 $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
The php-fpm configuration template files are as follows:
[root@xuzhichao cluster-roles]# cat php-fpm/templates/php-fpm.conf.j2
[global]
pid = run/php-fpm.pid
include={{ PHP_install_directory }}/php/etc/php-fpm.d/*.conf

[root@xuzhichao cluster-roles]# cat php-fpm/templates/www.conf.j2
[www]
user = {{ web_user }}
group = {{ web_group }}
listen = {{ php_fpm_listen_address }}:{{ php_fpm_listen_port }}
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = {{ pm_max_children_num }}
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 50000
pm.status_path = /pm_status
ping.path = /ping
ping.response = pong
access.log = log/$pool.access.log
slowlog = log/$pool.log.slow

[root@xuzhichao cluster-roles]# cat php-fpm/templates/php.ini.j2
[PHP]
engine = On
short_open_tag = Off
precision = 14
output_buffering = 4096
zlib.output_compression = Off
implicit_flush = Off
unserialize_callback_func =
serialize_precision = -1
disable_functions =
disable_classes =
zend.enable_gc = On
expose_php = On
max_execution_time = 30
max_input_time = 60
memory_limit = 128M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
report_memleaks = On
html_errors = On
variables_order = "GPCS"
request_order = "GP"
register_argc_argv = Off
auto_globals_jit = On
post_max_size = 8M
auto_prepend_file =
auto_append_file =
default_mimetype = "text/html"
default_charset = "UTF-8"
doc_root =
user_dir =
enable_dl = Off
file_uploads = On
upload_max_filesize = 2M
max_file_uploads = 20
allow_url_fopen = On
allow_url_include = Off
default_socket_timeout = 60
extension={{ PHP_install_directory }}/php/lib/php/extensions/no-debug-non-zts-20180731/redis.so
[CLI Server]
cli_server.color = On
[Date]
[filter]
[iconv]
[imap]
[intl]
[sqlite3]
[Pcre]
[Pdo]
[Pdo_mysql]
pdo_mysql.default_socket=
[Phar]
[mail function]
SMTP = localhost
smtp_port = 25
mail.add_x_header = Off
[ODBC]
odbc.allow_persistent = On
odbc.check_persistent = On
odbc.max_persistent = -1
odbc.max_links = -1
odbc.defaultlrl = 4096
odbc.defaultbinmode = 1
[Interbase]
ibase.allow_persistent = 1
ibase.max_persistent = -1
ibase.max_links = -1
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
ibase.dateformat = "%Y-%m-%d"
ibase.timeformat = "%H:%M:%S"
[MySQLi]
mysqli.max_persistent = -1
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
[mysqlnd]
mysqlnd.collect_statistics = On
mysqlnd.collect_memory_statistics = Off
[OCI8]
[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
[bcmath]
bcmath.scale = 0
[browscap]
[Session]
session.save_handler = redis
session.save_path = "tcp://192.168.20.61:6379"
session.use_strict_mode = 0
session.use_cookies = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.auto_start = 0
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain =
session.cookie_httponly =
session.cookie_samesite =
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440
session.referer_check =
session.cache_limiter = nocache
session.cache_expire = 180
session.use_trans_sid = 0
session.sid_length = 26
session.trans_sid_tags = "a=href,area=href,frame=src,form="
session.sid_bits_per_character = 5
[Assertion]
zend.assertions = -1
[COM]
[mbstring]
[gd]
[exif]
[Tidy]
tidy.clean_output = Off
[soap]
soap.wsdl_cache_enabled=1
soap.wsdl_cache_dir="/tmp"
soap.wsdl_cache_ttl=86400
soap.wsdl_cache_limit = 5
[sysvshm]
[ldap]
ldap.max_links = -1
[dba]
[opcache]
[curl]
[openssl]
The variables file is as follows:
[root@xuzhichao cluster-roles]# cat group_vars/all
# Base environment variables
web_group: nginx
web_gid: 887
web_user: nginx
web_uid: 887

# nginx-related variables
nginx_install_directory: /soft
nginx_filename_tar: nginx-1.20.1.tar.gz
nginx_version: nginx-1.20.1
nginx_configure_options: --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_dav_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --with-file-aio
gzip_contorl: "on"
keepalive_timeout: 65
worker_connections_num: 35566
nginx_path: /soft/nginx/sbin/nginx

# PHP-related variables
PHP_install_directory: /soft
PHP_tar_packages: php-7.3.16.tar.xz
PHP_version: php-7.3.16
PHP_configure_options: --enable-fpm --with-pear --with-mysqli=mysqlnd --with-openssl --with-pdo-mysql=mysqlnd --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --enable-sockets --with-curl --with-freetype-dir --with-iconv --disable-debug --with-mhash --with-xmlrpc --with-xsl --enable-soap --enable-exif --enable-wddx --enable-bcmath --enable-calendar --enable-shmop --enable-sysvsem --enable-sysvshm --enable-sysvmsg
php_fpm_listen_address: 127.0.0.1
php_fpm_listen_port: 9000
pm_max_children_num: 50
php_path: /soft/php/sbin/php-fpm
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
    - role: php-fpm
  tags:
    - nginx
    - php-fpm

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx
The overall php-fpm directory structure:
[root@xuzhichao cluster-roles]# tree php-fpm/
php-fpm/
├── files
│ └── php-7.3.16.tar.xz
├── handlers
│ └── main.yml
├── meta
├── tasks
│ ├── install_source_php.yml
│ ├── main.yml
│ └── start_php-fpm.yml
└── templates
├── php-fpm.conf.j2
├── php-fpm.service.j2
├── php.ini.j2
└── www.conf.j2

5 directories, 9 files
Test-run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t php-fpm wordpress_site.yml
Check the result on a managed host:
[root@web02 ~]# cat /soft/php/etc/php-fpm.d/www.conf
[www]
user = nginx
group = nginx
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 50000
pm.status_path = /pm_status
ping.path = /ping
ping.response = pong
access.log = log/$pool.access.log
slowlog = log/$pool.log.slow

[root@web02 ~]# cat /usr/lib/systemd/system/php-fpm.service
[Unit]
Description=The PHP FastCGI Process Manager
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/soft/php/var/run/php-fpm.pid
#EnvironmentFile=/etc/sysconfig/php-fpm
ExecStart=/soft/php/sbin/php-fpm
ExecReload=/bin/kill -USR2 $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

[root@web02 ~]# systemctl status php-fpm
● php-fpm.service - The PHP FastCGI Process Manager
Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2021-08-10 10:48:59 CST; 1 day 11h ago
Process: 108232 ExecStart=/soft/php/sbin/php-fpm (code=exited, status=0/SUCCESS)
Main PID: 108233 (php-fpm)
CGroup: /system.slice/php-fpm.service
├─108233 php-fpm: master process (/soft/php/etc/php-fpm.conf)
├─108234 php-fpm: pool www
├─108235 php-fpm: pool www
├─108236 php-fpm: pool www
├─108237 php-fpm: pool www
├─108238 php-fpm: pool www
├─108239 php-fpm: pool www
├─108240 php-fpm: pool www
├─108241 php-fpm: pool www
├─108242 php-fpm: pool www
└─108243 php-fpm: pool www

Aug 10 10:48:59 web02 systemd[1]: Starting The PHP FastCGI Process Manager...
Aug 10 10:48:59 web02 systemd[1]: Started The PHP FastCGI Process Manager.
1.3 Binary deployment of MariaDB
Create the mariadb directory structure:
[root@xuzhichao cluster-roles]# mkdir mariadb/{tasks,handlers,templates,files,meta} -p
Write the mariadb install tasks:
[root@xuzhichao cluster-roles]# cat mariadb/tasks/main.yml
# Install the mariadb database from the binary tarball
#
# 1. Create the mysql account
- name: Create Mysql Group
  group:
    name: "{{ mysql_group }}"
    state: present

- name: Create Mysql User
  user:
    name: "{{ mysql_user }}"
    group: "{{ mysql_group }}"
    shell: /sbin/nologin
    create_home: no
    state: present

# 2. Create the mysql working directories
- name: Create Mysql Work Directory
  file:
    path: "{{ item }}"
    state: directory
    owner: "{{ mysql_user }}"
    group: "{{ mysql_group }}"
  loop:
    - /var/lib/mysql/
    - "{{ mysql_data_directory }}"

# 3. Copy and extract the mariadb tarball
- name: Unarchive Mariadb Package
  unarchive:
    src: "{{ mysql_tar_ball }}"
    dest: "/usr/local/src/"

- name: Create Mariadb Link File
  file:
    src: "/usr/local/src/{{ mysql_version }}"
    dest: "{{ mysql_link_file_path }}"
    state: link

# 4. Initialize the database files
- name: Init Mysql Database
  shell:
    cmd: "{{ mysql_link_file_path }}/scripts/mysql_install_db --user={{ mysql_user }} --datadir={{ mysql_data_directory }} --basedir={{ mysql_base_directory }}"
  changed_when: false

# 5. Create the mariadb service startup file
- name: Copy Mariadb Service File
  template:
    src: mysqld.j2
    dest: /etc/init.d/mysqld
    mode: "0755"

# 6. Copy the mariadb configuration file
- name: Copy Mariadb Configure File
  template:
    src: my.cnf.j2
    dest: /etc/my.cnf
  notify: Restart Mariadb Server

# 7. Start mariadb
- name: Start Mariadb Server
  systemd:
    name: mysqld
    state: started
    enabled: yes

# 8. Set the database root password
#- name: Create Mysql.sock Link File
#  file:
#    src: /var/lib/mysql/mysql.sock
#    dest: /tmp/mysql.sock
#    state: link

#- name: Grant Database User
#  mysql_user:
#    name: root
#    password: 123456
#    update_password: on_create
#    host: '%'
#    priv: '*.*:ALL'
#    state: present
Write the handlers file:
[root@xuzhichao cluster-roles]# cat mariadb/handlers/main.yml
- name: Restart Mariadb Server
  systemd:
    name: mysqld
    state: restarted
Write the variables file:
[root@xuzhichao cluster-roles]# cat group_vars/all
......
# Mysql-related variables
mysql_user: mysql
mysql_group: mysql
mysql_base_directory: /usr/local/mysql
mysql_data_directory: /data/mysql
mysql_tar_ball: mariadb-10.5.2-linux-x86_64.tar.gz
mysql_version: mariadb-10.5.2-linux-x86_64
mysql_link_file_path: /usr/local/mysql
mysqld_file: /etc/init.d/mysqld
The mariadb configuration file is as follows:
[root@xuzhichao cluster-roles]# cat mariadb/templates/my.cnf.j2
[mysqld]
datadir={{ mysql_data_directory }}
user={{ mysql_user }}
innodb_file_per_table=on
skip_name_resolve=on
max_connections=10000
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd

[client]
port=3306
socket=/var/lib/mysql/mysql.sock

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#log-error=/var/log/mysqld.log
#pid-file=/var/lib/mysql/mysql.sock
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
    - role: php-fpm
  tags:
    - nginx
    - php-fpm

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx

- hosts: mysql
  roles:
    - role: mariadb
      tags: mysql
Run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t mysql wordpress_site.yml
Outstanding issue: no password is set for the initial root user.
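A hedged sketch of how the commented-out step above could be completed: the `mysql_user` module requires a Python MySQL driver (PyMySQL or MySQL-python) on the managed host, and on a fresh install root can authenticate over the local socket without a password. The `mysql_root_password` variable below is hypothetical, e.g. defined in group_vars:

```yaml
# Sketch: set the root password on first run via the local socket.
# Assumes PyMySQL is installed and the socket path matches my.cnf.
- name: Set Mysql Root Password
  mysql_user:
    name: root
    password: "{{ mysql_root_password }}"   # hypothetical variable
    login_unix_socket: /var/lib/mysql/mysql.sock
    host: localhost
    state: present
```

After the first successful run, subsequent runs would also need `login_password` (or a ~/.my.cnf on the target) since root then requires the password.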
1.4 Deploying redis
Create the redis directory structure:
[root@xuzhichao cluster-roles]# mkdir redis/{tasks,handlers,meta,files,templates} -p
Write the redis task file:
[root@xuzhichao cluster-roles]# cat redis/tasks/main.yml
- name: Install Redis
  yum:
    name: redis
    state: present

- name: Copy Configure File
  template:
    src: redis.conf.j2
    dest: /etc/redis.conf
    owner: "redis"
    group: "root"
    mode: "0644"
  notify: Restart Redis

- name: Start Redis
  systemd:
    name: redis
    state: started
    enabled: yes
Write the handlers file:
[root@xuzhichao cluster-roles]# cat redis/handlers/main.yml
- name: Restart Redis
  systemd:
    name: redis
    state: restarted
The template file is as follows:
[root@xuzhichao cluster-roles]# cat redis/templates/redis.conf.j2
......
bind 127.0.0.1 {{ ansible_eth1.ipv4.address }}
......
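The `ansible_eth1` fact in the template assumes every redis host has an eth1 interface; a slightly more defensive rendering (a sketch using a Jinja2 inline conditional, not part of the original role) falls back to the default IPv4 address when eth1 is absent:

```
bind 127.0.0.1 {{ ansible_eth1.ipv4.address if ansible_eth1 is defined else ansible_default_ipv4.address }}
```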
The redis directory structure is as follows:
[root@xuzhichao cluster-roles]# tree redis/
redis/
├── files
├── handlers
│ └── main.yml
├── meta
├── tasks
│ └── main.yml
└── templates
└── redis.conf.j2
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
    - role: php-fpm
  tags:
    - nginx
    - php-fpm

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx

- hosts: mysql
  roles:
    - role: mariadb
      tags: mysql

- hosts: redis
  roles:
    - role: redis
      tags: redis
Run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t redis wordpress_site.yml
1.5 Deploying NFS
Create the nfs directory structure:
[root@xuzhichao cluster-roles]# mkdir nfs/{tasks,handlers,templates,meta,files} -p
Write the nfs task file:
[root@xuzhichao cluster-roles]# cat nfs/tasks/main.yml
- name: Install NFS Server
  yum:
    name: nfs-utils
    state: present

- name: Configure NFS Server
  template:
    src: exports.j2
    dest: /etc/exports
  notify: Restart NFS Service

- name: Init NFS Server
  file:
    path: "{{ nfs_share_path }}"
    state: directory
    owner: "{{ web_user }}"
    group: "{{ web_group }}"
    mode: "0755"   # directories need the execute bit to be traversable

- name: Start NFS service
  systemd:
    name: nfs
    state: started
    enabled: yes

Write the handlers file:
[root@xuzhichao cluster-roles]# cat nfs/handlers/main.yml
- name: Restart NFS Service
  systemd:
    name: nfs
    state: restarted
The template file is as follows:
[root@xuzhichao cluster-roles]# cat nfs/templates/exports.j2
{{ nfs_share_path }} {{ nfs_share_iprange }}(rw,all_squash,anonuid={{ web_uid }},anongid={{ web_gid }})
The nfs-related variables are as follows:
# NFS-related variables
nfs_share_path: /data/nfs
nfs_share_iprange: 192.168.20.0/24
The nfs directory structure is as follows:
[root@xuzhichao cluster-roles]# tree nfs/
nfs/
├── files
├── handlers
│ └── main.yml
├── meta
├── tasks
│ └── main.yml
└── templates
└── exports.j2

5 directories, 3 files
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
    - role: php-fpm
  tags:
    - nginx
    - php-fpm

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx

- hosts: mysql
  roles:
    - role: mariadb
      tags: mysql

- hosts: redis
  roles:
    - role: redis
      tags: redis

- hosts: nfs
  roles:
    - role: nfs
      tags: nfs
Run the playbook, then check that nfs is up:
[root@xuzhichao cluster-roles]# ansible-playbook -t nfs wordpress_site.yml

[root@nfs01 ~]# cat /etc/exports
/data/nfs 192.168.20.0/24(rw,all_squash,anonuid=887,anongid=887)

[root@xuzhichao cluster-roles]# showmount -e 192.168.20.30
Export list for 192.168.20.30:
/data/nfs 192.168.20.0/24
1.6 Deploying keepalived + LVS
Create the keepalived working directories:
[root@xuzhichao cluster-roles]# mkdir keepalived/{tasks,handlers,files,meta,templates} -p
Write the main keepalived task file:
[root@xuzhichao cluster-roles]# cat keepalived/tasks/main.yml
- name: Install Keepalived
  yum:
    name: keepalived
    state: present

- name: Copy Notify Script
  template:
    src: notify.sh.j2
    dest: /etc/keepalived/notify.sh
    mode: "0755"

- name: Copy Configure File
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: Restart Keepalived

- name: Start Keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes
Write the keepalived handlers file:
[root@xuzhichao cluster-roles]# cat keepalived/handlers/main.yml
- name: Restart Keepalived
  systemd:
    name: keepalived
    state: restarted
The keepalived configuration template is as follows:
[root@xuzhichao cluster-roles]# cat keepalived/templates/keepalived.conf.j2
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id {{ ansible_hostname }}
    script_user root
    enable_script_security
}

vrrp_instance VI_1 {
{% if ansible_hostname == "lvs01" %}   <== choose MASTER/SLAVE based on the hostname
    state MASTER
    priority 120
{% elif ansible_hostname == "lvs02" %}
    state SLAVE
    priority 100
{% endif %}
    interface {{ vrrp_interface }}
    virtual_router_id {{ virtual_router_id1 }}
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass {{ auth_pass }}
    }
    virtual_ipaddress {
        {{ virtual_ipaddress1 }} dev {{ vrrp_interface }}
    }
    track_interface {
        {{ vrrp_interface }}
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

vrrp_instance VI_2 {
{% if ansible_hostname == "lvs02" %}
    state MASTER
    priority 120
{% elif ansible_hostname == "lvs01" %}
    state SLAVE
    priority 100
{% endif %}
    interface {{ vrrp_interface }}
    virtual_router_id {{ virtual_router_id2 }}
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass {{ auth_pass }}
    }
    virtual_ipaddress {
        {{ virtual_ipaddress2 }} dev {{ vrrp_interface }}
    }
    track_interface {
        {{ vrrp_interface }}
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

{% for vip in vips %}   <== nested loops generate the LVS configuration
{% for port in track_ports %}
virtual_server {{ vip }} {{ port }} {
    delay_loop 6
    lb_algo {{ lb_algo }}
    lb_kind {{ lb_kind }}
    protocol {{ protocol }}
    sorry_server 192.168.20.24 {{ port }}
{% for rip in groups["lbservers"] %}   <== real servers generated from the lbservers group in the hosts file
    real_server {{ rip }} {{ port }} {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
{% endfor %}
}
{% endfor %}
{% endfor %}
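The nested loops above expand to the cross product of VIPs and ports, one virtual_server block each, with one real_server entry per member of the lbservers group. A plain-Python sketch of that expansion (variable values mirror group_vars/all; the lbservers IPs are illustrative, taken from the rendered output below):

```python
# Plain-Python sketch of what the nested Jinja2 loops expand to.
vips = ["192.168.20.200", "192.168.20.201"]
track_ports = [443, 80]
lbservers = ["192.168.20.19", "192.168.20.20"]   # illustrative group members

blocks = []
for vip in vips:                  # outer loop: one pass per VIP
    for port in track_ports:      # inner loop: one pass per tracked port
        real_servers = "\n".join(
            f"    real_server {rip} {port} {{ weight 1 }}" for rip in lbservers
        )
        blocks.append(f"virtual_server {vip} {port} {{\n{real_servers}\n}}")

print(len(blocks))  # 2 VIPs x 2 ports = 4 virtual_server blocks
```

This matches the rendered keepalived.conf shown later: four virtual_server blocks, each listing both load-balancer backends.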
The keepalived notify script template is as follows:
[root@xuzhichao cluster-roles]# cat keepalived/templates/notify.sh.j2
#!/bin/bash

contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
The related variable definitions are as follows:
[root@xuzhichao cluster-roles]# cat group_vars/all
......
# keepalived-related variables
vrrp_interface: eth1
virtual_router_id1: 51
auth_pass: 1111
virtual_ipaddress1: 192.168.20.200/24
virtual_router_id2: 52
virtual_ipaddress2: 192.168.20.201/24
vips:
  - 192.168.20.200
  - 192.168.20.201
track_ports:
  - 443
  - 80
lb_algo: rr
lb_kind: DR
protocol: TCP
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
  roles:
    - role: base-module
      tags: base-module

- hosts: webservers
  roles:
    - role: nginx
    - role: php-fpm
  tags:
    - nginx
    - php-fpm

- hosts: lbservers
  roles:
    - role: nginx
      tags: nginx

- hosts: mysql
  roles:
    - role: mariadb
      tags: mysql

- hosts: redis
  roles:
    - role: redis
      tags: redis

- hosts: nfs
  roles:
    - role: nfs
      tags: nfs

- hosts: lvs
  roles:
    - role: keepalived
      tags: keepalived
Run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t keepalived wordpress_site.yml
Check the generated configuration file on host lvs01:
[root@lvs01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id lvs01
script_user root
enable_script_security
}

vrrp_instance VI_1 {
state MASTER
priority 120
interface eth1
virtual_router_id 51
advert_int 3
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.20.200/24 dev eth1
}
track_interface {
eth1
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}

vrrp_instance VI_2 {
state SLAVE
priority 100
interface eth1
virtual_router_id 52
advert_int 3
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.20.201/24 dev eth1
}
track_interface {
eth1
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.20.200 443 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 443
real_server 192.168.20.19 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.200 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 80
real_server 192.168.20.19 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.201 443 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 443
real_server 192.168.20.19 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.201 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 80
real_server 192.168.20.19 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}

[root@lvs01 ~]# cat /etc/keepalived/notify.sh
#!/bin/bash

contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
Check the generated configuration file on host lvs02:
[root@lvs02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id lvs02
script_user root
enable_script_security
}

vrrp_instance VI_1 {
state SLAVE
priority 100
interface eth1
virtual_router_id 51
advert_int 3
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.20.200/24 dev eth1
}
track_interface {
eth1
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}

vrrp_instance VI_2 {
state MASTER
priority 120
interface eth1
virtual_router_id 52
advert_int 3
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.20.201/24 dev eth1
}
track_interface {
eth1
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.20.200 443 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 443
real_server 192.168.20.19 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.200 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 80
real_server 192.168.20.19 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.201 443 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 443
real_server 192.168.20.19 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 443 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 192.168.20.201 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 192.168.20.24 80
real_server 192.168.20.19 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.20.20 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
Check the virtual IP address and LVS status on the lvs01 node. (The two back-end LB nodes do not yet have virtual hosts listening on ports 80 and 443, so the LVS health checks fail and the real servers are not listed; only the sorry_server appears.)
[root@lvs01 ~]# ip add show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:21:84:93 brd ff:ff:ff:ff:ff:ff
inet 192.168.20.31/24 brd 192.168.20.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet 192.168.20.200/24 scope global secondary eth1
[root@lvs01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.20.200:80 rr
-> 192.168.20.24:80 Route 1 0 0
TCP 192.168.20.200:443 rr
-> 192.168.20.24:443 Route 1 0 0
TCP 192.168.20.201:80 rr
-> 192.168.20.24:80 Route 1 0 0
TCP 192.168.20.201:443 rr
-> 192.168.20.24:443 Route 1 0 0
Check the virtual IP address and LVS status on the lvs02 node:
[root@lvs02 ~]# ip add show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:cf:0d brd ff:ff:ff:ff:ff:ff
inet 192.168.20.32/24 brd 192.168.20.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet 192.168.20.201/24 scope global secondary eth1
[root@lvs02 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.20.200:80 rr
-> 192.168.20.24:80 Route 1 0 0
TCP 192.168.20.200:443 rr
-> 192.168.20.24:443 Route 1 0 0
TCP 192.168.20.201:80 rr
-> 192.168.20.24:80 Route 1 0 0
TCP 192.168.20.201:443 rr
-> 192.168.20.24:443 Route 1 0 0
1.7 DNS deployment
Create the working directories for the dns role:
[root@xuzhichao cluster-roles]# mkdir dns/{tasks,templates,files,handlers,meta} -p
Write the main task file for the dns role:
[root@xuzhichao cluster-roles]# cat dns/tasks/main.yml
- name: Install Dns Server
yum:
name: bind
state: present

- name: Copy Configure File And Zone File
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
owner: "root"
group: "named"
mode: "0640"
loop:
- { src: "named.conf.j2", dest: "/etc/named.conf" }
- { src: "xuzhichao.com.zone.j2", dest: "/var/named/xuzhichao.com.zone" }
- { src: "named.xuzhichao.com.zone.j2", dest: "/etc/named.xuzhichao.com.zone" }
- { src: "20.168.192.in-addr.arpa.zone.j2", dest: "/var/named/20.168.192.in-addr.arpa.zone" }
notify: Restart Dns Server

- name: Start Dns Server
systemd:
name: named
state: started
enabled: yes
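If a bad template rendering reaches /etc/named.conf, the problem only surfaces when the handler restarts named. The template module's `validate` option can catch it at copy time instead. A hedged sketch (the task name is illustrative; `named-checkconf` must exist on the managed host, and validating the rendered temp file can fail if it references include files that are not yet in place):

```yaml
- name: Copy Main Configure File With Validation
  template:
    src: named.conf.j2
    dest: /etc/named.conf
    owner: root
    group: named
    mode: "0640"
    validate: "named-checkconf %s"    # %s is the rendered temp file
  notify: Restart Dns Server
```

With this in place, a syntax error in named.conf.j2 fails the play instead of taking down the running DNS server.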
Write the handlers file for the dns role:
[root@xuzhichao cluster-roles]# cat dns/handlers/main.yml
- name: Restart Dns Server
systemd:
name: named
state: restarted
The DNS configuration template files are as follows:
[root@xuzhichao cluster-roles]# cat dns/templates/named.conf.j2
options {
listen-on port 53 { localhost; };
listen-on-v6 port 53 { localhost; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
recursing-file "/var/named/data/named.recursing";
secroots-file "/var/named/data/named.secroots";
allow-query { any; };

recursion yes;
allow-recursion { 192.168.20.0/24; 192.168.50.0/24; };

allow-transfer {192.168.20.71;};
also-notify {192.168.20.71;};

bindkeys-file "/etc/named.root.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
};

logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};

zone "." IN {
type hint;
file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.xuzhichao.com.zone";
include "/etc/named.root.key"; [root@xuzhichao cluster-roles]# cat dns/templates/named.xuzhichao.com.zone.j2
zone "xuzhichao.com" IN {
type master;
file "xuzhichao.com.zone";
notify yes;
};

zone "20.168.192.in-addr.arpa" IN {
type master;
file "20.168.192.in-addr.arpa.zone";
notify yes;
};
[root@xuzhichao cluster-roles]# cat dns/templates/xuzhichao.com.zone.j2
$TTL 86400
xuzhichao.com. IN SOA ns1.xuzhichao.com. mail.xuzhichao.com. (
2021071603
10800
900
604800
86400
)

xuzhichao.com. IN NS ns1.xuzhichao.com.
xuzhichao.com. IN NS ns2.xuzhichao.com.
ns1 IN A 192.168.20.70
ns2 IN A 192.168.20.71

;Business domain records
xuzhichao.com. IN MX 10 mx1.xuzhichao.com.
mx1 IN A 192.168.20.11
wordpress.xuzhichao.com. IN A 192.168.50.200
wordpress.xuzhichao.com. IN A 192.168.50.201
web.xuzhichao.com. IN CNAME wordpress.xuzhichao.com.

;Host domain records
nginx02.xuzhichao.com. IN A 192.168.20.22
ngxin03.xuzhichao.com. IN A 192.168.20.23
nginx-lb01.xuzhichao.com. IN A 192.168.20.19
nginx-lb02.xuzhichao.com. IN A 192.168.20.20
apache01.xuzhichao.com. IN A 192.168.20.21
lvs01.xuzhichao.com. IN A 192.168.20.31
lvs02.xuzhichao.com. IN A 192.168.20.32
mysql01.xuzhichao.com. IN A 192.168.20.50
redis01.xuzhichao.com. IN A 192.168.20.61
nfs01.xuzhichao.com. IN A 192.168.20.30
dns01.xuzhichao.com. IN A 192.168.20.70
dns02.xuzhichao.com. IN A 192.168.20.71
[root@xuzhichao cluster-roles]# cat dns/templates/20.168.192.in-addr.arpa.zone.j2
$TTL 86400
@ IN SOA ns1.xuzhichao.com. mail.xuzhichao.com. (
2021071602
10800
900
604800
86400
)

@ IN NS ns1.xuzhichao.com.
@ IN NS ns2.xuzhichao.com.
70 IN PTR ns1.xuzhichao.com.
71 IN PTR ns2.xuzhichao.com.

;@ IN MX 10 mx1.xuzhichao.com.
;11 IN PTR mx1.xuzhichao.com.
;mx1.xuzhichao.com. IN A 192.168.20.11

;Host domain records
22 IN PTR nginx02.xuzhichao.com.
23 IN PTR ngxin03.xuzhichao.com.
19 IN PTR nginx-lb01.xuzhichao.com.
20 IN PTR nginx-lb02.xuzhichao.com.
21 IN PTR apache01.xuzhichao.com.
31 IN PTR lvs01.xuzhichao.com.
32 IN PTR lvs02.xuzhichao.com.
50 IN PTR mysql01.xuzhichao.com.
61 IN PTR redis01.xuzhichao.com.
30 IN PTR nfs01.xuzhichao.com.
70 IN PTR dns01.xuzhichao.com.
71 IN PTR dns02.xuzhichao.com.
The overall directory structure of the dns role is as follows:
[root@xuzhichao cluster-roles]# tree dns/
dns/
├── files
├── handlers
│ └── main.yml
├── meta
├── tasks
│ └── main.yml
└── templates
├── 20.168.192.in-addr.arpa.zone.j2
├── named.conf.j2
├── named.xuzhichao.com.zone.j2
└── xuzhichao.com.zone.j2

5 directories, 6 files
The playbook file is as follows:
[root@xuzhichao cluster-roles]# cat wordpress_site.yml
- hosts: all
roles:
- role: base-module
tags: base-module

- hosts: webservers
roles:
- role: nginx
- role: php-fpm
tags:
- nginx
- php-fpm

- hosts: lbservers
roles:
- role: nginx
tags: nginx

- hosts: mysql
roles:
- role: mariadb
tags: mysql

- hosts: redis
roles:
- role: redis
tags: redis

- hosts: nfs
roles:
- role: nfs
tags: nfs

- hosts: lvs
roles:
- role: keepalived
tags: keepalived

- hosts: dns
roles:
- role: dns
tags: dns
Run the playbook:
[root@xuzhichao cluster-roles]# ansible-playbook -t dns wordpress_site.yml
Test that DNS queries resolve correctly and that the two external addresses are returned in round-robin order:
[root@xuzhichao ~]# dig wordpress.xuzhichao.com @192.168.20.70 +short
192.168.50.200
192.168.50.201
[root@xuzhichao ~]# dig wordpress.xuzhichao.com @192.168.20.70 +short
192.168.50.201
192.168.50.200
[root@xuzhichao ~]# dig wordpress.xuzhichao.com @192.168.20.70 +short
192.168.50.200
192.168.50.201
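A spot-check like the dig queries above can also be scripted by tallying which address comes back first over repeated queries. A self-contained sketch with canned answers standing in for dig; in practice the `sample_query` function (an illustrative name) would be replaced by `dig wordpress.xuzhichao.com @192.168.20.70 +short`:

```shell
#!/bin/bash
# Sketch: tally which address appears first across repeated +short
# answers. Canned output stands in for dig so this runs without a
# resolver; swap sample_query for a real dig call in practice.
sample_query() {
    # Mimic round-robin: the first answer alternates per query number.
    if [ $(( $1 % 2 )) -eq 0 ]; then
        printf '192.168.50.200\n192.168.50.201\n'
    else
        printf '192.168.50.201\n192.168.50.200\n'
    fi
}

for n in 0 1 2 3; do
    sample_query "$n" | head -n1
done | sort | uniq -c
```

If the tally is roughly even across the addresses, the zone's duplicate A records are rotating as expected.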