Example topology:

DR1 and DR2 run Keepalived and LVS in either a master/backup or a master/master (active/active) configuration; RS1 and RS2 run nginx and serve the web content.
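
For reference, the addresses used throughout the example are:

DR1  192.168.4.116  (Keepalived + LVS director)
DR2  192.168.4.117  (Keepalived + LVS director)
RS1  192.168.4.118  (nginx)
RS2  192.168.4.119  (nginx)
VIP1 192.168.4.120, VIP2 192.168.4.121 (the second VIP is added in the master/master section)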

Note: the clocks on all nodes must be synchronized (ntpdate ntp1.aliyun.com); firewalld must be stopped and disabled (systemctl stop firewalld.service, systemctl disable firewalld.service) and SELinux set to permissive (setenforce 0). Also make sure the NICs on DR1 and DR2 support MULTICAST; ifconfig shows whether the MULTICAST flag is set.
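
Gathered in one place, the preparation commands on each node look roughly like this (the interface name is taken from the DR nodes below; adjust to your environment):

ntpdate ntp1.aliyun.com                      # sync the clock
systemctl stop firewalld.service             # stop and disable firewalld
systemctl disable firewalld.service
setenforce 0                                 # set SELinux to permissive until the next reboot
ifconfig eno16777736 | grep MULTICAST        # on DR1/DR2: the MULTICAST flag should appear in the flags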

Keepalived master/backup architecture

Setting up RS1:

[root@RS1 ~]# yum -y install nginx   # install nginx
[root@RS1 ~]# vim /usr/share/nginx/html/index.html   # edit the home page
<h1> 192.168.4.118 RS1 server </h1>
[root@RS1 ~]# systemctl start nginx.service   # start the nginx service
[root@RS1 ~]# vim RS.sh   # script that configures the RS for lvs-dr
#!/bin/bash
#
vip=192.168.4.120
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore    # reply to ARP only for addresses configured on the receiving interface
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce  # never advertise the lo-bound VIP as the source of ARP requests
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:0 $vip netmask $mask broadcast $vip up # bind the VIP to lo:0 with a host (/32) mask
route add -host $vip dev lo:0
;;
stop)
ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash RS.sh start
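
To confirm the script took effect, the VIP binding and kernel ARP settings can be checked like this (a hypothetical verification, not part of the original transcript):

[root@RS1 ~]# ifconfig lo:0   # should show 192.168.4.120 with netmask 255.255.255.255
[root@RS1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore /proc/sys/net/ipv4/conf/all/arp_announce   # expect 1 and 2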

Set up RS2 the same way as RS1; a sketch of the equivalent steps follows.
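Assuming RS2's address is 192.168.4.119 (as used later in the article) and reusing the RS.sh script from RS1, the steps would look roughly like this:

[root@RS2 ~]# yum -y install nginx
[root@RS2 ~]# vim /usr/share/nginx/html/index.html   # contains: <h1> 192.168.4.119 RS2 server </h1>
[root@RS2 ~]# systemctl start nginx.service
[root@RS1 ~]# scp RS.sh root@192.168.4.119:~          # reuse the same lvs-dr script
[root@RS2 ~]# bash RS.sh start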

Setting up DR1:

[root@DR1 ~]# yum -y install ipvsadm keepalived   # install ipvsadm and keepalived
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id 192.168.4.116
    vrrp_skip_check_adv_addr
    vrrp_mcast_group4 224.0.0.10
}

vrrp_instance VIP_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass %&hhjj99
    }
    virtual_ipaddress {
        192.168.4.120/24 dev eno16777736 label eno16777736:0
    }
}

virtual_server 192.168.4.120 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.4.118 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.4.119 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@DR1 ~]# systemctl start keepalived
[root@DR1 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.116 netmask 255.255.255.0 broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe93:270f prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
        RX packets 14604 bytes 1376647 (1.3 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 6722 bytes 653961 (638.6 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.120 netmask 255.255.255.0 broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0

Setting up DR2 is essentially the same as DR1; the main changes are in /etc/keepalived/keepalived.conf: state BACKUP and priority 90. We can also see that DR2, as the backup, does not bring up the eno16777736:0 interface label.
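
For clarity, a sketch of DR2's vrrp_instance block with just those two fields changed (everything else copied from DR1's configuration; the global router_id would normally be DR2's own address, 192.168.4.117):

vrrp_instance VIP_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass %&hhjj99
    }
    virtual_ipaddress {
        192.168.4.120/24 dev eno16777736 label eno16777736:0
    }
}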

Testing from the client:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client reaches the service normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@DR1 ~]# systemctl stop keepalived.service   # stop the keepalived service on DR1
[root@DR2 ~]# systemctl status keepalived.service   # on DR2, keepalived has transitioned to the MASTER state
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 12985 (keepalived)
CGroup: /system.slice/keepalived.service
├─12985 /usr/sbin/keepalived -D
├─12988 /usr/sbin/keepalived -D
└─12989 /usr/sbin/keepalived -D

Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # clients are still served normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
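
Because DR1 keeps the higher priority (100 vs. 90) and Keepalived preempts by default, starting its service again should pull the VIP back; a quick check along these lines (hypothetical, not from the original transcript):

[root@DR1 ~]# systemctl start keepalived.service
[root@DR1 ~]# ifconfig eno16777736:0   # 192.168.4.120 should be bound here again within a few seconds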

Keepalived master/master architecture

Modify RS1 and RS2 to add the new VIP:

[root@RS1 ~]# cp RS.sh RS_bak.sh
[root@RS1 ~]# vim RS_bak.sh   # add the new VIP
#!/bin/bash
#
vip=192.168.4.121
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:1 $vip netmask $mask broadcast $vip up
route add -host $vip dev lo:1
;;
stop)
ifconfig lo:1 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash RS_bak.sh start
[root@RS1 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 192.168.4.120 netmask 255.255.255.255
        loop txqueuelen 0 (Local Loopback)

lo:1: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 192.168.4.121 netmask 255.255.255.255
        loop txqueuelen 0 (Local Loopback)
[root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
root@192.168.4.119's password:
RS_bak.sh                                     100%  693     0.7KB/s   00:00
[root@RS2 ~]# bash RS_bak.sh start   # run the script on RS2 as well to add the new VIP
[root@RS2 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 192.168.4.120 netmask 255.255.255.255
        loop txqueuelen 0 (Local Loopback)

lo:1: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 192.168.4.121 netmask 255.255.255.255
        loop txqueuelen 0 (Local Loopback)

Modify DR1 and DR2:

[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # on DR1, add a second VRRP instance and define a virtual server group
...
vrrp_instance VIP_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 2
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass UU**99^^
    }
    virtual_ipaddress {
        192.168.4.121/24 dev eno16777736 label eno16777736:1
    }
}

virtual_server_group ngxsrvs {
    192.168.4.120 80
    192.168.4.121 80
}

virtual_server group ngxsrvs {
    ...
}
[root@DR1 ~]# systemctl restart keepalived.service   # restart the service
[root@DR1 ~]# ifconfig   # eno16777736:1 also appears here because DR2 has not been configured yet, so DR1 currently holds both VIPs
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.116 netmask 255.255.255.0 broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe93:270f prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
        RX packets 54318 bytes 5480463 (5.2 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 38301 bytes 3274990 (3.1 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.120 netmask 255.255.255.0 broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.121 netmask 255.255.255.0 broadcast 0.0.0.0
        ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
TCP 192.168.4.121:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0

[root@DR2 ~]# vim /etc/keepalived/keepalived.conf   # on DR2, add the new instance and configure the virtual server group
...
vrrp_instance VIP_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass UU**99^^
    }
    virtual_ipaddress {
        192.168.4.121/24 dev eno16777736 label eno16777736:1
    }
}

virtual_server_group ngxsrvs {
    192.168.4.120 80
    192.168.4.121 80
}

virtual_server group ngxsrvs {
    ...
}
[root@DR2 ~]# systemctl restart keepalived.service   # restart the service
[root@DR2 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.117 netmask 255.255.255.0 broadcast 192.168.4.255
        inet6 fe80::20c:29ff:fe3d:a31b prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:3d:a3:1b txqueuelen 1000 (Ethernet)
        RX packets 67943 bytes 6314537 (6.0 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 23250 bytes 2153847 (2.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.4.121 netmask 255.255.255.0 broadcast 0.0.0.0
        ether 00:0c:29:3d:a3:1b txqueuelen 1000 (Ethernet)
[root@DR2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
TCP 192.168.4.121:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
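
The body of the virtual_server group block is elided above, as in the original; presumably it mirrors the earlier per-VIP virtual_server definition, roughly along these lines (a sketch, not taken from the original configuration):

virtual_server group ngxsrvs {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.4.118 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.4.119 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}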

Client tests:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
