LVS DR Mode in Practice and High Availability
author:JevonWei
Copyright notice: original work
LVS-DR scheduling a web service within the same subnet
- Topology

Network environment
RS1
    RIP 192.168.198.138/24
    VIP 192.168.198.100/32
    GW 192.168.198.130
RS2
    RIP 192.168.198.132/24
    VIP 192.168.198.100/32
    GW 192.168.198.130
VS
    DIP 192.168.198.128/24
    VIP 192.168.198.100/32
    GW 192.168.198.130
route
    192.168.198.130/24
    172.16.253.166/16
Client
    172.16.254.150/16
    GW 172.16.253.166
Point the default gateway of RS1 and RS2 at 192.168.198.130
RS1
[root@RS1 html]# route del default gw 192.168.198.128
[root@RS1 html]# route add default gw 192.168.198.130
[root@RS1 ~]# iptables -F
[root@RS1 ~]# yum -y install httpd
[root@RS1 ~]# vim /var/www/html/index.html
welcome to RS1
[root@RS1 ~]# service httpd start
RS2
[root@RS2 network-scripts]# route add -net 172.16.0.0/16 gw 192.168.198.130
[root@RS2 network-scripts]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.0.0 192.168.198.130 255.255.0.0 UG 0 0 0 ens34
192.168.198.0 0.0.0.0 255.255.255.0 U 100 0 0 ens34
[root@RS2 ~]# iptables -F
[root@RS2 ~]# yum -y install httpd
[root@RS2 ~]# vim /var/www/html/index.html
welcome to RS2
[root@RS2 ~]# systemctl start httpd
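Routes added with the route command do not survive a reboot. A minimal sketch for making the RS gateway permanent, assuming NetworkManager manages the interface and the connection is named after it (the name ens34 here is an assumption, not taken from the original setup):
# persist the default gateway on each RS (connection name assumed to be ens34)
nmcli connection modify ens34 ipv4.gateway 192.168.198.130
nmcli connection up ens34     # re-activate the connection so the change takes effect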
VS
Add the default gateway
[root@VS ~]# route add default gw 192.168.198.130
[root@VS ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.198.130 0.0.0.0 UG 0 0 0 ens34
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
192.168.198.0 0.0.0.0 255.255.255.0 U 100 0 0 ens34
[root@VS ~]# vim lvs_dr.sh
#! /bin/bash
vip=192.168.198.100
server=$vip:80
rip1=192.168.198.138
rip2=192.168.198.132
sch=wlc
dev=ens34:1        # alias interface on ens34 that will hold the VIP
case $1 in
start)
ifconfig $dev $vip/32 broadcast $vip        # bind the VIP to the ens34 alias
iptables -F
ipvsadm -A -t $server -s $sch
ipvsadm -a -t $server -r $rip1 -g -w 3
ipvsadm -a -t $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
[root@danran ~]# bash lvs_dr.sh start
[root@danran ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.198.100:80 wlc
-> 192.168.198.132:80 Route 1 0 0
-> 192.168.198.138:80 Route 3 0 0
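The script binds the VIP with the legacy net-tools ifconfig syntax. On hosts where only iproute2 is available, an equivalent sketch for binding and releasing the VIP would be:
# bind the VIP as a /32 on ens34 (the ens34:1 label keeps it visible to ifconfig-style tools)
ip addr add 192.168.198.100/32 dev ens34 label ens34:1
# release it again when tearing the service down
ip addr del 192.168.198.100/32 dev ens34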
Configure the VIP on RS1 and RS2
dr_vip_rs.sh is the script that configures the VIP on the real servers
[root@RS1 ~]# vim dr_vip_rs.sh
#!/bin/bash
#
vip=192.168.198.100
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $dev $vip netmask $mask broadcast $vip up
route add -host $vip dev $dev
echo "VS server is Ready "
;;
stop)
ifconfig $dev down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "VS server is Cancel"
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash dr_vip_rs.sh start
VS server is Ready
[root@RS2 ~]# bash dr_vip_rs.sh start
VS server is Ready
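To confirm what the script changed on the real servers, the VIP on the loopback interface and the ARP kernel parameters can be checked, for example:
# the VIP should appear as a /32 on lo
ip addr show lo
# both values should read 1 and 2 after the script has run
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce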
client
Add the default route
[root@danran ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 eth0
[root@danran ~]# route del default gw 172.16.0.1
[root@danran ~]# route add default gw 172.16.253.166 \\add the default route
[root@danran ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.16.253.166 0.0.0.0 UG 0 0 0 eth0
Test
[root@danran ~]# for i in {1..10};do curl --connect-timeout 1 192.168.198.100 ;sleep 1;done
welcome to RS2
welcome to RS1
welcome to RS1
welcome to RS1
welcome to RS2
welcome to RS1
welcome to RS1
welcome to RS1
welcome to RS2
welcome to RS1
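A quick way to observe the DR characteristic — the director only sees the client-to-VIP requests, while the responses travel from the RS directly back to the client — is to capture on the VS while the test loop runs (illustrative only, output not reproduced):
# on VS: only inbound packets to the VIP should show up; no return traffic passes through the director
tcpdump -i ens34 -nn host 192.168.198.100 and tcp port 80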
LVS-DR scheduling across subnets
Network topology

Network environment
RS1
    RIP 192.168.198.138/24
    VIP 192.168.80.100/32
    GW 192.168.198.130
RS2
    RIP 192.168.198.132/24
    VIP 192.168.80.100/32
    GW 192.168.198.130
VS
    DIP 192.168.198.128/24
    VIP 192.168.80.100/32
    GW 192.168.198.130
route
    192.168.198.130/24
    192.168.80.130/8
    172.16.253.166/16
    GW 192.168.198.130
Client
    172.16.254.150/16
    GW 172.16.253.166
Point the default gateway of RS1 and RS2 at 192.168.198.130
route
Add a second IP address to the ens38 interface
[root@route network-scripts]# nmcli connection modify ens38 +ipv4.addresses 192.168.80.130/8
[root@route ~]# nmcli connection up ens38 \\re-activate the ens38 connection so the new address takes effect
[root@route ~]# ip a
[root@route ~]# route add default gw 192.168.198.130
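The route host forwards traffic between 172.16.0.0/16 and the 192.168 networks, so IP forwarding has to be enabled on it. The original text does not show this step; a minimal sketch:
# enable forwarding now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf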
VS
Edit the LVS-DR configuration script
[root@VS ~]# vim lvs_dr.sh
#! /bin/bash
vip=192.168.80.100
server=$vip:80
rip1=192.168.198.138
rip2=192.168.198.132
sch=rr
dev=ens34:1
case $1 in
start)
ifconfig $dev $vip/32 broadcast $vip
ipvsadm -A -t $server -s $sch
ipvsadm -a -t $server -r $rip1 -g -w 3
ipvsadm -a -t $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
Add the default gateway
[root@VS ~]# route add default gw 192.168.198.130
[root@VS ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.198.130 0.0.0.0 UG 0 0 0 ens34
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
192.168.198.0 0.0.0.0 255.255.255.0 U 100 0 0 ens34
Configure the VIP on RS1 and RS2
[root@RS1 ~]# vim dr_vip_rs.sh
#!/bin/bash
#
vip=192.168.80.100
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $dev $vip netmask $mask broadcast $vip up
# route add -host $vip dev $dev
echo "VS server is Ready "
;;
stop)
ifconfig $dev down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "VS server is Cancel"
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash dr_vip_rs.sh start
VS server is Ready
[root@RS2 ~]# bash dr_vip_rs.sh start
VS server is Ready
Routing tables
[root@RS2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.198.130 0.0.0.0 UG 100 0 0 ens34
192.168.198.0 0.0.0.0 255.255.255.0 U 100 0 0 ens34
[root@RS1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.198.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
0.0.0.0 192.168.198.130 0.0.0.0 UG 0 0 0 eth1
client
[root@client ~]# for i in {1..10};do curl 192.168.80.100 ;done
welcome to RS2
welcome to RS1
welcome to RS2
welcome to RS1
welcome to RS2
welcome to RS1
welcome to RS2
welcome to RS1
welcome to RS2
welcome to RS1
Tag the two different services, http and https, with the same firewall mark so that they can be scheduled as a single cluster service
The FireWall Mark technique
VS
[root@VS ~]# iptables -t mangle -A PREROUTING -d 192.168.80.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10
[root@VS ~]# vim lvs_dr_vs_fwm.sh
#! /bin/bash
vip=192.168.80.100
server=10
rip1=192.168.198.138
rip2=192.168.198.132
sch=rr
dev=ens34:1
case $1 in
start)
ifconfig $dev $vip/32 broadcast $vip
ipvsadm -A -f $server -s $sch
ipvsadm -a -f $server -r $rip1 -g -w 3
ipvsadm -a -f $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
[root@VS ~]# bash lvs_dr_vs_fwm.sh start
[root@VS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 10 rr
-> 192.168.198.132:0 Route 1 0 0
-> 192.168.198.138:0 Route 3 0 0
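Grouping ports 80 and 443 into one firewall-mark service assumes the real servers also answer on 443. With the httpd already installed, one way to get there is mod_ssl with its default self-signed certificate (a sketch, not shown in the original):
# on RS1 and RS2: let httpd also listen on 443
yum -y install mod_ssl
systemctl restart httpd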
client
[root@client ~]# curl 192.168.80.100;curl -k https://192.168.80.100
DR with persistent connections
PFWMC: persistent connections based on a firewall mark
VS
[root@VS ~]# iptables -t mangle -A PREROUTING -d 192.168.80.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10
[root@VS ~]# vim lvs_dr_vs_fwm.sh
#! /bin/bash
vip=192.168.80.100
server=10
rip1=192.168.198.138
rip2=192.168.198.132
sch=rr
dev=ens34:1
case $1 in
start)
ifconfig $dev $vip/32 broadcast $vip
ipvsadm -A -f $server -s $sch -p 600        # -p sets the persistence timeout to 600s
ipvsadm -a -f $server -r $rip1 -g -w 3
ipvsadm -a -f $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
[root@VS ~]# bash lvs_dr_vs_fwm.sh start
[root@VS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 10 rr persistent 600 \\persistence timeout of 600s
-> 192.168.198.132:0 Route 1 0 0
-> 192.168.198.138:0 Route 3 0 0
client
[root@client ~]# curl 192.168.80.100
welcome to RS2
[root@client ~]# curl 192.168.80.100
welcome to RS2
[root@client ~]# curl 192.168.80.100
welcome to RS2
[root@client ~]# curl https://192.168.80.100
welcome to RS2
[root@client ~]# curl https://192.168.80.100
welcome to RS2
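The persistence templates behind this stickiness can be inspected on the director (the output depends on the live connections and is not reproduced here):
# list current connections and persistence templates
ipvsadm -Lnc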
PCC: persistent connections based on port 0 (every port from the same client is scheduled to the same RS)
VS
[root@VS ~]# iptables -t mangle -A PREROUTING -d 192.168.80.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10
[root@VS ~]# vim lvs_dr_vs_per.sh
#! /bin/bash
vip=192.168.80.100
server=$vip:0
rip1=192.168.198.138
rip2=192.168.198.132
sch=rr
dev=ens34:1
case $1 in
start)
ifconfig $dev $vip netmask 255.255.255.255 broadcast $vip
ipvsadm -A -t $server -s $sch -p 600
ipvsadm -a -t $server -r $rip1 -g -w 3
ipvsadm -a -t $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
[root@VS ~]# bash lvs_dr_vs_per.sh start
[root@VS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.100:0 rr persistent 600
-> 192.168.198.132:0 Route 1 0 0
-> 192.168.198.138:0 Route 3 0 0
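With a port-0 service every port from the same client IP is sent to the same RS, so requests to 80 and 443 issued from the client within the 600s window should be answered by the same real server (illustrative test, output omitted):
curl 192.168.80.100
curl -k https://192.168.80.100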
LVS high availability
VS (first set up the basic LVS-DR rules with the script below; detecting failed real servers is covered by the health-check sketch after it and by ldirectord)
[root@VS ~]# vim lvs_dr_vs.sh
#! /bin/bash
vip=192.168.80.100
server=$vip:80
rip1=192.168.198.138
rip2=192.168.198.132
sch=rr
dev=ens34:1
case $1 in
start)
ifconfig $dev $vip/32 broadcast $vip
ipvsadm -A -t $server -s $sch
ipvsadm -a -t $server -r $rip1 -g -w 3
ipvsadm -a -t $server -r $rip2 -g -w 1
;;
stop)
ipvsadm -C
ifconfig $dev down
;;
*)
echo "Usage:$(basename $0) start|stop"
exit 1
;;
esac
[root@VS ~]# bash lvs_dr_vs.sh start
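The script above only installs the ipvs rules; it does not detect failed real servers by itself. Before handing that job to ldirectord, a minimal health-check loop could look like the sketch below (the script name, check interval and URL are assumptions):
#!/bin/bash
# check_rs.sh (hypothetical name): drop an RS from the ipvs rules when it stops
# answering on port 80, and add it back once it recovers
vip=192.168.80.100
server=$vip:80
declare -A weight=( [192.168.198.138]=3 [192.168.198.132]=1 )
while true; do
    for rip in "${!weight[@]}"; do
        if curl -s --connect-timeout 2 -o /dev/null http://$rip/; then
            # RS answers: make sure it is present in the rules
            ipvsadm -Ln | grep -q "$rip:80" || ipvsadm -a -t $server -r $rip -g -w ${weight[$rip]}
        else
            # RS down: remove it from the rules
            ipvsadm -Ln | grep -q "$rip:80" && ipvsadm -d -t $server -r $rip
        fi
    done
    sleep 3
done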
Using ldirectord for LVS high availability
When a real server goes down it is automatically removed from the ipvs rules (and added back once it recovers)
VS
[root@VS ~]# iptables -t mangle -A PREROUTING -d 192.168.80.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10
Download the ldirectord package (pub/Source/7.x86/crmsh/)
[root@VS ~]# yum -y install ldirectord-3.9.6-0rc1.1.1.x86_64.rpm \\a complete yum repo is required to resolve dependencies
[root@VS ~]# rpm -ql ldirectord
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/lib/systemd/system/ldirectord.service
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.6
/usr/share/doc/ldirectord-3.9.6/COPYING
/usr/share/doc/ldirectord-3.9.6/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
[root@VS ~]# cp /usr/share/doc/ldirectord-3.9.6/ldirectord.cf /etc/ha.d
[root@VS ~]# vim /etc/ha.d/ldirectord.cf
checktimeout=3 \\health check timeout in seconds
checkinterval=1 \\interval between checks in seconds
fallback=127.0.0.1:80 \\Sorry Server used when all real servers are down
autoreload=yes \\reload the configuration file automatically when it changes
logfile="/var/log/ldirectord.log" \\log file
quiescent=no \\no: remove a failed RS from the ipvs rules; yes: keep the entry with weight 0
virtual=192.168.80.100:80 \\VIP and port of the cluster service
real=192.168.198.138:80 gate 2 \\real server; gate = DR forwarding, 2 = weight
real=192.168.198.132:80 gate 1 \\real server; gate = DR forwarding, 1 = weight
fallback=127.0.0.1:80 gate
service=http
scheduler=wrr \\scheduling algorithm
protocol=tcp \\tcp protocol
checktype=negotiate
checkport=80 \\port to check
request="index.html" \\page requested by the health check
receive="danran" \\string the response must contain for the RS to be considered healthy
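Note that a negotiate check only succeeds when the page named by request= contains the receive= string; the RS index pages here contain "welcome to RS1"/"welcome to RS2" rather than "danran". One possible fix is a dedicated check page (the file name check.html is an assumption):
# on RS1 and RS2: page used only by the ldirectord health check (hypothetical name)
echo danran > /var/www/html/check.html
# then point request= at it in /etc/ha.d/ldirectord.cf, e.g. request="check.html"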
[root@VS ~]# systemctl start ldirectord
[root@VS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.100:80 rr
-> 192.168.198.132:80 Route 1 0 0
-> 192.168.198.138:80 Route 1 0 0
client
[root@client ~]# curl 192.168.80.100
welcome to RS2
[root@client ~]# curl 192.168.80.100
welcome to RS1
[root@client ~]# curl 192.168.80.100
welcome to RS2
[root@client ~]# curl 192.168.80.100
welcome to RS1
Using a firewall mark so that ldirectord treats multiple services as one cluster service
When using a firewall mark the protocol=tcp option must be removed from the configuration
[root@VS ~]# iptables -t mangle -A PREROUTING -d 192.168.80.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 10 \\the mark is set to 10
[root@VS ~]# iptables -t mangle -nvL
Chain PREROUTING (policy ACCEPT 41 packets, 3944 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 192.168.80.100 multiport dports 80,443 MARK set 0xa
[root@VS ~]# vim /etc/ha.d/ldirectord.cf
checktimeout=3 \\health check timeout in seconds
checkinterval=1 \\interval between checks in seconds
fallback=127.0.0.1:80 \\Sorry Server used when all real servers are down
autoreload=yes \\reload the configuration file automatically when it changes
logfile="/var/log/ldirectord.log" \\log file
quiescent=no \\no: remove a failed RS from the ipvs rules; yes: keep the entry with weight 0
virtual=10 \\cluster service defined by firewall mark 10
real=192.168.198.138:80 gate 2 \\real server; gate = DR forwarding, 2 = weight
real=192.168.198.132:80 gate 1 \\real server; gate = DR forwarding, 1 = weight
fallback=127.0.0.1:80 gate
service=http
scheduler=wrr \\scheduling algorithm
checktype=negotiate
checkport=80 \\port to check
request="index.html" \\page requested by the health check
receive="danran" \\string the response must contain for the RS to be considered healthy
[root@VS ~]# systemctl start ldirectord
[root@VS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 10 rr
-> 192.168.198.132:80 Route 1 0 0
-> 192.168.198.138:80 Route 1 0 0
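fallback=127.0.0.1:80 points at the director itself, so a local web server has to serve the sorry page there when every RS is down. A minimal sketch (the page text is an assumption):
# on VS: local Sorry Server answering on 127.0.0.1:80
yum -y install httpd
echo "this site is under maintenance" > /var/www/html/index.html
systemctl start httpd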