LVS Load Balancing (7) -- High Availability with LVS + Keepalived
1. High availability with LVS + Keepalived
LVS provides load balancing, but it has no health-check mechanism of its own: if an RS node fails, LVS will still schedule requests to that failed RS. Keepalived solves this in two ways:
- 1. Keepalived adds a health-check mechanism to LVS: a failed RS node is automatically removed from the cluster, and automatically re-added once it recovers.
- 2. Keepalived removes the LVS single point of failure, making the director itself highly available.
1.1 Lab environment
The lab topology (LVS DR model) is as follows:
- Client: hostname xuzhichao; address eth1: 192.168.20.17
- Router: hostname router; addresses eth1: 192.168.20.50, eth2: 192.168.50.50
- LVS load balancers:
  - hostname lvs-01; address eth2: 192.168.50.31
  - hostname lvs-02; address eth2: 192.168.50.32
  - VIPs: 192.168.50.100 and 192.168.50.101
 
- Web servers, running nginx 1.20.1:
  - hostname nginx02; address eth2: 192.168.50.22
  - hostname nginx03; address eth2: 192.168.50.23
 
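A rough text sketch of the traffic path, reconstructed from the addresses above (standing in for the original topology diagram):

 client xuzhichao (eth1 192.168.20.17)
         |
 router (eth1 192.168.20.50 <-> eth2 192.168.50.50, DNAT 80/443 -> VIP)
         |
 VIP 192.168.50.100 -- lvs-01 (192.168.50.31) / lvs-02 (192.168.50.32), keepalived VRRP
         |
 RS: nginx02 (192.168.50.22) and nginx03 (192.168.50.23), VIP also bound on lo:0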

1.2 Router configuration
- IP addresses and routes on the router:
 [root@router ~]# ip add
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:4f:a9:ca brd ff:ff:ff:ff:ff:ff
 inet 192.168.20.50/24 brd 192.168.20.255 scope global noprefixroute eth1
 valid_lft forever preferred_lft forever
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:4f:a9:d4 brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
 #No extra static routes need to be configured in this scenario
 [root@router ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 192.168.20.0 0.0.0.0 255.255.255.0 U 101 0 0 eth1
 192.168.50.0 0.0.0.0 255.255.255.0 U 104 0 0 eth2
 
- Enable ip_forward on the router:
 [root@router ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
 [root@router ~]# sysctl -p
 net.ipv4.ip_forward = 1
 
- Map ports 80 and 443 of the LVS VIP to ports 80 and 443 on the router's external address; a plain address mapping can be used instead:
 #Port mapping:
 [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 80 -j DNAT --to 192.168.50.100:80
 [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 443 -j DNAT --to 192.168.50.100:443
 
 #Address mapping:
 [root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -j DNAT --to 192.168.50.100
 
 #Source NAT, so internal hosts can reach the Internet
 [root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -j SNAT --to 192.168.20.50
 
 #Check the NAT configuration:
 [root@router ~]# iptables -t nat -vnL
 Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 0 0 DNAT tcp -- * * 0.0.0.0/0 192.168.20.50 tcp dpt:80 to:192.168.50.100:80
 0 0 DNAT tcp -- * * 0.0.0.0/0 192.168.20.50 tcp dpt:443 to:192.168.50.100:443
 
 Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 
 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 
 Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 0 0 SNAT all -- * * 192.168.50.0/24 0.0.0.0/0 to:192.168.20.50
 
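The NAT rules above only live in the running kernel. On CentOS 7 they are commonly persisted with the iptables-services package; a sketch of that optional step (not part of the original procedure):

 [root@router ~]# yum install iptables-services -y
 [root@router ~]# iptables-save > /etc/sysconfig/iptables
 [root@router ~]# systemctl enable iptables.service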
1.3 Web server (nginx) configuration
- Network configuration on the nginx02 host:
 #1. Configure the VIP address on the lo interface:
 [root@nginx02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
 DEVICE=lo:0
 BOOTPROTO=none
 IPADDR=192.168.50.100
 NETMASK=255.255.255.255 <== Note: the mask here must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; the mask also must not be so short that the VIP and the CIP compute to the same subnet. A 32-bit mask is recommended.
 ONBOOT=yes
 NAME=loopback
 
 #2. Restart the interface to apply the change:
 [root@nginx02 ~]# ifdown lo:0 && ifup lo:0
 [root@nginx02 ~]# ifconfig lo:0
 lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 192.168.50.100 netmask 255.255.255.255
 loop txqueuelen 1000 (Local Loopback)
 
 #3. The eth2 interface address:
 [root@nginx02 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:d9:f9:7d brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.22/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
 #4. Routing: the default gateway points to the router, 192.168.50.50
 [root@nginx02 ~]# ip route add default via 192.168.50.50 dev eth2 <== The default route must specify both the next-hop address and the egress interface; otherwise traffic may leave via lo:0 and connectivity breaks.
 
 [root@nginx02 ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
 192.168.50.0 0.0.0.0 255.255.255.0 U 103 0 0 eth2
 
- Configure ARP kernel parameters so the host neither announces the local VIP nor answers ARP requests for it from other nodes (a combined setup sketch follows this block):
 [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
 [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
 [root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
 [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
 [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
 [root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
 
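The lo:0 address and the four ARP parameters above are exactly what a typical DR-mode real-server init script automates. A minimal sketch (hypothetical script name lvs_dr_rs.sh, VIP taken from this lab) that could be run on both nginx nodes:

 #!/bin/bash
 #Hypothetical helper for DR-mode real servers: bind the VIP to lo and suppress ARP for it
 VIP=192.168.50.100
 
 case "$1" in
 start)
     ip addr add ${VIP}/32 dev lo label lo:0
     echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
     echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
     echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
     ;;
 stop)
     ip addr del ${VIP}/32 dev lo label lo:0
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
     ;;
 *)
     echo "Usage: $0 {start|stop}"
     exit 1
     ;;
 esac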
- Network configuration on the nginx03 host:
 #1. Configure the VIP address on the lo interface:
 [root@nginx03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
 DEVICE=lo:0
 BOOTPROTO=none
 IPADDR=192.168.50.100
 NETMASK=255.255.255.255 <== Note: the mask here must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; the mask also must not be so short that the VIP and the CIP compute to the same subnet. A 32-bit mask is recommended.
 ONBOOT=yes
 NAME=loopback
 
 #2. Restart the interface to apply the change:
 [root@nginx03 ~]# ifdown lo:0 && ifup lo:0
 [root@nginx03 ~]# ifconfig lo:0
 lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 192.168.50.100 netmask 255.255.255.255
 loop txqueuelen 1000 (Local Loopback)
 
 #3. The eth2 interface address:
 [root@nginx03 ~]# ip add show eth2
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:0a:bf:63 brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.23/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
 #4. Routing: the default gateway points to the router, 192.168.50.50
 [root@nginx03 ~]# ip route add default via 192.168.50.50 dev eth2 <== The default route must specify both the next-hop address and the egress interface; otherwise traffic may leave via lo:0 and connectivity breaks.
 
 [root@nginx03 ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
 192.168.50.0 0.0.0.0 255.255.255.0 U 103 0 0 eth2
 
- Configure ARP kernel parameters so the host neither announces the local VIP nor answers ARP requests for it from other nodes (same settings as on nginx02):
 [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
 [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
 [root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
 [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
 [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
 [root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
 
- The nginx configuration file is identical on both web servers (a sketch for generating the self-signed certificate it references follows this block):
 [root@nginx03 ~]# cat /etc/nginx/conf.d/xuzhichao.conf
 server {
 listen 80 default_server;
 listen 443 ssl;
 server_name www.xuzhichao.com;
 access_log /var/log/nginx/access_xuzhichao.log access_json;
 charset utf-8,gbk;
 
 #SSL configuration
 ssl_certificate_key /apps/nginx/certs/www.xuzhichao.com.key;
 ssl_certificate /apps/nginx/certs/www.xuzhichao.com.crt;
 ssl_session_cache shared:ssl_cache:20m;
 ssl_session_timeout 10m;
 ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
 keepalive_timeout 65;
 
 #Hotlink protection
 valid_referers none blocked server_names *.b.com b.* ~\.baidu\. ~\.google\.;
 
 if ( $invalid_referer ) {
 return 403;
 }
 
 client_max_body_size 10m;
 
 #Browser favicon
 location = /favicon.ico {
 root /data/nginx/xuzhichao;
 }
 
 location / {
 root /data/nginx/xuzhichao;
 index index.html index.php;
 
 #Automatically redirect http to https
 if ($scheme = http) {
 rewrite ^/(.*)$ https://www.xuzhichao.com/$1;
 }
 }
 }
 
 #Check the configuration and reload the nginx service:
 [root@nginx03 ~]# nginx -t
 nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
 nginx: configuration file /etc/nginx/nginx.conf test is successful
 [root@nginx03 ~]# systemctl reload nginx.service
 
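The server block above references a certificate and key under /apps/nginx/certs/ that the article does not show being created. To reproduce the lab, a self-signed certificate could be generated roughly like this (a sketch; only the file names are taken from the configuration above):

 [root@nginx02 ~]# mkdir -p /apps/nginx/certs
 [root@nginx02 ~]# openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
     -keyout /apps/nginx/certs/www.xuzhichao.com.key \
     -out /apps/nginx/certs/www.xuzhichao.com.crt \
     -subj "/CN=www.xuzhichao.com"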
- Home page file on nginx02:
 [root@nginx02 certs]# cat /data/nginx/xuzhichao/index.html
 node1.xuzhichao.com page
 
- Home page file on nginx03:
 [root@nginx03 ~]# cat /data/nginx/xuzhichao/index.html
 node2.xuzhichao.com page
 
- Test access:
 [root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.23
 node2.xuzhichao.com page
 [root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.22
 node1.xuzhichao.com page
 
1.4 LVS + Keepalived configuration
1.4.1 Keepalived syntax for back-end server health checks
Virtual server:
Configuration block:
	virtual_server IP port |
	virtual_server fwmark int
	{
		...
		real_server {
			...
		}
		...
	}
Common parameters:
    delay_loop <INT>: interval between health-check polls;
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling algorithm;
    lb_kind NAT|DR|TUN: cluster (forwarding) type;
    persistence_timeout <INT>: persistent-connection timeout;
    protocol TCP: service protocol;
    sorry_server <IPADDR> <PORT>: backup (sorry) server address;
    real_server <IPADDR> <PORT>
    {
        weight <INT>   weight of this RS
        notify_up <STRING>|<QUOTED-STRING>  script to run when the RS comes online
        notify_down <STRING>|<QUOTED-STRING>  script to run when the RS goes down or fails
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }: health-check method for this RS;
    }
HTTP_GET|SSL_GET: application-layer checks
HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>: URL to monitor;
        status_code <INT>: response code that counts as healthy;
        digest <STRING>: checksum of the response body that counts as healthy;
    }
    nb_get_retry <INT>: number of retries;
    delay_before_retry <INT>: delay before each retry;
    connect_ip <IP ADDRESS>: RS IP address to probe; defaults to the real_server address
    connect_port <PORT>: RS port to probe; defaults to the real_server port
    bindto <IP ADDRESS>: source address used for the probe; defaults to the egress interface address
    bind_port <PORT>: source port used for the probe;
    connect_timeout <INTEGER>: connection timeout;
}
Transport-layer checks:
TCP_CHECK {
    connect_ip <IP ADDRESS>: RS IP address to probe
    connect_port <PORT>: RS port to probe
    bindto <IP ADDRESS>: source address used for the probe;
    bind_port <PORT>: source port used for the probe;
    connect_timeout <INTEGER>: connection timeout;
}
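If a check should use digest instead of status_code, the keepalived package ships a genhash utility that computes the expected hash of a page. A minimal sketch against one of the lab RS nodes (the -S flag requests SSL; this step is not part of the original article):

 #Compute the digest of /index.html on 192.168.50.22 over SSL; paste the printed MD5SUM into the digest field
 [root@lvs-01 ~]# genhash -s 192.168.50.22 -p 443 -S -u /index.html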
1.4.2 Keepalived configuration example
- Install the keepalived package:
 [root@lvs-01 ~]# yum install keepalived -y
 
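Keepalived programs IPVS through the kernel itself, so ipvsadm is only needed as a viewing tool for the `ipvsadm -Ln` checks below; if it is not already present, it can presumably be installed from the base repository (an assumption about this lab, not an original step):

 [root@lvs-01 ~]# yum install ipvsadm -y
 [root@lvs-02 ~]# yum install ipvsadm -y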
- Keepalived configuration on the lvs-01 node:
 #1. The keepalived configuration file:
 [root@lvs-01 ~]# cat /etc/keepalived/keepalived.conf
 ! Configuration File for keepalived
 
 global_defs {
 notification_email {
 root@localhost
 }
 notification_email_from keepalived@localhost
 smtp_server 127.0.0.1
 smtp_connect_timeout 30
 router_id LVS01
 script_user root
 enable_script_security
 }
 
 vrrp_instance VI_1 {
 state MASTER
 interface eth2
 virtual_router_id 51
 priority 120
 advert_int 3
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.50.100/32 dev eth2
 }
 
 track_interface {
 eth2
 }
 
 notify_master "/etc/keepalived/notify.sh master"
 notify_backup "/etc/keepalived/notify.sh backup"
 notify_fault "/etc/keepalived/notify.sh fault"
 }
 
 virtual_server 192.168.50.100 443 {
 delay_loop 6
 lb_algo rr
 lb_kind DR
 protocol TCP
 
 sorry_server 192.168.20.24 443
 
 real_server 192.168.50.22 443 {
 weight 1
 SSL_GET {
 url {
 path /index.html
 status_code 200
 }
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 
 real_server 192.168.50.23 443 {
 weight 1
 SSL_GET {
 url {
 path /index.html
 status_code 200
 }
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 }
 
 virtual_server 192.168.50.100 80 {
 delay_loop 6
 lb_algo rr
 lb_kind DR
 protocol TCP
 
 real_server 192.168.50.22 80 {
 weight 1
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 
 real_server 192.168.50.23 80 {
 weight 1
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 }
 
 #2. The keepalived notify.sh script
 [root@lvs-01 keepalived]# cat notify.sh
 #!/bin/bash
 
 contact='root@localhost'
 notify() {
 local mailsubject="$(hostname) to be $1, vip floating"
 local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
 echo "$mailbody" | mail -s "$mailsubject" $contact
 }
 
 case $1 in
 master)
 notify master
 ;;
 backup)
 notify backup
 ;;
 fault)
 notify fault
 ;;
 *)
 echo "Usage: $(basename $0) {master|backup|fault}"
 exit 1
 ;;
 esac
 
 #Make the script executable
 [root@lvs-01 keepalived]# chmod +x notify.sh
 
 #3. Add a default route pointing to the router gateway
 [root@lvs-01 ~]# ip route add default via 192.168.50.50 dev eth2
 
 [root@lvs-01 ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
 192.168.50.0 0.0.0.0 255.255.255.0 U 102 0 0 eth2
 
 #4. Start the keepalived service:
 [root@lvs-01 ~]# systemctl start keepalived.service
 
 #5. Check the automatically generated ipvs rules:
 [root@lvs-01 ~]# ipvsadm -Ln
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
 TCP 192.168.50.100:80 rr
 -> 192.168.50.22:80 Route 1 0 0
 -> 192.168.50.23:80 Route 1 0 0
 TCP 192.168.50.100:443 rr
 -> 192.168.50.22:443 Route 1 0 0
 -> 192.168.50.23:443 Route 1 0 0
 
 #6. Check which host currently holds the VIP:
 [root@lvs-01 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 inet 192.168.50.100/32 scope global eth2
 valid_lft forever preferred_lft forever
 
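One quick way to confirm that VRRP advertisements are actually flowing on eth2 before bringing up the second node (an optional check, not part of the original steps; tcpdump is assumed to be installed):

 [root@lvs-01 ~]# tcpdump -i eth2 -nn vrrp
 #Expect one advertisement roughly every 3 seconds (advert_int 3) from the current MASTER, virtual router id 51, priority 120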
- Keepalived configuration on the lvs-02 node:
 #1. The keepalived configuration file:
 [root@lvs-02 ~]# cat /etc/keepalived/keepalived.conf
 ! Configuration File for keepalived
 
 global_defs {
 notification_email {
 root@localhost
 }
 notification_email_from keepalived@localhost
 smtp_server 127.0.0.1
 smtp_connect_timeout 30
 router_id LVS02
 script_user root
 enable_script_security
 }
 
 vrrp_instance VI_1 {
 state BACKUP
 interface eth2
 virtual_router_id 51
 priority 100
 advert_int 3
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.50.100/32 dev eth2
 }
 
 track_interface {
 eth2
 }
 
 notify_master "/etc/keepalived/notify.sh master"
 notify_backup "/etc/keepalived/notify.sh backup"
 notify_fault "/etc/keepalived/notify.sh fault"
 }
 
 virtual_server 192.168.50.100 443 {
 delay_loop 6
 lb_algo rr
 lb_kind DR
 protocol TCP
 
 sorry_server 192.168.20.24 443
 
 real_server 192.168.50.22 443 {
 weight 1
 SSL_GET {
 url {
 path /index.html
 status_code 200
 }
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 
 real_server 192.168.50.23 443 {
 weight 1
 SSL_GET {
 url {
 path /index.html
 status_code 200
 }
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 }
 
 virtual_server 192.168.50.100 80 {
 delay_loop 6
 lb_algo rr
 lb_kind DR
 protocol TCP
 
 real_server 192.168.50.22 80 {
 weight 1
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 
 real_server 192.168.50.23 80 {
 weight 1
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
 }
 
 #2. The keepalived notify.sh script
 [root@lvs-02 keepalived]# cat notify.sh
 #!/bin/bash
 
 contact='root@localhost'
 notify() {
 local mailsubject="$(hostname) to be $1, vip floating"
 local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
 echo "$mailbody" | mail -s "$mailsubject" $contact
 }
 
 case $1 in
 master)
 notify master
 ;;
 backup)
 notify backup
 ;;
 fault)
 notify fault
 ;;
 *)
 echo "Usage: $(basename $0) {master|backup|fault}"
 exit 1
 ;;
 esac
 
 #Make the script executable
 [root@lvs-02 keepalived]# chmod +x notify.sh
 
 #3. Add a default route pointing to the router gateway
 [root@lvs-02 ~]# ip route add default via 192.168.50.50 dev eth2
 
 [root@lvs-02 ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
 192.168.50.0 0.0.0.0 255.255.255.0 U 102 0 0 eth2
 
 #4. Start the keepalived service:
 [root@lvs-02 ~]# systemctl start keepalived.service
 
 #5. Check the automatically generated ipvs rules:
 [root@lvs-02 ~]# ipvsadm -Ln
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
 TCP 192.168.50.100:80 rr
 -> 192.168.50.22:80 Route 1 0 0
 -> 192.168.50.23:80 Route 1 0 0
 TCP 192.168.50.100:443 rr
 -> 192.168.50.22:443 Route 1 0 0
 -> 192.168.50.23:443 Route 1 0 0
 
 #6. Check the VIP; it is not on this host:
 [root@lvs-02 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
- Test from the client
 - Client network configuration:
 [root@xuzhichao ~]# ip add
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:2f:d0:da brd ff:ff:ff:ff:ff:ff
 inet 192.168.20.17/24 brd 192.168.20.255 scope global noprefixroute eth1
 valid_lft forever preferred_lft forever
 
 [root@xuzhichao ~]# route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref Use Iface
 192.168.20.0 0.0.0.0 255.255.255.0 U 101 0 0 eth1
 
 - Test access:
 #1. Test over http; requests are redirected to https
 [root@xuzhichao ~]# for i in {1..10} ;do curl -k -L -Hhost:www.xuzhichao.com http://192.168.20.50; done
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 
 #2. Test direct access over https
 [root@xuzhichao ~]# for i in {1..10} ;do curl -k -Hhost:www.xuzhichao.com https://192.168.20.50; done
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 
 
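The failure tests below tail /var/log/keepalived.log. Keepalived logs to syslog by default, so a dedicated log file needs a little extra setup; a sketch for CentOS 7 with rsyslog (an assumption about how the lab was prepared, not shown in the original steps):

 #1. Make keepalived log in detail to the local0 facility:
 [root@lvs-01 ~]# vim /etc/sysconfig/keepalived
 KEEPALIVED_OPTIONS="-D -S 0"
 
 #2. Send local0 messages to a dedicated file and restart the services:
 [root@lvs-01 ~]# echo "local0.* /var/log/keepalived.log" >> /etc/rsyslog.conf
 [root@lvs-01 ~]# systemctl restart rsyslog.service keepalived.service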
1.5 RS failure scenario test
- Stop the nginx service on the nginx02 node:
 [root@nginx02 ~]# systemctl stop nginx.service
 
- Watch the logs and the ipvs rule changes on the LVS nodes:
 #1. The log shows the health checks of the back-end host failing and the RS being removed from the cluster
 [root@lvs-01 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:00:57 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
 Jul 13 20:00:59 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
 Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
 Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:80 failed after 1 retry.
 Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
 Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
 Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
 Jul 13 20:01:02 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
 Jul 13 20:01:05 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
 Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
 Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:443 failed after 3 retry.
 Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:443 from VS [192.168.50.100]:443
 Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
 Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
 
 #2. Check the ipvs rules; host 192.168.50.22 has been removed from the cluster:
 [root@lvs-01 ~]# ipvsadm -Ln
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
 TCP 192.168.50.100:80 rr
 -> 192.168.50.23:80 Route 1 0 0
 TCP 192.168.50.100:443 rr
 -> 192.168.50.23:443 Route 1 0 0
 
- Client test; all requests are now dispatched to the nginx03 node:
 [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 node2.xuzhichao.com page
 
- Recover the nginx02 node and watch the logs and ipvs rules on the two LVS nodes:
 #1. Start the nginx service on the nginx02 node:
 [root@nginx02 ~]# systemctl start nginx.service
 
 #2. The keepalived log on lvs-01 shows the nginx02 checks succeeding and the back-end host being added again:
 [root@lvs-01 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: HTTP status code success to [192.168.50.22]:443 url(1).
 Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote Web server [192.168.50.22]:443 succeed on service.
 Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:443 to VS [192.168.50.100]:443
 Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
 Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
 Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 success.
 Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:80 to VS [192.168.50.100]:80
 Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
 Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
 
 #3. Check the ipvs rules:
 [root@lvs-01 ~]# ipvsadm -Ln
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
 TCP 192.168.50.100:80 rr
 -> 192.168.50.22:80 Route 1 0 0
 -> 192.168.50.23:80 Route 1 0 0
 TCP 192.168.50.100:443 rr
 -> 192.168.50.22:443 Route 1 0 0
 -> 192.168.50.23:443 Route 1 0 0
 
- A client test now shows both nginx nodes serving requests normally again:
 [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 
1.6 LVS node failure scenario test
- Stop the keepalived service on lvs-01 to simulate a failure of that node, and observe the load-balancing cluster:
 #1. Stop the keepalived service on lvs-01:
 [root@lvs-01 ~]# systemctl stop keepalived.service
 
 #2. Check the keepalived logs:
 [root@lvs-01 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:11:08 lvs-01 Keepalived[13465]: Stopping
 Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) sent 0 priority
 Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) removing protocol VIPs.
 Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
 Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.23]:80 from VS [192.168.50.100]:80
 Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Stopped
 Jul 13 20:11:09 lvs-01 Keepalived_vrrp[13467]: Stopped
 Jul 13 20:11:09 lvs-01 Keepalived[13465]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
 
 [root@lvs-02 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:11:09 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Transition to MASTER STATE
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering MASTER STATE
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) setting protocol VIPs.
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
 Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
 
 #3. Check the VIP; it has moved to the lvs-02 node:
 [root@lvs-02 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 inet 192.168.50.100/32 scope global eth2
 valid_lft forever preferred_lft forever
 
 [root@lvs-01 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
 #4. Client access still works:
 [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 
- Recover the lvs-01 node and observe the cluster:
 #1. Start the keepalived service on lvs-01:
 [root@lvs-01 ~]# systemctl start keepalived.service
 
 #2. Check the keepalived logs:
 [root@lvs-01 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:15:36 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Transition to MASTER STATE
 Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Entering MASTER STATE
 Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) setting protocol VIPs.
 Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
 Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
 Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
 
 [root@lvs-02 ~]# tail -f /var/log/keepalived.log
 Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 100
 Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering BACKUP STATE
 Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) removing protocol VIPs.
 Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: Opening script file /etc/keepalived/notify.sh
 
 #3. Check the VIP; it has moved back to the lvs-01 node:
 [root@lvs-01 ~]# ip add
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 inet 192.168.50.100/32 scope global eth2
 valid_lft forever preferred_lft forever
 
 [root@lvs-02 ~]# ip add
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:e4:cf:0d brd ff:ff:ff:ff:ff:ff
 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
 inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
 valid_lft forever preferred_lft forever
 
 #4. Client access works normally:
 [root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 node2.xuzhichao.com page
 node1.xuzhichao.com page
 
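A small follow-up worth noting after the failover tests: in a real deployment keepalived would normally also be enabled at boot on both directors so that a rebooted node rejoins the VRRP group automatically (not covered by the tests above):

 [root@lvs-01 ~]# systemctl enable keepalived.service
 [root@lvs-02 ~]# systemctl enable keepalived.service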