CentOS 7 + LVS-DR + keepalived Lab (with sorry-server, logging, and HTTP_GET health checks)
I. Overview
1. For LVS-DR fundamentals, see the companion article on LVS principles.
2. For keepalived fundamentals, see the companion article on keepalived principles.
3. The failover architecture based on LVS-DR + keepalived is shown in the diagram below:

II. Deployment
1. Environment

| Host   | Role             | IP             |
| ------ | ---------------- | -------------- |
| web1   | lvs + keepalived | 192.168.216.51 |
| web2   | lvs + keepalived | 192.168.216.52 |
| web3   | web              | 192.168.216.53 |
| web4   | web              | 192.168.216.54 |
| client | physical host    |                |

Note: on every machine, stop the firewall, disable SELinux, and keep the clocks synchronized.
2. Prepare the web service on the real servers (httpd here)
web3/web4:
yum install httpd -y
web3:
echo "welcome to web3" >/var/www/html/index.html
systemctl start httpd
systemctl enable httpd
web4:
echo "welcome to web4" >/var/www/html/index.html
systemctl start httpd
systemctl enable httpd
Cross-check between the real servers, and from the client's browser as well:
[root@web3 ~]# curl 192.168.216.54
welcome to web4
[root@web4 ~]# curl 192.168.216.54
welcome to web4
[root@web4 ~]#

3. ARP suppression on the real servers (adjusting the kernel's ARP reply and announce levels)
arp_ignore = 1: only answer an ARP request if the target IP is configured on the interface the request arrived on — so the VIP bound to lo is never advertised.
arp_announce = 2: when sending ARP messages, always use the most appropriate local address for the outgoing interface, and never announce addresses belonging to other interfaces (such as the VIP on lo).
Script implementation — run on both web3 and web4:
[root@web3 ~]# cd /arp
[root@web3 arp]# cat arp.sh
#!/bin/bash
# toggle the ARP kernel parameters required on LVS-DR real servers
case $1 in
start)
    echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 >/proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac
[root@web3 arp]# chmod +x arp.sh
[root@web3 arp]# ./arp.sh start
4. Configure the VIP on the real servers
Configure on both web3 and web4.
First, two common questions:
Why bind the VIP to the lo interface?
The RS must accept packets whose destination address is the VIP, so the VIP has to exist locally; binding it on lo lets the RS accept those packets and return the response directly to the client. Binding it on a regular NIC would make the RS answer clients' ARP requests for the VIP, polluting their ARP tables and breaking the load balancing.
Why the 255.255.255.255 netmask?
The VIP on the RS is not used for outside communication; it exists only so the kernel will accept packets addressed to it. A 32-bit mask confines it to a single host route, so no subnet route is created for it.
ifconfig lo:0 192.168.216.200 netmask 255.255.255.255 broadcast 192.168.216.200 up
route add -host 192.168.216.200 dev lo:0
[root@web3 arp]# route -n
Kernel IP routing table
Destination      Gateway         Genmask          Flags  Iface
0.0.0.0          192.168.216.2   0.0.0.0          UG     ens33
192.168.122.0    0.0.0.0         255.255.255.0    U      virbr0
192.168.216.0    0.0.0.0         255.255.255.0    U      ens33
192.168.216.200  0.0.0.0         255.255.255.255  UH     lo
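The /32-mask point can be made concrete with a little mask arithmetic — a quick sketch (pure bash, for illustration only):

```shell
#!/bin/bash
# With a /24 the kernel would create a route covering 256 addresses;
# with a /32 the "network" is exactly one host: the VIP itself.
m24=$(( 0xFFFFFF00 ))
m32=$(( 0xFFFFFFFF ))
echo "addresses covered by /24: $(( (~m24 & 0xFFFFFFFF) + 1 ))"   # 256
echo "addresses covered by /32: $(( (~m32 & 0xFFFFFFFF) + 1 ))"   # 1
```

This is why the host route for 192.168.216.200 above shows the UH flag and stays confined to lo.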
5. Prepare ipvsadm on the directors
web1/web2:
yum install ipvsadm -y
Manual rules for a first smoke test (keepalived will manage these later):
[root@web2 keepalived]# ipvsadm -C
[root@web2 keepalived]# ipvsadm -A -t 192.168.216.200:80 -s rr
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.53 -g -w 1
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.54 -g -w 1
6. Configure the sorry server
web1/web2 — install a web server:
yum install nginx -y
web1:
echo "sorry, under maintenance #####web1" >/usr/share/nginx/html/index.html
web2:
echo "sorry, under maintenance #####web2" >/usr/share/nginx/html/index.html
web1/web2:
systemctl start nginx
systemctl enable nginx
Check from the client that the page is served correctly.

Later we add sorry_server 127.0.0.1 80 inside the virtual_server block of the keepalived configuration.
7. Configure keepalived, with HTTP_GET health checks
web1/web2 — install the package:
yum install keepalived -y
web1 — master configuration:
[root@web1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#     acassen@firewall.loc
#     failover@firewall.loc
#     sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}
#vrrp_script chk_nginx {
#    script "/etc/keepalived/chknginx.sh"
#    interval
#    weight -
#}

#VIP1
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state BACKUP
#    interface ens33
#    virtual_router_id
#    priority
#    advert_int
#    authentication {
#        auth_type PASS
#        auth_pass
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.0.0
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.216.53 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
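The weight 1 / weight 2 pair above means web4 should receive roughly two requests for every one that web3 gets. The real scheduler lives in the kernel's IPVS code and interleaves more cleverly, but a toy sketch of the expand-by-weight idea looks like this:

```shell
#!/bin/bash
# naive weighted round-robin: expand each RS into the pool "weight" times,
# then rotate through the pool
servers=("192.168.216.53 1" "192.168.216.54 2")
pool=()
for s in "${servers[@]}"; do
  set -- $s                                   # $1 = ip, $2 = weight
  for ((i = 0; i < $2; i++)); do pool+=("$1"); done
done
for ((n = 0; n < 6; n++)); do
  echo "request $n -> ${pool[n % ${#pool[@]}]}"
done
# requests land on .53, .54, .54, .53, .54, .54 — a 1:2 split
```

This is only a model of the ratio, not keepalived's or IPVS's actual implementation.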
web2 — backup configuration:
[root@web2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#     acassen@firewall.loc
#     failover@firewall.loc
#     sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL1
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {   # dynamic failover via script; see the CentOS7+nginx+keepalived cluster and dual-master article
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}
vrrp_script chk_nginx {
    script "/etc/keepalived/chknginx.sh"
    interval 1
    weight -20
}

#VIP1
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50    # must match the master
    priority 90             # lower than the master's 100 (original value lost; 90 assumed)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111      # must match the master
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state MASTER
#    interface ens33
#    virtual_router_id
#    priority
#    advert_int
#    authentication {
#        auth_type PASS
#        auth_pass
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {    # VIP definition
    delay_loop 6                       # health-check polling interval
    lb_algo wrr                        # scheduling algorithm
    lb_kind DR                         # forwarding mode
    nat_mask 255.255.0.0
    protocol TCP                       # protocol of the monitored service
    sorry_server 127.0.0.1 80          # sorry server

    real_server 192.168.216.53 80 {    # real server
        weight 1                       # weight
        HTTP_GET {                     # health check: HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK; HTTP_GET is said to be more efficient than TCP_CHECK
            url {
                path /                 # path requested on the RS
                status_code 200        # expected status code
            }
            connect_timeout 3          # connection timeout
            nb_get_retry 3             # number of retries
            delay_before_retry 3       # delay before the next retry
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
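The retry parameters above can be pictured with a small mock. This is only a sketch of the policy the HTTP_GET keywords encode — probe() stands in for the real HTTP request, and the 3-second delay is shortened to zero so the demo runs instantly:

```shell
#!/bin/bash
# sketch of keepalived's HTTP_GET retry policy; keepalived actually issues
# an HTTP GET for "path /" and compares the reply against status_code 200
probe() { return 1; }   # mock health probe: always fails
nb_get_retry=3
delay_before_retry=0    # the lab uses 3; zero keeps the demo instant
state=healthy
for ((try = 0; try <= nb_get_retry; try++)); do
  probe && break        # a single success ends the check
  sleep "$delay_before_retry"
  state=failed
done
echo "check result: $state (after $nb_get_retry retry)"
```

With the probe always failing, the RS is declared failed — which is exactly the "failed after 3 retry" message seen in the logs later.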
Add the keepalived "down" check script:
[root@web1 keepalived]# cat chkdown.sh
#!/bin/bash
# fail (exit 1) when the flag file exists, so keepalived applies weight -20
[[ -f /etc/keepalived/down ]] && exit 1 || exit 0
[root@web1 keepalived]#
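The semantics of this check can be exercised without touching a live director — a small simulation using a temporary flag file in place of /etc/keepalived/down:

```shell
#!/bin/bash
# chkdown.sh semantics: the check passes (exit 0) normally and fails (exit 1)
# once the flag file exists; on failure keepalived subtracts the script's
# weight (20) from the VRRP priority
flag="$(mktemp -d)/down"
check() { [[ -f "$flag" ]] && return 1 || return 0; }

check && echo "no flag: check passes, master keeps its priority"
touch "$flag"
check || echo "flag present: check fails, priority drops by 20"
rm -f "$flag"
```

On the real director, `touch /etc/keepalived/down` therefore demotes the node, and removing the file restores it — which is exactly how failover is driven in the verification section.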
8. Enable logging
vim /etc/sysconfig/keepalived
Change KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0"
[root@web1 keepalived]# cat /etc/sysconfig/keepalived
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#
KEEPALIVED_OPTIONS="-D -d -S 0"
Add the rsyslog rule:
vim /etc/rsyslog.conf
#keepalived -S 0
local0.*    /var/log/keepalived.log
Restart the services:
systemctl restart keepalived
systemctl start rsyslog
systemctl enable rsyslog
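With logging routed to /var/log/keepalived.log, health-check transitions become easy to grep for. A sketch run against two sample lines inlined here (on the director you would point it at the real log file instead):

```shell
#!/bin/bash
# count RS removals/additions in a keepalived health-checker log;
# the sample lines below are taken from this lab's own verification output
log="$(mktemp)"
cat > "$log" <<'EOF'
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80
EOF
echo "removals:  $(grep -c 'Removing service' "$log")"
echo "additions: $(grep -c 'Adding service' "$log")"
```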
III. Verification
1. Verify keepalived failover
On web1:
touch down        # in /etc/keepalived; the VIP should disappear
ip a              # confirm the VIP is gone
rm -rf down
ip a              # the VIP comes back automatically
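The arithmetic behind this failover: chk_maintanance fails while the flag file exists, so its weight -20 is subtracted from the master's priority 100; the backup then holds the higher priority and takes the VIP. A sketch (the backup priority of 90 is an assumption, since the original backup listing lost that value):

```shell
#!/bin/bash
# VRRP election math for this lab: master priority 100, script weight -20,
# backup priority assumed to be 90
master=100; backup=90; weight=-20
effective=$(( master + weight ))
echo "master effective priority: $effective"
if (( effective < backup )); then
  echo "backup wins the VRRP election and takes the VIP"
fi
```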
[root@web1 keepalived]# touch down
[root@web1 keepalived]# ll
-rwxr-xr-x. 1 root root ... chkdown.sh
-rwxr-xr-x. 1 root root ... chkmysql.sh
-rwxr-xr-x. 1 root root ... chknginx.sh
-rw-r--r--. 1 root root 0 Apr 24 17:31 down
-rw-r--r--. 1 root root ... keepalived.conf
-rw-r--r--. 1 root root ... notify.sh
[root@web1 keepalived]# ip a
(output trimmed to the relevant interface)
ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> ... state UP
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
The VIP 192.168.216.200 is gone: the failing check demoted web1 and the backup took over.
[root@web1 keepalived]# rm -rf downn
[root@web1 keepalived]# rm -rf down
[root@web1 keepalived]# ip a
(a moment later, output trimmed)
ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> ... state UP
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.216.200/32 scope global ens33    # the VIP is back
       valid_lft forever preferred_lft forever
[root@web1 keepalived]#
2. Verify the health checks
1) First inspect the ipvs rules and access the VIP:
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]#
This is the normal state: both real servers are in the pool.
2) Stop httpd on web3 to exercise the health check:
systemctl stop httpd
On web1, the ipvs rules have dropped web3, and the log shows "Removing service":
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]# cat /var/log/keepalived.log | tail
... Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200   (repeated)
... Keepalived_healthcheckers[...]: Error connecting server [192.168.216.53]:80.   (repeated)
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
[root@web1 keepalived]#
Restore httpd on web3:
systemctl start httpd
On web1, web3 is back in the pool; the log shows "HTTP status code success" and "Adding service ... to VS":
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]# cat /var/log/keepalived.log | tail
... earlier "Error connecting" / "Check ... failed" / "Removing service" lines for [192.168.216.53]:80 ...
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: HTTP status code success to [192.168.216.53]:80 url(1).
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Remote Web server [192.168.216.53]:80 succeed on service.
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80
[root@web1 keepalived]#
3. Verify the sorry server
web3/web4:
systemctl stop httpd
On web1 (log excerpt; unrecoverable timestamps elided):

[root@web1 keepalived]# cat /var/log/keepalived.log | tail
... Error connecting server [192.168.216.53]:80. / [192.168.216.54]:80.   (repeated)
... Check on service [192.168.216.53]:80 failed after 3 retry.
... Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
... Check on service [192.168.216.54]:80 failed after 3 retry.
... Removing service [192.168.216.54]:80 from VS [192.168.216.200]:80
... Lost quorum ... for VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Adding sorry server [127.0.0.1]:80 to VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Removing alive servers from the pool for VS [192.168.216.200]:80
The log shows "Adding sorry server": with both real servers down, quorum is lost and requests to the VIP are now answered by the local nginx maintenance page.
Please credit the original source when reposting: https://www.cnblogs.com/zhangxingeng/p/10743501.html