Keepalived: Basic Application Explained
Original source: http://blog.csdn.net/moqiang02/article/details/37921051
Basic concepts:
Keepalived: it was originally created to provide high availability for ipvs (a set of kernel-level rules backing certain services). Its primary goal was to invoke ipvsadm on its own to generate rules, and to automatically move the address users access over to another node when needed.
Keepalived: its core consists of two parts, checkers and the VRRP protocol.
checkers: these check the health of the real servers, and can also check the health of a service itself via scripts. This is what implements health detection for the ipvs backends.
VRRP: a fault-tolerance protocol. It guarantees that when a host's next-hop router fails, another router takes over the failed router's work, preserving the continuity and reliability of network communication. Every VRRP node has a priority, generally in the range 0-255 (0 and 255 have special uses); the higher the number, the higher the priority.
Terminology:
Virtual router: consists of one Master router and several Backup routers. Hosts use the virtual router as their default gateway.
VRID: the identifier of a virtual router. A group of routers sharing the same VRID forms one virtual router.
Master router: the router within a virtual router that carries out packet forwarding.
Backup router: a router that can take over the Master router's work when the Master fails.
Virtual IP address: the IP address of the virtual router. One virtual router may own one or more IP addresses.
IP address owner: a router whose interface IP address equals the virtual IP address is called the IP address owner.
Virtual MAC address: each virtual router owns one virtual MAC address, of the form 00-00-5E-00-01-{VRID} (a worked example follows this list). Normally the virtual router answers ARP requests with the virtual MAC address; only when specially configured does it answer with the interface's real MAC address.
Priority: VRRP uses priority to decide the role of each router within a virtual router.
Non-preemptive mode: if the Backup routers run in non-preemptive mode, then as long as the Master has not failed, a Backup router will not become Master even if it is later configured with a higher priority.
Preemptive mode: if a Backup router runs in preemptive mode, it compares its own priority with the priority carried in each received VRRP advertisement. If its own priority is higher than the current Master's, it preempts and becomes the Master; otherwise it stays in the Backup state.
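As a worked example of the virtual MAC format above: the sample configuration later in this post uses virtual_router_id 51, and 51 decimal is 0x33, so that virtual router's MAC address would be 00-00-5E-00-01-33.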
Lab environment:
Master: 172.16.18.7
Backup: 172.16.18.9
OS: CentOS x86_64
keepalived version: 1.2.7
Main configuration file: /etc/keepalived/keepalived.conf
Service script: /etc/rc.d/init.d/keepalived
Hands-on practice:
First, synchronize the time on the two nodes:
############## Set up mutual SSH trust ##############
####### node1 #######
ssh-keygen -t rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub root@172.16.18.9
####### node2 #######
ssh-keygen -t rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub root@172.16.18.7
############## Compare the clocks ###########
[root@node1 ~]# date; ssh node2 'date'
##### To keep them synchronized, use a cron job #####
crontab -e
*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null
Installation:
[root@node1 ~]# yum -y install keepalived
Open the configuration file:
vim /etc/keepalived/keepalived.conf
Parsing the configuration file:
The file consists of three sections:
(1) GLOBAL CONFIGURATION - the global section, with two parts:
    Global definitions          # global settings
    Static routes               # static routes
(2) VRRPD CONFIGURATION - the VRRP protocol section, which defines the virtual routers, with two parts:
    VRRP synchronization group(s)   # VRRP sync groups (usually not needed)
    What is a sync group? When one machine carries two VIPs that must fail over together, the two have to be defined as a sync group so they are moved as one resource.
    VRRP instance(s)                # VRRP instances
    Once a virtual router is defined, a keepalived instance must be defined on each node, and the instances on the two nodes must match. The most confusing part of keepalived is that the initial instance definitions differ between the two nodes: each node has an initial state and a default priority, the higher one becoming Master and the lower one Backup, so the virtual-router instance configuration cannot be identical on both nodes.
(3) LVS CONFIGURATION - the LVS section, with two parts:
    Virtual server group(s)         # virtual server groups
    Virtual server(s)               # virtual servers (ipvs rules)
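Put together, the overall shape of the file is roughly the following sketch (section bodies elided; the virtual_server address and port here are placeholders, not taken from the original post):
    global_defs { ... }                         # GLOBAL CONFIGURATION
    vrrp_sync_group VG_1 { ... }                # optional VRRP sync group
    vrrp_instance VI_1 { ... }                  # one block per virtual-router instance
    virtual_server 172.16.18.100 80 { ... }     # LVS CONFIGURATION (ipvs rules)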
Annotated in detail:
keepalived.conf
global_defs {    # global section; no extra static routes are added here since they are optional, unless special static routes need to be generated on this particular host
    notification_email {    # recipients
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc    # sender (can be faked freely)
    smtp_server 192.168.200.1    # mail server used to send the mail (must not be an external address)
    smtp_connect_timeout 30      # connection timeout
    router_id LVS_DEVEL          # router identifier (can be changed at will)
}
vrrp_instance VI_1 {    # defines a virtual router (VI_1 is the instance name)
    state MASTER        # initial state, MASTER|BACKUP; this is only the instance's starting state - as soon as both servers are up an election takes place and the higher priority becomes MASTER, so MASTER here does not mean this server is always the MASTER
    interface eth0      # interface used to send advertisements and hold the election
    virtual_router_id 51    # virtual router ID (must not exceed 255)
    priority 100        # priority
    advert_int 1        # advertisement interval, in seconds
    authentication {    # authentication
        auth_type PASS  # authentication mechanism
        auth_pass 1111  # password (prefer something random)
    }
    virtual_ipaddress { # virtual (VIP) addresses
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}
Edit the configuration:
[root@node1 ~]# vim /etc/keepalived/keepalived.conf    # master node
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keadmin@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.18.100
    }
}
[root@node1 ~]# scp /etc/keepalived/keepalived.conf 172.16.18.9:/etc/keepalived/    # copy to the backup node
[root@node2 keepalived]# vim keepalived.conf    # backup node
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keadmin@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                # the state
    interface eth0
    virtual_router_id 55        # must match the master node
    priority 90                 # the priority must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.18.100
    }
}
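Since the two files should now differ only in the state and priority lines, one quick sanity check (a sketch that assumes the SSH trust set up earlier) is to diff them across the nodes; bash process substitution makes this a one-liner:
    [root@node1 ~]# diff /etc/keepalived/keepalived.conf <(ssh 172.16.18.9 'cat /etc/keepalived/keepalived.conf')
The only differences reported should be state MASTER vs. state BACKUP and priority 100 vs. priority 90.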
Start the master node:
[root@node1 ~]# service keepalived start
Start the backup node:
[root@node2 ~]# service keepalived start
Check the status:
############### node1 ############
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:a6:49 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.7/16 brd 172.16.255.255 scope global eth0
    inet 172.16.18.100/32 scope global eth0        # the VIP currently sits on node1
    inet6 fe80::20c:29ff:fe06:a649/64 scope link
       valid_lft forever preferred_lft forever
############### node2 ############
[root@node2 keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:12:c8:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.9/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe12:c8b5/64 scope link
       valid_lft forever preferred_lft forever
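From a third host on the same 172.16.0.0/16 segment you can confirm which MAC the VIP resolves to. A small sketch (the client host is hypothetical; iputils arping):
    [root@client ~]# arping -I eth0 -c 2 172.16.18.100    # should currently answer with node1's MAC, 00:0c:29:06:a6:49
Keepalived answers ARP for the VIP with the interface's real MAC by default, so after a failover the same command should report node2's MAC instead.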
Check the startup log messages:
############## master node #############
[root@node1 ~]# tail -20 /var/log/messages
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Netlink reflector reports IP 172.16.18.7 added
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Netlink reflector reports IP fe80::20c:29ff:fe06:a649 added
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Registering Kernel netlink reflector
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Registering Kernel netlink command channel
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Opening file '/etc/keepalived/keepalived.conf'.
Sep 25 17:32:17 node1 Keepalived_healthcheckers[16628]: Configuration is using : 6832 Bytes
Sep 25 17:32:18 node1 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Sep 25 17:32:18 node1 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Sep 25 17:32:18 node1 kernel: IPVS: ipvs loaded.
Sep 25 17:32:18 node1 Keepalived_healthcheckers[16628]: Using LinkWatch kernel netlink reflector...
Sep 25 17:32:18 node1 Keepalived_vrrp[16629]: Opening file '/etc/keepalived/keepalived.conf'.
Sep 25 17:32:18 node1 Keepalived_vrrp[16629]: Configuration is using : 62657 Bytes
Sep 25 17:32:18 node1 Keepalived_vrrp[16629]: Using LinkWatch kernel netlink reflector...
Sep 25 17:32:18 node1 Keepalived_vrrp[16629]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
Sep 25 17:32:19 node1 Keepalived_vrrp[16629]: VRRP_Instance(VI_1) Transition to MASTER STATE    # the instance starts transitioning to the Master state
Sep 25 17:32:20 node1 Keepalived_vrrp[16629]: VRRP_Instance(VI_1) Entering MASTER STATE         # entered the Master state
Sep 25 17:32:20 node1 Keepalived_vrrp[16629]: VRRP_Instance(VI_1) setting protocol VIPs.
Sep 25 17:32:20 node1 Keepalived_vrrp[16629]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.18.100
Sep 25 17:32:20 node1 Keepalived_healthcheckers[16628]: Netlink reflector reports IP 172.16.18.100 added    # the address 172.16.18.100 was added
Sep 25 17:32:25 node1 Keepalived_vrrp[16629]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.18.100
############### backup node ##########
[root@node2 keepalived]# tail -20 /var/log/messages
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Interface queue is empty
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Interface queue is empty
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Netlink reflector reports IP 172.16.18.9 added
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Netlink reflector reports IP 172.16.18.9 added
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Netlink reflector reports IP fe80::20c:29ff:fe12:c8b5 added
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Netlink reflector reports IP fe80::20c:29ff:fe12:c8b5 added
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Registering Kernel netlink reflector
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Registering Kernel netlink reflector
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Registering Kernel netlink command channel
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Registering Kernel netlink command channel
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Registering gratuitous ARP shared channel
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Opening file '/etc/keepalived/keepalived.conf'.
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Configuration is using : 6985 Bytes
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Opening file '/etc/keepalived/keepalived.conf'.
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Configuration is using : 62678 Bytes
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: Using LinkWatch kernel netlink reflector...
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: VRRP_Instance(VI_1) Entering BACKUP STATE    # entered the BACKUP state
Sep 25 17:32:23 node2 Keepalived_vrrp[20358]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Sep 25 17:32:23 node2 Keepalived_healthcheckers[20357]: Using LinkWatch kernel netlink reflector...
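You can also watch the election traffic itself. VRRP advertisements are IP protocol 112 (visible in the sockpool lines above) sent to the multicast group 224.0.0.18, so a capture such as the following sketch should show the master advertising vrid 55 at priority 100 once per second:
    [root@node2 ~]# tcpdump -i eth0 -nn host 224.0.0.18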
Test: if node1 is shut down, will node2 take over the address?
############## master node ############
[root@node1 ~]# service keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:a6:49 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.7/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe06:a649/64 scope link
       valid_lft forever preferred_lft forever
############### backup node ##########
[root@node2 keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:12:c8:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.9/16 brd 172.16.255.255 scope global eth0
    inet 172.16.18.100/32 scope global eth0        # the VIP has moved to node2
    inet6 fe80::20c:29ff:fe12:c8b5/64 scope link
       valid_lft forever preferred_lft forever
Test result: it works, and if node1 comes back online it immediately takes the VIP back.
How can keepalived call an external script, or react to a manually run command, to trigger a VIP transfer?
Approach: define a check with vrrp_script, then track that script inside the instance with track_script.
For example, the following check:
vrrp_script chk_mantaince_down {    # chk_mantaince_down names the script; choose freely
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"    # the command (this can also be the path of a prepared script, or any test command); here it means: if the file 'down' exists in this directory, this node is expected to become the backup
    interval 1    # run once every second
    weight -2     # whenever the command fails, lower the priority by 2
}
(1) Apply this check to our example and test the whole process:
############# master node ###########
[root@node1 keepalived]# vim keepalived.conf
    router_id LVS_DEVEL
}
vrrp_script chk_main {    # the check script
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 157    # if several people operate on the same network, change this value after every configuration edit so the gratuitous ARPs converge quickly
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.18.100
    }
    track_script {    # track the script
        chk_main
    }
}
############## backup node ############
[root@node2 keepalived]# vim keepalived.conf
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_main {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 157
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.18.100
    }
    track_script {
        chk_main
    }
}
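The numbers are chosen so that a single failed check flips the roles: while the down file exists the script exits 1, the master's effective priority becomes 100 - 2 = 98, which is below the backup's 99, and node2 wins the next election. Removing the file restores 100 > 99 and, in the default preemptive mode, moves the VIP back.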
(2) Test:
####### before the file is created: node1 ########
[root@node1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:a6:49 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.7/16 brd 172.16.255.255 scope global eth0
    inet 172.16.18.100/32 scope global eth0
    inet6 fe80::20c:29ff:fe06:a649/64 scope link
       valid_lft forever preferred_lft forever
####### before the file is created: node2 ########
[root@node2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:12:c8:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.9/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe12:c8b5/64 scope link
       valid_lft forever preferred_lft forever
############ create the file ########
[root@node1 keepalived]# touch down
########### check the status #########
[root@node1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:a6:49 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.7/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe06:a649/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:12:c8:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.9/16 brd 172.16.255.255 scope global eth0
    inet 172.16.18.100/32 scope global eth0
    inet6 fe80::20c:29ff:fe12:c8b5/64 scope link
       valid_lft forever preferred_lft forever
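To undo the drain, remove the flag file; since both nodes run in the default preemptive mode, node1's effective priority climbs back above node2's and node1 should reclaim the VIP within a few advertisement intervals. A sketch of the expected sequence:
    [root@node1 keepalived]# rm -f /etc/keepalived/down                # the check starts succeeding again
    [root@node1 keepalived]# ip addr show eth0 | grep 172.16.18.100    # the VIP should reappear here shortly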
How do we get notified on state transitions?
(1) keepalived provides built-in configuration directives for this (see man keepalived.conf for details); they are generally used inside vrrp_instance or vrrp_sync_group:
First form:
# to MASTER transition
notify_master /path/to_master.sh       # run this script as notification when transitioning to the master state
# to BACKUP transition
notify_backup /path/to_backup.sh       # run this script as notification when transitioning to the backup state
# FAULT transition
notify_fault "/path/fault.sh VG_1"     # run this script on entering the fault state; if the script takes arguments (i.e. the string contains spaces) it must be quoted
Second form: a single notify directive
# $1 = "GROUP"|"INSTANCE"              # argument 1: whether a sync group or an instance is transitioning
# $2 = name of group or instance       # the name of that group or instance
# $3 = target state of transition      # which state is being transitioned to
#      ("MASTER"|"BACKUP"|"FAULT")
notify /path/notify.sh                 # path of the script (write it yourself)
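In other words, with the single notify form keepalived itself appends those three positional arguments, so a hypothetical invocation for our instance would look like:
    /etc/keepalived/notify.sh INSTANCE VI_1 MASTER
A script referenced this way must therefore dispatch on $3. The notify.sh below instead dispatches on $1, because it is written for the first form, where the state name is passed explicitly as the script's argument.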
(2) Define the notification script:
[root@node1 keepalived]# vim notify.sh
#!/bin/bash
#
vip=172.16.18.100                # the VIP
contact='root@localhost'         # whom to notify
thisip=`ifconfig eth0 | awk '/inet addr:/{print $2}' | awk -F: '{print $2}'`    # get this node's own IP address
notify() {
    mailsubject="$thisip to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, $thisip changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
    master)
        notify master
        exit 0
        ;;
    backup)
        notify backup
        exit 0
        ;;
    fault)
        notify fault
        exit 0
        ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
        ;;
esac
########## make the script executable ###########
[root@node1 keepalived]# chmod +x notify.sh
########### test the script #####################
[root@node1 keepalived]# ./notify.sh master
########### check the mail ######################
[root@node1 keepalived]# mail
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N  1 root                  Wed Sep 25 22:24  18/693  "172.16.18.7 to be master: 172.16.18.100 flo"
& 1                              # open the first message
Message  1:
From root@node1.magedu.com  Wed Sep 25 22:24:40 2013
Return-Path: <root@node1.magedu.com>
X-Original-To: root@localhost
Delivered-To: root@localhost.magedu.com
Date: Wed, 25 Sep 2013 22:24:39 +0800
To: root@localhost.magedu.com
Subject: 172.16.18.7 to be master: 172.16.18.100 floating
User-Agent: Heirloom mailx 12.4 7/29/08
Content-Type: text/plain; charset=us-ascii
From: root@node1.magedu.com (root)
Status: R
2013-09-25 22:24:39: vrrp transition, 172.16.18.7 changed to be master    # the message body
########## type 'quit' to exit the mail client ##############
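One caveat: the ifconfig parsing above depends on the old net-tools output format ("inet addr:..."). On systems where ifconfig is missing or formats its output differently, a hedged alternative is to read the address with the ip command instead (eth0 as above):
    thisip=$(ip -4 addr show eth0 | awk '/inet /{split($2, a, "/"); print a[1]; exit}')    # first IPv4 address on eth0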
############# Configure notifications on state transitions ##############
[root@node1 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 157
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.18.100
    }
    track_script {
        chk_main
    }
    notify_master "/etc/keepalived/notify.sh master"    # script to run on switching to the master state
    notify_backup "/etc/keepalived/notify.sh backup"    # script to run on switching to the backup state
    notify_fault "/etc/keepalived/notify.sh fault"      # script to run on switching to the fault state
}
########## add the same three lines to the backup node as well ############
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
########## the script must exist on the backup node too ###################
[root@node1 keepalived]# scp notify.sh 172.16.18.9:/etc/keepalived/
(3) Test:
[root@node1 keepalived]# touch down
[root@node1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:a6:49 brd ff:ff:ff:ff:ff:ff
    inet 172.16.18.7/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe06:a649/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 keepalived]# mail                # on node1
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 3 messages 2 unread
    1 root                  Wed Sep 25 22:24  19/704  "172.16.18.7 to be master: 172.16.18.100 flo"
>U  2 root                  Wed Sep 25 22:47  19/703  "172.16.18.7 to be master: 172.16.18.100 flo"
 U  3 root                  Wed Sep 25 22:47  19/703  "172.16.18.7 to be backup: 172.16.18.100 flo"
&
[root@node2 keepalived]# mail                # on node2
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 3 messages 3 new
>N  1 root                  Wed Sep 25 22:46  18/693  "172.16.18.9 to be backup: 172.16.18.100 flo"
 N  2 root                  Wed Sep 25 22:47  18/693  "172.16.18.9 to be backup: 172.16.18.100 flo"
 N  3 root                  Wed Sep 25 22:47  18/693  "172.16.18.9 to be master: 172.16.18.100 flo"
&
Later posts will cover more advanced keepalived topics, such as generating ipvs rules for load balancing and making web services highly available. Stay tuned, and thanks for reading!
Non-preemptive mode
After the master recovers from a failure, it does not take the VIP back from the backup node.
1> MASTER(192.168.1.201):
global_defs {
router_id nginx_01        # identifies this node; usually the hostname
}
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface enp0s3
virtual_router_id 51
mcast_src_ip 192.168.1.201
priority 100
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.210
}
track_script {
chk_nginx        # nginx liveness check script
}
}
2> BACKUP(192.168.1.202)
global_defs {
router_id nginx_02
}
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface enp0s3
virtual_router_id 51
mcast_src_ip 192.168.1.202
priority 90
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.210
}
track_script {
chk_nginx
}
}
Compared with the default preemptive configuration, only two things change:
1> The nopreempt directive is added under vrrp_instance on both nodes, meaning neither node will grab the VIP back.
2> Both nodes set state to BACKUP.
After both keepalived nodes start they begin in the BACKUP state; once they exchange multicast advertisements, a MASTER is elected by priority. Because both are configured with nopreempt, a MASTER that recovers from a failure will not take the VIP back. This avoids the service delay a VIP switchover can cause.
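Both configs above reference /etc/keepalived/nginx_check.sh, but the script itself is not shown. A minimal sketch of what it could look like, given the weight -20 in chk_nginx (the exact check policy is an assumption, not taken from the original post):
    #!/bin/bash
    # Exit non-zero when nginx is not running. With 'weight -20' configured,
    # each failure lowers this node's effective priority by 20 (e.g. 100 -> 80,
    # below the peer's 90), so the healthy peer wins the election and takes the VIP.
    pidof nginx &> /dev/null || exit 1
    exit 0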