LVS + Keepalived + Bind + Web: Building a High-Availability Load-Balancing System
-------------------------------
I. Preface
II. Environment
III. Configuration
1. LB-Master and LB-Backup configuration
(1) Install keepalived and ipvsadm on LB-Master and LB-Backup
(2) LB-Master keepalived main configuration file
(3) LB-Backup keepalived main configuration file
2. DNS-Master and DNS-Backup configuration
(1) NIC settings on DNS-Master and DNS-Backup
(2) Install bind and configure ARP on DNS-Master and DNS-Backup
(3) DNS-Master configuration
(4) DNS-Backup configuration
3. web1 and web2 configuration
IV. Testing
V. Failure simulation
1. Simulating LB-Master failure
2. Simulating DNS-Master failure
3. Simulating web1 failure
-------------------------------
I. Preface
This lab builds a high-availability, load-balanced web setup for an enterprise: whichever of the Master or Backup directors goes down, the business stays available. In the topology, LB-Master schedules the web traffic while LB-Backup schedules the DNS traffic, so the two directors share the load; if either one fails, its web or DNS virtual service fails over to the other server immediately. DNS runs as a master/slave pair with zone transfers, giving it high availability as well. In production the web pages should be identical; here the two pages are deliberately different so the load-balancing effect is visible. In a real deployment the web backends would sit on shared storage to keep the content consistent.
II. Environment
OS: CentOS 6.4 (32-bit), 6 machines
Topology diagram:

IP allocation:

III. Configuration
1. LB-Master and LB-Backup configuration
(1) Install keepalived and ipvsadm on LB-Master and LB-Backup
```
# yum groupinstall "Additional Development"    // install development tools
# yum groupinstall "Development tools"
# tar -zxvf keepalived-1.2.1.tar.gz -C /usr/local/src/
# cd /usr/local/src/keepalived-1.2.1
# ./configure
Keepalived configuration
------------------------
Keepalived version       : 1.2.1
Compiler                 : gcc
Compiler flags           : -g -O2
Extra Lib                : -lpopt -lssl -lcrypto
Use IPVS Framework       : No      // configuration problem: IPVS support is missing
IPVS sync daemon support : No
Use VRRP Framework       : Yes
Use Debug flags          : No

Fix:
# yum install kernel-devel ipvsadm
# ln -s /usr/src/kernels/2.6.32-358.el6.i686/ /usr/src/linux
# ./configure       // configure again
# make              // compile
# make install      // install

# cd /usr/local/etc     // default keepalived install path
# ll
drwxr-xr-x. 3 root root 4096 May 24 00:37 keepalived
drwxr-xr-x. 3 root root 4096 May 24 00:29 rc.d
drwxr-xr-x. 2 root root 4096 May 24 00:29 sysconfig

Set keepalived up to start as a system service:
# cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
# mkdir /etc/keepalived
# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
# cp /usr/local/sbin/keepalived /usr/sbin/
```
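With the binaries and init script copied into place, keepalived can be managed like any other service on both directors. A minimal sketch (the chkconfig boot-enable step is an addition not shown in the original post):

```
# service keepalived start
# chkconfig --add keepalived
# chkconfig keepalived on
```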
(2) LB-Master keepalived main configuration file
```
# cat /etc/keepalived/keepalived.conf
# Configuration File for keepalived

# global define
global_defs {
   router_id Haweb_1
}

vrrp_sync_group VGM {
    group {
        VI_WEB
    }
}
vrrp_sync_group VGN {
    group {
        HA_DNS
    }
}

# vrrp_instance define
vrrp_instance VI_WEB {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    priority 100
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.2.200/24 dev eth0
    }
}

########## LVS ##########
virtual_server 192.168.2.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 20
    protocol TCP

    real_server 192.168.2.50 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.2.60 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

### HA DNS START ###
vrrp_instance HA_DNS {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 56
    priority 90
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.2.100/24 dev eth0
    }
}

########## LVS DNS ##########
virtual_server 192.168.2.100 53 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 6
    protocol UDP

    real_server 192.168.2.30 53 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 53
        }
    }
    real_server 192.168.2.40 53 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 53
        }
    }
}
#################################
```
(3) LB-Backup keepalived main configuration file
```
# cat /etc/keepalived/keepalived.conf
# Configuration File for keepalived

# global define
global_defs {
   router_id Haweb_1
}

vrrp_sync_group VGM {
    group {
        VI_WEB
    }
}
vrrp_sync_group VGN {
    group {
        HA_DNS
    }
}

# vrrp_instance define
vrrp_instance VI_WEB {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    priority 90
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.2.200/24 dev eth0
    }
}

########## LVS ##########
virtual_server 192.168.2.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 20
    protocol TCP

    real_server 192.168.2.50 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.2.60 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

### HA DNS START ###
vrrp_instance HA_DNS {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 56
    priority 100
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.2.100/24 dev eth0
    }
}

########## LVS DNS ##########
virtual_server 192.168.2.100 53 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 6
    protocol UDP

    real_server 192.168.2.30 53 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 53
        }
    }
    real_server 192.168.2.40 53 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 53
        }
    }
}
#################################
```
2. DNS-Master and DNS-Backup configuration
(1) NIC settings on DNS-Master and DNS-Backup
In addition to their existing NICs, DNS-Master and DNS-Backup each add an lo:0 interface.
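The screenshot of the NIC settings is not reproduced here. As a sketch, the lo:0 alias on each DNS real server would carry the DNS VIP with a host mask, so the real server accepts traffic for the VIP without advertising it (file contents assumed from the IP plan):

```
# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.2.100
NETMASK=255.255.255.255
ONBOOT=yes
# ifup lo:0
```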

(2) Install bind and configure ARP on DNS-Master and DNS-Backup

```
# yum install bind bind-chroot
# vim /etc/sysctl.conf      // add the following two lines
net.ipv4.conf.all.arp_announce = 2   // use the best local source address in ARP announcements
net.ipv4.conf.all.arp_ignore = 1     // only answer ARP for addresses configured on the receiving interface
# sysctl -p
```
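Some LVS-DR guides additionally set the per-interface variants for lo alongside the `all` keys; this is an optional hardening step not present in the original post:

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
```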
(3) DNS-Master configuration

```
# service named start
Generating /etc/rndc.key:                                  [  OK  ]
Starting named:                                            [  OK  ]
# cd /var/named/chroot/etc/
# vim named.conf        // edit the main configuration file
```

```
# vim named.rfc1912.zones        // edit the zone declaration file
```

```
# scp named.conf named.rfc1912.zones 192.168.2.40:/var/named/chroot/etc/
# cd /var/named/chroot/var/named/
# cp -p named.localhost abc.com.zone
# cp -p named.localhost abc.com.local
# vim abc.com.zone
```
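The contents of the zone file appear only as a screenshot in the original post. As a rough sketch of what abc.com.zone might contain — the hostnames, serial, and TTLs here are assumptions; only the web VIP 192.168.2.200 and the name servers at 192.168.2.30/192.168.2.40 follow from the IP plan:

```
$TTL 1D
@       IN SOA  ns1.abc.com. admin.abc.com. (
                        0       ; serial
                        1D      ; refresh
                        1H      ; retry
                        1W      ; expire
                        3H )    ; minimum
        IN NS   ns1.abc.com.
        IN NS   ns2.abc.com.
ns1     IN A    192.168.2.30
ns2     IN A    192.168.2.40
www     IN A    192.168.2.200
```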

```
# vim abc.com.local
```

(4) DNS-Backup configuration

```
# cd /var/named/chroot/etc/
# chgrp named named.conf named.rfc1912.zones
# vim named.rfc1912.zones
```

```
# service named start
# cd /var/named/chroot/var/named/slaves
# ll        // the zone files have already been transferred from the master
-rw-r--r--. 1 named named 325 May 27 19:31 abc.com.local
-rw-r--r--. 1 named named 340 May 27 19:39 abc.com.zone
# vim abc.com.zone
```

```
# vim abc.com.local
```

3. web1 and web2 configuration
In addition to the existing NIC, each web server adds an lo:0 interface.
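As on the DNS servers, the screenshot is omitted here; a sketch of the lo:0 alias on each web real server, carrying the web VIP with a host mask (values inferred from the IP plan):

```
# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.2.200
NETMASK=255.255.255.255
ONBOOT=yes
# ifup lo:0
```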

```
# yum install httpd
# echo "This is web1.">/var/www/html/index.html    // on web1
# echo "This is web2.">/var/www/html/index.html    // on web2
# vim /etc/sysctl.conf      // add the following two lines
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
# sysctl -p
# service httpd start
```
IV. Testing
Test client IP configuration:

Open http://www.abc.com in the client browser and keep refreshing: web1 and web2 appear in alternation. In a real production environment the content on both servers would be identical.
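Instead of refreshing a browser, the alternation can also be observed from any Linux client with a short loop (a hypothetical check, assuming www.abc.com resolves through the DNS VIP); with the rr scheduler the two pages should come back in turn, roughly like:

```
# for i in 1 2 3 4; do curl -s http://www.abc.com/; done
This is web1.
This is web2.
This is web1.
This is web2.
```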


LB-Master (handles web round-robin):
```
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.50:http            Route   100    0          4
  -> 192.168.2.60:http            Route   100    0          4
UDP  192.168.2.100:domain rr
  -> 192.168.2.30:domain          Route   100    0          0
  -> 192.168.2.40:domain          Route   100    0          0
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:1c:d4:8d brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.10/24 brd 192.168.2.255 scope global eth0
    inet 192.168.2.200/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe1c:d48d/64 scope link
       valid_lft forever preferred_lft forever
```
LB-Backup (handles DNS round-robin):
```
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.50:http            Route   100    0          0
  -> 192.168.2.60:http            Route   100    0          0
UDP  192.168.2.100:domain rr
  -> 192.168.2.30:domain          Route   100    0          6
  -> 192.168.2.40:domain          Route   100    0          6
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:22:3d:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.20/24 brd 192.168.2.255 scope global eth0
    inet 192.168.2.100/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe22:3d01/64 scope link
       valid_lft forever preferred_lft forever
```
V. Failure simulation
1. Simulating LB-Master failure
(1) Stop the keepalived service on LB-Master
```
# service keepalived stop
Stopping keepalived:                                       [  OK  ]
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:1c:d4:8d brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.10/24 brd 192.168.2.255 scope global eth0
    inet6 fe80::20c:29ff:fe1c:d48d/64 scope link
       valid_lft forever preferred_lft forever
```
(2) Check LB-Backup's state (while continually refreshing the page on the test client)
```
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:22:3d:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.20/24 brd 192.168.2.255 scope global eth0
    inet 192.168.2.100/24 scope global secondary eth0
    inet 192.168.2.200/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe22:3d01/64 scope link
       valid_lft forever preferred_lft forever
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.50:http            Route   100    0          9
  -> 192.168.2.60:http            Route   100    0          10
UDP  192.168.2.100:domain rr
  -> 192.168.2.30:domain          Route   100    0          6
  -> 192.168.2.40:domain          Route   100    0          6
```
2. Simulating DNS-Master failure
(1) Stop the named service on DNS-Master, and restore the service on LB-Master
```
# service named stop
Stopping named: .                                          [  OK  ]
```
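A quick way to confirm that name resolution still works with only one DNS real server left is to query the DNS VIP directly (a hypothetical check, assuming dig is available on the client):

```
# dig +short www.abc.com @192.168.2.100
192.168.2.200
```

Every query should still succeed, since keepalived has removed 192.168.2.30 from the pool and only 192.168.2.40 remains in rotation.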
(2) Check LB-Master's state (while continually refreshing the page on the test client)
```
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.50:http            Route   100    0          13
  -> 192.168.2.60:http            Route   100    0          14
UDP  192.168.2.100:domain rr
  -> 192.168.2.40:domain          Route   100    0          0
```
(3) Check LB-Backup's state
```
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.50:http            Route   100    0          0
  -> 192.168.2.60:http            Route   100    0          0
UDP  192.168.2.100:domain rr
  -> 192.168.2.40:domain          Route   100    0          6
```
3. Simulating web1 failure
Simply stop the httpd service on web1 and test; this step is straightforward, so it is not covered in detail.
This article originally appeared on the "一诺千金" blog; please keep this attribution: http://hatech.blog.51cto.com/8360868/1417899
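For completeness, a sketch of that test (commands assumed, following the pattern of the earlier drills; the exact connection counters will differ):

```
# service httpd stop        (on web1)
Stopping httpd:                                            [  OK  ]
# ipvsadm                   (on LB-Master)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.200:http rr
  -> 192.168.2.60:http            Route   100    0          0
UDP  192.168.2.100:domain rr
  -> 192.168.2.30:domain          Route   100    0          0
  -> 192.168.2.40:domain          Route   100    0          0
```

The TCP_CHECK health check drops 192.168.2.50 from the table, and the client browser now only ever shows "This is web2.".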