Testing mysqlfailover
mysqlfailover is an official MySQL tool written in Python and shipped as part of the MySQL Utilities toolkit. Its purpose is MySQL high availability: it polls the nodes at a fixed interval, and when the master becomes unavailable it automatically fails over to a slave and repoints the remaining slaves at the promoted node. How data consistency is preserved is explained in the analysis below.
Prerequisites for using mysqlfailover:
1. GTID mode must be enabled. With GTID-based replication the lag is already minimal; under a load-testing tool it was about 3 seconds here, and it depends on how many SQL (applier) threads are configured. At roughly 10,000 inserts per second, 16 is a reasonable value (see the configuration sketch after this list).
2. The configuration file must contain:
report-host=
report-port=
master-info-repository=TABLE
relay-log-info-repository=TABLE
so that the slaves can be discovered.
3. Privileges:
The monitoring user must have the WITH GRANT OPTION privilege.
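As a concrete reference, here is a minimal sketch of the relevant my.cnf settings and of the grant for the monitoring user. The exact values (server-id, parallel-applier settings, account name, host mask, password) are assumptions for this test environment, not requirements of the tool.

# my.cnf sketch (apply on every node; values are illustrative assumptions)
[mysqld]
server-id=1                        # must be unique per node
log-bin=mysql-bin
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=ON
master-info-repository=TABLE
relay-log-info-repository=TABLE
report-host=192.168.0.106          # this node's own address, so it can be discovered
report-port=3306
# parallel appliers to keep replication lag low under heavy insert load
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=16

-- monitoring account used by mysqlfailover (name and password match the command below;
-- the broad privilege set is an assumption, WITH GRANT OPTION is what the tool checks for)
CREATE USER 'failover'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'failover'@'%' WITH GRANT OPTION;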
Installation is also very simple.
Download the MySQL Utilities toolkit from https://downloads.mysql.com/archives/utilities/ and build it:
unzip mysql-utilities-1.6.5.zip
cd mysql-utilities-1.6.5
python ./setup.py build
python ./setup.py install
That completes the installation.
Usage:
mysqlfailover --master=failover:123456@'192.168.0.106':3306 --discover-slaves-login=failover:123456 --daemon=start --log=/data/failover.log
Master-slave replication is assumed to be set up already (details omitted here).
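For completeness, a minimal sketch of the GTID-based setup used in this test, run on each slave; the backup replication account matches the one that shows up in the failover log further down, and the master address is the one from the mysqlfailover command above:

-- on each slave: point at the master with GTID auto-positioning, then start replication
CHANGE MASTER TO
    MASTER_HOST = '192.168.0.106',
    MASTER_PORT = 3306,
    MASTER_USER = 'backup',
    MASTER_PASSWORD = '123456',
    MASTER_AUTO_POSITION = 1;
START SLAVE;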
Verifying that transactions are carried over completely:
sysbench is used here to bulk-insert test data:
sysbench --test=oltp --mysql-db=test --mysql-user=root --mysql-password=123456 --oltp-table-size=1000000000 --oltp-num-tables=15 prepare
sysbench 0.4.12.10: multi-threaded system evaluation benchmark
No DB drivers specified, using mysql
Creating table 'sbtest1'...
Creating table 'sbtest5'...
Creating table 'sbtest4'...
Creating table 'sbtest8'...
Creating table 'sbtest9'...
Creating table 'sbtest6'...
Creating table 'sbtest2'...
Creating table 'sbtest'...
Creating table 'sbtest3'...
Creating table 'sbtest14'...
Creating table 'sbtest10'...
Creating table 'sbtest12'...
Creating table 'sbtest11'...
Creating table 'sbtest7'...
Creating table 'sbtest13'...
Creating 1000000000 records in table 'sbtest11'...
Creating 1000000000 records in table 'sbtest6'...
Creating 1000000000 records in table 'sbtest4'...
Creating 1000000000 records in table 'sbtest5'...
Creating 1000000000 records in table 'sbtest8'...
Creating 1000000000 records in table 'sbtest14'...
Creating 1000000000 records in table 'sbtest3'...
Creating 1000000000 records in table 'sbtest13'...
Creating 1000000000 records in table 'sbtest9'...
Creating 1000000000 records in table 'sbtest10'...
Creating 1000000000 records in table 'sbtest1'...
Creating 1000000000 records in table 'sbtest12'...
Creating 1000000000 records in table 'sbtest'...
Creating 1000000000 records in table 'sbtest7'...
Creating 1000000000 records in table 'sbtest2'...
After waiting a few minutes, simulate a master crash by killing its mysqld processes:
kill -9 17448
kill -9 18350
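The two PIDs are the master's mysqld_safe and mysqld processes on this host (an assumption based on the usual process pair); they can be looked up beforehand, for example:

ps -ef | grep -v grep | grep mysqld    # find the mysqld_safe / mysqld PIDs on the master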
The tool then performs the failover automatically; the output below shows that the master role has been transferred to one of the slaves:
Q-quit R-refresh H-health G-GTID Lists U-UUIDs
Failed to reconnect to the master after 3 attemps.
Failover starting in 'auto' mode...
# Checking eligibility of slave 192.168.0.109:3306 for candidate.
# GTID_MODE=ON ... Ok
# Replication user exists ... Ok
# Candidate slave 192.168.0.109:3306 will become the new master.
# Checking slaves status (before failover).
# Preparing candidate for failover.
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.112,reason: Unknown host
# Missing transactions found on 192.168.0.112:3306. SELECT gtid_subset() = 0
# LOCK STRING: FLUSH TABLES WITH READ LOCK
# Read only is ON for 192.168.0.112:3306.
# Connecting candidate to 192.168.0.112:3306 as a temporary slave to retrieve unprocessed GTIDs.
# Change master command for 192.168.0.109:3306
# CHANGE MASTER TO MASTER_HOST = '192.168.0.112', MASTER_USER = 'backup', MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION=1
# Read only is OFF for 192.168.0.112:3306.
# UNLOCK STRING: UNLOCK TABLES
# Waiting for candidate to catch up to slave 192.168.0.112:3306.
# Slave 192.168.0.109:3306:
# QUERY = SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('c142ca67-b898-11e8-86e8-000c29367e64:1', 300)
# Return Code = 3
# Slave 192.168.0.109:3306:
# QUERY = SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('c777e02f-b898-11e8-86a0-000c29c6f346:1-4', 300)
# Return Code = 0
# Creating replication user if it does not exist.
# Stopping slaves.
# Performing STOP on all slaves.
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.112,reason: Unknown host
# Executing stop on slave 192.168.0.109:3306 WARN - slave is not configured with this master
# Executing stop on slave 192.168.0.109:3306 Ok
WARNING: IP lookup by address failed for 192.168.0.106,reason: Unknown host
# Executing stop on slave 192.168.0.112:3306 WARN - slave is not configured with this master
# Executing stop on slave 192.168.0.112:3306 Ok
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
# Switching slaves to new master.
# Change master command for 192.168.0.112:3306
# CHANGE MASTER TO MASTER_HOST = '192.168.0.109', MASTER_USER = 'backup', MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION=1
# Disconnecting new master as slave.
# Execute on 192.168.0.109:3306: RESET SLAVE ALL
# Starting slaves.
# Performing START on all slaves.
# Executing start on slave 192.168.0.112:3306 Ok
# Checking slaves for errors.
# 192.168.0.112:3306 status: Ok
# Failover complete.
# Discovering slaves for master at 192.168.0.109:3306
Failover console will restart in 5 seconds.
# Attempting to contact 192.168.0.109 ... Success
# Attempting to contact 192.168.0.112 ... Success
MySQL Replication Failover Utility
Failover Mode = auto Next Interval = Sat Sep 15 14:15:30 2018
Master Information
------------------
Binary Log File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.000001 657
GTID Executed Set
b5c5054c-b898-11e8-8670-000c299e1daf:1 [...]
# Attempting to contact 192.168.0.109 ... Success
# Attempting to contact 192.168.0.112 ... Success
Replication Health Status
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
| host | port | role | state | gtid_mode | health | version | master_log_file | master_log_pos | IO_Thread | SQL_Thread | Secs_Behind | Remaining_Delay | IO_Error_Num | IO_Error | SQL_Error_Num | SQL_Error | Trans_Behind |
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
| 192.168.0.109 | 3306 | MASTER | UP | ON | OK | 5.7.22-log | mysql-bin.000001 | 657 | | | | | | | | | |
| 192.168.0.112 | 3306 | SLAVE | UP | ON | OK | 5.7.22-log | mysql-bin.000001 | 657 | Yes | Yes | 0 | No | 0 | | 0 | | 0 |
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
Analysis:
When the tool detects that the master has stopped, it proceeds as follows (the SQL sequence sketched after this list summarizes the same steps):
1. It checks that the designated candidate slave is healthy, that GTID mode is enabled, and that the replication user exists.
2. It locks the other slave (FLUSH TABLES WITH READ LOCK) and turns on read_only there, so no transactions can be committed during the switch and no inconsistency can arise.
3. If that slave holds transactions the candidate has not yet applied, the candidate is temporarily pointed at it with CHANGE MASTER TO so it can retrieve the missing GTIDs, and the tool waits until the candidate has caught up.
4. read_only is turned back off and the tables are unlocked; the candidate and the other slave now have the same transaction set.
5. Replication is stopped on all slaves: STOP SLAVE.
6. The topology is switched to the new master, i.e. the candidate: every remaining slave is repointed at it, and the candidate's own slave configuration is removed with RESET SLAVE ALL.
7. START SLAVE is issued on the slaves; they now replicate from the new master and the failover is complete.
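Reconstructed from the log above, the statements the tool issues amount roughly to the following; this is a sketch of the sequence, not the tool's exact code:

-- on 192.168.0.112, the slave that holds transactions the candidate is missing
FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;

-- on the candidate 192.168.0.109: temporarily replicate from 192.168.0.112 to
-- retrieve the unprocessed GTIDs, then wait (up to 300s) until they are applied
CHANGE MASTER TO MASTER_HOST = '192.168.0.112', MASTER_USER = 'backup',
    MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION = 1;
START SLAVE;
SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('c777e02f-b898-11e8-86a0-000c29c6f346:1-4', 300);

-- back on 192.168.0.112: release the lock
SET GLOBAL read_only = OFF;
UNLOCK TABLES;

-- on all slaves (including the candidate): stop replication
STOP SLAVE;

-- on every remaining slave: repoint at the candidate
CHANGE MASTER TO MASTER_HOST = '192.168.0.109', MASTER_USER = 'backup',
    MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION = 1;

-- on the candidate: drop its own slave configuration, making it the new master
RESET SLAVE ALL;

-- on the remaining slaves: resume replication from the new master
START SLAVE;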
Now dump the binary log on the old master to see what the last inserted transaction was:
mysqlbinlog --base64-output=decode-rows -v mysql-bin.000005 > ~/bin.log
vim ~/bin.log
The tail of the dump:
### INSERT INTO `test`.`sbtest8`
### SET
### @1=289999
### @2=0
### @3=''
### @4='qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt'
### INSERT INTO `test`.`sbtest8`
### SET
### @1=290000
### @2=0
### @3=''
### @4='qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt'
# at 265373582
#180901 15:41:10 server id 1 end_log_pos 265373613 CRC32 0xa53bca62 Xid = 7014
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
The last insert on the master went into table sbtest8 in the test database, with the first column (the id column) equal to 290000.
Now connect to the slave, look at sbtest8, and check whether that transaction was replicated:
mysql> use test
Database changed
mysql> select * from sbtest8 where id = '290000';
+--------+---+---+----------------------------------------------------+
| id | k | c | pad |
+--------+---+---+----------------------------------------------------+
| 290000 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
+--------+---+---+----------------------------------------------------+
1 row in set (0.00 sec)
The row is there. Next, check whether it is the last one, i.e. whether the slave rolled back any uncommitted transaction:
mysql> select * from sbtest8 where id = '290001';
Empty set (0.00 sec)
mysql> select * from sbtest8 order by id desc limit 1;
+--------+---+---+----------------------------------------------------+
| id | k | c | pad |
+--------+---+---+----------------------------------------------------+
| 290000 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
+--------+---+---+----------------------------------------------------+
1 row in set (0.00 sec)
mysql> \q
Bye
id = 290000 is indeed the last transaction on the slave as well; any uncommitted transaction would have been rolled back, which shows that committed transactions replicated from the master to the slave are not lost.
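An additional check that is not part of the original test but follows directly from GTID replication: compare the executed GTID sets, or verify on the slave that the new master's set is contained in its own.

-- run on both servers and compare the results
SELECT @@GLOBAL.gtid_executed;

-- on the slave: substitute the master's gtid_executed value for the placeholder;
-- a result of 1 means the slave has applied everything the master executed
SELECT GTID_SUBSET('<master gtid_executed>', @@GLOBAL.gtid_executed);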
Finally, mysqldiff can be used to check the master and the slave for differences:
[root@node2 data]# mysqldiff --server1=failover:123456@192.168.0.109:3306 --server2=failover:123456@192.168.0.112:3306 --difftype=sql test:test
# WARNING: Using a password on the command line interface can be insecure.
# server1 on 192.168.0.109: ... connected.
# server2 on 192.168.0.112: ... connected.
# Comparing `test` to `test` [PASS]
# Comparing `test`.`sbtest` to `test`.`sbtest` [PASS]
# Comparing `test`.`sbtest1` to `test`.`sbtest1` [PASS]
# Comparing `test`.`sbtest10` to `test`.`sbtest10` [PASS]
# Comparing `test`.`sbtest11` to `test`.`sbtest11` [PASS]
# Comparing `test`.`sbtest12` to `test`.`sbtest12` [PASS]
# Comparing `test`.`sbtest13` to `test`.`sbtest13` [PASS]
# Comparing `test`.`sbtest14` to `test`.`sbtest14` [PASS]
# Comparing `test`.`sbtest2` to `test`.`sbtest2` [PASS]
# Comparing `test`.`sbtest3` to `test`.`sbtest3` [PASS]
# Comparing `test`.`sbtest4` to `test`.`sbtest4` [PASS]
# Comparing `test`.`sbtest5` to `test`.`sbtest5` [PASS]
# Comparing `test`.`sbtest6` to `test`.`sbtest6` [PASS]
# Comparing `test`.`sbtest7` to `test`.`sbtest7` [PASS]
# Comparing `test`.`sbtest8` to `test`.`sbtest8` [PASS]
# Comparing `test`.`sbtest9` to `test`.`sbtest9` [PASS]
# Success. All objects are the same.
This shows that, even with replication lag, no transactions were lost.
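Note that mysqldiff compares object definitions; for a row-by-row data comparison the companion tool mysqldbcompare from the same MySQL Utilities toolkit can be used. This was not part of the original test; the invocation below is a hedged example:

mysqldbcompare --server1=failover:123456@192.168.0.109:3306 --server2=failover:123456@192.168.0.112:3306 --run-all-tests test:test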
Notes:
mysqlfailover is only suitable for replication architectures with a single, dedicated write point.
It is not suitable when a slave also serves as a test database, an auditing server, or any other service; it must be strictly guaranteed that no slave receives any writes.
When using mysqlfailover it is best to enable read_only on every slave to protect data consistency, for example as shown below.
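For example, on each slave; super_read_only (available in 5.7) additionally blocks SUPER accounts, and whether to use it is a choice for the environment rather than something mysqlfailover requires:

-- on every slave
SET GLOBAL read_only = ON;
SET GLOBAL super_read_only = ON;   -- 5.7+: also rejects writes from SUPER accounts
-- and persist the setting in my.cnf: read_only = ON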
In a multi-slave topology, after the master has failed you may want to add it back into the topology and make it the master again. In the following, server1 is the old master and server2 is the master promoted by the failover.
1. Stop the mysqlfailover daemon and start the old master instance (server1).
2. Make the old master a slave of the current master, so that transaction and binary-log completeness can be verified:
mysqlreplicate --master=failover:123456@192.168.88.196:3307 --slave=failover:123456@192.168.88.194:3307 --rpl-user=backup:123456
3. Use mysqlrpladmin to switch the old master back to being the master of the whole topology:
mysqlrpladmin --master=failover:123456@192.168.88.196:3307 --new-master=failover:123456@192.168.88.194:3307 --discover-slaves-login=failover:123456 --demote-master switchover
4. Restart mysqlfailover; this time it must be started with the --force option (a sketch follows).
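A sketch of the restart, reusing the options from the earlier start command plus --force, and pointing at the master restored in step 3 (the addresses are the ones used in that example):

mysqlfailover --master=failover:123456@192.168.88.194:3307 --discover-slaves-login=failover:123456 --daemon=start --log=/data/failover.log --force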
Please do not reproduce without permission.