Regarding writeset, I had always configured the following parameters on every node at the same time:

binlog_transaction_dependency_tracking=WRITESET
transaction_write_set_extraction=xxhash64

But while trying to organize my notes these past few days, I suddenly realized that the writeset concept is not as clear as I had imagined.

I also wanted to verify two conclusions a teacher had mentioned:

  1. The 8.0 design targets the replica: once the master has writeset enabled (my own clarification), even transactions the master commits serially can be replayed in parallel on the slave, as long as they do not conflict with each other.
  2. If the master has binlog group commit configured and the slave has writeset enabled, writeset takes precedence.
  • From the perspective of writeset being an evolution of group commit, and of its hash-table mechanism, the parameters should be set on the master.
  • From the perspective of conclusion 2, the parameters should be set on the slave.

To settle this, I ran an experiment, configuring the two parameters on each role in turn, to verify which side actually takes effect.

Environment

IP              port  role   info
192.168.188.81  3316  node1  master
192.168.188.82  3316  node2  slave1
192.168.188.83  3316  node3  slave2

One master, two slaves; MySQL version 8.0.19

Experiment 1: writeset configured on the slave

  • slave configured with writeset: binlog_transaction_dependency_tracking=WRITESET & transaction_write_set_extraction=xxhash64
  • master configured only with: binlog_group_commit_sync_delay & binlog_group_commit_sync_no_delay_count

Configuration parameters

master:
binlog_group_commit_sync_delay = 100
binlog_group_commit_sync_no_delay_count = 10
gtid_mode = on
enforce_gtid_consistency = on
binlog_format = row
skip_slave_start = 1
master_info_repository = table
relay_log_info_repository = table

slave:
binlog_transaction_dependency_tracking = WRITESET
transaction_write_set_extraction = xxhash64
skip_slave_start = 1
master_info_repository = table
relay_log_info_repository = table
slave_parallel_type = logical_clock
slave_parallel_workers = 4
#slave-preserve-commit-order = ON
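Before running the test, it helps to recall what the parallel-apply settings above mean. With slave_parallel_type=logical_clock, the applier schedules transactions by the (last_committed, sequence_number) pair stamped into each GTID event: a transaction may start applying once everything up to its last_committed has committed. A minimal Python sketch of that rule (illustrative only, not the server's actual scheduler):

```python
def runnable(last_committed, committed_up_to):
    """Under LOGICAL_CLOCK, a transaction may start applying once all
    transactions with sequence_number <= its last_committed have committed."""
    return last_committed <= committed_up_to

# With serial anchors (last_committed = 0,1,2,3,...), transaction 4 must
# wait until transactions 1..3 have committed before it can start:
assert not runnable(3, committed_up_to=2)
assert runnable(3, committed_up_to=3)

# If several transactions share last_committed=1, they can all start as
# soon as transaction 1 commits -- this is the parallelism writeset buys.
assert all(runnable(1, committed_up_to=1) for _ in range(4))
```

The experiments below come down to observing which side's configuration determines those last_committed anchors.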

Running the experiment

  • For easier inspection, rotate the logs on the slave first
root@slave1 [kk]>flush logs;
Query OK, 0 rows affected (0.05 sec)
  • Create a table on the master and insert some rows
root@master [kk]>flush logs;
Query OK, 0 rows affected (0.05 sec)
root@master [kk]>create table k3 (id int auto_increment primary key , dtl varchar(20) default 'a');
Query OK, 0 rows affected (0.05 sec)
root@master [kk]>insert into k3(dtl) values ('a');
Query OK, 1 row affected (0.02 sec)
root@master [kk]>insert into k3(dtl) values ('b');
Query OK, 1 row affected (0.01 sec)
root@master [kk]>insert into k3(dtl) values ('c');
Query OK, 1 row affected (0.01 sec)
root@master [kk]>insert into k3(dtl) values ('d');
Query OK, 1 row affected (0.02 sec)
  • Parse the master's binlog: every transaction forms its own group (each transaction has its own last_committed)
[root@ms81 logs]# mysqlbinlog -vvv --base64-output=decode-rows mysql-bin.000006 |grep last_committed
#200514 11:09:58 server id 813316 end_log_pos 272 CRC32 0xa8811d1b GTID last_committed=0 sequence_number=1 rbr_only=no original_committed_timestamp=1589425798790755 immediate_commit_timestamp=1589425798790755 transaction_length=242
#200514 11:10:05 server id 813316 end_log_pos 516 CRC32 0x8f66cd2f GTID last_committed=1 sequence_number=2 rbr_only=yes original_committed_timestamp=1589425805035310 immediate_commit_timestamp=1589425805035310 transaction_length=335
#200514 11:10:06 server id 813316 end_log_pos 851 CRC32 0x909932ba GTID last_committed=2 sequence_number=3 rbr_only=yes original_committed_timestamp=1589425806709355 immediate_commit_timestamp=1589425806709355 transaction_length=335
#200514 11:10:08 server id 813316 end_log_pos 1186 CRC32 0x50c4e104 GTID last_committed=3 sequence_number=4 rbr_only=yes original_committed_timestamp=1589425808607557 immediate_commit_timestamp=1589425808607557 transaction_length=335
#200514 11:10:10 server id 813316 end_log_pos 1521 CRC32 0xf074c523 GTID last_committed=4 sequence_number=5 rbr_only=yes original_committed_timestamp=1589425810449588 immediate_commit_timestamp=1589425810449588 transaction_length=335
  • Then parse the binlog on the slave: these insert transactions have merged into one group (several transactions share the same last_committed)
[root@ms82 logs]# mysqlbinlog -vvv --base64-output=decode-rows mysql-bin.000003 |grep last_committed
#200514 11:09:58 server id 813316 end_log_pos 279 CRC32 0xbf39f999 GTID last_committed=0 sequence_number=1 rbr_only=no original_committed_timestamp=1589425798790755 immediate_commit_timestamp=1589425798840582 transaction_length=249
#200514 11:10:05 server id 813316 end_log_pos 530 CRC32 0x7e0b2634 GTID last_committed=1 sequence_number=2 rbr_only=yes original_committed_timestamp=1589425805035310 immediate_commit_timestamp=1589425805046566 transaction_length=337
#200514 11:10:06 server id 813316 end_log_pos 867 CRC32 0x79b980e9 GTID last_committed=1 sequence_number=3 rbr_only=yes original_committed_timestamp=1589425806709355 immediate_commit_timestamp=1589425806723726 transaction_length=337
#200514 11:10:08 server id 813316 end_log_pos 1204 CRC32 0x09b728d3 GTID last_committed=1 sequence_number=4 rbr_only=yes original_committed_timestamp=1589425808607557 immediate_commit_timestamp=1589425808616207 transaction_length=337
#200514 11:10:10 server id 813316 end_log_pos 1541 CRC32 0x499da890 GTID last_committed=1 sequence_number=5 rbr_only=yes original_committed_timestamp=1589425810449588 immediate_commit_timestamp=1589425810459612 transaction_length=337
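The grouping in listings like these can be tallied with a few lines of Python. This sketch (a hypothetical helper, not part of any MySQL tooling) parses lines already filtered with `grep last_committed` and collects the sequence_numbers that share each anchor:

```python
import re

def group_by_last_committed(lines):
    """Map each last_committed anchor to the sequence_numbers sharing it.
    Transactions that share an anchor may be applied in parallel."""
    groups = {}
    for line in lines:
        m = re.search(r'last_committed=(\d+)\s+sequence_number=(\d+)', line)
        if m:
            groups.setdefault(int(m.group(1)), []).append(int(m.group(2)))
    return groups

# Anchors seen above: serial on the master (0,1,2,3,4), merged on the slave.
master = [f"GTID last_committed={lc} sequence_number={sn}"
          for lc, sn in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]]
slave = [f"GTID last_committed={lc} sequence_number={sn}"
         for lc, sn in [(0, 1), (1, 2), (1, 3), (1, 4), (1, 5)]]

assert len(group_by_last_committed(master)) == 5       # every txn its own group
assert group_by_last_committed(slave)[1] == [2, 3, 4, 5]  # four txns, one group
```

Feeding it the real `mysqlbinlog | grep last_committed` output from each host gives the same picture as reading the listings by eye.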
  • Check the master's configuration: although binlog_transaction_dependency_tracking is not set in my.cnf, in 8.0 it defaults to COMMIT_ORDER
root@localhost [kk]>show global variables like '%tracking%';
+----------------------------------------+--------------+
| Variable_name | Value |
+----------------------------------------+--------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
+----------------------------------------+--------------+
1 row in set (0.02 sec)

Conclusion of experiment 1

Based on the observations of experiment 1, mapping them onto the replication flow suggests:

1. the master writes serially committed transactions to its binlog,

2. replication pulls the binlog over to the slave, where it is stored as the relay log,

3. the slave applies the master's binlog content (the relay log),

4. and after applying, records the transactions in its own binlog.

So can the slave's binlog tell us how the slave applied the relay log? Or did the grouping appear only because the slave itself has writeset configured, so write-set tracking was performed while the slave generated its own binlog?

Experiment 2: writeset configured on the master

  • slave with writeset removed: binlog_transaction_dependency_tracking=WRITESET & transaction_write_set_extraction=xxhash64 commented out
  • master with writeset added: binlog_transaction_dependency_tracking=WRITESET & transaction_write_set_extraction=xxhash64

Configuration parameters (instances restarted after the change)

master:
binlog_transaction_dependency_tracking = WRITESET
transaction_write_set_extraction = xxhash64
binlog_group_commit_sync_delay = 100
binlog_group_commit_sync_no_delay_count = 10
gtid_mode = on
enforce_gtid_consistency = on
binlog_format = row
skip_slave_start = 1
master_info_repository = table
relay_log_info_repository = table

slave:
#binlog_transaction_dependency_tracking=WRITESET   # commented out
#transaction_write_set_extraction=xxhash64         # commented out
skip_slave_start = 1
master_info_repository = table
relay_log_info_repository = table
slave_parallel_type = logical_clock
slave_parallel_workers = 4
#slave-preserve-commit-order = ON

Running the experiment

  • Again, for easier inspection, rotate the logs on the slave first
root@slave1 [kk]>flush logs;
Query OK, 0 rows affected (0.05 sec)
  • Create a table on the master and insert some rows
root@master [(none)]>flush logs;
Query OK, 0 rows affected (0.03 sec)
root@master [kk]>create table k4 (id int auto_increment primary key , dtl varchar(20) default 'a');
Query OK, 0 rows affected (0.05 sec)
root@master [kk]>insert into k4(dtl) values ('a');
Query OK, 1 row affected (0.02 sec)
root@master [kk]>insert into k4(dtl) values ('b');
Query OK, 1 row affected (0.01 sec)
root@master [kk]>insert into k4(dtl) values ('c');
Query OK, 1 row affected (0.01 sec)
root@master [kk]>insert into k4(dtl) values ('d');
Query OK, 1 row affected (0.01 sec)
  • Parse the master's binlog: the insert transactions now form one group (they share the same last_committed)
[root@ms81 logs]# mysqlbinlog -vvv --base64-output=decode-rows mysql-bin.000008 |grep last_committed
#200514 11:25:55 server id 813316 end_log_pos 272 CRC32 0x1a3b45da GTID last_committed=0 sequence_number=1 rbr_only=no original_committed_timestamp=1589426755949559 immediate_commit_timestamp=1589426755949559 transaction_length=242
#200514 11:26:06 server id 813316 end_log_pos 516 CRC32 0x9f51382b GTID last_committed=1 sequence_number=2 rbr_only=yes original_committed_timestamp=1589426766237292 immediate_commit_timestamp=1589426766237292 transaction_length=335
#200514 11:26:08 server id 813316 end_log_pos 851 CRC32 0xb02fc356 GTID last_committed=1 sequence_number=3 rbr_only=yes original_committed_timestamp=1589426768166475 immediate_commit_timestamp=1589426768166475 transaction_length=335
#200514 11:26:09 server id 813316 end_log_pos 1186 CRC32 0x615fb932 GTID last_committed=1 sequence_number=4 rbr_only=yes original_committed_timestamp=1589426769816765 immediate_commit_timestamp=1589426769816765 transaction_length=335
#200514 11:26:12 server id 813316 end_log_pos 1521 CRC32 0x13bceeb8 GTID last_committed=1 sequence_number=5 rbr_only=yes original_committed_timestamp=1589426772153679 immediate_commit_timestamp=1589426772153679 transaction_length=335

It appears that writeset on the master has changed the contents of the master's binlog. Next, look at the slave's binlog.

  • Then parse the binlog on the slave: each of these insert transactions is its own group again
[root@ms82 logs]# mysqlbinlog -vvv --base64-output=decode-rows mysql-bin.000005 |grep last_committed
#200514 11:25:55 server id 813316 end_log_pos 279 CRC32 0x96a31487 GTID last_committed=0 sequence_number=1 rbr_only=no original_committed_timestamp=1589426755949559 immediate_commit_timestamp=1589426755999296 transaction_length=249
#200514 11:26:06 server id 813316 end_log_pos 530 CRC32 0x54711cb2 GTID last_committed=1 sequence_number=2 rbr_only=yes original_committed_timestamp=1589426766237292 immediate_commit_timestamp=1589426766253024 transaction_length=337
#200514 11:26:08 server id 813316 end_log_pos 867 CRC32 0xf20ad235 GTID last_committed=2 sequence_number=3 rbr_only=yes original_committed_timestamp=1589426768166475 immediate_commit_timestamp=1589426768176639 transaction_length=337
#200514 11:26:09 server id 813316 end_log_pos 1204 CRC32 0xa3b00643 GTID last_committed=3 sequence_number=4 rbr_only=yes original_committed_timestamp=1589426769816765 immediate_commit_timestamp=1589426769825978 transaction_length=337
#200514 11:26:12 server id 813316 end_log_pos 1541 CRC32 0xce0fd88f GTID last_committed=4 sequence_number=5 rbr_only=yes original_committed_timestamp=1589426772153679 immediate_commit_timestamp=1589426772164468 transaction_length=337

At this point, binlog_transaction_dependency_tracking on the slave is at its default value:

root@slave1 [(none)]>show global variables like '%tracking%';
+----------------------------------------+--------------+
| Variable_name | Value |
+----------------------------------------+--------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
+----------------------------------------+--------------+
1 row in set (0.01 sec)

Conclusion of experiment 2

Based on the observations of experiment 2, mapping them onto the replication flow suggests:

1. the master commits transactions serially; the writeset feature groups them by write set and records that grouping in the binlog,

2. replication pulls the binlog over to the slave, where it is stored as the relay log,

3. the slave applies the master's binlog content (the relay log),

4. and after applying, records the transactions in its own binlog.

Referring to the official documentation, I am inclined to conclude that when the slave applies the relay log, it parallelizes according to the master's writeset grouping.

That means the experimental result contradicts conclusion 2 above.

The official documentation for binlog_transaction_dependency_tracking reads:

The source of dependency information that the master uses to determine which transactions can be executed in parallel by the slave's multithreaded applier. This variable can take one of the three values described in the following list:

  • COMMIT_ORDER: Dependency information is generated from the master's commit timestamps. This is the default. This mode is also used for any transactions without write sets, even if this variable's value is WRITESET or WRITESET_SESSION; this is also the case for transactions updating tables without primary keys and transactions updating tables having foreign key constraints.
  • WRITESET: Dependency information is generated from the master's write set, and any transactions which write different tuples can be parallelized.
  • WRITESET_SESSION: Dependency information is generated from the master's write set, but no two updates from the same session can be reordered.
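To make the WRITESET description above concrete, the dependency computation on the master can be modeled roughly like this: the server keeps a history map from row hashes (xxhash64 of primary/unique key values, per transaction_write_set_extraction) to the sequence_number of the last transaction that wrote them; a new transaction's last_committed becomes the highest such number among the rows it touches. This is a simplified Python model, not the server implementation (the real history map is bounded by binlog_transaction_dependency_history_size and falls back to COMMIT_ORDER for tables without primary keys):

```python
def assign_anchors(transactions):
    """transactions: list of write sets (sets of hashed row identifiers).
    Returns (sequence_number, last_committed) pairs, roughly as a master
    under WRITESET tracking would stamp them into GTID events."""
    history = {}   # row hash -> sequence_number of the last writer
    anchors = []
    for seq, write_set in enumerate(transactions, start=1):
        # Depend only on the newest prior writer of any row we touch.
        last_committed = max((history[row] for row in write_set
                              if row in history), default=0)
        for row in write_set:
            history[row] = seq
        anchors.append((seq, last_committed))
    return anchors

# Four inserts into k4 touch four distinct primary keys: no conflicts,
# so all four share an early anchor and can be replayed in parallel.
assert assign_anchors([{'k4:1'}, {'k4:2'}, {'k4:3'}, {'k4:4'}]) == \
       [(1, 0), (2, 0), (3, 0), (4, 0)]

# Two writes to the same row must stay ordered:
assert assign_anchors([{'k4:1'}, {'k4:1'}]) == [(1, 0), (2, 1)]
```

This mirrors experiment 2: the non-conflicting inserts into k4 were collapsed into one group in the master's binlog, regardless of how slowly and serially they were committed.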
