Environment

IP              Port  Node   Role
192.168.188.81  3316  node1  master
192.168.188.82  3316  node2  slave1
192.168.188.83  3316  node3  slave2
  • CentOS Linux release 7.6.1810 (Core)
  • MySQL Ver 8.0.19 for linux-glibc2.12 on x86_64 (MySQL Community Server - GPL)
  • MySQL Router Ver 8.0.20 for Linux on x86_64 (MySQL Community - GPL)
  • MySQL Shell Ver 8.0.20 for Linux on x86_64 - for MySQL 8.0.20 (MySQL Community Server (GPL))

Software locations

MySQL, MySQL Router, and MySQL Shell are deployed on all three nodes.
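
The original post never shows my.cnf. As a rough sketch only, a setup like this typically assumes per-instance settings along the following lines (GTID and row-based binlogging are clearly in use later on; the paths and server_id follow values that appear elsewhere in this article, adjust per node):

  # /data/mysql/mysql3316/my3316.cnf -- sketch, not the author's actual file
  [mysqld]
  port                     = 3316
  socket                   = /data/mysql/mysql3316/tmp/mysql.sock
  server_id                = 813316          # e.g. 823316 / 833316 on node2 / node3
  log_bin                  = mysql-bin
  binlog_format            = ROW
  gtid_mode                = ON
  enforce_gtid_consistency = ON
  log_slave_updates        = ON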

Set up replication and enable enhanced (lossless) semi-sync

  • Configuration on all nodes
root@localhost [(none)]>set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>create user 'rep'@'192.168.188.%' identified by 'rep';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>grant replication slave on *.* to 'rep'@'192.168.188.%';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.02 sec)
  • Configuration on the master
root@localhost [(none)]>set global rpl_semi_sync_master_enabled=ON;
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>show global variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
root@localhost [(none)]>reset master;
Query OK, 0 rows affected (0.04 sec)
  • Configuration on the slaves
root@localhost [(none)]>set global rpl_semi_sync_slave_enabled=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>change master to master_host='192.168.188.81',master_port=3316,master_user='rep',master_password='rep',master_auto_position=1,get_master_public_key=1;
Query OK, 0 rows affected, 2 warnings (0.04 sec)
root@localhost [(none)]>reset master;
Query OK, 0 rows affected (0.04 sec)
  • Start replication on the slaves
root@localhost [(none)]>start slave;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.188.81
Master_User: rep
Master_Port: 3316
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 155
Relay_Log_File: ms82-relay-bin.000002
Relay_Log_Pos: 369
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
...
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
...
...
1 row in set (0.00 sec)
  • Check semi-sync status on the master
root@localhost [(none)]>show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

Simulate a workload: generate transactions with a script

  • Create a table
root@localhost [(none)]>create database kk;
Query OK, 1 row affected (0.03 sec)
root@localhost [(none)]>use kk
Database changed
root@localhost [kk]>create table k1 ( id int auto_increment primary key , dtl varchar(20) default 'abc');
Query OK, 0 rows affected (0.05 sec)
  • Open a session and run a script that keeps generating transactions
[root@ms81 ~]# while :; do  echo "insert into kk.k1(dtl) values('duangduangduang');" | mysql -S /data/mysql/mysql3316/tmp/mysql.sock; sleep 1;done

Configure MGR manually

Configure the master and convert it to MGR

  • Set the parameters
root@localhost [kk]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)
root@localhost [kk]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.02 sec)
root@localhost [kk]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>select uuid();
+--------------------------------------+
| uuid() |
+--------------------------------------+
| 3260d70c-966e-11ea-ba8b-0242c0a8bc51 |
+--------------------------------------+
1 row in set (0.00 sec)
root@localhost [kk]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_local_address="192.168.188.81:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the notes at the end of the article
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [kk]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set global group_replication_bootstrap_group=on;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>start group_replication;
Query OK, 0 rows affected (3.36 sec)
root@localhost [kk]>set global group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
  • At this point the session generating transactions starts reporting errors
[root@ms81 ~]# while :; do  echo "insert into kk.k1(dtl) values('duangduangduang');" | mysql -S /data/mysql/mysql3316/tmp/mysql.sock; sleep 1;done
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
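
These errors are expected for the window while START GROUP_REPLICATION is still running: the server switches super_read_only on while it joins the group and only clears it once it is declared ONLINE as the single-primary PRIMARY, after which the insert loop resumes. A quick sanity check on node1 (a sketch; the variable and column names are standard MySQL 8.0):

  -- confirm this member is ONLINE/PRIMARY and writable again
  select member_state, member_role
    from performance_schema.replication_group_members
   where member_id = @@server_uuid;
  select @@super_read_only, @@read_only;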

Configure slave1 and convert it to MGR

root@localhost [(none)]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>set persist group_replication_local_address="192.168.188.82:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the notes at the end of the article
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [(none)]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82 | 3316 | OFFLINE | | |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
root@localhost [(none)]>stop group_replication;
Query OK, 0 rows affected (4.78 sec)
root@localhost [(none)]>change master to master_user='rep',master_password='rep' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.03 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.88 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
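
When START GROUP_REPLICATION fails with ERROR 3092 as it did above, the concrete reason is only written to the error log; here the fix was to give the group_replication_recovery channel credentials before retrying. A minimal way to locate the log and double-check the channel credentials (a sketch using standard MySQL 8.0 tables):

  -- where does this instance write its error log?
  select @@log_error;
  -- after the CHANGE MASTER, confirm the recovery channel now has a user configured
  select channel_name, user_name, host
    from mysql.slave_master_info
   where channel_name = 'group_replication_recovery';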

Convert slave2 in the same way

Before converting it, it hit me what the current topology has become: node1 (master) and node2 (slave1) now form the MGR group, while node3 (slave2) is still a replica of node1 (master).

So let's check the current state of all three nodes (a GTID_SUBSET check that automates the comparison follows the outputs below):

node1:
root@localhost [kk]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [kk]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000002 | 154142 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
node2:
root@localhost [(none)]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [(none)]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000002 | 109956 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
## Note: node2's IO and SQL threads are no longer running, yet its Executed_Gtid_Set has kept up with the group.
root@localhost [(none)]>show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.188.81
Master_User: rep
Master_Port: 3316
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 74606
Relay_Log_File: ms82-relay-bin.000004
Relay_Log_Pos: 74820
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: No
Slave_SQL_Running: No
...
...
Master_Server_Id: 813316
Master_UUID: f78a6902-9679-11ea-b136-0242c0a8bc51
Master_Info_File: mysql.slave_master_info
...
Retrieved_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-121,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Executed_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Auto_Position: 1
...
1 row in set (0.00 sec)
node3:
root@localhost [(none)]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [(none)]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000001 | 169340 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
root@localhost [(none)]>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.188.81
Master_User: rep
Master_Port: 3316
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154142
Relay_Log_File: ms83-relay-bin.000004
Relay_Log_Pos: 154356
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
...
Master_UUID: f78a6902-9679-11ea-b136-0242c0a8bc51
Master_Info_File: mysql.slave_master_info
...
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
...
Retrieved_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Executed_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Auto_Position: 1
...
1 row in set (0.00 sec)
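
A handy way to turn the eyeball comparison above into a yes/no answer is GTID_SUBSET(): it returns 1 when the first GTID set is fully contained in the second. A sketch using the Executed_Gtid_Set values captured above (node3 vs. node1):

  select gtid_subset(
    '3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,f78a6902-9679-11ea-b136-0242c0a8bc51:1-111',   -- node3
    '3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,f78a6902-9679-11ea-b136-0242c0a8bc51:1-111'    -- node1 (primary)
  ) as node3_is_subset_of_node1;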
  • Convert slave2
root@localhost [(none)]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_local_address="192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the notes at the end of the article
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [(none)]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>stop slave;
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>change master to master_user='rep',master_password='rep' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.05 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (4.64 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
| group_replication_applier | 2db7ddf1-966d-11ea-a7b3-0242c0a8bc53 | ms83 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
root@localhost [(none)]>

Some tips

The parameter file!

The biggest difference between converting to MGR by hand and doing it through MySQL Shell is that the latter automatically persists its changes to mysqld-auto.cnf via SET PERSIST; when doing it manually you have to take care of that yourself.

The experiment above never touched my.cnf. If you use SET GLOBAL instead, the MGR settings are gone after a cold restart of the three nodes and group replication cannot be started.

The fix:

  1. Use SET PERSIST instead of SET GLOBAL during configuration, so the settings are persisted to mysqld-auto.cnf.
  2. If you have already been quick on the trigger and restarted every node, wiping the in-memory settings, just run the SET PERSIST statements again; once set they take effect immediately and group replication can be started (see the check below).
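
To verify what actually made it into mysqld-auto.cnf on each node, performance_schema exposes the persisted settings directly; a small check (sketch):

  -- settings written by SET PERSIST, i.e. the contents of mysqld-auto.cnf
  select variable_name, variable_value
    from performance_schema.persisted_variables
   where variable_name like 'group_replication%'
      or variable_name in ('binlog_checksum', 'transaction_write_set_extraction')
   order by variable_name;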

The sha2_password curse

  • After configuring with SET GLOBAL I restarted the nodes, re-applied the configuration with SET PERSIST, and started MGR. The master came ONLINE without trouble, but when node2 tried to join the group it sat in RECOVERING forever.
  • The error log showed:
2020-05-15T14:35:46.869802+08:00 21 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='ms81', master_port= 3316, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='ms81', master_port= 3316, master_log_file='', master_log_pos= 4, master_bind=''.
2020-05-15T14:35:46.906422+08:00 28 [Warning] [MY-010897] [Repl] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2020-05-15T14:35:46.907876+08:00 28 [ERROR] [MY-010584] [Repl] Slave I/O for channel 'group_replication_recovery': error connecting to master 'rep@ms81:3316' - retry-time: 60 retries: 1 message: Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection. Error_code: MY-002061
2020-05-15T14:35:46.923832+08:00 21 [ERROR] [MY-011582] [Repl] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
2020-05-15T14:35:46.923887+08:00 21 [ERROR] [MY-011583] [Repl] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'
  • Check performance_schema.replication_connection_status
root@localhost [(none)]>select * from  performance_schema.replication_connection_status\G
...
...
...
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_recovery
GROUP_NAME:
SOURCE_UUID:
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET:
LAST_ERROR_NUMBER: 2061
LAST_ERROR_MESSAGE: error connecting to master 'rep@ms81:3316' - retry-time: 60 retries: 1 message: Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection. ...
...
3 rows in set (0.01 sec)

The member falls back to RECOVERING because of this connection problem, so I tried adding the option to CHANGE MASTER:

root@localhost [(none)]>change master to master_user='rep',master_password='rep',get_master_public_key=1 for channel 'group_replication_recovery';
ERROR 3139 (HY000): CHANGE MASTER with the given parameters cannot be performed on channel 'group_replication_recovery'.

Awkward: the group_replication_recovery channel only accepts a restricted set of CHANGE MASTER options, and in this version GET_MASTER_PUBLIC_KEY is not one of them; the public-key handling has to be configured through variables instead (see the proper fix below).

  • Temporary workaround: log in to the master once as the rep user over TCP; that first successful login presumably primes the caching_sha2_password authentication cache, after which the recovery channel can authenticate without a secure connection.
[root@ms82 ~]# mysql -h 192.168.188.81 -P 3316 -urep -prep
rep@192.168.188.81 [(none)]>exit
[root@ms82 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>stop group_replication;
Query OK, 0 rows affected (4.75 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (5.75 sec)
root@localhost [(none)]>select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME:
GROUP_NAME:
SOURCE_UUID: 29ea8b7f-966d-11ea-937c-0242c0a8bc51
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET: 29ea8b7f-966d-11ea-937c-0242c0a8bc51:1-530,
3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-343
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION:
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 3260d70c-966e-11ea-ba8b-0242c0a8bc51
SOURCE_UUID: 3260d70c-966e-11ea-ba8b-0242c0a8bc51
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET: 29ea8b7f-966d-11ea-937c-0242c0a8bc51:1-530,
3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-781:787
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:787
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2020-05-15 14:38:54.721851
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2020-05-15 14:38:54.721874
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_recovery
GROUP_NAME:
SOURCE_UUID:
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET:
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION:
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
3 rows in set (0.00 sec)
  • The proper fix
SET GLOBAL group_replication_recovery_use_ssl = ON;

SET GLOBAL group_replication_recovery_get_public_key = 1;  # already folded into the steps above

SET GLOBAL group_replication_recovery_public_key_path = 'path to RSA public key file';
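
Besides the three variables above, another route that is sometimes taken (an assumption here, not tested in this experiment) is to create the replication account with mysql_native_password, which sidesteps the caching_sha2_password key exchange entirely. The user name rep2 below is purely illustrative:

  -- hypothetical alternative: a recovery user that does not use caching_sha2_password
  create user 'rep2'@'192.168.188.%' identified with mysql_native_password by 'rep2';
  grant replication slave on *.* to 'rep2'@'192.168.188.%';
  -- then, on each joining member, point the recovery channel at it
  change master to master_user='rep2', master_password='rep2' for channel 'group_replication_recovery';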

MGR cold start

  • Shut down all three nodes
mysql > shutdown ;

  • Start node1
[root@ms81 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf  &
[root@ms81 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.16 sec)
root@localhost [(none)]>set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
  • Start node2
[root@ms82 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf  &
[root@ms82 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.45 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | faaab4c3-9679-11ea-896f-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

  • Start node3 the same way

[root@ms83 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf  &
[root@ms83 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.45 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | faaab4c3-9679-11ea-896f-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
| group_replication_applier | fb358b40-9679-11ea-94cb-0242c0a8bc53 | ms83 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.01 sec)
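
Each node had to be started into the group by hand above because group_replication_start_on_boot was persisted as OFF earlier. Once the group is healthy, an optional follow-up (a sketch) is to flip it on so that members rejoin automatically after a restart; only a full cold start of the whole group still needs one member to run the bootstrap sequence:

  -- on every node, after the group is back to three ONLINE members
  set persist group_replication_start_on_boot = ON;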
