Preface

    We learned the mechanism of MGR yesterday. Today, let's configure an environment and run some tests. Like semisynchronous replication, MGR is installed as a plugin.
 
Node information

ID  IP             Hostname  Database      Port  Port of Seed  Server ID
1   192.168.1.101  zlm2      MySQL 5.7.21  3306  33061         1013306
2   192.168.1.102  zlm3      MySQL 5.7.21  3306  33062         1023306
3   192.168.1.103  zlm4      MySQL 5.7.21  3306  33063         1033306

Configuration

##Check the "/etc/hosts" file on all servers and make sure the IP-to-hostname mappings are correct.
[root@zlm2 :: ~]
#cat /etc/hosts
127.0.0.1 zlm2 zlm2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 zlm1 zlm1
192.168.1.101 zlm2 zlm2
192.168.1.102 zlm3 zlm3
192.168.1.103 zlm4 zlm4
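
##A quick reachability check between the nodes can save time later (this step is my own addition, not part of the original procedure). Note that the seed ports (33061-33063) only start listening after "START GROUP_REPLICATION;", so at this stage only basic host reachability can be verified.
[root@zlm2 :: ~]
#for h in zlm2 zlm3 zlm4; do ping -c 1 -W 1 $h >/dev/null 2>&1 && echo "$h reachable" || echo "$h unreachable"; done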

##Check the parameters that Group Replication requires in my.cnf on server zlm2.
[root@zlm2 :: ~]
#vim /data/mysql/mysql3306/my.cnf
... -- The other parameters are omitted.
#group replication -- The parameters below are required by Group Replication.
server_id=1013306 -- Filled in from the node table above; each member needs a unique server_id.
gtid_mode=ON -- Group Replication relies on GTID, so it must be set to ON.
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON -- Make sure the GTID information is written into the binary logs instead of only the mysql.gtid_executed table.
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="ed142e35-6ed1-11e8-86c6-080027de0e0e" -- This is a UUID, which can be generated by SELECT UUID();
loose-group_replication_start_on_boot=off -- Set it to "on" only after the Group Replication configuration is finished.
loose-group_replication_local_address= "zlm2:33061"
loose-group_replication_group_seeds= "zlm2:33061,zlm3:33062,zlm4:33063" -- Candidate members of the group; the port can be different from the mysqld port.
loose-group_replication_bootstrap_group=off -- Notice: it can be set to "on" only on the member that creates the group and starts first.
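
##A sanity check worth running right after the restart in the next step (my own addition, not in the original post): confirm the prerequisite settings actually took effect.
(root@localhost mysql3306.sock)[(none)]::>SELECT @@server_id, @@gtid_mode, @@enforce_gtid_consistency, @@log_slave_updates, @@binlog_format;
-- All of these should match the my.cnf values above.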

##Restart mysqld and create the replication user for Group Replication.
(root@localhost mysql3306.sock)[(none)]::>SET SQL_LOG_BIN=0; -- Disable binary logging so that the user creation is not replicated.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>CREATE USER rpl_mgr@'%' IDENTIFIED BY 'rpl4mgr';
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>GRANT REPLICATION SLAVE ON *.* TO rpl_mgr@'%';
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>SET SQL_LOG_BIN=1; -- Re-enable binary logging.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>CHANGE MASTER TO MASTER_USER='rpl_mgr', MASTER_PASSWORD='rpl4mgr' FOR CHANNEL 'group_replication_recovery'; -- The channel name is fixed and cannot be changed.
Query OK, 0 rows affected, 2 warnings (0.03 sec)
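
##The recovery channel's credentials can be verified before going on; a quick check (my own addition, assuming the channel was created as above):
(root@localhost mysql3306.sock)[(none)]::>SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G
-- Master_User should show "rpl_mgr"; the channel stays idle until distributed recovery actually uses it.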

##Install the Group Replication plugin.
(root@localhost mysql3306.sock)[(none)]::>INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)

(root@localhost mysql3306.sock)[(none)]::>show plugins;
+----------------------------+----------+--------------------+----------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | GPL |
+----------------------------+----------+--------------------+----------------------+---------+
45 rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier | | | NULL | OFFLINE | -- A record appears here right after the plugin is installed.
+---------------------------+-----------+-------------+-------------+--------------+
1 row in set (0.00 sec)
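
##A narrower check than "show plugins" is to query information_schema directly (my own addition; these columns exist in MySQL 5.7):
(root@localhost mysql3306.sock)[(none)]::>SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_TYPE FROM information_schema.plugins WHERE PLUGIN_NAME='group_replication';
-- Expect PLUGIN_STATUS = ACTIVE before starting Group Replication.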

##Set server zlm2 as the seed member of the group, then start Group Replication.
(root@localhost mysql3306.sock)[(none)]::>SET GLOBAL group_replication_bootstrap_group=ON; -- Bootstrap with this set to "on" on one member only, and only once.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
Query OK, 0 rows affected (2.05 sec)

(root@localhost mysql3306.sock)[(none)]::>SET GLOBAL group_replication_bootstrap_group=OFF; -- Disable it again after starting.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 1b7181ee-6eaf-11e8-998e-080027de0e0e | zlm2 | | ONLINE | -- There's now one ONLINE member in the group.
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
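
##In single-primary mode (the default), the current primary can be identified like this (my own addition, valid for MySQL 5.7):
(root@localhost mysql3306.sock)[(none)]::>SELECT VARIABLE_VALUE AS primary_member FROM performance_schema.global_status WHERE VARIABLE_NAME='group_replication_primary_member';
-- The returned UUID should match the member_id of zlm2 shown above.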

##Let's do some operations on server zlm2.
(root@localhost mysql3306.sock)[(none)]::>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>create database zlm;
Query OK, 1 row affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>use zlm;
Database changed
(root@localhost mysql3306.sock)[zlm]::>create table test_mgr (id int primary key, name char() not null);
Query OK, 0 rows affected (0.02 sec)

(root@localhost mysql3306.sock)[zlm]::>insert into test_mgr VALUES (, 'aaron8219');
Query OK, 1 row affected (0.01 sec)

(root@localhost mysql3306.sock)[zlm]::>select * from test_mgr;
+----+-----------+
| id | name |
+----+-----------+
| | aaron8219 |
+----+-----------+
1 row in set (0.00 sec)

(root@localhost mysql3306.sock)[zlm]::>show binlog events;
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| binlog. | | Format_desc | | | Server ver: 5.7.-log, Binlog ver: |
| binlog. | | Previous_gtids | | | |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:1' |
| binlog. | | Query | | | BEGIN |
| binlog. | | View_change | | | view_id=: |
| binlog. | | Query | | | COMMIT |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:2' |
| binlog. | | Query | | | create database zlm |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:3' |
| binlog. | | Query | | | use `zlm`; create table test_mgr (id int primary key, name char() not null) |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:4' |
| binlog. | | Query | | | BEGIN |
| binlog. | | Table_map | | | table_id: (zlm.test_mgr) |
| binlog. | | Write_rows | | | table_id: flags: STMT_END_F |
| binlog. | | Xid | | | COMMIT /* xid=59 */ |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
15 rows in set (0.00 sec)
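
##The same transactions should also show up in the executed GTID set, all under the group's UUID (my own addition):
(root@localhost mysql3306.sock)[zlm]::>SELECT @@GLOBAL.gtid_executed;
-- Expect something like ed142e35-6ed1-11e8-86c6-080027de0e0e:1-4 here, matching the binlog events above.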

##Configure the other two servers in the same way as server zlm2:
-- Omitted.

##Start Group Replication on server zlm3.
(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5c77c31b-4add-11e8-81e2-080027de0e0e | zlm3 | | OFFLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

##Something went wrong when executing "START GROUP_REPLICATION;": server zlm3 didn't join the group created by server zlm2.
The error log shows the following:
--13T07::.249829Z [Note] mysqld (mysqld 5.7.-log) starting as process ...
--13T07::.256669Z [Note] InnoDB: PUNCH HOLE support available
--13T07::.256701Z [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
--13T07::.256705Z [Note] InnoDB: Uses event mutexes
--13T07::.256708Z [Note] InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
--13T07::.256708Z [Note] InnoDB: Compressed tables use zlib 1.2.
--13T07::.256708Z [Note] InnoDB: Using Linux native AIO
--13T07::.256708Z [Note] InnoDB: Number of pools:
--13T07::.256718Z [Note] InnoDB: Using CPU crc32 instructions
--13T07::.258124Z [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = , chunk size = 100M
--13T07::.263012Z [Note] InnoDB: Completed initialization of buffer pool
--13T07::.264222Z [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
--13T07::.289331Z [Note] InnoDB: Highest supported file format is Barracuda.
--13T07::.475746Z [Note] InnoDB: Creating shared tablespace for temporary tables
--13T07::.475831Z [Note] InnoDB: Setting file './ibtmp1' size to MB. Physically writing the file full; Please wait ...
--13T07::.781737Z [Note] InnoDB: File './ibtmp1' size is now MB.
--13T07::.782469Z [Note] InnoDB: redo rollback segment(s) found. redo rollback segment(s) are active.
--13T07::.782482Z [Note] InnoDB: non-redo rollback segment(s) are active.
--13T07::.783403Z [Note] InnoDB: Waiting for purge to start
--13T07::.960368Z [Note] InnoDB: 5.7. started; log sequence number
--13T07::.960713Z [Note] Plugin 'FEDERATED' is disabled.
--13T07::.964346Z [Note] InnoDB: Loading buffer pool(s) from /data/mysql/mysql3306/data/ib_buffer_pool
--13T07::.968486Z [Warning] unknown variable 'loose_tokudb_cache_size=100M'
--13T07::.968509Z [Warning] unknown variable 'loose_tokudb_directio=ON'
--13T07::.968511Z [Warning] unknown variable 'loose_tokudb_fsync_log_period=1000'
--13T07::.968513Z [Warning] unknown variable 'loose_tokudb_commit_sync=0'
--13T07::.968515Z [Warning] unknown variable 'loose-group_replication_group_name=a5e7836a-6edc-11e8-a20d-080027de0e0e'
--13T07::.968516Z [Warning] unknown variable 'loose-group_replication_start_on_boot=off'
--13T07::.968518Z [Warning] unknown variable 'loose-group_replication_local_address=zlm3:33062'
--13T07::.968520Z [Warning] unknown variable 'loose-group_replication_group_seeds=zlm2:33061,zlm3:33062,zlm4:33063'
--13T07::.968521Z [Warning] unknown variable 'loose-group_replication_bootstrap_group=off'
--13T07::.983518Z [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
--13T07::.983631Z [Note] Server hostname (bind-address): '*'; port:
--13T07::.983667Z [Note] IPv6 is available.
--13T07::.983673Z [Note] - '::' resolves to '::';
--13T07::.983690Z [Note] Server socket created on IP: '::'.
--13T07::.036682Z [Note] Event Scheduler: Loaded events
--13T07::.037391Z [Note] mysqld: ready for connections.
Version: '5.7.21-log' socket: '/tmp/mysql3306.sock' port: MySQL Community Server (GPL)
--13T07::.083468Z [Note] InnoDB: Buffer pool(s) load completed at ::
--13T08::.631676Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.693094Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.529090Z [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
--13T08::.529197Z [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 10.0.2.15/24,127.0.0.1/8,192.168.1.102/24 to the whitelist'
--13T08::.529394Z [Note] Plugin group_replication reported: '[GCS] Translated 'zlm3' to 192.168.1.102'
--13T08::.529486Z [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
--13T08::.531296Z [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
--13T08::.531336Z [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "a5e7836a-6edc-11e8-a20d-080027de0e0e"; group_replication_local_address: "zlm3:33062"; group_replication_group_seeds: "zlm2:33061,zlm3:33062,zlm4:33063"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
--13T08::.531375Z [Note] Plugin group_replication reported: 'Member configuration: member_id: 1023306; member_uuid: "5c77c31b-4add-11e8-81e2-080027de0e0e"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
--13T08::.549240Z [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= , master_log_file='', master_log_pos= , master_bind=''. New state master_host='<NULL>', master_port= , master_log_file='', master_log_pos= , master_bind=''.
--13T08::.568485Z [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position , relay log './relay-bin-group_replication_applier.000001' position:
--13T08::.569516Z [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
--13T08::.569528Z [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
--13T08::.569531Z [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1023306'
--13T08::.569631Z [Note] Plugin group_replication reported: 'state 0 action xa_init'
--13T08::.589865Z [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:33062 (socket=62).'
--13T08::.589970Z [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=62)!'
--13T08::.590011Z [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=62)!'
--13T08::.590098Z [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:33062 (socket=62)!'
--13T08::.590549Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.590788Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 63'
--13T08::.593734Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.593853Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 65'
--13T08::.593966Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.594016Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 67'
--13T08::.595449Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595554Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 60'
--13T08::.595792Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595887Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 70'
--13T08::.596009Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.596069Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 72'
--13T08::.596168Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.596594Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.596622Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.596629Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.596947Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.596965Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
...(the same pairs of connection attempts to zlm2 33061 and zlm4 33063, failing with "No route to host" and "Connection refused" respectively, repeat several more times and are omitted here)...
--13T08::.609086Z [ERROR] Plugin group_replication reported: '[GCS] Error connecting to all peers. Member join failed. Local port: 33062'
--13T08::.609134Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.609141Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.609143Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.609182Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.609186Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.618446Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.618546Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.570227Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.570326Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.570364Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.570551Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.570559Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.570655Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.570836Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

##Finally, I found out that the firewall was not disabled on server zlm2.
[root@zlm2 :: ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Wed -- :: CEST; 7h ago
Main PID: (firewalld)
CGroup: /system.slice/firewalld.service
└─ /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.

[root@zlm2 :: ~]
#systemctl stop firewalld

[root@zlm2 :: ~]
#systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

[root@zlm2 :: ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)

Jun :: localhost.localdomain systemd[]: Starting firewalld - dynamic firewall daemon...
Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.
Jun :: zlm2 systemd[]: Stopping firewalld - dynamic firewall daemon...
Jun :: zlm2 systemd[]: Stopped firewalld - dynamic firewall daemon.
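
##Instead of disabling firewalld entirely, opening only the MySQL and Group Replication ports would also work; a sketch of what I'd try (my own suggestion, assuming firewalld on all nodes; on zlm3 and zlm4 open 33062 and 33063 respectively):
[root@zlm2 :: ~]
#firewall-cmd --permanent --add-port=3306/tcp --add-port=33061/tcp
#firewall-cmd --reload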

##Start Group Replication again.
(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

##The error log shows:
--13T08::.361028Z [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
--13T08::.361070Z [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 33062'
--13T08::.361171Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.361185Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.361188Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.361254Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.361258Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.371810Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.387635Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.349695Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.349732Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.349745Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.349969Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.349975Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.350079Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.350240Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'
    The other two servers (zlm3, zlm4) still cannot join the group created by zlm2, and I haven't figured out what's wrong yet. I'll test it again some time later.
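
##One more thing worth double-checking (my own observation, based on the logs above): zlm3 initialized with group_replication_group_name "a5e7836a-6edc-11e8-a20d-080027de0e0e", while zlm2's my.cnf uses "ed142e35-6ed1-11e8-86c6-080027de0e0e". All members must share exactly the same group name, so this mismatch alone would keep zlm3 and zlm4 out of the group even with the firewall fixed. A quick comparison to run on each node:
(root@localhost mysql3306.sock)[(none)]::>SHOW GLOBAL VARIABLES LIKE 'group_replication_group_name';
-- The value must be identical on zlm2, zlm3 and zlm4.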
 
