Setting up and troubleshooting a MySQL Group Replication test environment

MySQL Group Replication, introduced in MySQL 5.7.17, is a strongly consistent high-availability cluster solution.
1. Environment planning

Host IP          Hostname
172.16.192.201   balm001
172.16.192.202   balm002
172.16.192.203   balm003
2. MySQL configuration files

Configuration for balm001:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =1 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.201:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
Configuration for balm002:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =2 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.202:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
Configuration for balm003:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =3 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.203:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
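The three files are meant to be identical apart from server_id and loose-group_replication_local_address. Before starting the servers, it can be worth diffing the configs with those per-node lines filtered out; anything that survives the diff is an unintended inconsistency. The sketch below uses hypothetical paths under /tmp and a trimmed-down fragment for illustration; on the real hosts you would compare the actual my.cnf files:

```shell
# Sketch only: hypothetical file paths and trimmed-down config fragments.
cat > /tmp/balm001.cnf <<'EOF'
server_id =1
loose-group_replication_local_address ="172.16.192.201:24901"
gtid_mode =on
EOF

cat > /tmp/balm002.cnf <<'EOF'
server_id =2
loose-group_replication_local_address ="172.16.192.202:24901"
gtid_mode =on
EOF

# Strip the lines that are allowed to differ between nodes;
# whatever remains must match exactly.
grep -vE 'server_id|local_address' /tmp/balm001.cnf > /tmp/balm001.common
grep -vE 'server_id|local_address' /tmp/balm002.cnf > /tmp/balm002.common

if diff /tmp/balm001.common /tmp/balm002.common > /dev/null; then
    echo "configs consistent"
else
    echo "configs diverge"
fi
```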
3. Configure group replication on balm001 (balm001 will serve as the cluster's seed node)
set sql_log_bin=0;
create user rpl_user@'%' identified by '';
grant replication slave,replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '';
grant replication slave,replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '';
grant replication slave,replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to
    master_user='rpl_user',
    master_password=''
for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

set global group_replication_bootstrap_group=on;
start group_replication;
set global group_replication_bootstrap_group=off;
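Before joining the other two nodes, it is worth confirming that the seed actually came up ONLINE; at this point the group should contain exactly one member. A query sketch, run on balm001:

```sql
-- Run on balm001 right after start group_replication;
-- expect a single row for balm001 with MEMBER_STATE = ONLINE.
select member_host, member_state
  from performance_schema.replication_group_members;
```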
4. Configure balm002 & balm003
set sql_log_bin=0;
create user rpl_user@'%' identified by '';
grant replication slave,replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '';
grant replication slave,replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '';
grant replication slave,replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to
    master_user='rpl_user',
    master_password=''
for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

-- non-seed nodes can simply run start group_replication; no bootstrap step is needed
start group_replication;
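If a joining node sits in RECOVERING instead of moving to ONLINE, the distributed-recovery channel usually reveals why (bad credentials, unreachable donor, purged binlogs on the donor). A diagnostic sketch to run on the stuck node:

```sql
-- Inspect the recovery channel; look at Last_IO_Error / Last_SQL_Error
show slave status for channel 'group_replication_recovery'\G
```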
5. Check whether MySQL Group Replication was configured successfully
select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2263e491-05ad-11e7-b3ef-000c29c965ef | balm001 | 3306 | ONLINE |
| group_replication_applier | 441db987-0653-11e7-9d42-000c2922addb | balm003 | 3306 | ONLINE |
| group_replication_applier | 49ed2458-05b0-11e7-91af-000c29cac83b | balm002 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
All rows showing MEMBER_STATE = ONLINE means the cluster is healthy.
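In single-primary mode (the default) you can also ask which member currently holds the primary role; the server_uuid returned should match one of the MEMBER_ID values in the table above. A query sketch for 5.7:

```sql
-- 5.7 single-primary mode: returns the server_uuid of the current primary
select variable_value as primary_uuid
  from performance_schema.global_status
 where variable_name = 'group_replication_primary_member';
```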
6. Log output produced during setup (this excerpt is from balm002)
2017-03-30T21:48:57.781612+08:00 4 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2017-03-30T21:48:57.781849+08:00 4 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,172.16.192.202/16 to the whitelist'
2017-03-30T21:48:57.782666+08:00 4 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2017-03-30T21:48:57.782703+08:00 4 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"; group_replication_local_address: "172.16.192.202:24901"; group_replication_group_seeds: "172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2017-03-30T21:48:57.783601+08:00 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:48:57.807438+08:00 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2017-03-30T21:48:57.807513+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2017-03-30T21:48:57.807522+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 2'
2017-03-30T21:48:57.807902+08:00 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_applier.000001' position: 4
2017-03-30T21:48:57.812977+08:00 0 [Note] Plugin group_replication reported: 'state 0 action xa_init'
2017-03-30T21:48:57.835709+08:00 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24901 (socket=46).'
2017-03-30T21:48:57.835753+08:00 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=46)!'
2017-03-30T21:48:57.835759+08:00 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=46)!'
2017-03-30T21:48:57.835792+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.835918+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 47'
2017-03-30T21:48:57.835962+08:00 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24901 (socket=46)!'
2017-03-30T21:48:57.836055+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836090+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 63'
2017-03-30T21:48:57.836159+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836192+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 65'
2017-03-30T21:48:57.836255+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836285+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 67'
2017-03-30T21:48:57.836350+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836382+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 69'
2017-03-30T21:48:57.836460+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836548+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 71'
2017-03-30T21:48:57.836621+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.201 24901'
2017-03-30T21:48:57.836983+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.201 24901 fd 73'
2017-03-30T21:48:59.097399+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_snapshot'
2017-03-30T21:48:59.097702+08:00 0 [Note] Plugin group_replication reported: 'new state x_recover'
2017-03-30T21:48:59.097720+08:00 0 [Note] Plugin group_replication reported: 'state 4277 action xa_complete'
2017-03-30T21:48:59.097829+08:00 0 [Note] Plugin group_replication reported: 'new state x_run'
2017-03-30T21:49:00.103807+08:00 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14908590232397539:7'
2017-03-30T21:49:00.105031+08:00 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2017-03-30T21:49:00.125578+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.130977+08:00 12 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 2263e491-05ad-11e7-b3ef-000c29c965ef at balm001 port: 3306.'
2017-03-30T21:49:00.131254+08:00 14 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2017-03-30T21:49:00.136503+08:00 14 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'rpl_user@balm001:3306',replication started in log 'FIRST' at position 4
2017-03-30T21:49:00.148622+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_recovery.000001' position: 4
2017-03-30T21:49:00.176671+08:00 12 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2017-03-30T21:49:00.176723+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000003' at position 1563
2017-03-30T21:49:00.177801+08:00 14 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2017-03-30T21:49:00.177833+08:00 14 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000003', position 1887
2017-03-30T21:49:00.182909+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.189188+08:00 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
7. Pitfalls encountered during setup
The START GROUP_REPLICATION command failed as there was an error when initializ ... ...
This error was eventually traced to missing DNS configuration; editing /etc/hosts accordingly fixes it. Every host in the cluster must be able to resolve the hostnames of all the other members.
[root@balm003 Desktop]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.192.22 studio
172.16.192.201 balm001
172.16.192.202 balm002
172.16.192.203 balm003
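A quick way to confirm name resolution works on each node is getent hosts, which consults /etc/hosts and DNS in nsswitch order, i.e. the same lookup path mysqld uses. A small sketch; the helper name is made up:

```shell
# check_resolution: hypothetical helper. getent hosts resolves a name
# through /etc/hosts and DNS, the same way the server will.
check_resolution() {
    if getent hosts "$1" > /dev/null; then
        echo "$1: OK"
    else
        echo "$1: FAILED"
    fi
}

# Run this on every node; all three members must resolve.
for h in balm001 balm002 balm003; do
    check_resolution "$h"
done
```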
在11gR2 rac环境中,文件系统使用率紧张.而且lsof显示有非常多oraagent_oracle.l10 (deleted) 參考原文: High Space Usage and "l ...