031:Cetus sharding
I. Host environment
- VM configuration
| CPU | Memory | Disk | OS version | MySQL version | Cetus version |
|---|---|---|---|---|---|
| 2-core | 4G | 500G | CentOS 7.5.1804 | 5.7.18 | v1.0.0-44 | 
- Host information
| Hostname | IP address | Server_ID | Cetus | Notes |
|---|---|---|---|---|
| node01 | 192.168.222.171 | - | Deployed | Monitoring / failover of the MySQL masters |
| node02 | 192.168.222.172 | 172 | - | |
| node03 | 192.168.222.173 | 173 | - | |
| node04 | 192.168.222.174 | 174 | - | |
- Official architecture diagram (borrowed from the Cetus project documentation)

II. Setting up the environment
1. Prepare the environment
1. MySQL
- Version 5.7.17 or later (required for the distributed transaction feature)
- Database design (i.e. sharding: split the data objects into groups according to the business)
- Create the user and password
- Confirm that Cetus can log in to MySQL remotely
2. Cetus
- Configure users.json, sharding.json and shard.conf according to the MySQL backend information (variables.json is optional); see the Cetus sharding configuration file documentation for details
3. Multiple MySQL instances
Use Yang Jianrong's quick MGR setup scripts to configure the instances (without enabling MGR)
- Github
- Set up 5 instances on ports 33061 through 33065
- Create a test database
1. Create the MySQL test account for Cetus: grant all on *.* to gcdb@'%' identified by 'iforgot'; flush privileges;
2. Create the test database ttt
create database if not exists ttt;
create table ttt.t1(id int(4) primary key not null auto_increment, nums int(20) not null);
insert into ttt.t1(nums) values(1),(2),(3),(4),(5);
update ttt.t1 set nums=100 where id =3;
delete from ttt.t1 where id =4;
select * from ttt.t1;
[root@node06 mgr_scripts]# ps -ef |grep mynode
root     14503     1  0 09:55 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mynode01/mynode01.cnf
mysql    14826 14503  0 09:55 pts/0    00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/data/mynode01/mynode01.cnf --basedir=/usr/local/mysql --datadir=/data/mynode01 --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mynode01/node06.test.com.err --pid-file=/data/mynode01/node06.test.com.pid --socket=/data/mynode01/mynode01.sock --port=33061
root     14861     1  0 09:55 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mynode02/mynode02.cnf
mysql    15184 14861  0 09:55 pts/0    00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/data/mynode02/mynode02.cnf --basedir=/usr/local/mysql --datadir=/data/mynode02 --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mynode02/node06.test.com.err --pid-file=/data/mynode02/node06.test.com.pid --socket=/data/mynode02/mynode02.sock --port=33062
root     15220     1  0 09:55 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mynode03/mynode03.cnf
mysql    15543 15220  0 09:55 pts/0    00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/data/mynode03/mynode03.cnf --basedir=/usr/local/mysql --datadir=/data/mynode03 --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mynode03/node06.test.com.err --pid-file=/data/mynode03/node06.test.com.pid --socket=/data/mynode03/mynode03.sock --port=33063
root     15579     1  0 09:55 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mynode04/mynode04.cnf
mysql    15902 15579  0 09:55 pts/0    00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/data/mynode04/mynode04.cnf --basedir=/usr/local/mysql --datadir=/data/mynode04 --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mynode04/node06.test.com.err --pid-file=/data/mynode04/node06.test.com.pid --socket=/data/mynode04/mynode04.sock --port=33064
root     15938     1  0 09:56 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mynode05/mynode05.cnf
mysql    16261 15938  0 09:56 pts/0    00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/data/mynode05/mynode05.cnf --basedir=/usr/local/mysql --datadir=/data/mynode05 --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mynode05/node06.test.com.err --pid-file=/data/mynode05/node06.test.com.pid --socket=/data/mynode05/mynode05.sock --port=33065
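Before wiring Cetus to these instances, the gcdb account from the step above must exist on every backend. A minimal sketch (not part of the original scripts) that checks each instance through its socket and applies the grant, assuming root can log in locally without a password (adjust -u/-p otherwise):
```bash
#!/bin/bash
# Ping each local instance via its socket (paths taken from the ps output
# above) and create the gcdb test account on it.
for i in 1 2 3 4 5; do
  sock=/data/mynode0${i}/mynode0${i}.sock
  mysqladmin -uroot -S "$sock" ping || continue
  mysql -uroot -S "$sock" -e \
    "grant all on *.* to gcdb@'%' identified by 'iforgot'; flush privileges;"
done
```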
III. Installing Cetus
1. Download the source and install dependencies
# Download the Cetus source code
git clone https://github.com/Lede-Inc/cetus.git
# Install Cetus dependencies
yum -y install cmake gcc glib2-devel flex libevent-devel mysql-devel gperftools-libs
# Install MHA dependencies
yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN perl-ExtUtils-Manifest
2. Install Cetus
1. Installation notes
Cetus is built and installed with CMake; the CMakeLists.txt files describing the build are already present in the top-level directory and sub-directories of the source tree. After downloading and unpacking the source, the installation steps are as follows:
- Create a build directory: create a separate build/ directory under the source root and change into it
mkdir build/
cd build/
- Compile: run cmake with one of the following commands
Read/write-splitting build:
cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/home/user/cetus_install -DSIMPLE_PARSER=ON
Sharding build:
cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/home/user/cetus_install -DSIMPLE_PARSER=OFF
The CMAKE_BUILD_TYPE variable selects a debug or release build; CMAKE_INSTALL_PREFIX sets the absolute path of the installation directory, which is suggested to be named /home/user/date.build_type.branch.commit_id.
The SIMPLE_PARSER variable selects which flavor to build: ON builds the read/write-splitting version, OFF builds the sharding version.
This step also checks whether your system is missing any required libraries or tools; install the corresponding dependencies according to the error messages.
- Install: run make install
make install
- Configure: edit the configuration files before running Cetus
cd /home/user/cetus_install/conf/
cp XXX.json.example XXX.json
cp XXX.conf.example XXX.conf
vi XXX.json
vi XXX.conf
After make install, example configuration files ending in .example are placed under /home/user/cetus_install/conf/, including the user settings file (users.json), the variable handling file (variables.json), the sharding rule file for the sharding version (sharding.json), the startup configuration for the read/write-splitting version (proxy.conf), and the startup configuration for the sharding version (shard.conf).
Edit the files that match the version you built: the read/write-splitting version needs users.json and proxy.conf, the sharding version needs users.json, sharding.json and shard.conf; variables.json is optional for both versions.
See the Cetus read/write-splitting configuration guide and the Cetus sharding configuration guide for the details of each file.
- Start: Cetus is started with bin/cetus
Read/write-splitting version:
bin/cetus --defaults-file=conf/proxy.conf [--conf-dir=/home/user/cetus_install/conf/]
Sharding version:
bin/cetus --defaults-file=conf/shard.conf [--conf-dir=/home/user/cetus_install/conf/]
Command-line options can be passed at startup: --defaults-file loads the startup configuration file (proxy.conf or shard.conf), which must have permissions 660 before starting; --conf-dir is optional and loads the other configuration files (the .json files), defaulting to the conf directory under the current working directory.
Cetus can run in the background as a daemon and can automatically start a new process when a process terminates unexpectedly; both behaviors are controlled by options in the startup configuration file.
2. Installation steps
# Clone the source and enter its directory
[root@node01 software]# git clone https://github.com/Lede-Inc/cetus.git
[root@node01 software]# cd cetus/
[root@node01 cetus]# mkdir build/ && cd build
[root@node01 build]# cmake ../ -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/usr/local/cetus -DSIMPLE_PARSER=OFF
[root@node01 build]# make install
[root@node01 build]# ll /usr/local/cetus/
total 0
drwxr-xr-x 2 root root  19 Aug 31 09:29 bin
drwxr-xr-x 2 root root 143 Aug 31 09:29 conf
drwxr-xr-x 4 root root 210 Aug 31 09:29 lib
drwxr-xr-x 2 root root  19 Aug 31 09:29 libexec
drwxr-xr-x 2 root root  23 Aug 31 09:29 logs
IV. Configuring Cetus
1. Create and edit the configuration files
- On the manager node, create the Cetus user and sharding configuration files (users.json, shard.conf and sharding.json)
[root@node01 build]# cp /usr/local/cetus/conf/users.json.example  /usr/local/cetus/conf/users.json
[root@node01 build]# cp /usr/local/cetus/conf/shard.conf.example /usr/local/cetus/conf/shard.conf
[root@node01 build]# cp /usr/local/cetus/conf/sharding.json.example /usr/local/cetus/conf/sharding.json
[root@node01 build]# vim /usr/local/cetus/conf/users.json
[root@node01 build]# cat /usr/local/cetus/conf/users.json
{
	"users":	[{
			"user":	"gcdb",
			"client_pwd":	"iforgot",
			"server_pwd":	"iforgot"
		}, {
			"user":	"cetus_app1",
			"client_pwd":	"cetus_app1",
			"server_pwd":	"cetus_app1"
		}]
}
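Cetus logs in to the backends with the server_pwd above, so it is worth confirming that the gcdb account actually works on every MySQL instance that will be listed in shard.conf. A quick check sketch (my own addition, using the backend host and ports configured below):
```bash
# Verify the gcdb/server_pwd credentials against every backend instance.
for port in 33061 33062 33063 33064 33065; do
  mysql -ugcdb -piforgot -h192.168.222.176 -P"$port" -e "select @@port;" \
    || echo "backend on port $port not reachable with gcdb/iforgot"
done
```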
[root@node01 build]# cat /usr/local/cetus/conf/sharding.json
{
  "vdb": [
    {
      "id": 1,
      "type": "int",
      "method": "hash",
      "num": 10,
      "partitions": {"data1": [0,1], "data2": [2,3], "data3": [4,5], "data4": [6,7],"data5": [8,9]}
    },
    {
      "id": 2,
      "type": "int",
      "method": "range",
      "num": 0,
      "partitions": {"data1": 99999, "data2": 199999, "data3": 299999,"data4": 399999,"data5": 499999}
    }
  ],
  "table": [
    {"vdb": 1, "db": "employees_hash", "table": "dept_emp", "pkey": "emp_no"},
    {"vdb": 1, "db": "employees_hash", "table": "employees", "pkey": "emp_no"},
    {"vdb": 1, "db": "employees_hash", "table": "salaries", "pkey": "emp_no"},
    {"vdb": 2, "db": "employees_range", "table": "employees", "pkey": "emp_no"},
    {"vdb": 2, "db": "employees_range", "table": "titles", "pkey": "emp_no"}
  ],
  "single_tables": [
    {"table": "dept_manager", "db": "employees_hash", "group": "data1"},
    {"table": "departments", "db": "employees_hash", "group": "data1"},
    {"table": "departments",  "db": "employees_range", "group": "data2"}
  ]
}
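To make the rules above concrete: vdb 1 hashes the sharding key into num=10 buckets and maps bucket sets to groups, while vdb 2 routes by value ranges on the key. A small illustration of the implied routing (my own sketch, assuming the hash for integer keys is a plain modulo, which matches the bucket layout; this is not Cetus code):
```bash
#!/bin/bash
# Illustration only: reproduce the routing implied by sharding.json above.

# vdb 1 (hash): bucket = key % 10; buckets {0,1}->data1, {2,3}->data2, ...
hash_group() {
  local key=$1
  local bucket=$(( key % 10 ))
  echo "data$(( bucket / 2 + 1 ))"
}

# vdb 2 (range): key <= 99999 -> data1, <= 199999 -> data2, ..., <= 499999 -> data5
range_group() {
  local key=$1
  local bounds=(99999 199999 299999 399999 499999)
  for i in "${!bounds[@]}"; do
    (( key <= bounds[i] )) && { echo "data$(( i + 1 ))"; return; }
  done
  echo "no matching range"
}

hash_group 10001    # bucket 1        -> data1
range_group 150000  # (99999, 199999] -> data2
```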
[root@node01 build]# vim /usr/local/cetus/conf/shard.conf
[root@node01 build]# cat /usr/local/cetus/conf/shard.conf
[cetus]
# For mode
daemon=true
# Loaded Plugins
plugins=shard,admin
# Proxy configuration. For example: MySQL master host IP is 192.0.0.1 and slave host IP is 192.0.0.2
#proxy-address=192.168.222.171:8888
proxy-address=0.0.0.0:8888
proxy-backend-addresses=192.168.222.176:33061@data1,192.168.222.176:33062@data2,192.168.222.176:33063@data3,192.168.222.176:33064@data4,192.168.222.176:33065@data5
# Admin Configuration
admin-address=192.168.222.171:9999
admin-username=admin
admin-password=admin
# Backend Configuration, use test db and username created
default-db=ttt
default-username=gcdb
default-pool-size=100
max-pool-size=400
max-resp-len=10485760
long-query-time=100
default-charset=utf8
# Log configuration: logs are written to /var/log/cetus, which must be created manually and be writable by the OS user running Cetus
plugin-dir=lib/cetus/plugins
max-open-files=65536
pid-file=cetus.pid
log-xa-file=/var/log/cetus/cetusxa.log
log-file=/var/log/cetus/cetus.log
log-level=debug
# Check slave delay
disable-threads=true
check-slave-delay=true
slave-delay-down=5
slave-delay-recover=1
# For troubleshooting
keepalive=true
verbose-shutdown=true
log-backtrace-on-crash=true
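One thing to prepare before startup: the log paths above point at /var/log/cetus, which make install does not create. Create it and make sure the user that runs Cetus can write to it:
```bash
# Create the directory used by log-file and log-xa-file in shard.conf.
mkdir -p /var/log/cetus
chmod 755 /var/log/cetus   # Cetus runs as root in this walkthrough; adjust ownership otherwise
```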
2. Start Cetus
- Make sure the startup configuration files have permissions 660 before starting
[root@node01 software]# chmod 660 /usr/local/cetus/conf/*
[root@node01 software]# ls /usr/local/cetus/conf/
proxy.conf.example  shard.conf  shard.conf.example  sharding.json  sharding.json.bak  sharding.json.example  users.json  users.json.example  variables.json.example
- Start Cetus in daemon mode
- session A
[root@node01 software]# /usr/local/cetus/bin/cetus --defaults-file=/usr/local/cetus/conf/shard.conf
[root@node01 software]# ps -ef |grep cetus
root     21835 20826  0 14:59 pts/3    00:00:00 tailf /var/log/cetus/cetus.log
root     21845     1  0 14:59 ?        00:00:00 /usr/local/cetus/libexec/cetus --defaults-file=/usr/local/cetus/conf/shard.conf
root     21846 21845  4 14:59 ?        00:00:33 /usr/local/cetus/libexec/cetus --defaults-file=/usr/local/cetus/conf/shard.conf
root     22487 26164  0 15:12 pts/2    00:00:00 grep --color=auto cetus
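A quick way to confirm that both listeners from shard.conf are up (the proxy on 8888 and the admin interface on 9999):
```bash
# Each port should show a LISTEN socket owned by the cetus worker process.
ss -lntp | grep -E ':(8888|9999)\b'
```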
- session B
[root@node01 software]# mkdir -p /mydata
[root@node01 software]# cd /mydata
[root@node01 mydata]# git clone  https://github.com/datacharmer/test_db.git
Cloning into 'test_db'...
remote: Counting objects: 102, done.
remote: Total 102 (delta 0), reused 0 (delta 0), pack-reused 102
Receiving objects: 100% (102/102), 68.81 MiB | 274.00 KiB/s, done.
Resolving deltas: 100% (54/54), done.
[root@node01 mydata]# cd test_db/
[root@node01 test_db]# ls
Changelog                      employees.sql          load_dept_emp.dump      load_salaries1.dump  load_titles.dump  sakila            test_employees_md5.sql
employees_partitioned_5.1.sql  images                 load_dept_manager.dump  load_salaries2.dump  objects.sql       show_elapsed.sql  test_employees_sha.sql
employees_partitioned.sql      load_departments.dump  load_employees.dump     load_salaries3.dump  README.md         sql_test.sh
[root@node01 test_db]# less employees.sql
DROP DATABASE IF EXISTS employees;
CREATE DATABASE IF NOT EXISTS employees;
USE employees;
SELECT 'CREATING DATABASE STRUCTURE' as 'INFO';
DROP TABLE IF EXISTS dept_emp,
                     dept_manager,
                     titles,
                     salaries,
                     employees,
                     departments;
CREATE TABLE employees (
    emp_no      INT             NOT NULL,
    birth_date  DATE            NOT NULL,
    first_name  VARCHAR(14)     NOT NULL,
    last_name   VARCHAR(16)     NOT NULL,
    gender      ENUM ('M','F')  NOT NULL,
    hire_date   DATE            NOT NULL,
    PRIMARY KEY (emp_no)
);
CREATE TABLE departments (
    dept_no     CHAR(4)         NOT NULL,
    dept_name   VARCHAR(40)     NOT NULL,
    PRIMARY KEY (dept_no),
    UNIQUE  KEY (dept_name)
);
CREATE TABLE dept_manager (
   emp_no       INT             NOT NULL,
   dept_no      CHAR(4)         NOT NULL,
   from_date    DATE            NOT NULL,
   to_date      DATE            NOT NULL,
   FOREIGN KEY (emp_no)  REFERENCES employees (emp_no)    ON DELETE CASCADE,
   FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
   PRIMARY KEY (emp_no,dept_no)
); 
CREATE TABLE dept_emp (
    emp_no      INT             NOT NULL,
    dept_no     CHAR(4)         NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE            NOT NULL,
    FOREIGN KEY (emp_no)  REFERENCES employees   (emp_no)  ON DELETE CASCADE,
    FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no,dept_no)
);
CREATE TABLE titles (
    emp_no      INT             NOT NULL,
    title       VARCHAR(50)     NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE,
    FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no,title, from_date)
)
; 
CREATE TABLE salaries (
    emp_no      INT             NOT NULL,
    salary      INT             NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE            NOT NULL,
    FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no, from_date)
)
;
-- Delete the view-creation statements below, otherwise the import fails (ERROR 1149 (42000) at line 98: near "OR": syntax error)
CREATE OR REPLACE VIEW dept_emp_latest_date AS
    SELECT emp_no, MAX(from_date) AS from_date, MAX(to_date) AS to_date
    FROM dept_emp
    GROUP BY emp_no;
# shows only the current department for each employee
CREATE OR REPLACE VIEW current_dept_emp AS
    SELECT l.emp_no, dept_no, l.from_date, l.to_date
    FROM dept_emp d
        INNER JOIN dept_emp_latest_date l
        ON d.emp_no=l.emp_no AND d.from_date=l.from_date AND l.to_date = d.to_date;
flush /*!50503 binary */ logs;
# Load the data in the order below, otherwise foreign key errors will occur
SELECT 'LOADING departments' as 'INFO';
source load_departments.dump ;
SELECT 'LOADING employees' as 'INFO';
source load_employees.dump ;
SELECT 'LOADING dept_emp' as 'INFO';
source load_dept_emp.dump ;
SELECT 'LOADING dept_manager' as 'INFO';
source load_dept_manager.dump ;
SELECT 'LOADING titles' as 'INFO';
source load_titles.dump ;
SELECT 'LOADING salaries' as 'INFO';
source load_salaries1.dump ;
source load_salaries2.dump ;
source load_salaries3.dump ;
source show_elapsed.sql ;
(END)
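As noted in the comment inside employees.sql above, the two CREATE OR REPLACE VIEW statements must be removed before importing through Cetus (they trigger ERROR 1149). One non-interactive way to strip them — a sketch that assumes each view definition ends at the first line terminated by a semicolon:
```bash
# Back up employees.sql and delete both CREATE OR REPLACE VIEW ... ; blocks.
sed -i.bak '/^CREATE OR REPLACE VIEW/,/;[[:space:]]*$/d' employees.sql
```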
- session C
[root@node01 build]# mysql -ugcdb -piforgot -P8888 -h192.168.222.171
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.18-log (cetus) MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
(gcdb@192.168.222.171) 14:47:27 [(none)]> status
--------------
mysql  Ver 14.14 Distrib 5.7.18, for linux-glibc2.5 (x86_64) using  EditLine wrapper
Connection id:		5
Current database:
Current user:		gcdb@node01.test.com
SSL:			Not in use
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		5.7.18-log (cetus) MySQL Community Server (GPL)
Protocol version:	10
Connection:		192.168.222.171 via TCP/IP
Server characterset:	utf8
Db     characterset:	utf8
Client characterset:	utf8
Conn.  characterset:	utf8
TCP port:		8888
--------------
(gcdb@192.168.222.171) 14:47:32 [(none)]>
- session D
[root@node01 ~]# mysql -uadmin -padmin -P9999 -h192.168.222.171
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7 admin
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
(admin@192.168.222.171) 09:04:03 [(none)]> select * from vdb;
+--------+------------+--------------------------------------------------------------------------------------------------------------------------------+
| VDB id | Method     | Partitions                                                                                                                     |
+--------+------------+--------------------------------------------------------------------------------------------------------------------------------+
| 1      | hash(INT)  | [0,1]->data1; [2,3]->data2; [4,5]->data3; [6,7]->data4; [8,9]->data5                                                           |
| 2      | range(INT) | (-2147483648, 99999]->data1; (99999, 199999]->data2; (199999, 299999]->data3; (299999, 399999]->data4; (399999, 499999]->data5 |
+--------+------------+--------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
(admin@192.168.222.171) 09:04:20 [(none)]> select sharded table;
+---------------------------+--------+--------+
| Table                     | VDB id | Key    |
+---------------------------+--------+--------+
| employees_hash.dept_emp   | 1      | emp_no |
| employees_hash.employees  | 1      | emp_no |
| employees_hash.salaries   | 1      | emp_no |
| employees_range.employees | 2      | emp_no |
| employees_range.titles    | 2      | emp_no |
+---------------------------+--------+--------+
5 rows in set (0.01 sec)
(admin@192.168.222.171) 09:11:22 [(none)]> select single table ;
+-----------------------------+-------+
| Table                       | Group |
+-----------------------------+-------+
| employees_hash.dept_manager | data1 |
| employees_hash.departments  | data1 |
| employees_range.departments | data2 |
+-----------------------------+-------+
3 rows in set (0.00 sec)
(admin@192.168.222.171) 09:11:28 [(none)]> select * from groups;
+-------+-----------------------+--------+
| group | master                | slaves |
+-------+-----------------------+--------+
| data1 | 192.168.222.176:33061 |        |
| data2 | 192.168.222.176:33062 |        |
| data3 | 192.168.222.176:33063 |        |
| data4 | 192.168.222.176:33064 |        |
| data5 | 192.168.222.176:33065 |        |
+-------+-----------------------+--------+
5 rows in set (0.00 sec)
(admin@192.168.222.171) 09:11:50 [(none)]> select conn_details from backends;select * from backends;
+-------------+----------+------------+------------+-------------+
| backend_ndx | username | idle_conns | used_conns | total_conns |
+-------------+----------+------------+------------+-------------+
| 0           | gcdb     | 100        | 0          | 100         |
| 1           | gcdb     | 100        | 0          | 100         |
| 2           | gcdb     | 100        | 0          | 100         |
| 3           | gcdb     | 100        | 0          | 100         |
| 4           | gcdb     | 100        | 0          | 100         |
+-------------+----------+------------+------------+-------------+
5 rows in set (0.00 sec)
+-------------+-----------------------+-------+------+-------------+------+------------+------------+-------------+-------+
| backend_ndx | address               | state | type | slave delay | uuid | idle_conns | used_conns | total_conns | group |
+-------------+-----------------------+-------+------+-------------+------+------------+------------+-------------+-------+
| 1           | 192.168.222.176:33061 | up    | rw   | NULL        | NULL | 100        | 0          | 100         | data1 |
| 2           | 192.168.222.176:33062 | up    | rw   | NULL        | NULL | 100        | 0          | 100         | data2 |
| 3           | 192.168.222.176:33063 | up    | rw   | NULL        | NULL | 100        | 0          | 100         | data3 |
| 4           | 192.168.222.176:33064 | up    | rw   | NULL        | NULL | 100        | 0          | 100         | data4 |
| 5           | 192.168.222.176:33065 | up    | rw   | NULL        | NULL | 100        | 0          | 100         | data5 |
+-------------+-----------------------+-------+------+-------------+------+------------+------------+-------------+-------+
5 rows in set (0.00 sec)
(admin@192.168.222.171) 09:12:40 [(none)]>
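The same admin queries can also be issued non-interactively, which is convenient for monitoring scripts:
```bash
# Query backend state through the admin port without an interactive session.
mysql -uadmin -padmin -h192.168.222.171 -P9999 -e "select * from backends;"
```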
3. Import data
- Create the databases and tables
-- employees_range: create 3 tables
create database employees_range;
use  employees_range;
CREATE TABLE employees (
    emp_no      INT             NOT NULL,
    birth_date  DATE            NOT NULL,
    first_name  VARCHAR(14)     NOT NULL,
    last_name   VARCHAR(16)     NOT NULL,
    gender      ENUM ('M','F')  NOT NULL,
    hire_date   DATE            NOT NULL,
    PRIMARY KEY (emp_no)
);
CREATE TABLE departments (
    dept_no     CHAR(4)         NOT NULL,
    dept_name   VARCHAR(40)     NOT NULL,
    PRIMARY KEY (dept_no),
    UNIQUE  KEY (dept_name)
);
CREATE TABLE titles (
    emp_no      INT             NOT NULL,
    title       VARCHAR(50)     NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE,
    KEY         (emp_no),
    FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no,title, from_date)
); 
-- employees_hash: create 5 tables
create database employees_hash;
use  employees_hash;
CREATE TABLE employees (
    emp_no      INT             NOT NULL,
    birth_date  DATE            NOT NULL,
    first_name  VARCHAR(14)     NOT NULL,
    last_name   VARCHAR(16)     NOT NULL,
    gender      ENUM ('M','F')  NOT NULL,
    hire_date   DATE            NOT NULL,
    PRIMARY KEY (emp_no)
);
CREATE TABLE departments (
    dept_no     CHAR(4)         NOT NULL,
    dept_name   VARCHAR(40)     NOT NULL,
    PRIMARY KEY (dept_no),
    UNIQUE  KEY (dept_name)
);
CREATE TABLE dept_manager (
   dept_no      CHAR(4)         NOT NULL,
   emp_no       INT             NOT NULL,
   from_date    DATE            NOT NULL,
   to_date      DATE            NOT NULL,
   KEY         (emp_no),
   KEY         (dept_no),
   FOREIGN KEY (emp_no)  REFERENCES employees (emp_no)    ON DELETE CASCADE,
   FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
   PRIMARY KEY (emp_no,dept_no)
); 
CREATE TABLE dept_emp (
    emp_no      INT             NOT NULL,
    dept_no     CHAR(4)         NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE            NOT NULL,
    KEY         (emp_no),
    KEY         (dept_no),
    FOREIGN KEY (emp_no)  REFERENCES employees   (emp_no)  ON DELETE CASCADE,
    FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no,dept_no)
);
CREATE TABLE salaries (
    emp_no      INT             NOT NULL,
    salary      INT             NOT NULL,
    from_date   DATE            NOT NULL,
    to_date     DATE            NOT NULL,
    KEY         (emp_no),
    FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
    PRIMARY KEY (emp_no, from_date)
);
- First import everything into the employees database, then dump each table and re-import the dumps into employees_hash and employees_range
Cetus does not support LOAD DATA INFILE or the LOCK TABLES syntax; see the test_db README.md for details on importing employees
# Import into the employees database
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees <employees.sql
# Dump each table of the employees database separately
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees titles >titles.sql
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees dept_emp >dept_emp.sql
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees salaries >salaries.sql
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees employees >employees.sql
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees dept_manager >dept_manager.sql
mysqldump -ugcdb -piforgot -h192.168.49.176 -P33061  --set-gtid-purged=OFF --complete-insert --no-create-info  --skip-add-locks  --master-data=2 employees departments >departments.sql
# Import the dumps into employees_hash and employees_range
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_range <departments.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_range <employees.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_range <titles.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_hash <departments.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_hash <employees.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_hash <salaries.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_hash <dept_manager.sql
mysql -ugcdb -piforgot  -h192.168.222.171 -P8888 employees_hash <dept_emp.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1452 (23000) at line 29: Cannot add or update a child row: a foreign key constraint fails (`employees_hash`.`dept_emp`, CONSTRAINT `dept_emp_ibfk_2` FOREIGN KEY (`dept_no`) REFERENCES `departments` (`dept_no`) ON DELETE CASCADE)
Note the import order, otherwise foreign key errors occur; alternatively, drop the foreign key constraints first, import the data, then rebuild the foreign keys afterwards (see the sketch below).
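A sketch of that workaround (the constraint name is taken from the error message above; if Cetus refuses the ALTER statements, run them directly against the backend instances instead): drop the dept_no foreign key on employees_hash.dept_emp, load the dump, then re-create the constraint.
```bash
MYSQL="mysql -ugcdb -piforgot -h192.168.222.171 -P8888 employees_hash"
# Drop the failing foreign key, import the dump, then restore the constraint.
$MYSQL -e "ALTER TABLE dept_emp DROP FOREIGN KEY dept_emp_ibfk_2;"
$MYSQL < dept_emp.sql
$MYSQL -e "ALTER TABLE dept_emp ADD CONSTRAINT dept_emp_ibfk_2 FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE;"
```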
4. Verify the Cetus sharding function
- Cetus host
(gcdb@192.168.222.171) 09:35:56 [(none)]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|   300024 |  -- employees_hash.employees total row count: 300024
+----------+
1 row in set (0.32 sec)
(gcdb@192.168.222.171) 09:36:21 [(none)]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|   443308 |  -- employees_range.titles total row count: 443308
+----------+
1 row in set (0.26 sec)
(gcdb@192.168.222.171) 09:40:22 [(none)]> select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|       24 |  -- employees_hash.dept_manager total row count: 24 (single-node global table)
+----------+
1 row in set (0.01 sec)
(gcdb@192.168.222.171) 09:41:35 [(none)]>
- MySQL host (multiple instances)
--mynode01
(root@localhost) 09:38:34 [(none)]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|    60004 |  -- employees_hash.employees rows on this shard: 60004 (hash-sharded table)
+----------+
1 row in set (0.03 sec)
(root@localhost) 09:38:41 [(none)]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|   133345 |  -- employees_range.titles rows on this shard: 133345 (range-sharded table)
+----------+
1 row in set (0.05 sec)
(root@localhost) 09:38:43 [(none)]>  select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|       24 |   -- employees_hash.dept_manager rows: 24 (single-node global table)
+----------+
1 row in set (0.00 sec)
(root@localhost) 09:40:19 [(none)]>
-------------------------------------------------------
--mynode02
(root@localhost) 09:40:30 [(none)]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|    60005 |
+----------+
1 row in set (0.03 sec)
(root@localhost) 09:40:35 [(none)]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|    14844 |
+----------+
1 row in set (0.01 sec)
(root@localhost) 09:40:41 [(none)]>  select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
(root@localhost) 09:40:43 [(none)]>
-------------------------------------------------------
--mynode03
(root@localhost) 09:41:42 [(none)]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|    60009 |
+----------+
1 row in set (0.03 sec)
(root@localhost) 09:41:48 [(none)]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|   147566 |
+----------+
1 row in set (0.07 sec)
(root@localhost) 09:41:50 [(none)]> select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.02 sec)
(root@localhost) 09:41:55 [(none)]>
--mynode04
(root@localhost) 09:35:11 [(none)]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|    60003 |
+----------+
1 row in set (0.03 sec)
(root@localhost) 09:42:31 [(none)]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
(root@localhost) 09:42:36 [(none)]> select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.01 sec)
(root@localhost) 09:42:41 [(none)]>
-------------------------------------------------------
--mynode05
(root@localhost) 09:35:14 [employees_range]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|    60003 |
+----------+
1 row in set (0.02 sec)
(root@localhost) 09:43:12 [employees_range]> select count(*) from employees_range.titles;
+----------+
| count(*) |
+----------+
|   147553 |
+----------+
1 row in set (0.06 sec)
(root@localhost) 09:43:13 [employees_range]> select count(*) from employees_hash.dept_manager;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.02 sec)
(root@localhost) 09:43:20 [employees_range]>
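The per-instance spot checks above can also be scripted in one pass over all five backends, which makes it easier to compare the shard-level counts with the totals seen through Cetus:
```bash
# Run the same three counts on every backend instance.
for port in 33061 33062 33063 33064 33065; do
  echo "== backend $port =="
  mysql -ugcdb -piforgot -h192.168.222.176 -P"$port" -N -e \
    "select count(*) from employees_hash.employees;
     select count(*) from employees_range.titles;
     select count(*) from employees_hash.dept_manager;"
done
```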
V. Testing Cetus limitations
1. JOIN limitations
Cross-database JOINs are not supported; a non-sharded table can be stored as a copy in every shard to increase the chance that JOINs against it succeed.
-- employees_hash database
(gcdb@192.168.222.171) 11:34:56 [employees_hash]> select * from dept_manager d  left join employees a on d.emp_no=a.emp_no ; -- dept_manager (global table)
ERROR 5004 (HY000): (cetus) JOIN single-table WITH sharding-table
(gcdb@192.168.222.171) 11:34:59 [employees_hash]> select * from dept_emp d  left join employees a on d.emp_no=a.emp_no  limit 10; -- (dept_emp is a sharded table)
+--------+---------+------------+------------+--------+------------+------------+-----------+--------+------------+
| emp_no | dept_no | from_date  | to_date    | emp_no | birth_date | first_name | last_name | gender | hire_date  |
+--------+---------+------------+------------+--------+------------+------------+-----------+--------+------------+
|  10001 | d005    | 1986-06-26 | 9999-01-01 |  10001 | 1953-09-02 | Georgi     | Facello   | M      | 1986-06-26 |
|  10010 | d004    | 1996-11-24 | 2000-06-26 |  10010 | 1963-06-01 | Duangkaew  | Piveteau  | F      | 1989-08-24 |
|  10010 | d006    | 2000-06-26 | 9999-01-01 |  10010 | 1963-06-01 | Duangkaew  | Piveteau  | F      | 1989-08-24 |
|  10011 | d009    | 1990-01-22 | 1996-11-09 |  10011 | 1953-11-07 | Mary       | Sluis     | F      | 1990-01-22 |
|  10020 | d004    | 1997-12-30 | 9999-01-01 |  10020 | 1952-12-24 | Mayuko     | Warwick   | M      | 1991-01-26 |
|  10021 | d005    | 1988-02-10 | 2002-07-15 |  10021 | 1960-02-20 | Ramzi      | Erde      | M      | 1988-02-10 |
|  10030 | d004    | 1994-02-17 | 9999-01-01 |  10030 | 1958-07-14 | Elvis      | Demeyer   | M      | 1994-02-17 |
|  10031 | d005    | 1991-09-01 | 9999-01-01 |  10031 | 1959-01-27 | Karsten    | Joslin    | M      | 1991-09-01 |
|  10040 | d005    | 1993-02-14 | 2002-01-22 |  10040 | 1959-09-13 | Weiyi      | Meriste   | F      | 1993-02-14 |
|  10040 | d008    | 2002-01-22 | 9999-01-01 |  10040 | 1959-09-13 | Weiyi      | Meriste   | F      | 1993-02-14 |
+--------+---------+------------+------------+--------+------------+------------+-----------+--------+------------+
10 rows in set (0.01 sec)
(gcdb@192.168.222.171) 11:35:15 [employees_hash]> 
2. DISTINCT limitations
There is no restriction for global tables.
For sharded tables, DISTINCT is only supported when the DISTINCT columns are also the ORDER BY columns, for example: select distinct col1 from tab1 order by col1.
To be more user friendly, Cetus completes an incomplete ORDER BY: for example, select distinct col1,col2 from tab1 order by col1 is rewritten as select distinct col1,col2 from tab1 order by col1,col2. However, select distinct * from tab1 returns an error. For efficiency, use the standard form to avoid unnecessary overhead.
(gcdb@192.168.222.171) 10:52:27 [employees_range]> select distinct title from titles;
+--------------------+
| title              |
+--------------------+
| Assistant Engineer |
| Engineer           |
| Manager            |
| Senior Engineer    |
| Senior Staff       |
| Staff              |
| Technique Leader   |
+--------------------+
7 rows in set (0.52 sec)
(gcdb@192.168.222.171) 10:52:37 [employees_range]> select distinct title from titles order by emp_no; -- returns an error
ERROR 5005 (HY000): (proxy)ORDER BY columns must be a subset of DISTINCT columns
(gcdb@192.168.222.171) 10:52:43 [employees_range]> select distinct title,emp_no from titles order by emp_no limit 10;
+-----------------+--------+
| title           | emp_no |
+-----------------+--------+
| Senior Engineer |  10001 |
| Staff           |  10002 |
| Senior Engineer |  10003 |
| Engineer        |  10004 |
| Senior Engineer |  10004 |
| Senior Staff    |  10005 |
| Staff           |  10005 |
| Senior Engineer |  10006 |
| Senior Staff    |  10007 |
| Staff           |  10007 |
+-----------------+--------+
10 rows in set (0.01 sec)
(gcdb@192.168.222.171) 10:53:10 [employees_range]>
3. CASE WHEN / IF limitations
No restriction for global tables. For sharded tables, CASE WHEN/IF cannot be used in DML statements or after GROUP BY; it can be used in the SELECT list and as a filter condition.
(gcdb@192.168.222.171) 11:15:26 [employees_range]> SELECT
    ->         emp_no,
    ->         CASE WHEN gender = 'M' THEN '凸'
    ->              WHEN gender = 'F' THEN '凹'
    ->         ELSE NULL END gender,
    ->         hire_date
    -> FROM    employees limit 10;
+--------+--------+------------+
| emp_no | gender | hire_date  |     -- CASE WHEN can be referenced in the SELECT list
+--------+--------+------------+
|  10001 | 凸     | 1986-06-26 |
|  10002 | 凹     | 1985-11-21 |
|  10003 | 凸     | 1986-08-28 |
|  10004 | 凸     | 1986-12-01 |
|  10005 | 凸     | 1989-09-12 |
|  10006 | 凹     | 1989-06-02 |
|  10007 | 凹     | 1989-02-10 |
|  10008 | 凸     | 1994-09-15 |
|  10009 | 凹     | 1985-02-18 |
|  10010 | 凹     | 1989-08-24 |
+--------+--------+------------+
10 rows in set (0.04 sec)
(gcdb@192.168.222.171) 11:57:11 [employees_range]>  select * from employees_range.departments;  -- employees_range.departments (single-node global table)
+---------+--------------------+
| dept_no | dept_name          |
+---------+--------------------+
| d009    | Customer Service   |
| d005    | Development        |
| d002    | Finance            |
| d003    | Human Resources    |
| d001    | Marketing          |
| d004    | Production         |
| d006    | Quality Management |
| d008    | Research           |
| d007    | Sales              |
+---------+--------------------+
9 rows in set (0.00 sec)
(gcdb@192.168.222.171) 11:57:24 [employees_range]>
(gcdb@192.168.222.171) 11:57:26 [employees_range]> UPDATE employees_range.departments
    -> SET dept_no =
    -> CASE WHEN dept_no = 'd001'  THEN 'b'
    ->      WHEN dept_no = 'd002'  THEN 'a'
    ->      ELSE dept_no END
    -> WHERE dept_name IN ('Marketing', 'Finance');
Query OK, 2 rows affected (0.06 sec)
Rows matched: 2  Changed: 2  Warnings: 0
(gcdb@192.168.222.171) 11:57:30 [employees_range]> select * from employees_range.departments;
+---------+--------------------+
| dept_no | dept_name          |
+---------+--------------------+
| d009    | Customer Service   |
| d005    | Development        |
| a       | Finance            |
| d003    | Human Resources    |
| b       | Marketing          |
| d004    | Production         |
| d006    | Quality Management |
| d008    | Research           |
| d007    | Sales              |
+---------+--------------------+
9 rows in set (0.01 sec)
(gcdb@192.168.222.171) 10:12:41 [(none)]> UPDATE employees_range.employees
    -> SET gender =
    ->         CASE WHEN gender = 'M' THEN 'men'
    ->              WHEN gender = 'F' THEN 'wen'
    ->         ELSE gender END
    -> WHERE emp_no < 10003;
Query OK, 2 rows affected, 2 warnings (0.03 sec)
Rows matched: 2  Changed: 2  Warnings: 2
(gcdb@192.168.222.171) 10:13:24 [(none)]>
(gcdb@192.168.222.171) 10:13:31 [(none)]> select * from employees_range.employees limit 10;
+--------+------------+------------+-----------+--------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date  |
+--------+------------+------------+-----------+--------+------------+
|  10001 | 1953-09-02 | Georgi     | Facello   |        | 1986-06-26 |  -- the UPDATE with CASE WHEN did not take effect as expected (gender became empty)
|  10002 | 1964-06-02 | Bezalel    | Simmel    |        | 1985-11-21 |
|  10003 | 1959-12-03 | Parto      | Bamford   | M      | 1986-08-28 |
|  10004 | 1954-05-01 | Chirstian  | Koblick   | M      | 1986-12-01 |
|  10005 | 1955-01-21 | Kyoichi    | Maliniak  | M      | 1989-09-12 |
|  10006 | 1953-04-20 | Anneke     | Preusig   | F      | 1989-06-02 |
|  10007 | 1957-05-23 | Tzvetan    | Zielinski | F      | 1989-02-10 |
|  10008 | 1958-02-19 | Saniya     | Kalloufi  | M      | 1994-09-15 |
|  10009 | 1952-04-19 | Sumant     | Peac      | F      | 1985-02-18 |
|  10010 | 1963-06-01 | Duangkaew  | Piveteau  | F      | 1989-08-24 |
+--------+------------+------------+-----------+--------+------------+
10 rows in set (0.00 sec)
(gcdb@192.168.222.171) 10:13:35 [(none)]> 
4. Limitations on Chinese column names
The use of Chinese column names or aliases is restricted: a Chinese column name or alias must be quoted (e.g. with backquotes `` or quotes).
(gcdb@192.168.222.171) 11:43:11 [employees_hash]> select emp_no,birth_date  from employees order by emp_no limit 10;
+--------+------------+
| emp_no | birth_date |
+--------+------------+
|  10001 | 1953-09-02 |
|  10002 | 1964-06-02 |
|  10003 | 1959-12-03 |
|  10004 | 1954-05-01 |
|  10005 | 1955-01-21 |
|  10008 | 1958-02-19 |
|  10006 | 1953-04-20 |
|  10007 | 1957-05-23 |
|  10009 | 1952-04-19 |
|  10010 | 1963-06-01 |
+--------+------------+
10 rows in set (0.00 sec)
(gcdb@192.168.222.171) 11:43:37 [employees_hash]> select emp_no,birth_date as 生日  from employees order by emp_no limit 10;
ERROR 1149 (42000): near "as 生日": syntax error
(gcdb@192.168.222.171) 11:43:49 [employees_hash]> select emp_no,birth_date as '生日' from employees order by emp_no limit 10;
+--------+------------+
| emp_no | 生日       |
+--------+------------+
|  10001 | 1953-09-02 |
|  10002 | 1964-06-02 |
|  10003 | 1959-12-03 |
|  10004 | 1954-05-01 |
|  10005 | 1955-01-21 |
|  10008 | 1958-02-19 |
|  10006 | 1953-04-20 |
|  10007 | 1957-05-23 |
|  10009 | 1952-04-19 |
|  10010 | 1963-06-01 |
+--------+------------+
10 rows in set (0.00 sec)
(gcdb@192.168.222.171) 11:44:01 [employees_hash]>
5. Query limitations
When writing SQL queries, note the following constraints: joins are only supported within the same VDB.
For sharded tables, add a filter on the sharding key in the query condition whenever possible; also, avoid wrapping the sharding key in functions, arithmetic expressions or other transformations, as this severely hurts efficiency.
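To illustrate the sharding-key advice (my own example, not from the original test run): an equality filter on the raw key can be routed to a single group, while wrapping the key in an expression prevents routing and the query has to be sent to every group.
```bash
MYSQL="mysql -ugcdb -piforgot -h192.168.222.171 -P8888 employees_range"
# Raw sharding-key filter: routable to a single backend group.
$MYSQL -e "select * from employees where emp_no = 10001;"
# Expression on the sharding key: not routable, executed on all groups (slower).
$MYSQL -e "select * from employees where emp_no + 0 = 10001;"
```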
-- inner join: titles and employees (same database/VDB)
(gcdb@192.168.222.171) 10:31:39 [employees_range]>  select * from employees e,titles t where e.emp_no = t.emp_no order by e.emp_no  limit 5;
+--------+------------+------------+-----------+--------+------------+--------+-----------------+------------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date  | emp_no | title           | from_date  | to_date    |
+--------+------------+------------+-----------+--------+------------+--------+-----------------+------------+------------+
|  10001 | 1953-09-02 | Georgi     | Facello   | M      | 1986-06-26 |  10001 | Senior Engineer | 1986-06-26 | 9999-01-01 |
|  10002 | 1964-06-02 | Bezalel    | Simmel    | F      | 1985-11-21 |  10002 | Staff           | 1996-08-03 | 9999-01-01 |
|  10003 | 1959-12-03 | Parto      | Bamford   | M      | 1986-08-28 |  10003 | Senior Engineer | 1995-12-03 | 9999-01-01 |
|  10004 | 1954-05-01 | Chirstian  | Koblick   | M      | 1986-12-01 |  10004 | Engineer        | 1986-12-01 | 1995-12-01 |
|  10004 | 1954-05-01 | Chirstian  | Koblick   | M      | 1986-12-01 |  10004 | Senior Engineer | 1995-12-01 | 9999-01-01 |
+--------+------------+------------+-----------+--------+------------+--------+-----------------+------------+------------+
5 rows in set (0.00 sec)
-- inner join: titles and employees_hash.employees (cross-database JOIN)
(gcdb@192.168.222.171) 10:32:00 [employees_range]>  select * from employees_hash.employees e,titles t where e.emp_no = t.emp_no order by e.emp_no limit 5;
+--------+------------+------------+-----------+--------+------------+--------+------------------+------------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date  | emp_no | title            | from_date  | to_date    |
+--------+------------+------------+-----------+--------+------------+--------+------------------+------------+------------+
|  10001 | 1953-09-02 | Georgi     | Facello   | M      | 1986-06-26 |  10001 | Senior Engineer  | 1986-06-26 | 9999-01-01 |
|  10010 | 1963-06-01 | Duangkaew  | Piveteau  | F      | 1989-08-24 |  10010 | Engineer         | 1996-11-24 | 9999-01-01 |
|  10011 | 1953-11-07 | Mary       | Sluis     | F      | 1990-01-22 |  10011 | Staff            | 1990-01-22 | 1996-11-09 |
|  10020 | 1952-12-24 | Mayuko     | Warwick   | M      | 1991-01-26 |  10020 | Engineer         | 1997-12-30 | 9999-01-01 |
|  10021 | 1960-02-20 | Ramzi      | Erde      | M      | 1988-02-10 |  10021 | Technique Leader | 1988-02-10 | 2002-07-15 |
+--------+------------+------------+-----------+--------+------------+--------+------------------+------------+------------+
5 rows in set (0.12 sec)  -- the result is wrong
(gcdb@192.168.222.171) 10:32:20 [employees_range]> select count(*) from employees_hash.employees;
+----------+
| count(*) |
+----------+
|   300024 |
+----------+
1 row in set (0.08 sec)
(gcdb@192.168.222.171) 10:32:50 [employees_range]> select count(*) from employees_range.employees;
+----------+
| count(*) |
+----------+
|   300024 |
+----------+
1 row in set (0.23 sec)
(gcdb@192.168.222.171) 10:32:55 [employees_range]> explain select * from employees_hash.employees e,titles t where e.emp_no = t.emp_no order by e.emp_no limit 5;
+----+-------------+-------+------------+-------+----------------+---------+---------+-------------------------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys  | key     | key_len | ref                     | rows | filtered | Extra |
+----+-------------+-------+------------+-------+----------------+---------+---------+-------------------------+------+----------+-------+
|  1 | SIMPLE      | e     | NULL       | index | PRIMARY        | PRIMARY | 4       | NULL                    |    3 |   100.00 | NULL  |
|  1 | SIMPLE      | t     | NULL       | ref   | PRIMARY,emp_no | PRIMARY | 4       | employees_hash.e.emp_no |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+----------------+---------+---------+-------------------------+------+----------+-------+
2 rows in set, 1 warning (0.02 sec)
(gcdb@192.168.222.171) 10:33:43 [employees_range]> explain select * from employees e,titles t where e.emp_no = t.emp_no order by e.emp_no  limit 5;
+----+-------------+-------+------------+-------+----------------+---------+---------+--------------------------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys  | key     | key_len | ref                      | rows | filtered | Extra |
+----+-------------+-------+------------+-------+----------------+---------+---------+--------------------------+------+----------+-------+
|  1 | SIMPLE      | e     | NULL       | index | PRIMARY        | PRIMARY | 4       | NULL                     |    3 |   100.00 | NULL  |
|  1 | SIMPLE      | t     | NULL       | ref   | PRIMARY,emp_no | PRIMARY | 4       | employees_range.e.emp_no |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+----------------+---------+---------+--------------------------+------+----------+-------+
2 rows in set, 1 warning (0.00 sec)
