OceanBase Three-Node Deployment && Scale-Out

Environment information · Building the three-node (1-1-1) cluster · Creating resource pools and tenants · Viewing data distribution

Environment information

IP              OB directory     Ports
192.168.43.89   /data/observer   [2881,2882]
192.168.43.233  /data/observer   [2881,2882]
192.168.43.223  /data/observer   [2881,2882]

IP              OBProxy directory   Ports
192.168.43.89   /data/obproxy       [2883,2884]

Building the three-node (1-1-1) cluster

This deployment uses virtual machines on a laptop, configured as follows:



Note: at least 8 GB of memory is required.

The installation uses the offline method:

Note: this article focuses on the offline installation; the online installation method is described in the note in section 3.4.

1. Preparation [as the root user]

1.1 Disable SELinux

# Disable temporarily
setenforce 0
getenforce
# Disable SELinux at boot; this needs a reboot to take effect. Since it is already disabled temporarily, no reboot is required this time.
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
# Verify that the configuration change took effect
cat /etc/selinux/config

1.2 Disable firewalld

# Stop the firewall
systemctl stop firewalld
# Keep the firewall from starting at boot
systemctl disable firewalld

1.3 Change the hostnames

# Change the hostname; it takes effect immediately, and the new name is shown after logging out and back in
hostnamectl set-hostname oceanbase1   # on 192.168.43.89
hostnamectl set-hostname oceanbase2   # on 192.168.43.233
hostnamectl set-hostname oceanbase3   # on 192.168.43.223

1.4 Configure /etc/hosts

# Add the host entries for name resolution
cat >> /etc/hosts << EOF
192.168.43.89 oceanbase1
192.168.43.233 oceanbase2
192.168.43.223 oceanbase3
EOF
# Check the host entries
cat /etc/hosts

1.5 Set up SSH trust between the nodes [optional]

# Configure passwordless SSH between the nodes; it makes the rest of the process much more convenient
# On oceanbase1:
rm -rf /root/.ssh
ssh-keygen
cd /root/.ssh
mv id_rsa.pub authorized_keys
scp -r /root/.ssh oceanbase2:/root
scp -r /root/.ssh oceanbase3:/root
# For convenience, SSH trust was configured here

1.6 Prepare the installation directory

When installing as the root user, the default installation directory is /root/observer. In this article the installation is done as root into /data/observer. The creation steps are as follows.

# Check the size of the installation directory
[root@oceanbase1 ~]# df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sdb     107G  22G   80G    21%   /data
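
The directory creation commands themselves are not shown above; a minimal sketch, assuming /dev/sdb already carries a filesystem and is mounted on /data, would be:

# Assumption: /dev/sdb is already formatted and mounted on /data
mkdir -p /data/observer    # observer home directory, on all three nodes
mkdir -p /data/obproxy     # obproxy home directory, only needed on 192.168.43.89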

1.7 Configure sysctl.conf

# Append the setting
echo "fs.aio-max-nr=1048576" >> /etc/sysctl.conf
# Apply the configuration
sysctl -p

1.8 Raise the open-file limit

# Append the limits
cat >> /etc/security/limits.conf << EOF
* soft nofile 655350
* hard nofile 655350
EOF
# Log out of the current session and log back in for the new limits to take effect.
# Check the current open-files value; it should be 655350, otherwise starting the cluster later will fail
ulimit -n

1.9 Time synchronization service [optional]

# Use oceanbase1 as the primary clock source
[root@oceanbase1 ~]# yum install -y ntp
[root@oceanbase1 ~]# vi /etc/ntp.conf
server 127.127.1.0 iburst
[root@oceanbase1 ~]# systemctl restart ntpd.service

[root@oceanbase2 ~]# vi /etc/ntp.conf
server 192.168.43.89
restrict 192.168.43.89 mask 255.255.240.0 nomodify notrap
[root@oceanbase2 ~]# systemctl restart ntpd.service
[root@oceanbase2 ~]# ntpdate -u 192.168.43.89

[root@oceanbase3 ~]# vi /etc/ntp.conf
server 192.168.43.89
restrict 192.168.43.89 mask 255.255.240.0 nomodify notrap
[root@oceanbase3 ~]# systemctl restart ntpd.service
[root@oceanbase3 ~]# ntpdate -u 192.168.43.89

# Run the following commands to verify the configuration (the official documentation also covers clock-source setup):
[root@oceanbase2 ~]# ntpdate -u 192.168.43.89
6 Aug 20:40:00 ntpdate[5211]: adjust time server 192.168.43.89 offset -0.003421 sec
[root@oceanbase2 ~]# ntpstat
unsynchronised
polling server every 8 s
[root@oceanbase2 ~]# timedatectl
Local time: Fri 2021-08-06 20:40:27 CST
Universal time: Fri 2021-08-06 12:40:27 UTC
RTC time: Fri 2021-08-06 12:40:24
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
# This shows that the NTP service is in effect.

OBD is again used for a quick deployment, with the configuration file mini-distributed-with-obproxy-example.yaml:


Download it from the official repository according to your situation:


https://gitee.com/oceanbase/obdeploy/blob/master/example/mini-distributed-with-obproxy-example.yaml#

## Only need to configure when remote login is required
# user:
#   username: root
#   password: 111111
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: z1
      # Please don't use hostname, only IP can be supported
      ip: 192.168.43.89
    - name: z2
      ip: 192.168.43.233
    - name: z3
      ip: 192.168.43.223
  global:
    # Please set devname as the network adaptor's name whose ip is in the setting of severs.
    # if set severs as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: ens33
    # When building the distributed cluster, change this to your own NIC name
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 8G
    # Memory setting; the minimum is 8G, no change needed here
    system_memory: 4G
    stack_size: 512K
    cpu_count: 10
    # Total number of CPU threads; if unsure, check with lscpu
    cache_wash_threshold: 1G
    __min_full_resource_pool_memory: 268435456
    workers_per_cpu_quota: 8
    # Worker threads per CPU; set it according to your situation, i.e. how much of cpu_count you want to hand over for work
    schema_history_expire_time: 1d
    # The value of net_thread_count had better be same as cpu's core number.
    net_thread_count: 4
    major_freeze_duty_time: Disable
    minor_freeze_times: 10
    enable_separate_sys_clog: 0
    enable_merge_by_turn: FALSE
    datafile_disk_percentage: 20
    syslog_level: INFO
    enable_syslog_recycle: true
    max_syslog_file_count: 4
    # observer cluster name, consistent with obproxy's cluster_name
    appname: ob_cluster
    root_password:
    proxyro_password:
  z1:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone1
  z2:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone2
  z3:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone3
obproxy:
  servers:
    - 192.168.43.89
  global:
    listen_port: 2883
    prometheus_listen_port: 2884
    home_path: /data/obproxy
    # oceanbase root server list
    # format: ip:mysql_port,ip:mysql_port
    rs_list: 192.168.43.89:2881;192.168.43.233:2881;192.168.43.223:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname
    cluster_name: ob_cluster
    obproxy_sys_password:
    observer_sys_password:
# The parameters above are all documented on the official site; only two of the more important ones are annotated here. If any explanation is inaccurate, rely on the official documentation.

2. Install OBD offline

2.1 Check the OBD offline RPM package

Download the offline package from the official site and upload it to your own server.

[root@oceanbase1 ~]# cd /opt
[root@oceanbase1 opt]# ll
ob-deploy-1.0.2-2.el7.x86_64.rpm

2.2 Install OBD on the local machine

[root@oceanbase1 opt]# rpm -ivh ob-deploy-1.0.2-2.el7.x86_64.rpm
warning: ob-deploy-1.0.2-2.el7.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e9b4a7aa: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:ob-deploy-1.0.2-2.el7            ################################# [100%]
Installation of obd finished successfully
Please source /etc/profile.d/obd.sh to enable it
# If dependencies are missing during installation, install them as needed, either from a local yum repository or by downloading the dependency RPMs, for example from the Aliyun mirror matching your OS version:
https://mirrors.aliyun.com/centos/7.9.2009/os/x86_64/Packages/

3. Local OceanBase mirror

The hosts cannot reach the public network, so the OceanBase database offline RPM packages downloaded earlier are added to a local mirror.

3.1 Check the offline packages

[root@oceanbase1 opt]# ll
-rw-r--r--. 1 root root  8074612 May 30 21:59 obproxy-3.1.0-1.el7.x86_64.rpm
-rw-r--r--  1 root root 42035940 Aug  3 15:29 oceanbase-ce-3.1.0-2.el7.x86_64.rpm
-rw-r--r--  1 root root   158972 Aug  3 15:34 oceanbase-ce-libs-3.1.0-2.el7.x86_64.rpm
[root@oceanbase1 opt2]# ll
-rw-r--r--  1 root root   658620 Aug  6 20:50 libobclient-2.0.0-2.el7.x86_64.rpm
-rw-r--r--  1 root root 41916564 Aug  6 20:50 obclient-2.0.0-2.el7.x86_64.rpm
# Download and install the obclient client as well, so it can be used later

3.2 Add the OceanBase offline packages to the local mirror

Run the following command on the control machine (192.168.43.89) to add the OceanBase database offline packages to the local mirror:

[root@oceanbase1 opt]# pwd
/opt
[root@oceanbase1 opt]# obd mirror clone *.rpm
name: obproxy
version: 3.1.0
release:1.el7
arch: x86_64
md5: 0b17cf0459a3b53c5a2febb6572894d183154c64
add obproxy-3.1.0-1.el7.x86_64.rpm to local mirror
name: oceanbase-ce
version: 3.1.0
release:2.el7
arch: x86_64
md5: afd11d52f83eef4b456d77969fde620c4bfba85e
add oceanbase-ce-3.1.0-2.el7.x86_64.rpm to local mirror
name: oceanbase-ce-libs
version: 3.1.0
release:2.el7
arch: x86_64
md5: 47300ca1ac4c62493caf3e9235b105e242e533b5
add oceanbase-ce-libs-3.1.0-2.el7.x86_64.rpm to local mirror
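
Optionally, you can confirm the packages are now in the local mirror with OBD's mirror subcommand (the second form assumes your OBD version accepts a mirror-section name such as local):

[root@oceanbase1 opt]# obd mirror list
[root@oceanbase1 opt]# obd mirror list local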

3.3 Rename OBD's remote mirror directory so that OBD uses the local mirror instead of the public one when it runs

mv /root/.obd/mirror/remote  /root/.obd/mirror/remotebak

3.4 Note: if your machines can access the public network and you are able to add third-party yum repositories, you can install OBD from the official OceanBase repository and skip steps 2.2-3.3:

# Online installation method
[root@oceanbase1 ~]# yum install -y yum-utils
[root@oceanbase1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
[root@oceanbase1 ~]# yum install -y ob-deploy

4. Deploy with OBD:

[root@oceanbase1 data]# obd  cluster deploy ob_cluster -c mini-distributed-with-obproxy-example.yaml
Found a higher version
name: oceanbase-ce
version: 3.1.0
release:2.el7
arch: x86_64
md5: afd11d52f83eef4b456d77969fde620c4bfba85e
Do you want to use it? [y/n]: y
Package oceanbase-ce-3.1.0 is available.
obproxy-3.1.0 already installed.
install oceanbase-ce-3.1.0 for local ok
+-----------------------------------------------------------------------------+
| Packages |
+--------------+---------+---------+------------------------------------------+
| Repository | Version | Release | Md5 |
+--------------+---------+---------+------------------------------------------+
| obproxy | 3.1.0 | 1.el7 | 0b17cf0459a3b53c5a2febb6572894d183154c64 |
| oceanbase-ce | 3.1.0 | 2.el7 | afd11d52f83eef4b456d77969fde620c4bfba85e |
+--------------+---------+---------+------------------------------------------+
Open ssh connection ok
Remote obproxy-3.1.0-0b17cf0459a3b53c5a2febb6572894d183154c64 repository install ok
Remote obproxy-3.1.0-0b17cf0459a3b53c5a2febb6572894d183154c64 repository lib check ok
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository install ok
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository lib check !!
[WARN] z1(192.168.43.89) oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e require: libmariadb.so.3
[WARN] z2(192.168.43.233) oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e require: libmariadb.so.3
[WARN] z3(192.168.43.223) oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e require: libmariadb.so.3
Try to get lib-repository
Found a higher version
name: oceanbase-ce-libs
version: 3.1.0
release:2.el7
arch: x86_64
md5: 47300ca1ac4c62493caf3e9235b105e242e533b5
Do you want to use it? [y/n]: y
Package oceanbase-ce-libs-3.1.0 is available.
install oceanbase-ce-libs-3.1.0 for local ok
Use oceanbase-ce-libs-3.1.0-47300ca1ac4c62493caf3e9235b105e242e533b5 for oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e
Remote oceanbase-ce-libs-3.1.0-47300ca1ac4c62493caf3e9235b105e242e533b5 repository install ok
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository lib check ok
Cluster status check ok
192.168.43.89 init cluster work home
z1(192.168.43.89) initializes cluster work home
z2(192.168.43.233) initializes cluster work home
z3(192.168.43.223) initializes cluster work home
ob_cluster deployed
# Seeing "ob_cluster deployed" shows that the cluster was initialized successfully
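
Before starting, you can optionally check the deployment status with OBD (output omitted here; the exact wording may vary by OBD version):

[root@oceanbase1 data]# obd cluster list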

4.1 Start the cluster:

[root@oceanbase1 data]# obd cluster start ob_cluster
Get local repositories and plugins ok
Open ssh connection ok
Cluster param config check ok
Check before start observer ok
Check before start obproxy ok
Start observer ok
observer program health check ok
Connect to observer ok
Initialize cluster
Cluster bootstrap ok
Wait for observer init ok
+--------------------------------------------------+
| observer |
+----------------+---------+------+-------+--------+
| ip | version | port | zone | status |
+----------------+---------+------+-------+--------+
| 192.168.43.223 | 3.1.0 | 2881 | zone3 | active |
| 192.168.43.233 | 3.1.0 | 2881 | zone2 | active |
| 192.168.43.89 | 3.1.0 | 2881 | zone1 | active |
+----------------+---------+------+-------+--------+
Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize cluster
+-------------------------------------------------+
| obproxy |
+---------------+------+-----------------+--------+
| ip | port | prometheus_port | status |
+---------------+------+-----------------+--------+
| 192.168.43.89 | 2883 | 2884 | active |
+---------------+------+-----------------+--------+
ob_cluster running
# The cluster start may fail at the "Cluster bootstrap" step; after it failed once I simply ran the start again and it succeeded the second time. Troubleshoot using the common causes below; once every status shows "active", the cluster and the proxy have started normally.
Common causes of OceanBase cluster initialization failure (from the official Q&A board):
1. The clock drift between machines is too large; use ntpq, clockdiff, etc. to check the clock offset between machines.
2. Incorrect configuration, for example a wrong zone name, or the NIC name not matching the IP address.
3. Other problems, such as hardware issues.
For the specific cause, check the logs:
observer.log     the observer runtime log
rootserver.log   the rootserver log on the observer
# My guess is that my VMs had a relatively large clock drift due to the network; configuring the NTP service for time synchronization can help.
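
As a concrete example of cause 1, the drift can be checked from oceanbase1 roughly like this (a sketch; clockdiff is provided by the iputils package and ntpq by the ntp package):

ntpq -p                      # show peers and offsets on the local node
clockdiff 192.168.43.233     # measure the clock difference to oceanbase2
clockdiff 192.168.43.223     # measure the clock difference to oceanbase3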

Side note 1:

OceanBase and OBProxy were installed with OBD; afterwards, connecting through OBProxy on port 2883 returned an error, while connecting directly on port 2881 worked fine:



Troubleshooting:


Looking at the obproxy log showed that the proxyro user's password was being rejected:

[2021-08-03 15:55:22.075669] WARN  [PROXY.SM] print_mysql_complete_log (ob_mysql_sm.cpp:3899) [4117][Y0-7FF1ABC7B3A0] [lt=39] [dc=0] finishing mysql tunnel((sm_id=7, cs_id=2147483655, proxy_sessid=3, ss_id=3, server_sessid=3221497608, client_ip={*Not IP address [0]*:0}, server_ip={192.168.43.89:2881}, server_trace_id=, proxy_user_name=proxyro@sys#ob_cluster, database_name=, is_flow_controlled=false, cpu_flow_control_count=0, memory_flow_control_count=0, sql=, sql_cmd=NULL, result={is_trans_completed:true, is_resp_completed:true, ending_type:2, is_partition_hit:true, has_new_sys_var:false, has_proxy_idc_name_user_var:false, is_server_db_reset:false, reserved_len:0, connection_id:0, scramble_buf:"", is_resultset_resp:false, server_capabilities_lower_.capability:0, ok_packet_action_type:2, last_ok_pkt_len:12, rewritten_last_ok_pkt_len:0, error_pkt:field_count:255, errcode:1045, sqlstate:"42000", message:
"Access denied for user 'proxyro'@'xxx.xxx.xxx.xxx' (using password: NO)"})

Solution: log in to the observer through port 2881 and set the proxyro password to empty. The yaml file did not set a proxyro password during installation, so it is unclear why this happened; moving on.
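
The exact statement is not shown in the original; what was presumably executed in the sys tenant over port 2881 is something along these lines (an assumption, not taken from the article):

[root@oceanbase1 ~]# obclient -u root@sys -h 192.168.43.89 -P 2881 -p -A
MySQL [(none)]> alter user proxyro identified by '';   -- assumption: reset the proxyro password to empty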


After setting it to empty, logging in through port 2883 works normally.

At this point the root password is empty for logins on both ports 2881 and 2883; no password was set, for convenience. You can set one with ALTER if needed.
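
For example, to set a root password afterwards (a sketch; 'YourPassword' is a placeholder, and if you change it, keep the password settings in the yaml/obproxy configuration consistent):

MySQL [(none)]> alter user root identified by 'YourPassword';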

Side note 2:




Explanation: when any status shows inactive, check whether the process started properly and whether the network is reachable; in my case the process had not started.

5. Create resource pools and a tenant

5.1 Connect to the observer through OBProxy and create the resources and the tenant

[root@oceanbase1 data]# obclient -u root -p -h 127.0.0.1 -P 2883 -A
Enter password:
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.25 OceanBase 3.1.0 (r-00672c3c730c3df6eef3b359eae548d8c2db5ea2) (Built Jun 22 2021 12:46:28)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# Use the -A option
MySQL [(none)]> use oceanbase;
Database changed
# Create the resource unit; the unit count will be 1. Size the resources according to your own server; I used the minimum.
MySQL [oceanbase]> create resource unit testunit1 max_cpu=1, min_cpu=1, max_memory='1G', min_memory='1G', max_iops=10000, min_iops=1000, max_session_num=1000000, max_disk_size='10G';
Query OK, 0 rows affected (0.048 sec)
MySQL [oceanbase]> create resource pool testpool_1 unit='testunit1', unit_num=1;
Query OK, 0 rows affected (0.075 sec)
MySQL [oceanbase]> CREATE TENANT IF NOT EXISTS testtenant_1
    -> charset='utf8mb4',
    -> replica_num=3,
    -> zone_list=('zone1','zone2','zone3'),
    -> primary_zone='RANDOM',
    -> resource_pool_list=('testpool_1');
Query OK, 0 rows affected (11.333 sec)
# How long this takes depends on your hardware; just wait...

5.2 Log in as the newly created tenant testtenant_1 and create tables:

[root@oceanbase1 data]# obclient -u root@testtenant_1 -p -h 127.0.0.1 -P 2881 -A
Enter password:
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 3221501050
Server version: 5.7.25 OceanBase 3.1.0 (r-00672c3c730c3df6eef3b359eae548d8c2db5ea2) (Built Jun 22 2021 12:46:28)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| oceanbase          |
| information_schema |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.014 sec)
MySQL [(none)]> create database t1;
Query OK, 1 row affected (0.260 sec)
MySQL [(none)]> create tablegroup t1 partition by hash partitions 3;
Query OK, 0 rows affected (0.319 sec)
MySQL [(none)]> use t1;
Database changed
MySQL [t1]> create table tt1(id int, name char(20));
Query OK, 0 rows affected (0.779 sec)
MySQL [t1]> create table tt2(id int, name char(20)) tablegroup=t1 partition by hash (id) partitions 3;
Query OK, 0 rows affected (0.639 sec)
MySQL [t1]> create table tt3(id int, name char(20)) tablegroup=t1 partition by hash(id) partitions 3;
Query OK, 0 rows affected (0.697 sec)
# A simple demonstration of creating a database and tables
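
Optionally, insert a few rows so that the partitioned tables are not empty before looking at the data distribution (not part of the original walkthrough):

MySQL [t1]> insert into tt2 values (1,'a'),(2,'b'),(3,'c');
MySQL [t1]> select * from tt2;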

5.3 View the data distribution, using the sys tenant:

[root@oceanbase1 data]# obclient -u root -p -h 127.0.0.1 -P 2883 -A
MySQL [(none)]> SELECT t.tenant_id, a.tenant_name, t.table_name, d.database_name, t1.tablegroup_name , t.part_num , t2.partition_id, t2.role , t2.ZONE, t2.svr_ip, concat(t2.svr_ip, ':', t2.svr_port) observer , t2.data_size , a.primary_zone , IF(t.locality = '' OR t.locality IS NULL, a.locality, t.locality) AS locality FROM oceanbase.__all_tenant AS a JOIN oceanbase.__all_virtual_database AS d ON ( a.tenant_id = d.tenant_id ) JOIN oceanbase.__all_virtual_table AS t ON (t.tenant_id = d.tenant_id AND t.database_id = d.database_id) JOIN oceanbase.__all_virtual_meta_table t2 ON (t.tenant_id = t2.tenant_id AND (t.table_id=t2.table_id OR t.tablegroup_id=t2.table_id) AND t2.ROLE IN (1) ) LEFT JOIN oceanbase.__all_virtual_tablegroup AS t1 ON (t.tenant_id = t1.tenant_id and t.tablegroup_id = t1.tablegroup_id) WHERE a.tenant_id IN (1001,1002 ) AND t.table_type IN (3) and table_name in ('tt1','tt2','tt3') ORDER BY t.tenant_id, t1.tablegroup_name, d.database_name, t.table_name, t2.partition_id;
+-----------+--------------+------------+---------------+-----------------+----------+--------------+------+-------+----------------+---------------------+-----------+--------------+---------------------------------------------+
| tenant_id | tenant_name | table_name | database_name | tablegroup_name | part_num | partition_id | role | ZONE | svr_ip | observer | data_size | primary_zone | locality |
+-----------+--------------+------------+---------------+-----------------+----------+--------------+------+-------+----------------+---------------------+-----------+--------------+---------------------------------------------+
| 1001 | testtenant_1 | tt1 | t1 | NULL | 1 | 0 | 1 | zone3 | 192.168.43.223 | 192.168.43.223:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt2 | t1 | t1 | 3 | 0 | 1 | zone2 | 192.168.43.233 | 192.168.43.233:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt2 | t1 | t1 | 3 | 1 | 1 | zone3 | 192.168.43.223 | 192.168.43.223:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt2 | t1 | t1 | 3 | 2 | 1 | zone1 | 192.168.43.89 | 192.168.43.89:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt3 | t1 | t1 | 3 | 0 | 1 | zone2 | 192.168.43.233 | 192.168.43.233:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt3 | t1 | t1 | 3 | 1 | 1 | zone3 | 192.168.43.223 | 192.168.43.223:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
| 1001 | testtenant_1 | tt3 | t1 | t1 | 3 | 2 | 1 | zone1 | 192.168.43.89 | 192.168.43.89:2882 | 0 | RANDOM | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
+-----------+--------------+------------+---------------+-----------------+----------+--------------+------+-------+----------------+---------------------+-----------+--------------+---------------------------------------------+
7 rows in set (0.077 sec)
# t1 is the tablegroup created above; tt1, tt2 and tt3 are the tables created above

Figure:


5.4 View the current resource distribution as the sys tenant:

[root@oceanbase1 data]# obclient -u root -p -h 127.0.0.1 -P 2883 -A
Enter password:
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.6.25 OceanBase 3.1.0 (r-00672c3c730c3df6eef3b359eae548d8c2db5ea2) (Built Jun 22 2021 12:46:28)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use oceanbase;
Database changed
MySQL [oceanbase]> select t1.name resource_pool_name, t2.`name` unit_config_name, t2.max_cpu, t2.min_cpu, round(t2.max_memory / 1024 / 1024 / 1024) max_mem_gb, round(t2.min_memory / 1024 / 1024 / 1024) min_mem_gb, t3.unit_id, t3.zone, concat(t3.svr_ip, ':', t3.`svr_port`) observer, t4.tenant_id, t4.tenant_name from __all_resource_pool t1 join __all_unit_config t2 on (t1.unit_config_id = t2.unit_config_id) join __all_unit t3 on (t1.`resource_pool_id` = t3.`resource_pool_id`) left join __all_tenant t4 on (t1.tenant_id = t4.tenant_id) order by t1.`resource_pool_id`, t2.`unit_config_id`, t3.unit_id;
+--------------------+------------------+---------+---------+------------+------------+---------+-------+---------------------+-----------+--------------+
| resource_pool_name | unit_config_name | max_cpu | min_cpu | max_mem_gb | min_mem_gb | unit_id | zone | observer | tenant_id | tenant_name |
+--------------------+------------------+---------+---------+------------+------------+---------+-------+---------------------+-----------+--------------+
| sys_pool | sys_unit_config | 5 | 2.5 | 1 | 1 | 1 | zone1 | 192.168.43.89:2882 | 1 | sys |
| sys_pool | sys_unit_config | 5 | 2.5 | 1 | 1 | 2 | zone2 | 192.168.43.233:2882 | 1 | sys |
| sys_pool | sys_unit_config | 5 | 2.5 | 1 | 1 | 3 | zone3 | 192.168.43.223:2882 | 1 | sys |
| testpool_1 | testunit1 | 1 | 1 | 1 | 1 | 1001 | zone1 | 192.168.43.89:2882 | 1001 | testtenant_1 |
| testpool_1 | testunit1 | 1 | 1 | 1 | 1 | 1002 | zone2 | 192.168.43.233:2882 | 1001 | testtenant_1 |
| testpool_1 | testunit1 | 1 | 1 | 1 | 1 | 1003 | zone3 | 192.168.43.223:2882 | 1001 | testtenant_1 |
+--------------------+------------------+---------+---------+------------+------------+---------+-------+---------------------+-----------+--------------+
6 rows in set (0.011 sec)

Figure:

6. Server scale-out: [ 1-1-1 ---> 2-2-2 ]

IP              OB directory      Ports
192.168.43.89   /data/observer2   [3881,3882]
192.168.43.233  /data/observer2   [3881,3882]
192.168.43.223  /data/observer2   [3881,3882]

Port configuration for the scale-out

6.1 Set the environment variable and create the observer directories (run on all three machines)

[root@oceanbase1 47300ca1ac4c62493caf3e9235b105e242e533b5]# pwd
/root/.obd/repository/oceanbase-ce-libs/3.1.0/47300ca1ac4c62493caf3e9235b105e242e533b5
[root@oceanbase1 47300ca1ac4c62493caf3e9235b105e242e533b5]# ll
total 460
lrwxrwxrwx 1 root root     15 Aug  3 15:46 libaio.so -> libaio.so.1.0.1
lrwxrwxrwx 1 root root     15 Aug  3 15:46 libaio.so.1 -> libaio.so.1.0.1
-rwxr-xr-x 1 root root   7848 Aug  3 15:46 libaio.so.1.0.1
lrwxrwxrwx 1 root root     15 Aug  3 15:46 libmariadb.so -> libmariadb.so.3
-rwxr-xr-x 1 root root 460648 Aug  3 15:46 libmariadb.so.3

export LD_LIBRARY_PATH=/root/.obd/repository/oceanbase-ce-libs/3.1.0/47300ca1ac4c62493caf3e9235b105e242e533b5/
mkdir -p /data/observer2/store/{sort_dir,sstable,clog,ilog,slog}

6.2 Start the observer processes manually

# Official documentation: https://www.oceanbase.com/docs/knowledge-base/knowledge-base/V1.0.0/how-to-scale-out-the-oceanbase-cluster

192.168.43.89:
cd /data/observer2 && /root/.obd/repository/oceanbase-ce/3.1.0/56f57e9843e719d830ec03c206d914f4b3adc82b/bin/observer -r "192.168.43.89:2882:2881;192.168.43.233:2882:2881;192.168.43.223:2882:2881" -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=10,cache_wash_threshold=1G,workers_per_cpu_quota=8,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_disk_percentage=20,enable_syslog_recycle=True,max_syslog_file_count=4 -z zone1 -p 3881 -P 3882 -n ob_cluster -c 1 -d /data/observer2/store -i ens33 -l INFO

192.168.43.233:
cd /data/observer2 && /root/.obd/repository/oceanbase-ce/3.1.0/56f57e9843e719d830ec03c206d914f4b3adc82b/bin/observer -r "192.168.43.89:2882:2881;192.168.43.233:2882:2881;192.168.43.223:2882:2881" -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=10,cache_wash_threshold=1G,workers_per_cpu_quota=8,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_disk_percentage=20,enable_syslog_recycle=True,max_syslog_file_count=4 -z zone2 -p 3881 -P 3882 -n ob_cluster -c 1 -d /data/observer2/store -i ens33 -l INFO

192.168.43.223:
cd /data/observer2 && /root/.obd/repository/oceanbase-ce/3.1.0/56f57e9843e719d830ec03c206d914f4b3adc82b/bin/observer -r "192.168.43.89:2882:2881;192.168.43.233:2882:2881;192.168.43.223:2882:2881" -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=10,cache_wash_threshold=1G,workers_per_cpu_quota=8,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_disk_percentage=20,enable_syslog_recycle=True,max_syslog_file_count=4 -z zone3 -p 3881 -P 3882 -n ob_cluster -c 1 -d /data/observer2/store -i ens33 -l INFO

6.3 Each node now has two observer processes, which shows the manual start succeeded
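
The screenshot is not reproduced here; the check amounts to something like this on each node (illustrative):

ps -ef | grep observer | grep -v grep   # expect two observer processes, one under /data/observer and one under /data/observer2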




Annotation, as follows:

/root/.obd/repository/oceanbase-ce/3.1.0/56f57e9843e719d830ec03c206d914f4b3adc82b/bin/observer -r "192.168.43.89:2882:2881;192.168.43.233:2882:2881;192.168.43.223:2882:2881" -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=10,cache_wash_threshold=1G,workers_per_cpu_quota=8,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_disk_percentage=20,enable_syslog_recycle=True,max_syslog_file_count=4 -z zone3 -p 3881 -P 3882 -n ob_cluster -c 1 -d /data/observer2/store -i ens33 -l INFO
Where:
3881: the MySQL access port.
3882: the RPC (remote access) port.
zone3: the zone name (zone1/zone2/zone3 on the respective nodes).
/data/observer2/store: the data disk directory.
datafile_disk_percentage=20: the data disk usage ratio is 20%.
192.168.43.89, 192.168.43.233 and 192.168.43.223: the rs_list.
memory_limit=8G: the memory limit of the process is 8 GB.
-l INFO: the log_level (the official example uses WARN, i.e. WARNING level).
When using the -o option, the following conditions must be met:
Parameter names are case-insensitive, but it is recommended to write them as they appear in observer.config.bin.
Parameters must not contain the following special characters: spaces, \r, \n, \t.
There must be an equals sign (=) between a parameter name and its value.
Parameters are separated by commas (,).
# These are the same settings as in the yaml file above; the only difference is that here the observer processes are started manually.

Add the servers

MySQL [oceanbase]> alter system add  server '192.168.43.89:3882' zone 'zone1';
MySQL [oceanbase]> alter system add server '192.168.43.233:3882' zone 'zone2';
MySQL [oceanbase]> alter system add server '192.168.43.223:3882' zone 'zone3';
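
A quick way to confirm the new servers were registered is to query the __all_server internal table from the sys tenant (a sketch; column set per OceanBase 3.x system tables):

MySQL [oceanbase]> select svr_ip, svr_port, zone, status from __all_server order by zone, svr_ip;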




6.4 Check the resource situation and verify that each zone now has two observers

select zone,concat(svr_ip,':',svr_port) observer, cpu_capacity,cpu_total,cpu_assigned,cpu_assigned_percent, mem_capacity,mem_total,mem_assigned,mem_assigned_percent, unit_Num,round(`load`,2) `load`, round(cpu_weight,2) cpu_weight, round(memory_weight,2) mem_weight, leader_count from __all_virtual_server_stat order by zone,svr_ip;

Note: as shown in the figure, due to resource limits the second observer on 192.168.43.233 could not start its process, so the scale-out steps that follow were not personally verified.

7. Tenant scale-out

7.1 Increase the unit count of the resource pool

MySQL [oceanbase]> alter resource pool testpool_1 unit_num=2;

7.2 Check the data distribution after the scale-out:

MySQL [oceanbase]> SELECT t.tenant_id, a.tenant_name, t.table_name, d.database_name, t1.tablegroup_name , t.part_num , t2.partition_id, t2.role , t2.ZONE, t2.svr_ip, concat(t2.svr_ip, ':', t2.svr_port) observer , t2.data_size , a.primary_zone , IF(t.locality = '' OR t.locality IS NULL, a.locality, t.locality) AS locality FROM oceanbase.__all_tenant AS a JOIN oceanbase.__all_virtual_database AS d ON ( a.tenant_id = d.tenant_id ) JOIN oceanbase.__all_virtual_table AS t ON (t.tenant_id = d.tenant_id AND t.database_id = d.database_id) JOIN oceanbase.__all_virtual_meta_table t2 ON (t.tenant_id = t2.tenant_id AND (t.table_id=t2.table_id OR t.tablegroup_id=t2.table_id) AND t2.ROLE IN (1) ) LEFT JOIN oceanbase.__all_virtual_tablegroup AS t1 ON (t.tenant_id = t1.tenant_id and t.tablegroup_id = t1.tablegroup_id) WHERE a.tenant_id IN (1001,1002 ) AND t.table_type IN (3)  and table_name in ('tt1','tt2','tt3') ORDER BY t.tenant_id, t1.tablegroup_name, d.database_name, t.table_name, t2.partition_id;

8. Tenant scale-in

MySQL [oceanbase]> alter resource pool testpool_1 unit_num=1;
MySQL [oceanbase]> select gmt_create,gmt_modified,job_id,job_type,job_status,return_code,progress,tenant_id from __all_rootservice_job;

Query __all_rootservice_job to track the progress, and wait for the status to become SUCCESS.


Delete the servers

MySQL [oceanbase]> ALTER SYSTEM DELETE SERVER 'ip:port' [,'ip:port'…] [ZONE [=] 'zone']
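
For example, to remove the three observers added during the server scale-out (a sketch using this article's addresses; make sure no units remain on them first):

MySQL [oceanbase]> ALTER SYSTEM DELETE SERVER '192.168.43.89:3882' ZONE='zone1';
MySQL [oceanbase]> ALTER SYSTEM DELETE SERVER '192.168.43.233:3882' ZONE='zone2';
MySQL [oceanbase]> ALTER SYSTEM DELETE SERVER '192.168.43.223:3882' ZONE='zone3';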

Summary: pay close attention to the details throughout the build; details determine success or failure. When you hit an error, check observer.log and, when necessary, obproxy.log as well. OB produces a large number of log entries, so filter them to find the ones that are useful to you, and look up the errors in the official documentation. One final word: if you have time, build the OB cluster a few more times; practice makes perfect, and only with repetition will you come to understand OB in depth.


Note: this article does not discuss performance; it only covers the process.
