The Zabbix database has recently been generating heavy I/O, graphs render slowly in the frontend, and the history tables have grown very large. The plan is to move the MySQL data directory to a new disk and partition the largest tables.

I. Data Migration

1. Sync the data to the new disk. Stop MySQL first (syncing while it is still running would leave the copy inconsistent):
systemctl stop mariadb
rsync -av /var/lib/mysql/ /mysql_data/

2. Edit the MySQL configuration file /etc/my.cnf:
datadir=/mysql_data

3. Start MySQL:
systemctl start mariadb
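
As an optional sanity check (the root credentials here are placeholders; use whatever account can connect in your environment), ask the running server which data directory it is actually serving from:

mysql -uroot -p -e "SELECT @@datadir;"
# expected output: /mysql_data/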

II. Table Partitioning

1. Check how much space each table is using:
select table_name, (data_length+index_length)/1024/1024 as total_mb, table_rows from information_schema.tables where table_schema='zabbix';

2. The largest tables are usually history, history_str, history_text, history_uint, trends and trends_uint. First create an empty copy of each of them (partitioning the original tables in place would take far too long with this much data):
history:
CREATE TABLE `history_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` double(16,4) NOT NULL DEFAULT '0.0000',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

history_str:
CREATE TABLE `history_str_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_str_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

history_text:
CREATE TABLE `history_text_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` text COLLATE utf8_bin NOT NULL,
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_text_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

history_uint:
CREATE TABLE `history_uint_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` bigint(20) unsigned NOT NULL DEFAULT '0',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_uint_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

trends:
CREATE TABLE `trends_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `num` int(11) NOT NULL DEFAULT '0',
  `value_min` double(16,4) NOT NULL DEFAULT '0.0000',
  `value_avg` double(16,4) NOT NULL DEFAULT '0.0000',
  `value_max` double(16,4) NOT NULL DEFAULT '0.0000',
  PRIMARY KEY (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

trends_uint:
CREATE TABLE `trends_uint_20190619` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `num` int(11) NOT NULL DEFAULT '0',
  `value_min` bigint(20) unsigned NOT NULL DEFAULT '0',
  `value_avg` bigint(20) unsigned NOT NULL DEFAULT '0',
  `value_max` bigint(20) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

3. Rename the tables (the old data stays in the *_back tables; see the cleanup sketch after step 6):
rename table history to history_back;
rename table history_20190619 to history;

rename table history_str to history_str_back;
rename table history_str_20190619 to history_str;

rename table history_text to history_text_back;
rename table history_text_20190619 to history_text;

rename table history_uint to history_uint_back;
rename table history_uint_20190619 to history_uint;

rename table trends to trends_back;
rename table trends_20190619 to trends;

rename table trends_uint to trends_uint_back;
rename table trends_uint_20190619 to trends_uint;

4. Copy the SQL from the "Partitioning SQL" section below into partition.sql and run it:
mysql -uzabbix -pzabbix zabbix < partition.sql
5. Add a cron job so partition maintenance runs every night:
01 01 * * * mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" >/dev/null 2>&1
6. Run it manually the first time (on large tables this can take a while, so run it in the background and log the output):
mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &

Note: innodb_file_per_table should be enabled so that each table (and each partition) gets its own tablespace file on disk.
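
Once the partitioned tables have been running long enough that the old data is no longer needed, the *_back tables left over from step 3 can be dropped to reclaim disk space:

drop table history_back;
drop table history_str_back;
drop table history_text_back;
drop table history_uint_back;
drop table trends_back;
drop table trends_uint_back;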

Partitioning SQL (the contents of partition.sql used in step 4)

DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
/*
SCHEMANAME = The DB schema in which to make changes
TABLENAME = The table to which the new partition will be added
CLOCK = The unix timestamp; the new partition will hold rows with clock values less than this
PARTITIONNAME = The name of the partition to create
*/
/*
Verify that the partition does not already exist
*/
DECLARE RETROWS INT;
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

IF RETROWS = 0 THEN
/*
1. Print a message indicating that a partition was created.
2. Create the SQL to create the partition.
3. Execute the SQL from #2.
*/
SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
PREPARE STMT FROM @sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;
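
-- partition_create is normally invoked by partition_maintenance below; a hypothetical
-- manual call (the values here are purely illustrative) would look like:
-- CALL partition_create('zabbix', 'history', 'p201906210000', UNIX_TIMESTAMP('2019-06-21 00:00:00'));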
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
/*
SCHEMANAME = The DB schema in which to make changes
TABLENAME = The table with partitions to potentially delete
DELETE_BELOW_PARTITION_DATE = Drop any partitions whose numeric name (yyyymmddHHMM) is older than this value
*/
DECLARE done INT DEFAULT FALSE;
DECLARE drop_part_name VARCHAR(16);

/*
Get a list of all the partitions that are older than the date
in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
a "p", so use SUBSTRING to get rid of that character.
*/
DECLARE myCursor CURSOR FOR
SELECT partition_name
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

/*
Create the basics for when we need to drop the partition. Also, create
@drop_partitions to hold a comma-delimited list of all partitions that
should be deleted.
*/
SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
SET @drop_partitions = "";

/*
Start looping through all the partitions that are too old.
*/
OPEN myCursor;
read_loop: LOOP
FETCH myCursor INTO drop_part_name;
IF done THEN
LEAVE read_loop;
END IF;
SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
END LOOP;
IF @drop_partitions != "" THEN
/*
1. Build the SQL to drop all the necessary partitions.
2. Run the SQL to drop the partitions.
3. Print out the table partitions that were deleted.
*/
SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
PREPARE STMT FROM @full_sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;

SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
ELSE
/*
No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
that no changes were made.
*/
SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE OLD_PARTITION_NAME VARCHAR(16);
DECLARE LESS_THAN_TIMESTAMP INT;
DECLARE CUR_TIME INT;

CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

SET @__interval = 1;
create_loop: LOOP
IF @__interval > CREATE_NEXT_INTERVALS THEN
LEAVE create_loop;
END IF;

SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
END IF;
SET @__interval=@__interval+1;
SET OLD_PARTITION_NAME = PARTITION_NAME;
END LOOP;

SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE RETROWS INT(11);
DECLARE FUTURE_TIMESTAMP TIMESTAMP;

/*
* Check if any partitions exist for the given SCHEMANAME.TABLENAME.
*/
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

/*
* If partitions do not exist, go ahead and partition the table
*/
IF RETROWS = 1 THEN
/*
* Take the current date at 00:00:00 and add HOURLYINTERVAL to it. This is the timestamp below which we will store values.
* We begin partitioning based on the beginning of a day. This is because we don't want to generate a random partition
* that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
* end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
*/
SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

-- Create the partitioning query
SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

-- Run the partitioning query
PREPARE STMT FROM @__PARTITION_SQL;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
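/*
Arguments (per the partition_maintenance signature above):
schema name, table name, days of data to keep, partition size in hours, number of future partitions to pre-create.
With the values below, history data is kept 90 days and trends 730 days, in daily (24-hour) partitions, with 14 partitions created in advance.
*/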
CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
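
To confirm that the maintenance job is actually creating and rotating partitions, the partition metadata can be queried directly (a small check; substitute any of the partitioned tables for history):

SELECT partition_name, partition_description, table_rows
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name = 'history'
ORDER BY partition_description;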
