MariaDB 5.5 (MySQL) partitioning setup for Zabbix 3.0
Anyone who runs Zabbix knows the problem: a single server monitoring hundreds or thousands of hosts, each with dozens of items, accumulates a staggering amount of data over the years.
Zabbix's built-in housekeeping, at its default settings, simply cannot delete data as fast as it arrives.
Raise the deletion volume, though, and CPU and disk alarms start firing instead; in severe cases monitoring itself gets interrupted.
To avoid this, we use the partitioning feature of MariaDB (MySQL).
Partitioning is best designed and configured when the server is first set up.
It can still be added after the server has been running for a while, but it is considerably more troublesome.
As everyone knows, the largest tables in a Zabbix database are the history* and trends* tables,
so we partition only those tables; the housekeeper is good enough for everything else.
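If you want to confirm which tables are actually the largest in your own installation before deciding what to partition, a quick read-only check against information_schema works. The query below is only a generic sketch (not part of the original procedure set) and assumes the database is named zabbix:
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       table_rows
FROM information_schema.tables
WHERE table_schema = 'zabbix'
ORDER BY (data_length + index_length) DESC
LIMIT 10;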
* The Zabbix server in this example is a fresh installation: the table structure has been created, but it contains no data yet.
Log in to MySQL:
mysql zabbix
and execute the following SQL statements in order.
ALTER TABLE history_text DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_text_2, ADD INDEX history_text_2 (itemid, id);
ALTER TABLE history_log DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_log_2, ADD INDEX history_log_2 (itemid, id);
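If you want to double-check that the index changes were applied before moving on, the following optional statements (run in the mysql client) show the new key layout:
SHOW CREATE TABLE history_text\G
SHOW CREATE TABLE history_log\G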
Create the stored procedure partition_create:
DELIMITER $$
CREATE PROCEDURE partition_create(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
DECLARE RETROWS INT;
/* Verify that a partition covering this clock value does not already exist */
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;
IF RETROWS = 0 THEN
SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
PREPARE STMT FROM @sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;
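The procedure can be tested by hand. The call below is only an illustrative sketch with made-up values: it would append a partition named p201604010000 to zabbix.history, covering clock values below the Unix timestamp 1459468800 (2016-04-01 00:00:00 UTC), provided no partition with an equal or higher boundary already exists:
CALL partition_create('zabbix', 'history', 'p201604010000', 1459468800);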
Create the stored procedure partition_drop:
DELIMITER $$
CREATE PROCEDURE partition_drop(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
/*
SCHEMANAME = The DB schema in which to make changes
TABLENAME = The table with partitions to potentially delete
DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
*/
DECLARE done INT DEFAULT FALSE;
DECLARE drop_part_name VARCHAR(16);
/*
Get a list of all the partitions that are older than the date
in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
a "p", so use SUBSTRING TO get rid of that character.
*/
DECLARE myCursor CURSOR FOR
SELECT partition_name
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
/*
Create the basics for when we need to drop the partition. Also, create
@drop_partitions to hold a comma-delimited list of all partitions that
should be deleted.
*/
SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
SET @drop_partitions = "";
/*
Start looping through all the partitions that are too old.
*/
OPEN myCursor;
read_loop: LOOP
FETCH myCursor INTO drop_part_name;
IF done THEN
LEAVE read_loop;
END IF;
SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
END LOOP;
IF @drop_partitions != "" THEN
/*
1. Build the SQL to drop all the necessary partitions.
2. Run the SQL to drop the partitions.
3. Print out the table partitions that were deleted.
*/
SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
PREPARE STMT FROM @full_sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
ELSE
/*
No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
that no changes were made.
*/
SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
END IF;
END$$
DELIMITER ;
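As an illustration (hypothetical values), the call below would drop every partition of zabbix.history whose name, with the leading "p" stripped, is numerically smaller than 201601010000, i.e. all partitions from before 2016-01-01:
CALL partition_drop('zabbix', 'history', 201601010000);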
Create the stored procedure partition_maintenance:
DELIMITER $$
CREATE PROCEDURE partition_maintenance(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE OLD_PARTITION_NAME VARCHAR(16);
DECLARE LESS_THAN_TIMESTAMP INT;
DECLARE CUR_TIME INT;
CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));
SET @__interval = 1;
create_loop: LOOP
IF @__interval > CREATE_NEXT_INTERVALS THEN
LEAVE create_loop;
END IF;
SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
END IF;
SET @__interval = @__interval + 1;
SET OLD_PARTITION_NAME = PARTITION_NAME;
END LOOP;
SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
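A single table can also be maintained directly with this procedure. The example below is hypothetical: it would keep 28 days of data in zabbix.history, use daily partitions (24-hour interval), and pre-create 14 future partitions:
CALL partition_maintenance('zabbix', 'history', 28, 24, 14);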
Create the stored procedure partition_verify:
DELIMITER $$
CREATE PROCEDURE partition_verify(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE RETROWS INT(11);
DECLARE FUTURE_TIMESTAMP TIMESTAMP;
/* Check whether the table is already partitioned (partition_name is NULL when it is not) */
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;
IF RETROWS = 1 THEN
SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');
SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");
PREPARE STMT FROM @__PARTITION_SQL;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;
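partition_verify is normally invoked by partition_maintenance, but it can be run on its own. In this hypothetical example it would check whether zabbix.history is already partitioned and, if not, switch it to RANGE partitioning on `clock` with one initial partition bounded at the next day boundary:
CALL partition_verify('zabbix', 'history', 24);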
Create the stored procedure partition_maintenance_all.
The call format used inside it is:
CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>)
The last three parameters are: the number of days to keep data, the partition interval in hours (1 means one partition per hour, 24 means one partition per day, and so on), and the number of future partitions to pre-create.
Adjust these numbers to your own needs; the values used below follow the defaults from the zabbix.org wiki page referenced at the end of this article.
DELIMITER $$
CREATE PROCEDURE partition_maintenance_all(SCHEMA_NAME VARCHAR(32))
BEGIN
CALL partition_maintenance(SCHEMA_NAME, 'history', 28, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_log', 28, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_str', 28, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_text', 28, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 28, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
Then run:
mysql> CALL partition_maintenance_all('zabbix');
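To confirm the result, you can optionally list the partitions that now exist, for example:
SELECT table_name, partition_name, partition_description
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name IN ('history', 'trends')
ORDER BY table_name, partition_description;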
Next, disable housekeeping for history and trends in the Zabbix frontend (in Zabbix 3.0 this is under Administration → General → Housekeeping).


Finally, since this example only pre-creates 14 partitions, incoming data would have nowhere to go from day 15 onward, so put the following script into crontab and run it daily or weekly:
#!/bin/sh
mysql -u[username] -p[password] zabbix -e "CALL partition_maintenance_all('zabbix');"
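For example, assuming the script above is saved as /etc/zabbix/partition_maintenance.sh (a path chosen here purely for illustration) and made executable, a crontab entry running it every night at 02:00 might look like this:
0 2 * * * /etc/zabbix/partition_maintenance.sh > /dev/null 2>&1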
The stored procedures created above can also be used individually:
- partition_create - adds a partition to a table
- partition_drop - drops all partitions older than the given timestamp
- partition_verify - checks whether a table is partitioned and, if it is not, creates an initial single partition
Their signatures are:
partition_create(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
partition_drop(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
partition_verify(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
Reference:
Docs/howto/mysql partition: https://www.zabbix.org/wiki/Docs/howto/mysql_partition