Zabbix 4.0 MySQL Optimization (Zabbix Partitioned Tables)

Zabbix's biggest bottleneck is usually not the Zabbix server itself but the load on the MySQL database, so optimizing MySQL is in effect optimizing Zabbix.
There are two common ways to deal with the Zabbix database:

  • Clear out the data in history, history_uint and trends_uint (this is very time-consuming); a minimal sketch of this approach is shown below the list.
  • Use MySQL table partitioning to split the large history tables. Partitioning must be set up while the data volume is still small; once the tables have grown to tens or hundreds of GB, first clear the data with the first method and then partition the tables.

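For reference, a minimal sketch of the first approach (this assumes zabbix-server is stopped and the collected data can be discarded; TRUNCATE removes it permanently):

-- Sketch of option 1: throw away all collected history/trend data.
-- Stop zabbix-server before running this; the data cannot be recovered.
TRUNCATE TABLE history;
TRUNCATE TABLE history_uint;
TRUNCATE TABLE history_str;
TRUNCATE TABLE history_text;
TRUNCATE TABLE history_log;
TRUNCATE TABLE trends;
TRUNCATE TABLE trends_uint;
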
The partitioning procedure is as follows:

Background notes:

MySQL table partitioning does not support foreign keys. In Zabbix 2.0 and later the history- and trends-related tables no longer use foreign keys, so they can be partitioned.
MySQL table partitioning splits one large logical table into several physical pieces. It brings the following benefits:
1. In some scenarios it can clearly improve query performance, especially for heavily used tables where queries only touch one or a few partitions, because loading the data and index of a single partition into memory is much cheaper than loading the whole table. The general log of the Zabbix database shows that Zabbix hits the history tables very frequently, so optimizing the Zabbix database means focusing on these large history tables.
2. If queries or updates mainly touch a single partition, performance can improve simply because that partition can be read sequentially from disk instead of using an index with random access across the whole table.
3. Bulk inserts and deletes can be done by simply adding or dropping partitions, provided the partitions were planned when they were created; the corresponding ALTER TABLE operations are also fast.
4. The Housekeeper is no longer needed for some data types. Housekeeping for those types can be disabled under Administration -> General -> Housekeeping, for example housekeeping of History data.
5. When adding new partitions, make sure the partition range does not go out of bounds, otherwise an error is returned.
  A MySQL table is either fully partitioned or not partitioned at all.
  When splitting a table into a large number of partitions, increase the value of open_files_limit.
  Partitioned tables do not support foreign keys; existing foreign keys must be dropped before partitioning (a quick check is sketched below this list).
  Partitioned tables do not use the query cache.
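The notes about open_files_limit and foreign keys can be checked with queries along these lines (a sketch; adjust the schema name 'zabbix' to your environment):

-- Current limit on open file descriptors used by mysqld
SHOW VARIABLES LIKE 'open_files_limit';

-- Foreign keys still defined on the tables that are about to be partitioned
SELECT table_name, constraint_name
FROM information_schema.table_constraints
WHERE constraint_type = 'FOREIGN KEY'
  AND table_schema = 'zabbix'
  AND table_name IN ('history', 'history_uint', 'trends', 'trends_uint');
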
1. Check how much space the tables use
select table_name, (data_length+index_length)/1024/1024 as total_mb, table_rows from information_schema.tables where table_schema='zabbix';
2. The large Zabbix tables: history, history_log, history_str, history_text, history_uint, trends, trends_uint

The script defines four stored procedures (plus the partition_maintenance_all wrapper at the end):
partition_create - creates a partition on the given table in the given schema.
partition_drop - drops all partitions older than the given timestamp on the given table in the given schema.
partition_maintenance - the procedure the user calls; it parses the given parameters and then creates/drops partitions as needed.
partition_verify - checks whether partitioning is enabled on the given table in the given schema; if not, it creates a single initial partition.
The full script is as follows:

DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table to add the partition to
           PARTITIONNAME = The name of the partition to create
           CLOCK = The new partition holds rows with clock values less than this timestamp
        */
        /*
           Verify that the partition does not already exist
        */

        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

        IF RETROWS = 0 THEN
        /*
           1. Print a message indicating that a partition was created.
           2. Create the SQL to create the partition.
           3. Execute the SQL from #2.
        */
        SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
        SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
        PREPARE STMT FROM @sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);

        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        /*
           Create the basics for when we need to drop the partition.  Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";

        /*
           Start looping through all the partitions that are too old.
        */
        OPEN myCursor;
        read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
                LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
        /*
           1. Build the SQL to drop all the necessary partitions.
           2. Run the SQL to drop the partitions.
           3. Print out the table partitions that were deleted.
        */
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;

        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
        /*
           No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
           that no changes were made.
        */
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE OLD_PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

        SET @__interval = 1;
        create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
                LEAVE create_loop;
        END IF;

        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
                CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        END IF;
        SET @__interval=@__interval+1;
        SET OLD_PARTITION_NAME = PARTITION_NAME;
        END LOOP;

        SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);

END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;

        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
        /*
         * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
         * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
               CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
3. The content above contains the stored procedures that create and drop the partitions. Copy it into a file named partition.sql.
a. Then import it into the zabbix database:
mysql  -uzabbix -pzabbix zabbix  < partition.sql
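To confirm the procedures were created (assuming they were imported into the zabbix schema as above), a quick check:

-- List the stored procedures now present in the zabbix schema
SELECT routine_name
FROM information_schema.routines
WHERE routine_schema = 'zabbix' AND routine_type = 'PROCEDURE';
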
b. Add a crontab entry so partition maintenance runs every day at 01:01:
crontab  -l > crontab.txt 
cat >> crontab.txt <<EOF
#zabbix partition_maintenance
01 01 * * *  mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" >/dev/null 2>&1
EOF
crontab crontab.txt
Note: adjust the zabbix MySQL user's password to match your environment.
c. Run the maintenance once by hand first (the first run takes a long time, so run it with nohup):
nohup mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &
4. Verify that partitioning succeeded
1. Check the partition layout:
mysql> show create table history_uint;
| history_uint | CREATE TABLE `history_uint` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` bigint(20) unsigned NOT NULL DEFAULT '0',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_uint_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
/*!50100 PARTITION BY RANGE (`clock`)
(PARTITION p201904160000 VALUES LESS THAN (1555430400) ENGINE = InnoDB,
 PARTITION p201904170000 VALUES LESS THAN (1555516800) ENGINE = InnoDB,
 PARTITION p201904180000 VALUES LESS THAN (1555603200) ENGINE = InnoDB,
 PARTITION p201904190000 VALUES LESS THAN (1555689600) ENGINE = InnoDB,
 PARTITION p201904200000 VALUES LESS THAN (1555776000) ENGINE = InnoDB,
 PARTITION p201904210000 VALUES LESS THAN (1555862400) ENGINE = InnoDB,
 PARTITION p201904220000 VALUES LESS THAN (1555948800) ENGINE = InnoDB,
 PARTITION p201904230000 VALUES LESS THAN (1556035200) ENGINE = InnoDB,
 PARTITION p201904240000 VALUES LESS THAN (1556121600) ENGINE = InnoDB,
 PARTITION p201904250000 VALUES LESS THAN (1556208000) ENGINE = InnoDB,
 PARTITION p201904260000 VALUES LESS THAN (1556294400) ENGINE = InnoDB,
 PARTITION p201904270000 VALUES LESS THAN (1556380800) ENGINE = InnoDB,
 PARTITION p201904280000 VALUES LESS THAN (1556467200) ENGINE = InnoDB,
 PARTITION p201904290000 VALUES LESS THAN (1556553600) ENGINE = InnoDB) */ |
1 row in set (0.00 sec)
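The partitions can also be listed from information_schema with a query along these lines (a sketch for the history_uint table; adjust schema and table names as needed):

-- Partition names, their upper clock bounds, and approximate row counts
SELECT partition_name, partition_description, table_rows
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name = 'history_uint'
ORDER BY partition_description;
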
2. Look at the table files in the MySQL data directory. After partitioning, the single .ibd file of the table has been replaced by multiple .ibd files, one per date-based partition:
[root@VM_0_3_centos zabbix]# pwd
/data/mysql/zabbix
[root@VM_0_3_centos zabbix]# ls -lh|grep history_uint
-rw-r----- 1 mysql mysql 8.5K Apr 16 16:16 history_uint.frm
-rw-r----- 1 mysql mysql 124M Apr 16 17:11 history_uint#P#p201904160000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904170000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904180000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904190000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904200000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904210000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904220000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904230000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904240000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904250000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904260000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904270000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904280000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904290000.ibd
5. Frontend settings: Administration -> General -> Housekeeping. Disable housekeeping for the history and trends data types, since partition rotation now handles their retention.
6. MySQL tuning

a. Enable per-table tablespaces

  vi /etc/my.cnf
  Under the [mysqld] section set:
  innodb_file_per_table=1
Reference: https://blog.51cto.com/yuweibing/1656425
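Whether the setting is active can be checked from MySQL (note that innodb_file_per_table only affects tables created or rebuilt after it is enabled):

-- Verify that per-table tablespaces are enabled
SHOW VARIABLES LIKE 'innodb_file_per_table';
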

b. Increase innodb_log_file_size
Zabbix runs large transactions against the database, especially when deleting historical data; if the redo log files are too small, such operations can fail and be rolled back.

Edit my.cnf and add: innodb_log_file_size=20M
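Changing innodb_log_file_size requires a MySQL restart; the value actually in effect can then be checked (reported in bytes):

-- Check the redo log file size currently in effect
SHOW VARIABLES LIKE 'innodb_log_file_size';
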
7. Problems

After running for a while, several agents monitored by the Zabbix server started triggering the alert: Zabbix agent on zabbix server is unreachable for 5 minutes.
The Zabbix server log showed the following:

 32757:20190430:095456.588 [Z3005] query failed: [1526] Table has no partition for value 1556589295 [insert into history_uint (itemid,clock,ns,value) values (29578,1556589295,330422837,0);
]
 32757:20190430:095457.602 [Z3005] query failed: [1526] Table has no partition for value 1556589297 [insert into history_uint (itemid,clock,ns,value) values (29274,1556589297,446885606,200),(29270,1556589297,447245729,0),(29457,1556589297,493976343,0),(29517,1556589297,498202149,1);
]
 32758:20190430:095458.603 [Z3005] query failed: [1526] Table has no partition for value 1556589298 [insert into history_uint (itemid,clock,ns,value) values (29518,1556589298,496847550,154),(29675,1556589298,502218685,200),(29671,1556589298,502513014,0);
]
 32751:20190430:104927.004 executing housekeeper
 32751:20190430:104927.018 housekeeper [deleted 0 hist/trends, 0 items/triggers, 0 events, 0 problems, 0 sessions, 0 alarms, 0 audit items in 0.013509 sec, idle for 1 hour(s)]

The errors mean the inserts have nowhere to go: query failed: [1526] Table has no partition for value ... indicates that no partition covers the given clock value. The history tables (history, history_log, history_str, history_text, etc.) had already been partitioned and the cron job had been added, so under normal conditions a new partition is created every day and this should not happen. The log above shows the expected partitions were never created, so running the maintenance procedure manually restored normal operation (a sketch follows below).
Root cause:
The crontab entry was written incorrectly and never executed; correcting it fixed the problem.
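A sketch of the manual recovery and a follow-up check (assuming the zabbix schema and the procedures created above):

-- Re-run partition maintenance by hand to create the missing partitions
CALL partition_maintenance_all('zabbix');

-- Confirm the partitions now extend past the failing clock value (1556589295 in the log above)
SELECT MAX(CAST(partition_description AS UNSIGNED)) AS max_clock
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name = 'history_uint';
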

References:
https://cloud.tencent.com/developer/article/1006301
https://www.kancloud.cn/devops-centos/centos-linux-devops/375488
https://zabbix.org/wiki/Docs/howto/MySQL_Table_Partitioning_(variant)
