This is something I came across on jobbole.com.

First, the table structure:

mysql> show create table tt \G
*************************** 1. row ***************************
Table: tt
Create Table: CREATE TABLE `tt` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` char(16) NOT NULL DEFAULT '',
`rule_id` int(10) unsigned NOT NULL DEFAULT '0',
`status` tinyint(3) unsigned NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
KEY `ttx` (`name`,`rule_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1176504 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

The table currently holds roughly 20 million rows, and it will keep growing. The query we need is: among rows with the same name and the same rule_id, find the last record (the one with the largest id) whose status = 1.

Here is a quick script to fill the table with random data:

<?php
// Generate random test rows using the legacy mysql_* API (as in the original environment).
mysql_connect('10.0.0.234', 'root', '123456');
mysql_select_db('testdb');
mysql_query("set names utf8");

$start = 'insert into tt values ';
$strs  = '';

for ($i = 0; $i < 3000000; $i++) {
    // Build a random lowercase name 8-16 characters long.
    $k = range('a', 'z');
    shuffle($k);
    $str = implode('', $k);
    $num = mt_rand(8, 16);
    $in  = substr($str, 0, $num);

    $strs .= "(NULL,'$in',$num,1),";

    // Flush a batch of 10000 rows as one multi-row INSERT.
    if (($i + 1) % 10000 == 0) {
        echo $i . "\n";
        mysql_query($start . rtrim($strs, ','));
        $strs = '';
    }
}

That gives us 3 million rows of test data.
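
As a quick sanity check (a hypothetical follow-up, not part of the original run), we can confirm the row count and how many (name, rule_id) groups the query will ultimately have to produce:

-- hypothetical sanity check on the generated data
SELECT COUNT(*)                      AS total_rows,
       COUNT(DISTINCT name, rule_id) AS distinct_groups
FROM tt;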

The current query looks like this:

SELECT * FROM tt WHERE id IN (
SELECT max(id) AS max_id FROM tt GROUP BY name, rule_id)
AND status = 1 

Let's check the query plan:

mysql> explain SELECT * FROM tt WHERE id IN ( SELECT max(id) AS max_id FROM tt GROUP BY name, rule_id) and status=1;
+----+--------------------+-------+-------+---------------+------+---------+------+---------+-------------+
| id | select_type        | table | type  | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+--------------------+-------+-------+---------------+------+---------+------+---------+-------------+
|  1 | PRIMARY            | tt    | ALL   | NULL          | NULL | NULL    | NULL | 1176818 | Using where |
|  2 | DEPENDENT SUBQUERY | tt    | index | NULL          | ttx  | 52      | NULL |       1 | Using index |
+----+--------------------+-------+-------+---------------+------+---------+------+---------+-------------+

The index is used and there is no filesort, so this should be fast, right? But when the query actually runs it just hangs and hangs, and in the end I had to kill it with Ctrl+C. The plan already hints at the problem: the inner query is shown as a DEPENDENT SUBQUERY rather than being materialized once.
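
On MySQL of this vintage (subquery materialization and semijoin optimizations only arrived in 5.6), the optimizer rewrites the IN into a correlated EXISTS form and evaluates it once per row of the outer full table scan, which is why the query never comes back. Conceptually it ends up executing something like the sketch below (illustrative only; outer_t is an alias I introduced):

-- conceptual sketch: roughly how the IN subquery ends up being executed,
-- once per row of the outer full table scan
SELECT * FROM tt AS outer_t
WHERE outer_t.status = 1
  AND EXISTS (
        SELECT 1 FROM tt
        GROUP BY name, rule_id
        HAVING max(id) = outer_t.id
      );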

Since that is a dead end, let's try something else. My idea here: replace the subquery with a join and see how that performs.

SELECT a.id,b.name,b.rule_id  FROM (select max(id) as id from tt   group by name, rule_id) as a left join tt b on a.id = b.id

The query plan:

mysql> explain SELECT a.id,b.name,b.rule_id  FROM (select max(id) as id from tt   group by name, rule_id) as a left join tt b on a.id = b.id;
+----+-------------+------------+--------+---------------+---------+---------+------+---------+-------------+
| id | select_type | table      | type   | possible_keys | key     | key_len | ref  | rows    | Extra       |
+----+-------------+------------+--------+---------------+---------+---------+------+---------+-------------+
|  1 | PRIMARY     | <derived2> | ALL    | NULL          | NULL    | NULL    | NULL | 1176503 |             |
|  1 | PRIMARY     | b          | eq_ref | PRIMARY       | PRIMARY | 4       | a.id |       1 |             |
|  2 | DERIVED     | tt         | index  | NULL          | ttx     | 52      | NULL | 1176818 | Using index |
+----+-------------+------------+--------+---------------+---------+---------+------+---------+-------------+

Execution time: 1.77579775 sec

mysql> show profiles;
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                                                       |
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+
|        6 | 1.77579775 | SELECT a.id,b.name,b.rule_id FROM (select max(id) as id from tt group by name, rule_id) as a left join tt b on a.id = b.id |
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+
7 rows in set (0.00 sec)

Let's see where the time went:

mysql> show profile for query  6;
+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| Sending data                 | 0.000596 |
| Waiting for query cache lock | 0.000002 |
| Sending data                 | 0.000579 |
| Waiting for query cache lock | 0.000002 |
| Sending data                 | 0.000534 |
| Waiting for query cache lock | 0.000002 |
| Sending data                 | 1.101490 |
| end                          | 0.000010 |
| query end                    | 0.000004 |
| closing tables               | 0.000003 |
| removing tmp table           | 0.001369 |
| closing tables               | 0.000010 |
| freeing items                | 0.000024 |
| logging slow query           | 0.000002 |
| logging slow query           | 0.000035 |
| cleaning up                  | 0.000003 |
+------------------------------+----------+
100 rows in set (0.00 sec)

There are a lot of 'Waiting for query cache lock' entries, so let's switch the query cache off and profile again.
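
For reference, one way to take the query cache out of the picture (assuming you have the privileges to change global variables) is:

-- shrink the cache to nothing and/or disable it outright
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;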

mysql> show profile for query  10;
+---------------------------+----------+
| Status                    | Duration |
+---------------------------+----------+
| starting                  | 0.000154 |
| checking permissions      | 0.000006 |
| checking permissions      | 0.000005 |
| Opening tables            | 0.000124 |
| System lock               | 0.000122 |
| optimizing                | 0.000007 |
| statistics                | 0.000035 |
| preparing                 | 0.000018 |
| executing                 | 0.000012 |
| Sorting result            | 0.000006 |
| Sending data              | 0.518897 |
| converting HEAP to MyISAM | 0.070147 |
| Sending data              | 0.067123 |
| init                      | 0.000033 |
| optimizing                | 0.000003 |
| statistics                | 0.000020 |
| preparing                 | 0.000009 |
| executing                 | 0.000001 |
| Sending data              | 1.193679 |
| end                       | 0.000011 |
| query end                 | 0.000010 |
| closing tables            | 0.000002 |
| removing tmp table        | 0.001491 |
| closing tables            | 0.000011 |
| freeing items             | 0.000020 |
| logging slow query        | 0.000002 |
| logging slow query        | 0.000050 |
| cleaning up               | 0.000003 |
+---------------------------+----------+

Profiling again, the expensive step that stands out is 'converting HEAP to MyISAM': the implicit in-memory temporary table built for the derived result outgrew its size limit and had to be converted to an on-disk MyISAM table.
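
If you want to confirm that temporary tables are spilling to disk, the server status counters can be checked before and after running the query (a side check, not part of the original post):

-- Created_tmp_disk_tables vs. Created_tmp_tables shows how often in-memory
-- temp tables had to be converted to on-disk tables
SHOW SESSION STATUS LIKE 'Created_tmp%';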

mysql> select @@max_heap_table_size/1024/1024;
+---------------------------------+
| @@max_heap_table_size/1024/1024 |
+---------------------------------+
|                     16.00000000 |
+---------------------------------+

mysql> set max_heap_table_size = 16777216*4;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@max_heap_table_size/1024/1024;
+---------------------------------+
| @@max_heap_table_size/1024/1024 |
+---------------------------------+
|                     64.00000000 |
+---------------------------------+
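
One caveat: for implicit temporary tables the effective in-memory limit is min(tmp_table_size, max_heap_table_size), so raising only max_heap_table_size helps only if tmp_table_size is already at least as large. A session-level sketch with illustrative values:

-- both limits govern when an implicit in-memory temp table spills to disk
SET SESSION tmp_table_size      = 64*1024*1024;
SET SESSION max_heap_table_size = 64*1024*1024;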

Run the query again: 1.77579775 sec down to 1.68962725 sec, so it did help.

mysql> show profiles;
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                                                       |
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+
|       17 | 1.68962725 | SELECT a.id,b.name,b.rule_id FROM (select max(id) as id from tt group by name, rule_id) as a left join tt b on a.id = b.id |
+----------+------------+-----------------------------------------------------------------------------------------------------------------------------+

The profile:

mysql> show profile for query  17;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000120 |
| checking permissions | 0.000004 |
| checking permissions | 0.000003 |
| Opening tables       | 0.000106 |
| System lock          | 0.000113 |
| optimizing           | 0.000007 |
| statistics           | 0.000044 |
| preparing            | 0.000011 |
| executing            | 0.000006 |
| Sorting result       | 0.000002 |
| Sending data         | 0.567858 |
| init                 | 0.000032 |
| optimizing           | 0.000004 |
| statistics           | 0.000017 |
| preparing            | 0.000015 |
| executing            | 0.000002 |
| Sending data         | 1.120159 |
| end                  | 0.000011 |
| query end            | 0.000005 |
| closing tables       | 0.000002 |
| removing tmp table   | 0.001020 |
| closing tables       | 0.000011 |
| freeing items        | 0.000018 |
| logging slow query   | 0.000002 |
| logging slow query   | 0.000056 |
| cleaning up          | 0.000004 |
+----------------------+----------+

Much better. The time now sits almost entirely in 'Sending data' (which in the profiler covers reading and processing rows, not just shipping the result to the client). Swapping the disk for a PCIe SSD would probably buy another noticeable improvement.

Is there yet another way? Of course. Let's try a different formulation: pack the columns we need into a single string and take max() of the packed value per group. (Note that lpad(id,5,' ') must be wide enough for the largest id; with 7-digit ids a pad width of 5 would truncate the value and break the comparison.)

First, the query plan:

mysql> explain select max(concat_ws(' ',lpad(id,5,' '),status)) as res, name, rule_id from tt group by name, rule_id ;
+----+-------------+-------+-------+---------------+------+---------+------+---------+-------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows    | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+---------+-------+
|  1 | SIMPLE      | tt    | index | NULL          | ttx  | 52      | NULL | 1176818 |       |
+----+-------------+-------+-------+---------------+------+---------+------+---------+-------+

And the execution time:

mysql> show profiles;
+----------+------------+---------------------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                                   |
+----------+------------+---------------------------------------------------------------------------------------------------------+
|       22 | 1.82505025 | select max(concat_ws(' ',lpad(id,5,' '),status)) as res, name, rule_id from tt group by name, rule_id |
+----------+------------+---------------------------------------------------------------------------------------------------------+

It is actually slower: 1.82505025 sec vs. 1.68962725 sec.

The profile:

mysql> show profile for query  22;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000157 |
| checking permissions | 0.000007 |
| Opening tables       | 0.000019 |
| System lock          | 0.000020 |
| init                 | 0.000032 |
| optimizing           | 0.000005 |
| statistics           | 0.000016 |
| preparing            | 0.000012 |
| executing            | 0.000008 |
| Sorting result       | 0.000003 |
| Sending data         | 1.824677 |
| end                  | 0.000012 |
| query end            | 0.000005 |
| closing tables       | 0.000009 |
| freeing items        | 0.000016 |
| logging slow query   | 0.000002 |
| logging slow query   | 0.000049 |
| cleaning up          | 0.000004 |
+----------------------+----------+

Once again, essentially all of the time goes to 'Sending data'.

One more warning: max() + GROUP BY is a real trap. When you need several other columns back, you must not write it like this:

select  max(id) ,type ,name from table  where type=xx group by name

because the non-aggregated columns do not come from the row that holds max(id); GROUP BY simply returns them from an arbitrary row of each group (in practice, the first one it happens to read).

If you take the form below and start adding extra columns to the select list, you are guaranteed to fall into that trap:

select max(id) as res, name, rule_id from tt group by name, rule_id ;

As written, selecting only the grouping columns plus the aggregate, it is correct and much faster. But what do you do when you also need other columns?
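
One option, sketched below, is to unpack them from the packed max() value; it assumes the lpad width (10 here, chosen to cover the largest id) is wide enough. The other option is the derived-table join shown earlier.

-- sketch: recover id and status from the packed max() string
SELECT name, rule_id,
       CAST(SUBSTRING_INDEX(MAX(CONCAT_WS(' ', LPAD(id, 10, '0'), status)), ' ', 1) AS UNSIGNED) AS max_id,
       SUBSTRING_INDEX(MAX(CONCAT_WS(' ', LPAD(id, 10, '0'), status)), ' ', -1) AS status
FROM tt
GROUP BY name, rule_id;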

There is not much left to optimize in the statement itself. Beyond this you have to change the platform: SSD + RAID 10, or partitioning / table splitting / sharding with a middleware layer.
