As is well known, MySQL queries become very slow once the offset gets large: for a query like OFFSET 100000 LIMIT 5, the database has to step over the first 100,000 rows before it can return the 5 rows you actually want. Because rows are not stored at fixed lengths on disk, there is no way to optimize that skipping; the engine must read each row, find where it ends, and move on to the next, which is what ruins performance. The solution is the seek method (keyset pagination); see this article.
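For contrast, here is a minimal sketch of the OFFSET-based query in jOOQ, using the same illustrative table T and configuration that appear in the examples below. However far you page, MySQL still reads and discards every skipped row:

```java
// OFFSET pagination: cost grows with the offset,
// because the skipped rows are still read and thrown away.
DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .orderBy(T.VALUE, T.ID)
   .limit(5)
   .offset(100000)
   .fetch();
```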

Suppose you have data like this:

| ID  | VALUE | PAGE_BOUNDARY |
|-----|-------|---------------|
| ... | ...   | ...           |
| 474 | 2     | 0             |
| 533 | 2     | 1             | <-- Last on page 5
| 640 | 2     | 0             |
| 776 | 2     | 0             |
| 815 | 2     | 0             |
| 947 | 2     | 0             |
| 37  | 3     | 1             | <-- Last on page 6
| 287 | 3     | 0             |
| 450 | 3     | 0             |
| ... | ...   | ...           |

Now you need to return page 6 to the user. Instead of OFFSET and LIMIT, the approach is this: take the identifier of the last row on page 5 (here its value and id) and pass it to the SQL statement to mark where page 6 starts. Even if 100,000 rows come before it, the database does not have to touch them; a single predicate such as WHERE (value, id) > (?, ?) skips straight past them to the 5 rows you need.

In SQL:

```sql
SELECT id, value
FROM t
WHERE (value, id) > (2, 533)
ORDER BY value, id
LIMIT 5
```

In jOOQ:

```java
DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .orderBy(T.VALUE, T.ID)
   .seek(2, 533)
   .limit(5)
   .fetch();
```

The result is:

| ID  | VALUE |
|-----|-------|
| 640 | 2     |
| 776 | 2     |
| 815 | 2     |
| 947 | 2     |
| 37  | 3     |

This is exactly the same result you would get with OFFSET and LIMIT, but the performance is clearly better.
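For the seek predicate to pay off, the columns used in the ORDER BY should be backed by a composite index so the database can jump to (2, 533) with an index range scan instead of scanning the table. A minimal sketch using jOOQ's DDL API follows; the index name idx_t_value_id is just an assumption, and a plain CREATE INDEX statement works equally well. Note that on older MySQL versions the row-value predicate may not use the index and may need to be expanded into value > 2 OR (value = 2 AND id > 533):

```java
// Assumed index name: a composite index on (VALUE, ID), matching the ORDER BY,
// lets the keyset predicate be resolved by an index range scan.
DSL.using(configuration)
   .createIndex("idx_t_value_id")
   .on(T, T.VALUE, T.ID)
   .execute();
```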

In practice you will usually combine this with jOOQ's dynamic SQL, so that the various filter conditions can be applied and so that a first query (no seek value yet) can be distinguished from a query for a later page. The code looks like this:

```java
// Build the WHERE condition dynamically
public Condition whereCondition(List<Integer> ids, List<String> wordList, String time, boolean own) {
    Condition condition = trueCondition();
    if (ids != null) {
        if (own)
            condition = condition.and(SELF_SITE.URL_ID.in(ids));
        else
            condition = condition.and(SELF_SITE.URL_ID.notIn(ids));
    }
    // Add the keyword conditions
    Condition wordCondition = falseCondition();
    for (String word : wordList) {
        wordCondition = wordCondition.or(SELF_SITE.NAME.contains(word));
    }
    condition = condition.and(wordCondition);
    // Add the time condition
    if (time != null) {
        Instant instant = Instant.ofEpochMilli(Long.parseLong(time));
        LocalDateTime localDateTime = LocalDateTime.ofInstant(instant, ZoneOffset.ofHours(8));
        condition = condition.and(SELF_SITE.FETCH_TIME.greaterOrEqual(localDateTime));
    }
    return condition;
}

@Override
public List<SelfSite> findData(List<Integer> ids, int self_site_id, List<String> wordList, String time, boolean own) {
    SelectQuery<Record> query = dsl.selectQuery();
    query.addFrom(SELF_SITE);
    query.addConditions(whereCondition(ids, wordList, time, own));
    query.addOrderBy(SELF_SITE.ID.desc());
    // -1 means "first page": there is no seek value yet, so no SEEK AFTER clause is added
    if (self_site_id != -1)
        query.addSeekAfter(DSL.val(self_site_id));
    query.addLimit(NUM_PER_PAGE);
    return query.fetch().into(SelfSite.class);
}
```
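A minimal caller-side sketch of how the two cases fit together (repository and the getId() accessor are illustrative names, not part of the code above): pass -1 for the first page, then pass the ID of the last row you received to fetch the next page. Since the ordering is ID.desc(), each following page contains smaller IDs.

```java
// First page: -1 disables addSeekAfter, so the query simply starts from the top.
List<SelfSite> firstPage = repository.findData(ids, -1, wordList, time, own);

// Next page: seek after the last ID of the previous page (assumed getId() accessor).
int lastId = firstPage.get(firstPage.size() - 1).getId();
List<SelfSite> nextPage = repository.findData(ids, lastId, wordList, time, own);
```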

MySQL can handle a decent amount of traffic as the business expands; however, once the database reaches maximum capacity, an e-commerce website will not work as efficiently as you want. This is because MySQL has difficulties and limitations in handling a few things, such as the following:

- Increased Availability: MySQL has a single point of failure at the master server, which means that if the master goes down there will be downtime. When this happens, your customers will not be able to purchase anything from your e-commerce site, and if the downtime lasts too long they will get frustrated and take their business elsewhere, sometimes permanently. This translates directly into lost money for your business.
- Increasing Reads and Writes: MySQL has limitations on capacity; as more and more customers carry out complete transactions on your website, it will not take long before the database stalls. MySQL can scale reads through read slaves, but the application has to be aware that those reads are not synchronized with the write master (see the sketch below). For example, if a customer updates products in his cart, that data had better be read from the write master, otherwise there is a risk of showing the wrong available-to-promise quantities. This creates a bottleneck at checkout, which leads to abandoned carts, unmanaged and unsold inventory, selling inventory you don't have, refunds, negative social media exposure, or worse: unhappy customers who might not come back.
- Flexing Up and Down: MySQL writes are not scalable via slaves, which means you can only accommodate an increase in traffic by paying a premium and scaling up. MySQL does not have the ability to flex up and down to suit your e-commerce business.
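To make the read-slave caveat concrete, here is a minimal, hedged sketch of "read your own writes" routing at the application level; the two DataSource fields and the method names are illustrative and not tied to any particular framework. Anything the customer just wrote, such as cart contents, is read back from the master, while read-only catalog queries can go to a replica:

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

// Illustrative routing helper: cart reads go to the master so the customer
// always sees their own writes; catalog browsing can tolerate replica lag.
public class ReadRouter {
    private final DataSource master;   // write master
    private final DataSource replica;  // read slave, may lag behind the master

    public ReadRouter(DataSource master, DataSource replica) {
        this.master = master;
        this.replica = replica;
    }

    public Connection connectionFor(boolean mustSeeOwnWrites) throws SQLException {
        return mustSeeOwnWrites ? master.getConnection() : replica.getConnection();
    }
}
```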

Tackling When it Reaches its Limit

There will come a time when MySQL reaches its limit and your e-commerce website stalls. Once your write master has been scaled up as far as it can go, consider the following techniques:

- Re-platforming: This technique involves moving the database from one platform to another; for example, you can move Magento/MySQL to an Oracle-based platform.
- Sharding: Here you partition the database across multiple servers (see the sketch after this list). This improves MySQL's performance, even though it brings various shortcomings of its own.
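As a rough illustration of what sharding means at the application level, here is a minimal, hedged sketch; the class, the JDBC URLs, and the choice of customer ID as shard key are all assumptions for the example. Each row is routed to one of N MySQL instances by hashing its shard key:

```java
import java.util.List;

// Illustrative shard router: picks which MySQL instance owns a given customer.
public class ShardRouter {
    private final List<String> jdbcUrls; // one JDBC URL per shard (assumed)

    public ShardRouter(List<String> jdbcUrls) {
        this.jdbcUrls = jdbcUrls;
    }

    // The same customer always maps to the same shard; floorMod keeps the index non-negative.
    public String urlFor(long customerId) {
        int shard = (int) Math.floorMod(customerId, (long) jdbcUrls.size());
        return jdbcUrls.get(shard);
    }
}
```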

These techniques do enhance MySQL's performance; however, they are quite complex and costly to implement, and they are not your only options to fall back on.

Maintaining the Database
