LOAD DATA INFILE – performance case study
Reposted from: http://venublog.com/2007/11/07/load-data-infile-performance/
I often notice people complaining about LOAD DATA performance when loading a table with a large number of rows. Just today I saw a case where LOAD DATA on a simple 3-column table with about 5 million rows was taking ~15 minutes. This was because the server had no tuning at all for bulk insertion.
Consider the following simple MyISAM table on Red Hat Linux 32-bit.
CREATE TABLE load1 (
  `col1` varchar(100) NOT NULL default '',
  `col2` int(11) default NULL,
  `col3` char(1) default NULL,
  PRIMARY KEY (`col1`)
) TYPE=MyISAM;
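(A side note: TYPE=MyISAM is the pre-MySQL 5 spelling of the storage-engine clause; on a current server the same definition would end with ENGINE=MyISAM instead.)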
The table has a string primary key. Here is the data file I used for testing:
[vanugant@escapereply:t55 tmp]$ wc loaddata.csv
 5164946  5164946 227257389 loaddata.csv
[vanugant@escapereply:t55 tmp]$ ls -alh loaddata.csv
-rw-r--r-- 1 vanugant users 217M Nov  6 14:42 loaddata.csv
[vanugant@escapereply:t55 tmp]$
Here are the default MySQL system variables relevant to LOAD DATA (only the relevant rows of SHOW VARIABLES are shown):
mysql> show variables;
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| bulk_insert_buffer_size | 8388608  |
| myisam_sort_buffer_size | 16777216 |
| key_buffer_size         | 33554432 |
+-------------------------+----------+
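A bare SHOW VARIABLES dumps every server variable. If you only want the handful that matter for bulk loading, a LIKE pattern is the simplest filter; the sketch below also matches a few other *_buffer_size variables, which is harmless:

mysql> SHOW VARIABLES LIKE '%buffer_size';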
And here is the actual LOAD DATA statement that loads all ~5M rows (~217MB of data) into the table, along with its timing:
mysql> LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv' IGNORE INTO TABLE load1 FIELDS TERMINATED BY ',';
Query OK, 4675823 rows affected (14 min 56.84 sec)
Records: 5164946  Deleted: 0  Skipped: 489123  Warnings: 0
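For reference, that is roughly 4,675,823 rows / 897 s, or about 5,200 rows per second, which gives a baseline to compare the later runs against.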
Now let's experiment: raise the session bulk_insert_buffer_size and disable the table's keys before running the LOAD DATA:
mysql> SET SESSION BULK_INSERT_BUFFER_SIZE=314572800;
Query OK, 0 rows affected (0.00 sec)

mysql> alter table load1 disable keys;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv' IGNORE INTO TABLE load1 FIELDS TERMINATED BY ',';
Query OK, 4675823 rows affected (13 min 47.50 sec)
Records: 5164946  Deleted: 0  Skipped: 489123  Warnings: 0
No real gain: the load is only about a minute faster and still takes close to 14 minutes. (That is expected here: ALTER TABLE ... DISABLE KEYS only suspends non-unique indexes in MyISAM, and this table's only index is its unique primary key, so there is nothing to disable.) Now let's set realistic MyISAM buffer values and try again:
mysql> SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
Query OK, 0 rows affected (0.00 sec)

mysql> set session MYISAM_SORT_BUFFER_SIZE=256217728;
Query OK, 0 rows affected (0.00 sec)

mysql> set global KEY_BUFFER_SIZE=256217728;
Query OK, 0 rows affected (0.05 sec)

mysql> alter table load1 disable keys;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv' IGNORE INTO TABLE load1 FIELDS TERMINATED BY ',';
Query OK, 4675823 rows affected (1 min 55.05 sec)
Records: 5164946  Deleted: 0  Skipped: 489123  Warnings: 0

mysql> alter table load1 enable keys;
Query OK, 0 rows affected (0.00 sec)
Wow, that's roughly an 87% reduction in load time, from almost 15 minutes down to under 2 minutes (about 40,000 rows per second). So disabling keys in MyISAM is not the whole story; sizing the buffers to match the input data plays a big role as well. A sketch of the combined recipe follows.
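To pull the pieces together, here is a minimal sketch of how the buffer settings and DISABLE KEYS would be combined for a table that also carries a non-unique secondary index, which is where DISABLE KEYS really pays off. The idx_col2 index and the 256M buffer values are illustrative assumptions, not part of the original test:

-- assume load1 also has a non-unique secondary index:
--   ALTER TABLE load1 ADD INDEX idx_col2 (col2);

SET SESSION bulk_insert_buffer_size = 268435456;  -- 256M, sized to the data being loaded
SET SESSION myisam_sort_buffer_size = 268435456;  -- used when the indexes are rebuilt
SET GLOBAL  key_buffer_size         = 268435456;  -- shared MyISAM index cache (needs SUPER)

ALTER TABLE load1 DISABLE KEYS;   -- suspends non-unique indexes only

LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv'
  IGNORE INTO TABLE load1
  FIELDS TERMINATED BY ',';

ALTER TABLE load1 ENABLE KEYS;    -- rebuilds the suspended indexes in one pass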
For the same load with the table converted to InnoDB, here is the result after setting innodb_buffer_pool_size=1G and innodb_log_file_size=256M, with innodb_flush_log_at_trx_commit left at 1.
mysql> show variables like '%innodb%size';
+---------------------------------+------------+
| Variable_name                   | Value      |
+---------------------------------+------------+
| innodb_additional_mem_pool_size | 26214400   |
| innodb_buffer_pool_size         | 1073741824 |
| innodb_log_buffer_size          | 8388608    |
| innodb_log_file_size            | 268435456  |
+---------------------------------+------------+

mysql> LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv' IGNORE INTO TABLE load1 FIELDS TERMINATED BY ',';
Query OK, 4675823 rows affected (2 min 37.53 sec)
Records: 5164946  Deleted: 0  Skipped: 489123  Warnings: 0
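The InnoDB run above only touches server-side buffer and log sizes. The MySQL manual also suggests a couple of session-level switches for InnoDB bulk loads; they were not used in this test, but a minimal sketch of how they would wrap the same statement looks like this. The usual caveat applies: only disable unique_checks when the input is known to be free of duplicate keys, which is not true of this particular file (see the Skipped count), so treat this purely as an illustration:

SET SESSION unique_checks = 0;       -- let InnoDB defer secondary unique-index checks
SET SESSION foreign_key_checks = 0;  -- skip FK validation during the load

LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv'
  IGNORE INTO TABLE load1
  FIELDS TERMINATED BY ',';

SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;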
With innodb_flush_log_at_trx_commit=2, innodb_flush_method=O_DIRECT and innodb_doublewrite=0, the load time drops by roughly another 30% (use these settings with caution, and only if you understand the durability trade-offs). A my.cnf sketch with these options follows the result below.
mysql> LOAD DATA INFILE '/home/vanugant/tmp/loaddata.csv' IGNORE INTO TABLE load1 FIELDS TERMINATED BY ',';
Query OK, 4675823 rows affected (1 min 53.69 sec)
Records: 5164946  Deleted: 0  Skipped: 489123  Warnings: 0
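Of those three options, only innodb_flush_log_at_trx_commit can be changed at runtime; innodb_flush_method and the doublewrite buffer have to be configured in the option file and require a server restart. A minimal my.cnf sketch with the values used in this post, meant as an illustration rather than a recommended production configuration:

[mysqld]
innodb_buffer_pool_size        = 1G
innodb_log_file_size           = 256M       # old versions also require recreating the ib_logfile* files after changing this
innodb_flush_log_at_trx_commit = 2          # flush the log roughly once per second instead of at every commit
innodb_flush_method            = O_DIRECT   # bypass the OS page cache for data files (Linux)
innodb_doublewrite             = 0          # or skip-innodb-doublewrite; risks torn pages after a crash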