[Best Practices] Exporting and Importing MongoDB Data
First, the data scale of this 3-node MongoDB cluster along several dimensions:
1. dataSize: 1.9T
2. storageSize: 600G
3. Full backup with the compression flag: 186G, taking 8h
4. Full backup without the compression flag: 1.8T, taking 4h27m
The export syntax itself is simple, so only a brief sketch is given below; this article focuses on the optimization of the import process and closes with best practices for importing.
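For reference, a hedged sketch of the two dump variants behind items 3 and 4 above (host, port, credentials, and paths are assumptions inferred from the restore commands later in this article). --gzip trades dump size for dump time, and note that a --gzip dump must also be restored with --gzip:
# With compression (186G, 8h in this case)
mongodump --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin -d likingtest --gzip -o /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913
# Without compression (1.8T, 4h27m in this case)
mongodump --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin -d likingtest -o /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913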
■ 2023-09-13T20:00 Test 1: import with 4 concurrent workers
mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=4 --bypassDocumentValidation -d likingtest /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest >> 10.2.2.2.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/10.2.2.2.log
Log output of the import above:
2023-09-13T21:59:55.452+0800    The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
2023-09-13T21:59:55.452+0800    building a list of collections to restore from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest dir
2023-09-13T21:59:55.466+0800    reading metadata for likingtest.oprceConfiguration from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceConfiguration.metadata.json
2023-09-13T21:59:55.478+0800    reading metadata for likingtest.oprceDataObj from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObj.metadata.json
2023-09-13T21:59:55.491+0800    reading metadata for likingtest.oprcesDataObjInit from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprcesDataObjInit.metadata.json
2023-09-13T21:59:55.503+0800    reading metadata for likingtest.role from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/role.metadata.json
2023-09-13T21:59:55.508+0800    reading metadata for likingtest.activityConfiguration from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/activityConfiguration.metadata.json
2023-09-13T21:59:55.511+0800    reading metadata for likingtest.history_task from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/history_task.metadata.json
2023-09-13T21:59:55.512+0800    reading metadata for likingtest.resOutRelDataSnapshot from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/resOutRelDataSnapshot.metadata.json
2023-09-13T21:59:55.520+0800    reading metadata for likingtest.snapshotResource from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/snapshotResource.metadata.json
2023-09-13T21:59:55.524+0800    reading metadata for likingtest.oprceDataObjDraft from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObjDraft.metadata.json
2023-09-13T21:59:55.526+0800    reading metadata for likingtest.oprceDataObjInit from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObjInit.metadata.json
2023-09-13T21:59:55.761+0800    restoring likingtest.snapshotResource from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/snapshotResource.bson
...
2023-09-13T22:00:01.451+0800    [........................]      likingtest.oprceDataObj   408MB/1205GB    (0.0%)
...
2023-09-13T21:59:58.323+0800    finished restoring likingtest.oprceDataObjDraft (1559 documents, 0 failures)
2023-09-13T22:00:01.034+0800    finished restoring likingtest.resOutRelDataSnapshot (34426 documents, 0 failures)
2023-09-13T22:00:01.559+0800    finished restoring likingtest.history_task (3629 documents, 0 failures)
2023-09-13T22:00:02.086+0800    finished restoring likingtest.activityConfiguration (974 documents, 0 failures)
2023-09-13T22:00:02.293+0800    finished restoring likingtest.oprceConfiguration (162 documents, 0 failures)
2023-09-13T22:00:02.529+0800    finished restoring likingtest.oprcesDataObjInit (4 documents, 0 failures)
2023-09-13T22:00:02.857+0800    finished restoring likingtest.role (10 documents, 0 failures)
2023-09-13T22:00:29.153+0800    [########################]  likingtest.snapshotResource  2.04GB/2.04GB  (100.0%)
2023-09-13T22:00:29.155+0800    finished restoring likingtest.snapshotResource (50320 documents, 0 failures)
...
2023-09-14T00:18:58.451+0800    [############............]      likingtest.oprceDataObj  651GB/1205GB   (54.0%)
2023-09-14T00:18:59.857+0800    [########################]  likingtest.oprceDataObjInit  635GB/635GB  (100.0%)
2023-09-14T00:18:59.888+0800    finished restoring likingtest.oprceDataObjInit (43776648 documents, 0 failures)
...
2023-09-14T02:05:58.904+0800    [########################]      likingtest.oprceDataObj  1205GB/1205GB  (100.0%)
2023-09-14T02:05:58.937+0800    finished restoring likingtest.oprceDataObj (53311330 documents, 0 failures)
2023-09-14T02:05:58.945+0800    no indexes to restore for collection likingtest.activityConfiguration
2023-09-14T02:05:58.945+0800    no indexes to restore for collection likingtest.history_task
2023-09-14T02:05:58.945+0800    restoring indexes for collection likingtest.oprcesDataObjInit from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprcesDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprcesDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.role
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.snapshotResource
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.oprceDataObjDraft
2023-09-14T02:05:58.976+0800    restoring indexes for collection likingtest.oprceDataObjInit from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowNo_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowNo", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.oprceConfiguration
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.resOutRelDataSnapshot
2023-09-14T02:05:58.976+0800    restoring indexes for collection likingtest.oprceDataObj from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn",Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowNo_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"flowNo", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T03:45:47.152+0800    97179062 document(s) restored successfully. 0 document(s) failed to restore.
Observations:
1. With the concurrency option --numInsertionWorkersPerCollection=4 and the validation option --bypassDocumentValidation, restore speed improved dramatically: the 1.2T collection oprceDataObj dropped from roughly 12h with the default restore to about 4h.
2. All data is restored first and indexes are restored last; the index restore still took considerable time, 1h40m in this run. [Note: it did not actually succeed; the indexes never took effect.]
3. The -d / -c flags of newer versions should be replaced with --nsInclude / --nsFrom= / --nsTo=; a hedged example follows.
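For instance, a sketch of restoring the same dump into a renamed database with the new flags (the target name likingtest_new is an assumption for illustration; note the directory rule explained under test 3 below):
mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=4 --bypassDocumentValidation --nsInclude="likingtest.*" --nsFrom="likingtest.*" --nsTo="likingtest_new.*" /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913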
■ 2023-09-14T10:40 Test 2: import with 8 concurrent workers
mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=8 --bypassDocumentValidation -d likingtest /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914/likingtest >> 10.2.2.2.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914/10.2.2.2.log
---
2023-09-14T10:40:45.492+0800    The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
...
2023-09-14T10:40:48.493+0800    [........................]       likingtest.oprceDataObj   112MB/1208GB    (0.0%)
...
2023-09-14T12:57:34.859+0800    [########################]       likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-14T12:57:34.867+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)
Observations:
1. With --numInsertionWorkersPerCollection=8 and --bypassDocumentValidation, restore speed improved yet again: the 1.2T collection oprceDataObj dropped from roughly 12h with the default restore to 2h17m.
2. This restore read the backup from NFS on an 8-core VM. At 8-way concurrency, CPU usage was about 40%, network receive throughput around 300MB/s, and local disk write throughput roughly 30-200MB/s, so network bandwidth was not the bottleneck. Predictably, with a higher host spec, especially disks with better IO, restore time would drop further. (The figures were gathered with standard OS tools; a sketch follows.)
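A hedged sketch of the kind of Linux commands behind those figures (sampling intervals are assumptions; any equivalent tools work):
top            # CPU usage of the mongod / mongorestore processes
sar -n DEV 2   # per-interface network receive throughput (rxkB/s)
iostat -xm 2   # per-device disk write throughput (wMB/s) and utilization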
■ 2023-09-14T16:10 Test 3: import with 12 concurrent workers
[Note] Newer versions of mongorestore deprecate the -d / -c flags; they still work but are not flexible enough, so the new --nsInclude flag is required. It took several attempts to discover its usage constraint: the directory argument must be the root/parent directory of the database dump, not the database directory itself! That is, something like dumpdir/20230914, not dumpdir/20230914/database! This is a huge pitfall; remember it. Also, the directory must not contain any files mongorestore cannot recognize, or it will error out.
mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=12 --bypassDocumentValidation --nsInclude="likingtest.*" /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914 > 20230914.10.2.2.2-3.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914.10.2.2.2-3.log
---
2023-09-14T16:10:19.245+0800    preparing collections to restore from
...
2023-09-14T18:18:18.996+0800    [########################]  likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-14T18:18:19.014+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)
Observations:
1. Raising concurrency from 8 to 12 brought no further gain; the conclusion is that 6-8 workers are sufficient, much like Oracle, where setting import parallelism to 6 is essentially the best practice.
2. This restore read the backup from NFS on an 8-core VM. At 12-way concurrency, CPU usage was about 60%, network receive throughput around 300MB/s, and local disk write throughput roughly 30-500MB/s, so network bandwidth was not the bottleneck. Predictably, with a higher host spec, especially disks with better IO, restore time would drop further.
3. On index restore: mongorestore loads the data first and creates the indexes last, and index creation for the larger collections still takes a long time, as the db.currentOp() output below shows:
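(A hedged sketch of the query used to watch the builds, assuming mongosh connected to the cluster; the exact filter is an assumption:)
db.currentOp({ "command.createIndexes": { $exists: true } })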
      currentOpTime: '2023-09-14T20:23:59.435+08:00',
...
      command: {
        createIndexes: 'oprceDataObj',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprceDataObj'
          },
          {
            key: { flowNo: 1 },
            name: 'flowNo_1',
            ns: 'likingtest.oprceDataObj'
          },
          {
            key: {
              'oprceInfo.oprceInstID': 1,
              'activityInfo.activityInstID': 1,
              'workitemInfo.workItemID': 1
            },
            name: 'oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1',
            ns: 'likingtest.oprceDataObj'
          }
        ],
.....
      currentOpTime: '2023-09-14T20:23:59.489+08:00',
...
      command: {
        createIndexes: 'oprcesDataObjInit',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprcesDataObjInit'
          },
          {
            key: {
              'oprceInfo.oprceInstID': 1,
              'activityInfo.activityInstID': 1,
              'workitemInfo.workItemID': 1
            },
            name: 'oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1',
            ns: 'likingtest.oprcesDataObjInit'
          }
        ],
...... Checking again the next day, index creation was still not finished:
      currentOpTime: '2023-09-15T09:16:16.460+08:00',
      effectiveUsers: [ { user: 'admin', db: 'admin' } ],
      runBy: [ { user: '__system', db: 'local' } ],
      threaded: true,
      opid: 'shard1:11312917',
      lsid: {
        id: new UUID("e78379ff-9664-46b1-9e87-2bdd4abc5c5f"),
        uid: Binary.createFromBase64("O0CMtIVItQN4IsEOsJdrPL8s7jv5xwh5a/A5Qfvs2A8=", 0)
      },
      secs_running: Long("53877"),
      microsecs_running: Long("53877330742"),
      op: 'command',
      ns: 'likingtest.oprcesDataObjInit',
      redacted: false,
      command: {
        createIndexes: 'oprcesDataObjInit',
...... A full 24h in, index creation was still not finished:
      currentOpTime: '2023-09-15T18:55:16.877+08:00',
      effectiveUsers: [ { user: 'admin', db: 'admin' } ],
      runBy: [ { user: '__system', db: 'local' } ],
      threaded: true,
      opid: 'shard1:11312917',
      lsid: {
        id: new UUID("e78379ff-9664-46b1-9e87-2bdd4abc5c5f"),
        uid: Binary.createFromBase64("O0CMtIVItQN4IsEOsJdrPL8s7jv5xwh5a/A5Qfvs2A8=", 0)
      },
      secs_running: Long("88617"),
      microsecs_running: Long("88617747875"),
      op: 'command',
      ns: 'likingtest.oprcesDataObjInit',
      redacted: false,
      command: {
        createIndexes: 'oprcesDataObjInit',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprcesDataObjInit'
          },
From the above, mongorestore's data-import speed is basically controllable and acceptable, at least for a 1.2T collection, but the final index creation is far too slow, and no good solution was found within the restore itself: the indexes need to be created concurrently afterwards, and it must be verified that they actually take effect; in this run the index creation ultimately never took effect.
■ 2023-09-15T19:02 Test 4: import with 10 concurrent workers, skipping index restore
mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=10 --bypassDocumentValidation --nsInclude="likingtest.*" --nsFrom="likingtest.*" --nsTo="likingtest.*" --noIndexRestore /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914 > 20230914.10.2.2.2-4.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914.10.2.2.2-4.log
2023-09-15T19:02:59.747+0800    preparing collections to restore from
...
2023-09-15T21:24:36.145+0800    [########################]  likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-15T21:24:36.161+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)
2023-09-15T21:24:36.165+0800    97367732 document(s) restored successfully. 0 document(s) failed to restore.
As shown above, elapsed time: 2h22m.
Conclusions
1. During restore, enable multi-worker import for large collections: --numInsertionWorkersPerCollection=8
2. Skip index restore: --noIndexRestore
3. After the data is restored, create the indexes in the background (a sketch follows); see also this site's search for "MongoDB 重建索引" (rebuilding MongoDB indexes).
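A hedged sketch of rebuilding the oprceDataObj indexes after the load, reusing the definitions from the dump metadata logged above (run in mongosh; one createIndexes command builds all three indexes during a single collection scan, and since MongoDB 4.2 all builds use the hybrid method, so no background flag is needed):
db.getSiblingDB("likingtest").runCommand({
  createIndexes: "oprceDataObj",
  indexes: [
    { key: { flowId: 1, "activityConfiguration.activityNameEn": 1 },
      name: "flowId_1_activityConfiguration.activityNameEn_1" },
    { key: { flowNo: 1 }, name: "flowNo_1" },
    { key: { "oprceInfo.oprceInstID": 1, "activityInfo.activityInstID": 1, "workitemInfo.workItemID": 1 },
      name: "oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1" }
  ]
})
Progress can then be watched with the db.currentOp() filter shown earlier, independent of the restore session.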