This document is based on MongoDB version 3.6.2.
Download address:
The latest version is recommended.
Installation files
Cluster IP and port plan:

| Service | 192.168.141.201 | 192.168.141.202 | 192.168.141.203 |
| --- | --- | --- | --- |
| Router | Mongos (17017) | Mongos (17017) | |
| Config | Config server1 (27017) | Config server2 (27017) | Config server3 (27017) |
| Shard | Shard1 primary (37017) | Shard2 primary (47017) | Shard3 primary (57017) |
| | Shard2 secondary (47017) | Shard1 secondary (37017) | Shard1 secondary (37017) |
| | Shard3 secondary (57017) | Shard3 secondary (57017) | Shard2 secondary (47017) |
Run the following commands on every machine that will host MongoDB (this creates the three service directories under mongo):

```shell
mkdir -p /home/mongo/{config,router,shard}
mkdir -p /home/mongo/config/{data,logs}
mkdir -p /home/mongo/router/logs
mkdir -p /home/mongo/shard/{data,logs}
mkdir -p /home/mongo/shard/data/{shard1,shard2,shard3}
```
You can also put the commands into a script file, mongodirs.sh, and copy it to every mongo machine to run (note that this version creates the tree relative to the current directory):

```shell
#!/usr/bin/bash
mkdir -p mongo
mkdir -p mongo/{config,router,shard}
mkdir -p mongo/config/{data,logs}
mkdir -p mongo/router/logs
mkdir -p mongo/shard/{data,logs}
mkdir -p mongo/shard/data/{shard1,shard2,shard3}
```
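The same steps can be wrapped in a slightly more defensive sketch with a configurable base directory. `MONGO_BASE` is a variable introduced here for illustration only: it defaults to a throwaway temp directory for a safe dry run; set `MONGO_BASE=/home/mongo` before running it on the real machines.

```shell
#!/usr/bin/env bash
# Idempotent version of the directory-creation steps above.
# MONGO_BASE is an assumed variable name, not from the original document;
# it defaults to a temp directory so a dry run never touches /home.
set -eu

MONGO_BASE="${MONGO_BASE:-$(mktemp -d)/mongo}"

mkdir -p "$MONGO_BASE/config/data" "$MONGO_BASE/config/logs"
mkdir -p "$MONGO_BASE/router/logs"
mkdir -p "$MONGO_BASE/shard/data" "$MONGO_BASE/shard/logs"
mkdir -p "$MONGO_BASE/shard/data/shard1" "$MONGO_BASE/shard/data/shard2" "$MONGO_BASE/shard/data/shard3"

echo "directory tree created under $MONGO_BASE"
```

Because mkdir -p never fails on an existing directory, the script can be re-run safely on machines where some of the tree already exists.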
Then run the following from the mongo root directory.
Config service:
Note: start at least three config-service nodes.

vi /home/mongo/config/config.config
```
dbpath=/home/mongo/config/data
logpath=/home/mongo/config/logs/config.log
bind_ip=0.0.0.0
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configRS/192.168.141.201:27017,192.168.141.202:27017,192.168.141.203:27017
```
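Since the same config.config must exist on all three machines, a heredoc sketch that writes the file shown above may help. `OUT` is an illustrative variable, not part of the original steps; it defaults to a temp path for a dry run (the real path is /home/mongo/config/config.config).

```shell
#!/usr/bin/env bash
# Write the config-server configuration file shown above.
set -eu

# OUT is an assumed variable for illustration; it defaults to a temp path
# so a dry run is safe. Use /home/mongo/config/config.config in production.
OUT="${OUT:-$(mktemp -d)/config.config}"

cat > "$OUT" <<'EOF'
dbpath=/home/mongo/config/data
logpath=/home/mongo/config/logs/config.log
bind_ip=0.0.0.0
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configRS/192.168.141.201:27017,192.168.141.202:27017,192.168.141.203:27017
EOF

echo "wrote $OUT"
```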
Start the config service:
mongod --config /home/mongo/config/config.config
Possible errors:

```
about to fork child process, waiting until server is ready for connections.
forked process: 10632
ERROR: child process failed, exited with error number 14
To see additional information in this output, start without the "--fork" option.
```

Error number 14: insufficient permissions; starting with sudo fixes it.
error number 18:
Error number 100: the port is already in use.
Error number 48: the port is already in use.
```
[****@centosvm config]$ sudo mongod --config config.config
[sudo] password for ****:
about to fork child process, waiting until server is ready for connections.
forked process: 10667
child process started successfully, parent exiting
```
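When a forked start fails with error number 48 or 100, the mongod log usually names the cause. Here is a small self-contained sketch of checking a log for the usual suspects; the demo log path and its contents are fabricated for illustration only (point `LOG` at /home/mongo/config/logs/config.log on a real machine).

```shell
#!/usr/bin/env bash
# Grep a mongod log for common causes of "child process failed" at startup.
set -eu

# LOG is an assumed variable; it defaults to a demo file written below so
# the sketch can run anywhere. Use the real config.log path in practice.
LOG="${LOG:-$(mktemp -d)/demo-config.log}"
if [ ! -f "$LOG" ]; then
  # Fabricated demo line imitating the message mongod writes when its port
  # is taken; remove this block when checking a real log.
  echo 'E STORAGE  [initandlisten] Failed to set up listener: SocketException: Address already in use' > "$LOG"
fi

if grep -q 'Address already in use' "$LOG"; then
  RESULT='port already in use (error 48/100): free or change the port'
elif grep -qi 'permission denied' "$LOG"; then
  RESULT='permission problem (error 14): check dbpath/logpath ownership or use sudo'
else
  RESULT='no known pattern found; read the log directly'
fi
echo "$RESULT"
```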
The output above is what a correct start looks like.
Connect to the running config service with the mongo shell and initialize it:

mongo --port 27017
```
[****@centosvm config]$ mongo -port 27017
MongoDB shell version v3.6.2
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.2
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:55:45.854+0800 E -        [main] Error loading history file: FileOpenFailed: Unable to fopen() file /home/****/.dbshell: No such file or directory
MongoDB Enterprise >
```
Following the cluster design, configure the member IP/port list and mark which node is the primary (a higher priority) and which are secondaries; run the following in the MongoDB shell:
rs.initiate({_id:"configRS",configsvr:true,members:[{_id:1,host:"192.168.141.201:27017",priority:2},{_id:2,host:"192.168.141.202:27017"},{_id:3,host:"192.168.141.203:27017"}]})
Possible error responses:

```
MongoDB Enterprise > rs.initiate({_id:”configRS”,configsvr:true,members:[{_id:1,host:”192.168.126.132:27017”,priority:2},{_id:2,host:”192.168.126.131:27017”},{_id:3,host:”192.168.126.130:27017”}]})
2018-01-26T15:01:17.200+0800 E QUERY    [thread1] SyntaxError: illegal character @(shell):1:17
```

Likely cause: the double quotes are full-width characters.

```
MongoDB Enterprise > rs.initiate({_id:"configRS",configsvr:true,members:[{_id:1,host:"192.168.126.132:27017",priority:2},{_id:2,host:"192.168.126.131:27017"},{_id:3,host:"192.168.126.130:27017"}]})
{
        "ok" : 0,
        "errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: 192.168.126.131:27017 failed with No route to host, 192.168.126.130:27017 failed with No route to host",
        "code" : 74,
        "codeName" : "NodeNotFound",
        "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("000000000000000000000000")
        }
}
```

Likely cause: the nodes cannot reach each other. Check whether bind_ip is configured in config.config:

```
dbpath=/home/****/mongo/config/data
logpath=/home/****/mongo/config/logs/config.log
bind_ip=0.0.0.0
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configRS/192.168.126.130:27017,192.168.126.131:27017,192.168.126.132:27017
```
A correct response looks like:

```
MongoDB Enterprise > rs.initiate({_id:"configRS",configsvr:true,members:[{_id:1,host:"192.168.126.132:27017",priority:2},{_id:2,host:"192.168.126.131:27017"},{_id:3,host:"192.168.126.130:27017"}]})
{
        "ok" : 1,
        "operationTime" : Timestamp(1516954993, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(1516954993, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "$clusterTime" : {
                "clusterTime" : Timestamp(1516954993, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
```
Then check on the other two machines whether the config replica set members are connected:

```
MongoDB Enterprise > rs.status()
{
        "set" : "configRS",
        "date" : ISODate("2018-01-26T08:30:51.551Z"),
        "myState" : 2,
        "term" : NumberLong(1),
        "syncingTo" : "192.168.126.130:27017",
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                "readConcernMajorityOpTime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                "appliedOpTime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                "durableOpTime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) }
        },
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.126.132:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 36,
                        "optime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                        "optimeDurable" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-01-26T08:30:44Z"),
                        "optimeDurableDate" : ISODate("2018-01-26T08:30:44Z"),
                        "lastHeartbeat" : ISODate("2018-01-26T08:30:49.985Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-26T08:30:50.967Z"),
                        "pingMs" : NumberLong(0),
                        "electionTime" : Timestamp(1516955424, 1),
                        "electionDate" : ISODate("2018-01-26T08:30:24Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.126.131:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 545,
                        "optime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-01-26T08:30:44Z"),
                        "syncingTo" : "192.168.126.130:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 3,
                        "name" : "192.168.126.130:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 36,
                        "optime" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                        "optimeDurable" : { "ts" : Timestamp(1516955444, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-01-26T08:30:44Z"),
                        "optimeDurableDate" : ISODate("2018-01-26T08:30:44Z"),
                        "lastHeartbeat" : ISODate("2018-01-26T08:30:49.991Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-26T08:30:50.986Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "192.168.126.132:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1516955444, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "$clusterTime" : {
                "clusterTime" : Timestamp(1516955444, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
```
Mongos service:
Edit the mongos configuration file mongo/router/router.config:

```
configdb=configRS/192.168.126.132:27017,192.168.126.131:27017,192.168.126.130:27017
bind_ip=0.0.0.0
port=17017
fork=true
logpath=/home/****/mongo/router/logs/mongos.log
```
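Since the same router.config is needed on every mongos machine, a parameterized heredoc sketch can generate it. `OUT` and `CONFIG_HOSTS` are illustrative variables, not part of the original steps; the host list defaults to the config-server addresses used in this walkthrough, and the log path assumes the /home/mongo layout from the design section.

```shell
#!/usr/bin/env bash
# Write the mongos (router) configuration file shown above.
set -eu

# OUT and CONFIG_HOSTS are assumed variables for illustration. OUT defaults
# to a temp path for a dry run (real path: /home/mongo/router/router.config);
# adjust CONFIG_HOSTS to your own config-server addresses.
OUT="${OUT:-$(mktemp -d)/router.config}"
CONFIG_HOSTS="${CONFIG_HOSTS:-192.168.126.132:27017,192.168.126.131:27017,192.168.126.130:27017}"

cat > "$OUT" <<EOF
configdb=configRS/$CONFIG_HOSTS
bind_ip=0.0.0.0
port=17017
fork=true
logpath=/home/mongo/router/logs/mongos.log
EOF

echo "wrote $OUT"
```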
Start mongos; it may take 5 to 10 seconds:

```
[****@centosvm router]$ sudo mongos --config router.config
about to fork child process, waiting until server is ready for connections.
forked process: 14553
child process started successfully, parent exiting
```
Run at least two mongos instances; a mongos IP and port pair is the MongoDB service address that clients are configured with.
Copy router.config to the same directory on the other mongo machine and start it with the same command.
If you use a database through mongos before any shard has been added, you may see an error such as:

caused by :: ShardNotFound: Database cloudconf not found due to No shards found
Shard service:
On each server we have already created the paths mongo/shard/data/{shard1,shard2,shard3}.
Create the three shard configuration files in the shard directory:

vi /home/mongo/shard/shard1.config
```
dbpath=/home/mongo/shard/data/shard1
logpath=/home/mongo/shard/logs/shard1.log
port=37017
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard1RS/192.168.141.201:37017,192.168.141.202:37017,192.168.141.203:37017
```
vi /home/mongo/shard/shard2.config

```
dbpath=/home/mongo/shard/data/shard2
logpath=/home/mongo/shard/logs/shard2.log
port=47017
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard2RS/192.168.141.201:47017,192.168.141.202:47017,192.168.141.203:47017
```
Configure shard3.config the same way, using port 57017.
Note: change the IPs, port, and replica-set name accordingly; for shard3 the last line becomes:
replSet=shard3RS/192.168.141.201:57017,192.168.141.202:57017,192.168.141.203:57017
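Since the three shard files differ only in the shard number and port, they can be generated in one loop. `DEST` is an illustrative variable, not part of the original steps; it defaults to a temp directory for a dry run (the real location is /home/mongo/shard).

```shell
#!/usr/bin/env bash
# Generate shard1.config, shard2.config and shard3.config from one template.
set -eu

# DEST is an assumed variable; it defaults to a temp directory for a dry
# run. On the real machines use DEST=/home/mongo/shard.
DEST="${DEST:-$(mktemp -d)}"

for n in 1 2 3; do
  # shard1 -> 37017, shard2 -> 47017, shard3 -> 57017
  port=$(( (n + 2) * 10000 + 7017 ))
  cat > "$DEST/shard$n.config" <<EOF
dbpath=/home/mongo/shard/data/shard$n
logpath=/home/mongo/shard/logs/shard$n.log
port=$port
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard${n}RS/192.168.141.201:$port,192.168.141.202:$port,192.168.141.203:$port
EOF
done

echo "wrote shard configs under $DEST"
```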
Start the three shard services: start the primaries we designed first, then the remaining shard nodes:

```
mongod -f /home/mongo/shard/shard1.config
mongod -f /home/mongo/shard/shard2.config
mongod -f /home/mongo/shard/shard3.config
```
Initialize the shard services: connect to any one of the MongoDB servers and configure each shard replica set:

```
rs.initiate({_id:"shard1RS",members:[{_id:1,host:"192.168.141.201:37017",priority:2},{_id:2,host:"192.168.141.202:37017"},{_id:3,host:"192.168.141.203:37017"}]})

rs.initiate({_id:"shard2RS",members:[{_id:1,host:"192.168.141.202:47017",priority:2},{_id:2,host:"192.168.141.201:47017"},{_id:3,host:"192.168.141.203:47017"}]})
```

Do the same for the shard on port 57017 (shard3RS).
Configure sharding: add each shard, via its primary, to the cluster (run on a mongos):

```
>use admin
>db.runCommand({"addShard":"shard1RS/192.168.141.201:37017","maxSize":1024})
>db.runCommand({"addShard":"shard2RS/192.168.141.202:47017","maxSize":1024})
>db.runCommand({"addShard":"shard3RS/192.168.141.203:57017","maxSize":1024})
```
Usage: sharding must be enabled on a database, and a shard-key algorithm must be specified for each collection in it:

```
>use admin
-- enable sharding for the database hdctest
>db.runCommand({"enablesharding":"hdctest"})
-- shard the person collection in hdctest on the _id field using hashed sharding
>db.runCommand({"shardcollection":"hdctest.person","key":{_id:'hashed'}})
```
Note: when you log in to a secondary and query data, you may get:

```
not master and slaveOk=false, code=13435
```

Run:

```
db.getMongo().setSlaveOk()
```