MongoDB sharded cluster deployment
Test environment: 192.168.56.101-103
Preparation:
openssl rand -base64 756 > /home/software/mongodb/mongodbkey
chmod 600 /home/software/mongodb/mongodbkey
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
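A quick check that both changes took effect (simple verification commands, not part of the original notes):
getenforce                        # expect Permissive now (Disabled after a reboot)
systemctl is-active firewalld     # expect inactive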
tar -zxvf mongodb-linux-x86_64-rhel70-4.0.7.tgz
mv mongodb-linux-x86_64-rhel70-4.0.7 mongodb
cd /home/software/mongodb
mkdir conf mongos config-server shard1 shard2 shard3
mkdir mongos/{log,data}
mkdir shard1/{log,data}
mkdir shard2/{log,data}
mkdir shard3/{log,data}
mkdir config-server/{log,data}
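Equivalently, the per-role directories can be created in one loop (same layout as the mkdir commands above):
cd /home/software/mongodb
mkdir -p conf
for d in mongos config-server shard1 shard2 shard3; do mkdir -p $d/{log,data}; done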
Set the environment variables as follows:
cat /etc/profile.d/mongodb.sh
# contents
export MONGODB_HOME=/home/software/mongodb
export PATH=$MONGODB_HOME/bin:$PATH
# apply immediately
source /etc/profile.d/mongodb.sh
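Verify that the binaries are now on the PATH:
which mongod && mongod --version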
MongoDB startup order: start the config servers first, then the shard servers, and finally mongos.
Safely shutting down mongod, method 1:
kill -2 `ps -ef | grep "[m]ongod" | awk 'NR==1 {print $2}'`   # sends SIGINT to the first mongod listed; with several instances per host, method 2 is more precise
Safely shutting down mongod, method 2:
/home/software/mongodb/bin/mongo --host 127.0.0.1 --port 30000   # connect to the port of the instance you want to shut down
> use admin              // switch to the admin database
> db.shutdownServer()    // safely shut down this mongod
A config server is just an ordinary mongod process, so it only needs its own instance. Run either 1 or 3 config servers (here they form the replica set named configs, since MongoDB 4.0 requires config servers to be a replica set); running 2 will cause an error.
Host IPs and services:
192.168.56.101: configsvr, shard1, shard2, shard3, mongos
192.168.56.102: configsvr, shard1, shard2, shard3, mongos
192.168.56.103: configsvr, shard1, shard2, shard3, mongos
Layout: each host runs 3 shard instances; shard1 through shard3 listen on ports 27001, 27002, 27003.
Each host runs a config server instance on port 21000.
Each host runs a mongos instance on port 27017.
The directory structure is identical on all three servers; the MongoDB configuration files live in /home/software/mongodb/conf.
The configuration files are essentially the same on all three hosts, but double-check the IPs and ports when initializing the replica sets.
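Since the configuration files written below and the key file must be identical on all three hosts, one option is to create them on 192.168.56.101 and copy them to the other two (a sketch assuming root SSH access between the hosts):
for h in 192.168.56.102 192.168.56.103; do
  scp /home/software/mongodb/conf/*.conf root@$h:/home/software/mongodb/conf/
  scp -p /home/software/mongodb/mongodbkey root@$h:/home/software/mongodb/   # -p preserves the 600 permission
done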
------------- config-server.conf -------------
systemLog:
  destination: file
  logAppend: true
  path: /home/software/mongodb/config-server/log/configsrv.log
storage:
  dbPath: /home/software/mongodb/config-server/data
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /home/software/mongodb/config-server/log/configsrv.pid  # location of pidfile
net:
  port: 21000
  #bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    ##pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /home/software/mongodb/mongodbkey
  authorization: enabled
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr
Start on all 3 hosts: mongod -f /home/software/mongodb/conf/config-server.conf
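A quick sanity check on each host after starting (plain shell commands, not part of the original notes):
ps -ef | grep "[m]ongod"      # the config server process should be listed
ss -lntp | grep 21000         # and listening on port 21000
tail -n 5 /home/software/mongodb/config-server/log/configsrv.log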
-------------- Initialize the configsvr replica set --------------
mongo --port 21000
rs.initiate(
  {
    _id: "configs",
    version: 1,
    protocolVersion: 1,
    writeConcernMajorityJournalDefault: true,
    configsvr: true,
    members: [
      {
        _id: 0,
        host: "192.168.56.101:21000",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 66,
        tags: {
          BigBoss: "YES"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 1,
        host: "192.168.56.102:21000",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 55,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 2,
        host: "192.168.56.103:21000",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 33,
        tags: {
          BigBoss: "YES"
        },
        slaveDelay: 0,
        votes: 1
      }
    ],
    settings: {
      chainingAllowed: true
    }
  }
)
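rs.initiate() returning { "ok" : 1 } does not mean the election has finished; give it a few seconds and then check member states with the standard shell helpers (the same check applies after each shard replica set is initiated below):
rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); })
// expect 192.168.56.101:21000 as PRIMARY (it has the highest priority) and the other two as SECONDARY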
---------- shard1.conf ------
systemLog:
  destination: file
  logAppend: true
  path: /home/software/mongodb/shard1/log/shard1.log
storage:
  dbPath: /home/software/mongodb/shard1/data
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /home/software/mongodb/shard1/log/shard1.pid  # location of pidfile
  ##timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27001
  #bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    ##pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /home/software/mongodb/mongodbkey
  authorization: enabled
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
Start on all 3 hosts: mongod -f /home/software/mongodb/conf/shard1.conf
mongo --port 27001
Initialize the shard1 replica set:
rs.initiate(
  {
    _id: "shard1",
    version: 1,
    protocolVersion: 1,
    writeConcernMajorityJournalDefault: true,
    members: [
      {
        _id: 0,
        host: "192.168.56.101:27001",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 66,
        tags: {
          BigBoss: "YES"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 1,
        host: "192.168.56.102:27001",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 55,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 2,
        host: "192.168.56.103:27001",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 33,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      }
    ],
    settings: {
      chainingAllowed: true
    }
  }
)
------------------------ shard2.conf ---------------
systemLog:
  destination: file
  logAppend: true
  path: /home/software/mongodb/shard2/log/shard2.log
storage:
  dbPath: /home/software/mongodb/shard2/data
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /home/software/mongodb/shard2/log/shard2.pid  # location of pidfile
  ##timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27002
  #bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    ##pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /home/software/mongodb/mongodbkey
  authorization: enabled
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr
Start on all 3 hosts: mongod -f /home/software/mongodb/conf/shard2.conf
mongo --port 27002
Initialize the shard2 replica set:
rs.initiate(
  {
    _id: "shard2",
    version: 1,
    protocolVersion: 1,
    writeConcernMajorityJournalDefault: true,
    members: [
      {
        _id: 0,
        host: "192.168.56.101:27002",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 66,
        tags: {
          BigBoss: "YES"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 1,
        host: "192.168.56.102:27002",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 55,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 2,
        host: "192.168.56.103:27002",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 33,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      }
    ],
    settings: {
      chainingAllowed: true
    }
  }
)
------------------- shard3.conf ----------
systemLog:
  destination: file
  logAppend: true
  path: /home/software/mongodb/shard3/log/shard3.log
storage:
  dbPath: /home/software/mongodb/shard3/data
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /home/software/mongodb/shard3/log/shard3.pid  # location of pidfile
  ##timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27003
  #bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    ##pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /home/software/mongodb/mongodbkey
  authorization: enabled
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr
Start on all 3 hosts: mongod -f /home/software/mongodb/conf/shard3.conf
mongo --port 27003
Initialize the shard3 replica set:
rs.initiate(
  {
    _id: "shard3",
    version: 1,
    protocolVersion: 1,
    writeConcernMajorityJournalDefault: true,
    members: [
      {
        _id: 0,
        host: "192.168.56.101:27003",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 66,
        tags: {
          BigBoss: "YES"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 1,
        host: "192.168.56.102:27003",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 55,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      },
      {
        _id: 2,
        host: "192.168.56.103:27003",
        arbiterOnly: false,
        buildIndexes: true,
        hidden: false,
        priority: 33,
        tags: {
          BigBoss: "NO"
        },
        slaveDelay: 0,
        votes: 1
      }
    ],
    settings: {
      chainingAllowed: true
    }
  }
)
----------- Configure the router (mongos) ------
Create the mongos configuration file as follows (mongos listens on the default port 27017):
systemLog:
  destination: file
  logAppend: true
  path: /home/software/mongodb/mongos/log/mongos.log
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /home/software/mongodb/mongos/log/mongos.pid  # location of pidfile
  #timeZoneInfo: /usr/share/zoneinfo
net:
  bindIpAll: true
  maxIncomingConnections: 500
  unixDomainSocket:
    enabled: true
    #pathPrefix: /tmp
    filePermissions: 0700
security:
  keyFile: /home/software/mongodb/mongodbkey
  # authorization: enabled
#replication:
sharding:
  configDB: configs/192.168.56.101:21000,192.168.56.102:21000,192.168.56.103:21000
Start mongos and create an account and password for connecting:
# start
mongos -f /home/software/mongodb/conf/mongos.conf
# connect
mongo
# create the administrator account and password
use admin
db.createUser(
  {
    user: "root",
    pwd: "123456",
    roles: [ { role: "__system", db: "admin" } ]
  }
)
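Note: __system is an internal role; it works here but is not intended for regular administrator accounts. A more conventional alternative is the built-in root role (a sketch only; the rest of this walkthrough keeps the account created above):
db.createUser(
  {
    user: "admin",                               // hypothetical account name
    pwd: "123456",
    roles: [ { role: "root", db: "admin" } ]     // built-in superuser role
  }
)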
Reconnect to mongos with the new credentials:
mongo -uroot -p123456 --authenticationDatabase admin
# add the shard replica sets to the cluster
sh.addShard("shard1/192.168.56.101:27001,192.168.56.102:27001,192.168.56.103:27001");
sh.addShard("shard2/192.168.56.101:27002,192.168.56.102:27002,192.168.56.103:27002");
sh.addShard("shard3/192.168.56.101:27003,192.168.56.102:27003,192.168.56.103:27003");
# check cluster status
sh.status()
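sh.status() prints the full sharding metadata; for a compact view of just the registered shards, the standard listShards admin command can be used as well:
db.adminCommand({ listShards: 1 })   // expect shard1, shard2 and shard3 in the result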
#### To make the effect easier to observe, change the default chunk size; here it is set to 1 MB
# The default chunk size is 64 MB; the general form of the command is:
#use config
#db.settings.save( { _id:"chunksize", value: <sizeInMB> } )
use config
db.settings.save( { _id:"chunksize", value: 1 } )
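To confirm the change, read back the same document from config.settings (still in the config database from the use config above):
db.settings.find({ _id: "chunksize" })   // expect { "_id" : "chunksize", "value" : 1 }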
# enable sharding for the test database
# choose age as the shard key and shard the mycoll collection on it
sh.enableSharding("test")
sh.shardCollection("test.mycoll", {"age": 1})
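Note that age takes only 100 distinct values in the test data below, so this ranged key can only split into fairly coarse chunks; a hashed shard key is a common alternative for more even distribution. A sketch only (test.mycoll_hashed is a hypothetical collection, not part of this walkthrough):
sh.shardCollection("test.mycoll_hashed", { age: "hashed" })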
# test sharding by writing data into the database
use test
for (i = 1; i <= 10000; i++) db.mycoll.insert({age: (i%100), name: "bigboss_user"+i, address: i+", Some Road, Zhengzhou, Henan", country: "China", course: "course"+(i%12)})
# once the writes finish, check the sharding information
sh.status()
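For a per-shard breakdown of the collection just populated (documents, data size and chunks per shard), the shell helper getShardDistribution() is useful; run it against the test database:
db.mycoll.getShardDistribution()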