I. The MongoDB Replica Set

Replica set name    Role        IP address      Port     Priority
CCTV-test           Primary     192.168.1.21    27017    10
CCTV-test           Secondary   192.168.1.21    27018    9
CCTV-test           Secondary   192.168.1.21    27019    8

II. Symptoms

1. Configuration file

The replica set member's configuration file caps incoming connections:

[root@mm2020 ~]# cat /data/mongodb/cctv/27017/conf/mongodb.conf | grep max
maxIncomingConnections: 8
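
For reference, `maxIncomingConnections` lives under the `net` section of the YAML configuration. A hypothetical excerpt consistent with the grep above (only the limit itself comes from the output; the surrounding keys are assumptions):

```yaml
net:
  port: 27017
  bindIp: 192.168.1.21
  # Cap on concurrent incoming connections; the default is 65536.
  # 8 is far too low for a replica-set member, which needs slots for
  # heartbeats and oplog sync on top of client sessions.
  maxIncomingConnections: 8
```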

2. The error

Connecting to the primary from the operating system shell fails:

[root@mm2020 ~]# mongo 192.168.1.21:27017/admin -u 'admin' -p 'admin'
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.1.21:27017/admin
2020-12-14T20:37:59.851+0800 I NETWORK [thread1] Socket recv() Connection reset by peer 192.168.1.21:27017
2020-12-14T20:37:59.851+0800 I NETWORK [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [192.168.1.21:27017]
2020-12-14T20:37:59.851+0800 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host '192.168.1.21:27017' :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

The mongod log at the time shows:

2020-12-14T20:28:22.547+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60634 #60 (7 connections now open)
2020-12-14T20:28:22.547+0800 I NETWORK [conn60] received client metadata from 192.168.1.21:60634 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:22.560+0800 I ACCESS [conn60] Successfully authenticated as principal admin on admin
2020-12-14T20:28:22.564+0800 I NETWORK [conn60] end connection 192.168.1.21:60634 (6 connections now open)
2020-12-14T20:28:24.625+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60636 #61 (7 connections now open)
2020-12-14T20:28:24.625+0800 I NETWORK [conn61] received client metadata from 192.168.1.21:60636 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:24.637+0800 I ACCESS [conn61] Successfully authenticated as principal admin on admin
2020-12-14T20:28:24.641+0800 I NETWORK [conn61] end connection 192.168.1.21:60636 (6 connections now open)
2020-12-14T20:28:26.659+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60638 #62 (7 connections now open)
2020-12-14T20:28:26.659+0800 I NETWORK [conn62] received client metadata from 192.168.1.21:60638 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.660+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60644 #63 (8 connections now open)
2020-12-14T20:28:26.660+0800 I NETWORK [conn63] received client metadata from 192.168.1.21:60644 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.661+0800 I ACCESS [conn63] Successfully authenticated as principal __system on local
2020-12-14T20:28:26.662+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:26.663+0800 I NETWORK [conn63] end connection 192.168.1.21:60644 (7 connections now open)
2020-12-14T20:28:26.694+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60648 #65 (8 connections now open)
2020-12-14T20:28:26.694+0800 I NETWORK [conn65] received client metadata from 192.168.1.21:60648 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.706+0800 I ACCESS [conn65] Successfully authenticated as principal admin on admin
2020-12-14T20:28:26.711+0800 I NETWORK [conn65] end connection 192.168.1.21:60648 (7 connections now open)
2020-12-14T20:28:28.754+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60650 #66 (8 connections now open)
2020-12-14T20:28:28.754+0800 I NETWORK [conn66] received client metadata from 192.168.1.21:60650 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:28.766+0800 I ACCESS [conn66] Successfully authenticated as principal admin on admin
2020-12-14T20:28:28.771+0800 I NETWORK [conn66] end connection 192.168.1.21:60650 (7 connections now open)
2020-12-14T20:28:30.416+0800 I NETWORK [listener] connection accepted from 192.168.1.21:60656 #67 (8 connections now open)
2020-12-14T20:28:30.416+0800 I NETWORK [conn67] received client metadata from 192.168.1.21:60656 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:30.416+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:30.891+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:32.961+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:35.028+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:37.084+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:39.134+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:41.296+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:43.376+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:45.442+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:47.501+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:49.549+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:51.613+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:52.501+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:53.664+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:53.749+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:55.087+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:55.710+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:57.777+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:28:59.840+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:01.886+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:03.949+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:06.021+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:08.069+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:10.138+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T20:29:12.202+0800 I NETWORK [listener] connection refused because too many open connections: 9


The dominant error in the log:

2020-12-14T20:30:35.086+0800 I NETWORK  [listener] connection refused because too many open connections: 9
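
The refusals arrive in a steady stream once the limit is hit, so bucketing them by minute in the log is a quick way to gauge severity. A minimal sketch (the sample lines come from the log above; the script itself is an illustration, not part of the incident):

```python
from collections import Counter

def refusals_per_minute(log_lines):
    """Count 'too many open connections' refusals, bucketed by HH:MM."""
    buckets = Counter()
    for line in log_lines:
        if "connection refused because too many open connections" in line:
            minute = line.split("T", 1)[1][:5]  # e.g. "20:28"
            buckets[minute] += 1
    return buckets

sample = [
    "2020-12-14T20:28:51.613+0800 I NETWORK [listener] connection refused because too many open connections: 9",
    "2020-12-14T20:28:52.501+0800 I NETWORK [listener] connection refused because too many open connections: 9",
    "2020-12-14T20:29:01.886+0800 I NETWORK [listener] connection refused because too many open connections: 9",
]
counts = refusals_per_minute(sample)
print(counts)  # → Counter({'20:28': 2, '20:29': 1})
```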

3. Checks

rs.status(), run from a secondary, shows the replica set in good health:

CCTV-test:SECONDARY> rs.status()
{
"set" : "CCTV-test",
"date" : ISODate("2020-12-14T12:41:18.696Z"),
"myState" : 2,
"term" : NumberLong(1),
"syncingTo" : "192.168.1.21:27017",
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.1.21:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 958,
"optime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-14T12:41:10Z"),
"optimeDurableDate" : ISODate("2020-12-14T12:41:10Z"),
"lastHeartbeat" : ISODate("2020-12-14T12:41:18.437Z"),
"lastHeartbeatRecv" : ISODate("2020-12-14T12:41:17.558Z"),
"pingMs" : NumberLong(0),
"electionTime" : Timestamp(1607948728, 1),
"electionDate" : ISODate("2020-12-14T12:25:28Z"),
"configVersion" : 1
},
{
"_id" : 1,
"name" : "192.168.1.21:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1073,
"optime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-14T12:41:10Z"),
"syncingTo" : "192.168.1.21:27017",
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "192.168.1.21:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 958,
"optime" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1607949670, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-14T12:41:10Z"),
"optimeDurableDate" : ISODate("2020-12-14T12:41:10Z"),
"lastHeartbeat" : ISODate("2020-12-14T12:41:18.438Z"),
"lastHeartbeatRecv" : ISODate("2020-12-14T12:41:18.437Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.1.21:27017",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1607949670, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1607949670, 1),
"signature" : {
"hash" : BinData(0,"53rtWYGUhf34PYvIgzb/NPx+KmA="),
"keyId" : NumberLong("6906087208994734081")
}
}
}
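
A quick way to sanity-check a status document like the one above is to assert that every member reports `health: 1` and that exactly one member is PRIMARY. A minimal sketch in Python (the dict mirrors the fields shown above rather than a live `rs.status()` call):

```python
def replica_set_healthy(status: dict) -> bool:
    """True when every member is healthy and exactly one is PRIMARY."""
    members = status["members"]
    all_healthy = all(m["health"] == 1 for m in members)
    primaries = sum(1 for m in members if m["stateStr"] == "PRIMARY")
    return all_healthy and primaries == 1

# Roles and health values taken from the rs.status() output above.
status = {"members": [
    {"health": 1, "stateStr": "PRIMARY"},
    {"health": 1, "stateStr": "SECONDARY"},
    {"health": 1, "stateStr": "SECONDARY"},
]}
print(replica_set_healthy(status))  # → True
```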


Replication lag, checked from a secondary, is also normal:

CCTV-test:SECONDARY> rs.printReplicationInfo()
configured oplog size: 2048MB
log length start to end: 1032secs (0.29hrs)
oplog first event time: Mon Dec 14 2020 20:25:18 GMT+0800 (CST)
oplog last event time: Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
now: Mon Dec 14 2020 20:42:39 GMT+0800 (CST)
CCTV-test:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.1.21:27018
syncedTo: Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.1.21:27019
syncedTo: Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
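
The "secs behind the primary" figure is just the difference between the primary's newest oplog event time and the member's syncedTo time. A minimal sketch of that arithmetic, using the timestamps from the output above:

```python
from datetime import datetime

def seconds_behind(primary_last_optime: datetime, synced_to: datetime) -> int:
    """Replication lag in whole seconds, floored at zero."""
    return max(0, int((primary_last_optime - synced_to).total_seconds()))

# Both secondaries are synced to the primary's last oplog event
# (Mon Dec 14 2020 20:42:30), so the reported lag is 0 seconds.
primary_last = datetime(2020, 12, 14, 20, 42, 30)
print(seconds_behind(primary_last, datetime(2020, 12, 14, 20, 42, 30)))  # → 0
```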


III. Resolution

1. Restart the primary's mongod process

Simple and quick, and it releases every connection cleanly, but it interrupts writes on the primary.

2. Close a single TCP socket online

First, use ps to locate the mongod processes:

[root@mm2020 ~]# ps -ef | grep mongod
root 20446 1 0 20:23 ? 00:00:22 mongod -f /data/mongodb/cctv/27018/conf/mongodb.conf
root 20478 1 0 20:23 ? 00:00:22 mongod -f /data/mongodb/cctv/27019/conf/mongodb.conf
root 20551 1 0 20:24 ? 00:00:23 mongod -f /data/mongodb/cctv/27017/conf/mongodb.conf
root 29086 14219 0 21:09 pts/0 00:00:00 grep --color=auto mongod

Then use lsof to list the descriptors open on port 27017. The primary, PID 20551, holds one descriptor per incoming socket:

[root@mm2020 ~]# lsof -i:27017
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mongod 20446 root 39u IPv4 73906 0t0 TCP mm2020.us.oracle.com:60508->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20446 root 51u IPv4 88512 0t0 TCP mm2020.us.oracle.com:34432->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20446 root 52u IPv4 88516 0t0 TCP mm2020.us.oracle.com:34434->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20446 root 64u IPv4 77209 0t0 TCP mm2020.us.oracle.com:60638->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20478 root 35u IPv4 84076 0t0 TCP mm2020.us.oracle.com:33732->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20478 root 39u IPv4 73903 0t0 TCP mm2020.us.oracle.com:60506->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20478 root 52u IPv4 84080 0t0 TCP mm2020.us.oracle.com:33734->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20478 root 68u IPv4 77232 0t0 TCP mm2020.us.oracle.com:60656->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod 20551 root 11u IPv4 73870 0t0 TCP *:27017 (LISTEN)
mongod 20551 root 45u IPv4 73904 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60506 (ESTABLISHED)
mongod 20551 root 47u IPv4 73907 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60508 (ESTABLISHED)
mongod 20551 root 52u IPv4 84077 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:33732 (ESTABLISHED)
mongod 20551 root 53u IPv4 88513 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:34432 (ESTABLISHED)
mongod 20551 root 57u IPv4 88517 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:34434 (ESTABLISHED)
mongod 20551 root 58u IPv4 84081 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:33734 (ESTABLISHED)
mongod 20551 root 59u IPv4 77216 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60638 (ESTABLISHED)
mongod 20551 root 60u IPv4 77233 0t0 TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60656 (ESTABLISHED)

Attach gdb to the process

How it works:

Every connection on the primary is held by a single mongod process, PID 20551. Killing that PID would certainly drop the connections, but it would take the mongod down with them; the goal is to close just one socket while the process keeps running.

Super User has a very neat method for exactly this:

gdb -p PID        # attach gdb to the process context
call close($fd)   # close the offending socket descriptor
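
The trick can be sanity-checked in miniature before trying it on mongod: closing a socket by its raw file descriptor makes the peer see an orderly shutdown while the owning process keeps running, which is exactly what `call close($fd)` does from inside gdb. A self-contained sketch with plain sockets (not MongoDB):

```python
import os
import socket

# A connected socket pair stands in for one of mongod's client connections.
server, client = socket.socketpair()

fd = server.fileno()   # the number lsof reports in its FD column
os.close(fd)           # close by raw descriptor, as gdb's `call close($fd)` does

data = client.recv(1024)   # the peer reads EOF: connection gone, process alive
print(data)                # → b''

server.detach()   # the fd is already closed; avoid a double close on cleanup
client.close()
```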

[root@mm2020 ~]# gdb -p 20551
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-114.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Attaching to process 20551
Reading symbols from /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod...Missing separate debuginfo for /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod
Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/64/21b3d51ea326c407c1d5147dcee2f9775c87f0.debug
(no debugging symbols found)...done.
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libgcc_s.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgcc_s.so.1
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[New LWP 28829]
[New LWP 28399]
[New LWP 27375]
[New LWP 27374]
[New LWP 25167]
[New LWP 25166]
[New LWP 21179]
[New LWP 21161]
[New LWP 20767]
[New LWP 20646]
[New LWP 20645]
[New LWP 20644]
[New LWP 20643]
[New LWP 20642]
[New LWP 20641]
[New LWP 20640]
[New LWP 20639]
[New LWP 20638]
[New LWP 20637]
[New LWP 20636]
[New LWP 20635]
[New LWP 20634]
[New LWP 20633]
[New LWP 20632]
[New LWP 20631]
[New LWP 20630]
[New LWP 20629]
[New LWP 20628]
[New LWP 20627]
[New LWP 20626]
[New LWP 20625]
[New LWP 20624]
[New LWP 20623]
[New LWP 20622]
[New LWP 20621]
[New LWP 20620]
[New LWP 20619]
[New LWP 20618]
[New LWP 20617]
[New LWP 20616]
[New LWP 20615]
[New LWP 20614]
[New LWP 20613]
[New LWP 20612]
[New LWP 20611]
[New LWP 20610]
[New LWP 20609]
[New LWP 20608]
[New LWP 20607]
[New LWP 20606]
[New LWP 20605]
[New LWP 20604]
[New LWP 20598]
[New LWP 20580]
[New LWP 20579]
[New LWP 20578]
[New LWP 20577]
[New LWP 20576]
[New LWP 20575]
[New LWP 20574]
[New LWP 20572]
[New LWP 20571]
[New LWP 20570]
[New LWP 20569]
[New LWP 20568]
[New LWP 20567]
[New LWP 20566]
[New LWP 20565]
[New LWP 20564]
[New LWP 20563]
[New LWP 20562]
[New LWP 20561]
[New LWP 20560]
[New LWP 20559]
[New LWP 20558]
[New LWP 20553]
[New LWP 20552]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x00007f8cbea97965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7.x86_64 libgcc-4.8.5-36.el7.x86_64
(gdb) call close(47u)
[New Thread 0x7f8c9b072700 (LWP 29202)]
$1 = 0
(gdb) quit
A debugging session is active. Inferior 1 [process 20551] will be detached. Quit anyway? (y or n)
Please answer y or n.
A debugging session is active. Inferior 1 [process 20551] will be detached. Quit anyway? (y or n) y
Detaching from program: /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod, process 20551

After exiting gdb, mongod.log shows the fallout. The stalled commands finally complete (note the 126-second latencies), and because the closed descriptor carried replication traffic from 192.168.1.21:27018, the primary marks that member down, steps down when a new term begins, and rolls back:

2020-12-14T21:13:33.611+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I COMMAND [conn713] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 2, cfgver: 1 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1607951410, 1), t: 1 }, lastOpVisible: { ts: Timestamp(1607951410, 1), t: 1 }, configVersion: 1, replicaSetId: ObjectId('5fd759aead2915a66623f6b4'), primaryIndex: 0, syncSourceIndex: 0 }, $clusterTime: { clusterTime: Timestamp(1607951410, 1), signature: { hash: BinData(0, B4FDE38F4D10BAE21D9865AD22CADE3B2C40FA73), keyId: 6906087208994734081 } }, $db: "admin" } numYields:0 reslen:353 locks:{} protocol:op_msg 126302ms
2020-12-14T21:13:33.611+0800 I REPL [replexec-45] Member 192.168.1.21:27018 is now in state RS_DOWN
2020-12-14T21:13:33.612+0800 I NETWORK [conn1356] received client metadata from 192.168.1.21:35022 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.613+0800 I COMMAND [conn1063] command local.oplog.rs command: getMore { getMore: 30749310873, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1607951410, 1), signature: { hash: BinData(0, B4FDE38F4D10BAE21D9865AD22CADE3B2C40FA73), keyId: 6906087208994734081 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1607948718, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 1, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1607948730, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "local" } planSummary: COLLSCAN cursorid:30749310873 keysExamined:0 docsExamined:0 numYields:2 nreturned:0 reslen:576 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_msg 126304ms
2020-12-14T21:13:33.613+0800 I COMMAND [conn1062] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:750 locks:{} protocol:op_query 5738ms
2020-12-14T21:13:33.613+0800 I NETWORK [conn1062] end connection 192.168.1.21:34432 (7 connections now open)
2020-12-14T21:13:33.614+0800 I REPL [replexec-45] stepping down from primary, because a new term has begun: 2
2020-12-14T21:13:33.615+0800 I NETWORK [conn1063] end connection 192.168.1.21:34434 (6 connections now open)
2020-12-14T21:13:33.615+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35064 #1372 (7 connections now open)
2020-12-14T21:13:33.616+0800 I NETWORK [conn1372] received client metadata from 192.168.1.21:35064 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.617+0800 I ACCESS [conn1372] Successfully authenticated as principal __system on local
2020-12-14T21:13:33.618+0800 I REPL [replexec-47] Member 192.168.1.21:27018 is now in state PRIMARY
2020-12-14T21:13:33.619+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35068 #1373 (8 connections now open)
2020-12-14T21:13:33.619+0800 I REPL [replexec-45] transition to SECONDARY from PRIMARY
2020-12-14T21:13:33.619+0800 I NETWORK [replexec-45] Skip closing connection for connection # 1373
2020-12-14T21:13:33.619+0800 I NETWORK [replexec-45] Skip closing connection for connection # 1372
2020-12-14T21:13:33.619+0800 I NETWORK [replexec-45] Skip closing connection for connection # 1358
2020-12-14T21:13:33.619+0800 I NETWORK [replexec-45] Skip closing connection for connection # 1357
2020-12-14T21:13:33.619+0800 I NETWORK [replexec-45] Skip closing connection for connection # 713
2020-12-14T21:13:33.620+0800 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after logicalSessionRecordCache: 0, after network: 0, after opLatencies: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 5747, after storageEngine: 5747, after tcmalloc: 5747, after transactions: 5747, after wiredTiger: 5747, at end: 5747 }
2020-12-14T21:13:33.622+0800 I NETWORK [conn1356] Error sending response to client: SocketException: Broken pipe. Ending connection from 192.168.1.21:35022 (connection id: 1356)
2020-12-14T21:13:33.622+0800 I NETWORK [conn1356] end connection 192.168.1.21:35022 (7 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK [conn62] end connection 192.168.1.21:60638 (6 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK [conn67] end connection 192.168.1.21:60656 (5 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK [conn1358] end connection 192.168.1.21:35026 (4 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK [conn1357] received client metadata from 192.168.1.21:35024 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.623+0800 I NETWORK [conn1373] received client metadata from 192.168.1.21:35068 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.624+0800 I NETWORK [conn1357] end connection 192.168.1.21:35024 (3 connections now open)
2020-12-14T21:13:33.624+0800 I ACCESS [conn1373] Successfully authenticated as principal __system on local
2020-12-14T21:13:34.613+0800 I REPL [rsBackgroundSync] sync source candidate: 192.168.1.21:27019
2020-12-14T21:13:34.613+0800 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:34.620+0800 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.1.21:27019, took 7ms (1 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:34.624+0800 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:34.626+0800 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: Received an empty batch from sync source.
2020-12-14T21:13:34.626+0800 I REPL [rsBackgroundSync] Replication commit point: { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.626+0800 I REPL [rsBackgroundSync] Rollback using the 'rollbackViaRefetch' method because UUID support is feature compatible with featureCompatibilityVersion 3.6.
2020-12-14T21:13:34.627+0800 I REPL [rsBackgroundSync] transition to ROLLBACK from SECONDARY
2020-12-14T21:13:34.627+0800 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 1373
2020-12-14T21:13:34.627+0800 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 1372
2020-12-14T21:13:34.627+0800 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 713
2020-12-14T21:13:34.627+0800 I ROLLBACK [rsBackgroundSync] Starting rollback. Sync source: 192.168.1.21:27019
2020-12-14T21:13:34.630+0800 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.1.21:27019, took 6ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Finding the Common Point
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] our last optime: Timestamp(1607951613, 1)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] their last optime: Timestamp(1607951606, 1)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] diff in end of log times: 7 seconds
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Rollback common point is { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Starting refetching documents
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Finished refetching documents. Total size of documents refetched: 0
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Checking the RollbackID and updating the MinValid if necessary
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Setting minvalid to { ts: Timestamp(1607951606, 1), t: 2 }
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back createIndexes commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Dropping collections to roll back create operations
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back renameCollection commands and collection drop commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back collections pending being dropped: Removing them from the list of drop-pending collections in the DropPendingCollectionReaper.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back dropIndexes commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Deleting and updating documents to roll back insert, update and remove operations
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rollback deleted 0 documents and updated 0 documents.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Truncating the oplog at { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.633+0800 I REPL [rsBackgroundSync] Incremented the rollback ID to 2
2020-12-14T21:13:34.633+0800 I ROLLBACK [rsBackgroundSync] Rollback finished. The final minValid is: { ts: Timestamp(1607951606, 1), t: 2 }
2020-12-14T21:13:34.633+0800 I REPL [rsBackgroundSync] transition to RECOVERING from ROLLBACK
2020-12-14T21:13:34.633+0800 I REPL [rsBackgroundSync] sync source candidate: 192.168.1.21:27019
2020-12-14T21:13:34.635+0800 I STORAGE [rsSync] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: a4b4f292-e51d-4a3b-9a59-3526c3cb9548
2020-12-14T21:13:34.640+0800 I REPL [rsSync] transition to SECONDARY from RECOVERING
2020-12-14T21:13:34.641+0800 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to 192.168.1.21:27019: InvalidSyncSource: Sync source was cleared. Was 192.168.1.21:27019
2020-12-14T21:13:35.619+0800 I REPL [replexec-49] Scheduling priority takeover at 2020-12-14T21:13:46.246+0800
2020-12-14T21:13:35.671+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35076 #1374 (4 connections now open)
2020-12-14T21:13:35.672+0800 I NETWORK [conn1374] received client metadata from 192.168.1.21:35076 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:35.684+0800 I ACCESS [conn1374] Successfully authenticated as principal admin on admin
2020-12-14T21:13:35.690+0800 I NETWORK [conn1374] end connection 192.168.1.21:35076 (3 connections now open)
2020-12-14T21:13:36.841+0800 W REPL [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source 192.168.1.21:27019 (config version: 1; last applied optime: { ts: Timestamp(1607951616, 1), t: 2 }; sync source index: 1; primary index: 1) is no longer valid
2020-12-14T21:13:36.842+0800 I REPL [rsBackgroundSync] could not find member to sync from
2020-12-14T21:13:37.738+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35078 #1375 (4 connections now open)
2020-12-14T21:13:37.738+0800 I NETWORK [conn1375] received client metadata from 192.168.1.21:35078 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:37.751+0800 I ACCESS [conn1375] Successfully authenticated as principal admin on admin
2020-12-14T21:13:37.756+0800 I NETWORK [conn1375] end connection 192.168.1.21:35078 (3 connections now open)
2020-12-14T21:13:39.803+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35080 #1376 (4 connections now open)
2020-12-14T21:13:39.804+0800 I NETWORK [conn1376] received client metadata from 192.168.1.21:35080 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:39.816+0800 I ACCESS [conn1376] Successfully authenticated as principal admin on admin
2020-12-14T21:13:39.821+0800 I NETWORK [conn1376] end connection 192.168.1.21:35080 (3 connections now open)
2020-12-14T21:13:41.891+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35082 #1377 (4 connections now open)
2020-12-14T21:13:41.891+0800 I NETWORK [conn1377] received client metadata from 192.168.1.21:35082 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:41.903+0800 I ACCESS [conn1377] Successfully authenticated as principal admin on admin
2020-12-14T21:13:41.908+0800 I NETWORK [conn1377] end connection 192.168.1.21:35082 (3 connections now open)
2020-12-14T21:13:43.952+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35084 #1378 (4 connections now open)
2020-12-14T21:13:43.953+0800 I NETWORK [conn1378] received client metadata from 192.168.1.21:35084 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:43.966+0800 I ACCESS [conn1378] Successfully authenticated as principal admin on admin
2020-12-14T21:13:43.970+0800 I NETWORK [conn1378] end connection 192.168.1.21:35084 (3 connections now open)
2020-12-14T21:13:46.033+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35086 #1379 (4 connections now open)
2020-12-14T21:13:46.033+0800 I NETWORK [conn1379] received client metadata from 192.168.1.21:35086 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:46.045+0800 I ACCESS [conn1379] Successfully authenticated as principal admin on admin
2020-12-14T21:13:46.050+0800 I NETWORK [conn1379] end connection 192.168.1.21:35086 (3 connections now open)
2020-12-14T21:13:46.247+0800 I REPL [replexec-47] Canceling priority takeover callback
2020-12-14T21:13:46.247+0800 I REPL [replexec-47] Starting an election for a priority takeover
2020-12-14T21:13:46.247+0800 I REPL [replexec-47] conducting a dry run election to see if we could be elected. current term: 2
2020-12-14T21:13:46.249+0800 I REPL [replexec-50] VoteRequester(term 2 dry run) received a yes vote from 192.168.1.21:27018; response message: { term: 2, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1607951616, 1), $clusterTime: { clusterTime: Timestamp(1607951616, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-12-14T21:13:46.249+0800 I REPL [replexec-50] dry election run succeeded, running for election in term 3
2020-12-14T21:13:46.250+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Ending connection to host 192.168.1.21:27019 due to bad connection status; 0 connections to that host remain open
2020-12-14T21:13:46.251+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:46.258+0800 I REPL [replexec-47] VoteRequester(term 3) received a yes vote from 192.168.1.21:27018; response message: { term: 3, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1607951616, 1), $clusterTime: { clusterTime: Timestamp(1607951616, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-12-14T21:13:46.258+0800 I REPL [replexec-47] election succeeded, assuming primary role in term 3
2020-12-14T21:13:46.258+0800 I REPL [replexec-47] transition to PRIMARY from SECONDARY
2020-12-14T21:13:46.258+0800 I REPL [replexec-47] Entering primary catch-up mode.
2020-12-14T21:13:46.259+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:46.260+0800 I REPL [replexec-47] Member 192.168.1.21:27018 is now in state SECONDARY
2020-12-14T21:13:46.260+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to 192.168.1.21:27019, took 9ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:46.261+0800 I REPL [replexec-42] Caught up to the latest optime known via heartbeats after becoming primary.
2020-12-14T21:13:46.261+0800 I REPL [replexec-42] Exited primary catch-up mode.
2020-12-14T21:13:46.264+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to 192.168.1.21:27019, took 5ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:47.853+0800 I REPL [rsSync] transition to primary complete; database writes are now permitted
2020-12-14T21:13:48.098+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35092 #1380 (4 connections now open)
2020-12-14T21:13:48.098+0800 I NETWORK [conn1380] received client metadata from 192.168.1.21:35092 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:48.110+0800 I ACCESS [conn1380] Successfully authenticated as principal admin on admin
2020-12-14T21:13:48.115+0800 I NETWORK [conn1380] end connection 192.168.1.21:35092 (3 connections now open)
2020-12-14T21:13:48.403+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35094 #1381 (4 connections now open)
2020-12-14T21:13:48.403+0800 I NETWORK [conn1381] received client metadata from 192.168.1.21:35094 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:48.416+0800 I ACCESS [conn1381] Successfully authenticated as principal admin on admin
2020-12-14T21:13:50.166+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35096 #1382 (5 connections now open)
2020-12-14T21:13:50.167+0800 I NETWORK [conn1382] received client metadata from 192.168.1.21:35096 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:50.178+0800 I ACCESS [conn1382] Successfully authenticated as principal admin on admin
2020-12-14T21:13:50.182+0800 I NETWORK [conn1382] end connection 192.168.1.21:35096 (4 connections now open)
2020-12-14T21:13:51.858+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35098 #1383 (5 connections now open)
2020-12-14T21:13:51.858+0800 I NETWORK [conn1383] received client metadata from 192.168.1.21:35098 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:51.859+0800 I ACCESS [conn1383] Successfully authenticated as principal __system on local
2020-12-14T21:13:52.245+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35100 #1384 (6 connections now open)
2020-12-14T21:13:52.245+0800 I NETWORK [conn1384] received client metadata from 192.168.1.21:35100 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.256+0800 I ACCESS [conn1384] Successfully authenticated as principal admin on admin
2020-12-14T21:13:52.261+0800 I NETWORK [conn1384] end connection 192.168.1.21:35100 (5 connections now open)
2020-12-14T21:13:52.806+0800 I NETWORK [conn1381] end connection 192.168.1.21:35094 (4 connections now open)
2020-12-14T21:13:52.946+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35102 #1385 (5 connections now open)
2020-12-14T21:13:52.947+0800 I NETWORK [conn1385] received client metadata from 192.168.1.21:35102 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.948+0800 I ACCESS [conn1385] Successfully authenticated as principal __system on local
2020-12-14T21:13:52.952+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35104 #1386 (6 connections now open)
2020-12-14T21:13:52.952+0800 I NETWORK [conn1386] received client metadata from 192.168.1.21:35104 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.956+0800 I ACCESS [conn1386] Successfully authenticated as principal __system on local
2020-12-14T21:13:54.321+0800 I NETWORK [listener] connection accepted from 192.168.1.21:35106 #1387 (7 connections now open)
2020-12-14T21:13:54.321+0800 I NETWORK [conn1387] received client metadata from 192.168.1.21:35106 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:54.334+0800 I ACCESS [conn1387] Successfully authenticated as principal admin on admin
2020-12-14T21:13:54.338+0800 I NETWORK [conn1387] end connection 192.168.1.21:35106 (6 connections now open)
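The connection churn above can be watched by grepping the "(N connections now open)" counter out of mongod.log. A minimal sketch; the log path in the usage comment is an assumption based on this deployment's directory layout, not taken from the article:

```shell
# conn_counts: print the last few "(N connections now open)" counters
# recorded in a mongod log file.
conn_counts() {
  # Usage: conn_counts /data/mongodb/cctv/27017/log/mongod.log  (path assumed)
  grep -oE '\([0-9]+ connections now open\)' "$1" | tail -n 5
}
```

Run against the live log this gives a rolling view of how close the member is to its connection cap.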


Check the connection count

2020-12-14_21-11-27 connections:2020-12-14T21:11:27.309+0800 I NETWORK  [thread1] Socket recv() Connection reset by peer 192.168.1.21:27017
2020-12-14T21:11:27.309+0800 I NETWORK [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [192.168.1.21:27017]
2020-12-14T21:11:27.309+0800 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host '192.168.1.21:27017' :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:21
2020-12-14_21-13-33 connections:2020-12-14T21:13:33.620+0800 E QUERY [thread1] Error: network error while attempting to run command 'saslStart' on host '192.168.1.21:27017' :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20
@(auth):6:1
@(auth):1:2
2020-12-14_21-13-35 connections:{ "current" : 4, "available" : 4, "totalCreated" : 89 }
2020-12-14_21-13-37 connections:{ "current" : 4, "available" : 4, "totalCreated" : 90 }
2020-12-14_21-13-39 connections:{ "current" : 4, "available" : 4, "totalCreated" : 91 }
2020-12-14_21-13-41 connections:{ "current" : 4, "available" : 4, "totalCreated" : 92 }
2020-12-14_21-13-43 connections:{ "current" : 4, "available" : 4, "totalCreated" : 93 }
2020-12-14_21-13-46 connections:{ "current" : 4, "available" : 4, "totalCreated" : 94 }
2020-12-14_21-13-48 connections:{ "current" : 4, "available" : 4, "totalCreated" : 95 }
2020-12-14_21-13-50 connections:{ "current" : 5, "available" : 3, "totalCreated" : 97 }
2020-12-14_21-13-52 connections:{ "current" : 6, "available" : 2, "totalCreated" : 99 }
2020-12-14_21-13-54 connections:{ "current" : 7, "available" : 1, "totalCreated" : 102 }
2020-12-14_21-13-56 connections:{ "current" : 7, "available" : 1, "totalCreated" : 103 }
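The watcher output above (a timestamp plus the `db.serverStatus().connections` document) can be post-processed offline to flag the moments the instance ran out of slots. A sketch, assuming the exact line format shown above; the threshold default is an illustrative choice:

```shell
# near_cap: print watcher lines whose "available" count is at or below a
# threshold (default 1), i.e. the member is about to hit maxIncomingConnections.
near_cap() {
  file="$1"; threshold="${2:-1}"
  awk -v t="$threshold" '{
    # Locate the `"available" : N` field and compare N against the threshold.
    if (match($0, /"available" : [0-9]+/)) {
      n = substr($0, RSTART + 14, RLENGTH - 14) + 0
      if (n <= t) print
    }
  }' "$file"
}
```

With `maxIncomingConnections: 8` and 4-5 connections held by shell sessions and replica-set internals, `available` sits at 1 within seconds of the primary accepting writes, which matches the refused client connections seen earlier.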

With that, the issue of the MongoDB primary refusing connections because the connection limit was full is resolved. Done!
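The fix itself is not shown in the log, but given the very low `maxIncomingConnections: 8` in the member's config file, the natural remedy is to raise that value and restart the mongod. A sketch under those assumptions; the value 1000 is illustrative only, and the effective cap is also bounded by the process's open-file ulimit:

```shell
# raise_conn_limit: bump maxIncomingConnections in a mongodb.conf in place.
# Hypothetical helper, not from the original article.
raise_conn_limit() {
  conf="$1"; new="$2"
  sed -i "s/maxIncomingConnections: [0-9]*/maxIncomingConnections: ${new}/" "$conf"
}
# Example (path from this deployment, value illustrative):
#   raise_conn_limit /data/mongodb/cctv/27017/conf/mongodb.conf 1000
#   then restart the mongod for the setting to take effect
```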
