021 Ceph: the "too few PGs per OSD" warning
After creating a pool in a Ceph cluster, the cluster health changed to HEALTH_WARN. The details are below.

Inspect the cluster
List the pools
[root@serverc ~]# ceph osd pool ls
images # only one pool
[root@serverc ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.13129 root default
-5 0.04376 host serverc
2 hdd 0.01459 osd.2 up 1.00000 1.00000 # all 9 OSDs are up and in
3 hdd 0.01459 osd.3 up 1.00000 1.00000
7 hdd 0.01459 osd.7 up 1.00000 1.00000
-3 0.04376 host serverd
0 hdd 0.01459 osd.0 up 1.00000 1.00000
5 hdd 0.01459 osd.5 up 1.00000 1.00000
6 hdd 0.01459 osd.6 up 1.00000 1.00000
-7 0.04376 host servere
1 hdd 0.01459 osd.1 up 1.00000 1.00000
4 hdd 0.01459 osd.4 up 1.00000 1.00000
8 hdd 0.01459 osd.8 up 1.00000 1.00000
Reproduce the error
[root@serverc ~]# ceph osd pool create images 64 64
[root@serverc ~]# ceph osd pool application enable images rbd
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_WARN
too few PGs per OSD (21 < min 30)
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 64 pgs
objects: 8 objects, 12418 kB
usage: 1005 MB used, 133 GB / 134 GB avail
pgs: 64 active+clean
[root@serverc ~]# ceph pg dump
dumped all
version 1334
stamp 2019-03-29 22:21:41.795511
last_osdmap_epoch 0
last_pg_scan 0
full_ratio 0
nearfull_ratio 0
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP
1.3f 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.871318 0'0 33:41 [7,1,0] 7 [7,1,0] 7 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.3e 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.867341 0'0 33:41 [4,5,7] 4 [4,5,7] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.3d 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.871213 0'0 33:41 [0,3,1] 0 [0,3,1] 0 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.3c 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.859216 0'0 33:41 [5,7,1] 5 [5,7,1] 5 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.3b 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.870865 0'0 33:41 [0,8,7] 0 [0,8,7] 0 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.3a 2 0 0 0 0 19 17 17 active+clean 2019-03-29 22:17:34.858977 33'17 33:117 [4,6,7] 4 [4,6,7] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.39 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.871027 0'0 33:41 [0,3,4] 0 [0,3,4] 0 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.38 1 0 0 0 0 16 1 1 active+clean 2019-03-29 22:17:34.861985 30'1 33:48 [4,2,5] 4 [4,2,5] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.37 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.861667 0'0 33:41 [6,7,1] 6 [6,7,1] 6 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.36 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.860382 0'0 33:41 [6,3,1] 6 [6,3,1] 6 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.35 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.860407 0'0 33:41 [8,6,2] 8 [8,6,2] 8 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.34 0 0 0 0 0 0 2 2 active+clean 2019-03-29 22:17:34.861874 32'2 33:44 [4,3,0] 4 [4,3,0] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.33 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.860929 0'0 33:41 [4,6,2] 4 [4,6,2] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
1.32 0 0 0 0 0 0 0 0 active+clean 2019-03-29 22:17:34.860589 0'0 33:41 [4,2,6] 4 [4,2,6] 4 0'0 2019-03-29 21:55:07.534833 0'0 2019-03-29 21:55:07.534833
…………
1 8 0 0 0 0 12716137 78 78
sum 8 0 0 0 0 12716137 78 78
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
8 119M 15229M 15348M [0,1,2,3,4,5,6,7] 22 6
7 119M 15229M 15348M [0,1,2,3,4,5,6,8] 22 9
6 119M 15229M 15348M [0,1,2,3,4,5,7,8] 23 5
5 107M 15241M 15348M [0,1,2,3,4,6,7,8] 18 7
4 107M 15241M 15348M [0,1,2,3,5,6,7,8] 18 9
3 107M 15241M 15348M [0,1,2,4,5,6,7,8] 23 6
2 107M 15241M 15348M [0,1,3,4,5,6,7,8] 19 6
1 107M 15241M 15348M [0,2,3,4,5,6,7,8] 24 8
0 107M 15241M 15348M [1,2,3,4,5,6,7,8] 23 8
sum 1005M 133G 134G
The warning says each OSD holds fewer than the minimum of 30 PGs. The pool was created with pg_num and pgp_num set to 64, and with a 3-replica configuration those 64 PGs place 64 × 3 = 192 PG copies across the 9 OSDs, i.e. about 192 / 9 ≈ 21 PGs per OSD — below the minimum of 30, hence the warning. The pg dump output above confirms that every OSD holds fewer than 30 PGs.
If you store data or otherwise operate on the cluster in this state, the cluster can stall and stop responding to I/O, and large numbers of OSDs may go down.
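The PGs-per-OSD arithmetic above can be sketched as a small helper (a minimal illustration; the function name and hard-coded values are mine, mirroring the numbers in this example):

```python
def pgs_per_osd(pg_num, replicas, num_osds):
    """Average number of PG copies that land on each OSD."""
    return pg_num * replicas // num_osds

# The pool from this example: pg_num=64, 3 replicas, 9 OSDs.
print(pgs_per_osd(64, 3, 9))   # 21 -> below min 30, hence HEALTH_WARN
# With pg_num raised to 128, as done later in this article:
print(pgs_per_osd(128, 3, 9))  # 42 -> clears the 30-PG floor
```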
Solution
Increase the pool's pg_num
[root@serverc ~]# ceph osd pool set images pg_num 128
set pool 1 pg_num to 128
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_WARN
Reduced data availability: 21 pgs peering
Degraded data redundancy: 21 pgs unclean
1 pools have pg_num > pgp_num
too few PGs per OSD (21 < min 30)
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 128 pgs
objects: 8 objects, 12418 kB
usage: 1005 MB used, 133 GB / 134 GB avail
pgs: 50.000% pgs unknown
16.406% pgs not active
64 unknown
43 active+clean
21 peering
The too few PGs per OSD warning is still shown, because pgp_num has not yet been raised to match pg_num.
Next, update pgp_num to match
[root@serverc ~]# ceph osd pool set images pgp_num 128
set pool 1 pgp_num to 128
Check the status
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_WARN
Reduced data availability: 7 pgs peering
Degraded data redundancy: 24 pgs unclean, 2 pgs degraded
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 128 pgs
objects: 8 objects, 12418 kB
usage: 1005 MB used, 133 GB / 134 GB avail
pgs: 24.219% pgs not active # PG states: data is rebalancing (see part 3 of https://www.cnblogs.com/zyxnhr/p/10616497.html for what each state means)
97 active+clean
20 activating
9 peering
2 activating+degraded
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_WARN
Reduced data availability: 7 pgs peering
Degraded data redundancy: 3/24 objects degraded (12.500%), 33 pgs unclean, 4 pgs degraded
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 128 pgs
objects: 8 objects, 12418 kB
usage: 1005 MB used, 133 GB / 134 GB avail
pgs: 35.938% pgs not active
3/24 objects degraded (12.500%)
79 active+clean
34 activating
9 peering
3 activating+degraded
2 active+clean+snaptrim
1 active+recovery_wait+degraded
io:
recovery: 1 B/s, 0 objects/s
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_OK
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 128 pgs
objects: 8 objects, 12418 kB
usage: 1050 MB used, 133 GB / 134 GB avail
pgs: 128 active+clean
io:
recovery: 1023 kB/s, 0 keys/s, 0 objects/s
[root@serverc ~]# ceph -s
cluster:
id: 04b66834-1126-4870-9f32-d9121f1baccd
health: HEALTH_OK # rebalancing is complete and the cluster is healthy again
services:
mon: 3 daemons, quorum serverc,serverd,servere
mgr: servere(active), standbys: serverd, serverc
osd: 9 osds: 9 up, 9 in
data:
pools: 1 pools, 128 pgs
objects: 8 objects, 12418 kB
usage: 1016 MB used, 133 GB / 134 GB avail
pgs: 128 active+clean
io:
recovery: 778 kB/s, 0 keys/s, 0 objects/s
Note: this is a lab environment and the pool holds almost no data, so changing pg_num has little impact here. In production, however, raising pg_num forces the whole cluster to rebalance and migrate data, and the more data there is, the longer I/O is affected. See https://www.cnblogs.com/zyxnhr/p/10543814.html for a detailed explanation of the PG state parameters. Before changing PGs in production, plan around the business to avoid disruption: decide when recovery should be allowed to run and when to change pgp_num; see the references below.
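When sizing pg_num up front, a commonly cited rule of thumb (not from this article — treat the target of ~100 PGs per OSD as an assumption) is to pick the smallest power of two that reaches the target; targeting just the 30-PG warning floor reproduces the 128 chosen above. A minimal sketch:

```python
def suggest_pg_num(num_osds, replicas, target_per_osd=100):
    """Smallest power-of-two pg_num giving at least target_per_osd PGs per OSD.

    target_per_osd=100 is a commonly cited guideline, not a hard rule.
    """
    need = num_osds * target_per_osd / replicas
    pg_num = 1
    while pg_num < need:
        pg_num *= 2
    return pg_num

print(suggest_pg_num(9, 3, 30))   # 128 -- just clears the 30-PG warning floor
print(suggest_pg_num(9, 3, 100))  # 512 -- the ~100 PGs/OSD guideline
```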
References:
https://my.oschina.net/xiaozhublog/blog/664560