ceph-pg
Version: Mimic
https://192.168.1.5:8006/pve-docs/chapter-pveceph.html#pve_ceph_osds
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. OSD caching will use additional memory.
mon_command failed - pg_num 128 size 3 would mean 6147 total pgs, which exceeds max 6000 (mon_max_pg_per_osd 250 * num_in_osds 24)
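The limit in this error is simple arithmetic: the monitor rejects a pool whose new PG replicas would push the cluster past mon_max_pg_per_osd × num_in_osds. A minimal sketch using the numbers from the message above (the existing-PG count of 5763 is inferred from 6147 − 128×3, not read from a live cluster):

```shell
# Sketch of the monitor's check; existing_pgs is inferred from the
# error message above, not queried from a cluster.
pg_num=128
size=3
existing_pgs=5763
mon_max_pg_per_osd=250
num_in_osds=24

new_total=$(( existing_pgs + pg_num * size ))
limit=$(( mon_max_pg_per_osd * num_in_osds ))
echo "new total: $new_total, limit: $limit"
if [ "$new_total" -gt "$limit" ]; then
  echo "pool creation rejected"
fi
```

To get past this, either raise mon_max_pg_per_osd, add OSDs, or pick a smaller pg_num.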

[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8

[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167

[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]

A pg id is composed of {pool-num}.{pg-id}.
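Splitting a pg id into its two parts can be sketched with plain shell parameter expansion (using pg 8.13 from the `ceph pg map` call above):

```shell
pg="8.13"              # {pool-num}.{pg-id}, as reported by ceph pg map
pool_num=${pg%%.*}     # part before the dot: the pool number
pg_id=${pg#*.}         # part after the dot: the PG id (hex) within the pool
echo "pool=$pool_num pg=$pg_id"
```

The pool number can then be matched against `ceph osd lspools` to find the pool name.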
ceph osd lspools

[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
[root@client mnt]# rm -rf a*
Only after the delete above do the PGs below start to be cleaned up.
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s

[root@ceph1 ~]# ceph pg dump
dumped all
version
stamp -- ::18.312134
last_osdmap_epoch
last_pg_scan
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
8.3f active+clean -- ::27.945410 '0 57:30 [0,1,2] 0 [0,1,2] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3e active+clean -- ::27.967178 '0 57:28 [2,1,0] 2 [2,1,0] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3d active+clean -- ::27.946169 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3c active+clean -- ::27.954775 '0 57:29 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3b active+clean -- ::27.958550 '0 57:28 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3a active+clean -- ::27.968929 '2 57:31 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.39 active+clean -- ::27.966700 '0 57:28 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.38 active+clean -- ::27.946091 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
sum
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
1.3 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
sum 3.5 GiB GiB GiB
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
GiB GiB 3.5 GiB 1.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
mypool B GiB
.rgw.root 1.1 KiB GiB
default.rgw.control B GiB
default.rgw.meta B GiB
default.rgw.log B GiB
cfs_data KiB GiB
cfs_meta 2.4 MiB GiB
rbdpool B GiB

[root@ceph1 ~]# ceph pg 8.1 query
[root@ceph1 ~]# ceph osd map cfs_data secure
osdmap e58 pool 'cfs_data' () object 'secure' -> pg 6.a67b1c61 (6.1) -> up ([,,], p2) acting ([,,], p2)

===========================================
root@cu-pve05:/mnt/pve# ceph osd pool stats
pool kyc_block01 id
  client io 0B/s rd, 0op/s rd, 0op/s wr

pool cephfs_data id
  nothing is going on

pool cephfs_metadata id
  nothing is going on

pool system_disks id
  client io 0B/s rd, 576B/s wr, 0op/s rd, 0op/s wr

pool data_disks id
  nothing is going on

pool fs01 id
  nothing is going on

root@cu-pve05:/mnt/pve# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
.4TiB .9TiB 528GiB 0.98
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
kyc_block01 130GiB 0.77 .4TiB
cephfs_data .62GiB 0.04 .4TiB
cephfs_metadata 645KiB .4TiB
system_disks .1GiB 0.19 .4TiB
data_disks 0B .4TiB
fs01 128MiB .4TiB
root@cu-pve05:/mnt/pve# ceph pg dump pgs_brief | grep ^9 | wc -l
dumped pgs_brief

The count above equals the pg_num of that pool in PVE.

PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
9.21 active+clean [,,] [,,]
9.20 active+clean [,,] [,,]
9.27 active+clean [,,] [,,]

===============================
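Instead of grepping for one pool id at a time, the same pgs_brief output can be tallied per pool in one pass. A sketch (it only assumes the first column is {pool}.{pg}, as in the dump above):

```shell
# Count PGs per pool from `ceph pg dump pgs_brief` output read on stdin.
count_pgs_per_pool() {
  awk '$1 ~ /^[0-9]+\./ { split($1, a, "."); n[a[1]]++ }
       END { for (p in n) print "pool " p ": " n[p] " pgs" }'
}
# Usage: ceph pg dump pgs_brief | count_pgs_per_pool
```

The awk filter skips header lines like PG_STAT, so no separate grep is needed.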
root@cu-pve05:/mnt/pve# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data .14GiB .73GiB .9GiB
cephfs_metadata 698KiB 556KiB .01MiB
fs01 128MiB 0B 256MiB
kyc_block01 133GiB 524GiB 223GiB
system_disks .1GiB .3GiB 109GiB

total_objects
total_used 539GiB
total_avail .9TiB
total_space .4TiB

Note: COPIES = OBJECTS * replica count, i.e. OBJECTS*3=COPIES with the 3-way replication used here.
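That OBJECTS-to-COPIES relation can be sanity-checked with shell arithmetic; the object count below is a made-up example, not a value from the dump:

```shell
objects=1234        # hypothetical per-pool OBJECTS value from rados df
replica_size=3      # pool size; the pools in this post use 3 replicas
copies=$(( objects * replica_size ))
echo "expected COPIES: $copies"
```

If the reported COPIES falls short of OBJECTS × size, some PGs are degraded or still backfilling.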
===============================
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
The metadata pool has 128 PGs; the data pool has 512 PGs.
placement groups / objects / object_size:
vm--disk-  32GiB  .19k objects  4MiB object size
8.19*=.76g; 8.19+0.643=8.833; .18t*=17.44; 17.44*=52.32; 52.39 TiB
PGs active+clean: 24 OSDs, each node has 1921 pgs
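The per-OSD PG counts above trace back to the usual sizing rule from the upstream PG guidance: target roughly 100 PGs per OSD, so pg_num ≈ (num_osds × 100) / pool_size, rounded up to a power of two. A sketch with this cluster's 24 OSDs:

```shell
# Rule-of-thumb total pg_num across pools: ~100 PGs per OSD,
# divided by the replica count, rounded up to a power of two.
num_osds=24
pool_size=3
target=$(( num_osds * 100 / pool_size ))   # 800
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "suggested total pg_num: $pg_num"
```

This total is then divided among the pools in proportion to their expected data, which is why metadata got 128 and data 512 above.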