ceph-pg
Version: mimic
https://192.168.1.5:8006/pve-docs/chapter-pveceph.html#pve_ceph_osds
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. OSD caching will use additional memory.
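A rough sanity check of that rule, as a sketch (the 4 TiB figure is an assumed example, and it assumes the admin socket for osd.0 is reachable on this host):

# ~1 GiB of RAM per ~1 TiB of stored data, before caching:
osd_data_tib=4
echo "expected base memory for this OSD: ~${osd_data_tib} GiB"
# the BlueStore cache budget comes on top; it can be read from the daemon socket:
ceph daemon osd.0 config get osd_memory_target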
mon_command failed - pg_num 128 size 3 would mean 6147 total pgs, which exceeds max 6000 (mon_max_pg_per_osd 250 * num_in_osds 24)
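The numbers in that error decompose as shown below; the commands are a sketch, and raising the cap is usually the wrong fix compared to choosing a smaller pg_num for the new pool:

# cap            = mon_max_pg_per_osd (250) * num_in_osds (24)   = 6000
# proposed total = existing PG replicas (5763) + 128 new * size 3 = 6147 > 6000
# inspect the current limit on a monitor (socket/daemon name may differ per deployment):
ceph daemon mon.$(hostname -s) config show | grep mon_max_pg_per_osd
# raising it is possible, but prefer a smaller pg_num:
ceph tell mon.\* injectargs '--mon_max_pg_per_osd=300'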
[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8
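Both ratios are adjustable at runtime; a minimal sketch (0.8 and 0.9 are just the values shown above):

ceph osd set-nearfull-ratio 0.8
ceph osd set-full-ratio 0.9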
[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167
[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]
A PG id is composed of {pool-num}.{pg-id}.
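So for pg 8.13, "8" is the pool id (compare with ceph osd lspools below) and "13" is the PG's index within that pool, in hex. For per-PG detail there is ceph pg query, also used later in this log; a quick sketch:

# full JSON detail for one PG: state, up/acting sets, peering history
ceph pg 8.13 query | head -n 20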
ceph osd lspools
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
[root@client mnt]# rm -rf a*
Only after the delete above does the PG cleanup below begin.
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s
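RADOS reclaims space asynchronously after a delete, which is why the cleanup lags the rm above; one simple way to watch it drain (the interval is arbitrary):

watch -n 5 'ceph pg stat; ceph df | tail -n 5'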
[root@ceph1 ~]# ceph pg dump
dumped all
version
stamp -- ::18.312134
last_osdmap_epoch
last_pg_scan
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
8.3f active+clean -- ::27.945410 '0 57:30 [0,1,2] 0 [0,1,2] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3e active+clean -- ::27.967178 '0 57:28 [2,1,0] 2 [2,1,0] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3d active+clean -- ::27.946169 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3c active+clean -- ::27.954775 '0 57:29 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3b active+clean -- ::27.958550 '0 57:28 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3a active+clean -- ::27.968929 '2 57:31 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.39 active+clean -- ::27.966700 '0 57:28 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.38 active+clean -- ::27.946091 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
sum
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
1.3 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
sum 3.5 GiB GiB GiB
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
GiB GiB 3.5 GiB 1.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
mypool B GiB
.rgw.root 1.1 KiB GiB
default.rgw.control B GiB
default.rgw.meta B GiB
default.rgw.log B GiB
cfs_data KiB GiB
cfs_meta 2.4 MiB GiB
rbdpool B GiB
[root@ceph1 ~]# ceph pg 8.1 query
[root@ceph1 ~]# ceph osd map cfs_data secure
osdmap e58 pool 'cfs_data' () object 'secure' -> pg 6.a67b1c61 (6.1) -> up ([,,], p2) acting ([,,], p2)
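How 'secure' lands in pg 6.1: Ceph hashes the object name and reduces the hash by pg_num (a stable mod; for a power-of-two pg_num it is effectively a bit-mask). Assuming cfs_data has pg_num=32 here — an assumption, verify with ceph osd pool get cfs_data pg_num:

# low bits of the name hash 0xa67b1c61 select the PG index within the pool:
printf '0x%x\n' $(( 0xa67b1c61 % 32 ))   # -> 0x1, i.e. pg 6.1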
===========================================
root@cu-pve05:/mnt/pve# ceph osd pool stats
pool kyc_block01 id
  client io 0B/s rd, 0op/s rd, 0op/s wr

pool cephfs_data id
  nothing is going on

pool cephfs_metadata id
  nothing is going on

pool system_disks id
  client io 0B/s rd, 576B/s wr, 0op/s rd, 0op/s wr

pool data_disks id
  nothing is going on

pool fs01 id
  nothing is going on

root@cu-pve05:/mnt/pve# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
.4TiB .9TiB 528GiB 0.98
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
kyc_block01 130GiB 0.77 .4TiB
cephfs_data .62GiB 0.04 .4TiB
cephfs_metadata 645KiB .4TiB
system_disks .1GiB 0.19 .4TiB
data_disks 0B .4TiB
fs01 128MiB .4TiB
root@cu-pve05:/mnt/pve# ceph pg dump pgs_brief | grep ^9 | wc -l
dumped pgs_brief
The count printed above is the pg_num of that pool in PVE.
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
9.21 active+clean [,,] [,,]
9.20 active+clean [,,] [,,]
9.27 active+clean [,,] [,,]
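Grepping the full dump works, but the figure can also be read straight from the pool metadata; a sketch using kyc_block01 as the example pool (note the ls-by-pool output includes a header line, so the wc count runs one high):

ceph osd pool get kyc_block01 pg_num
ceph pg ls-by-pool kyc_block01 | wc -l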
===============================
root@cu-pve05:/mnt/pve# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data .14GiB .73GiB .9GiB
cephfs_metadata 698KiB 556KiB .01MiB
fs01 128MiB 0B 256MiB
kyc_block01 133GiB 524GiB 223GiB
system_disks .1GiB .3GiB 109GiB
total_objects
total_used 539GiB
total_avail .9TiB
total_space .4TiB
Note: OBJECTS multiplied by the replica count gives COPIES.
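Sanity-checking that relationship for a 3-replica pool (the replica count is assumed from the size 3 seen earlier in this log; the object count below is made up for illustration):

objects=1000
echo $(( objects * 3 ))   # expected COPIES column for that pool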
===============================
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
The metadata pool has 128 PGs and the data pool has 512 PGs, a 1:4 ratio.
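For context, the common sizing rule of thumb (not stated in this log) targets roughly 100 PGs per OSD: total PGs ≈ OSDs × 100 / replicas, rounded to a power of two. A sketch with this cluster's 24 OSDs:

osds=24; replicas=3
echo $(( osds * 100 / replicas ))   # -> 800; nearest powers of two are 512 or 1024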
placement groups / objects / object_size
vm--disk-   32GiB   8.19k objects   4MiB object size
8.19k * 4 MiB = 32.76g;  8.19 + 0.643 = 8.833;  2.18t * 8 = 17.44t
17.44t * 3 replicas = 52.32t raw
52.39 TiB PGs active+clean:
24 OSDs; each node has 1921 PGs.
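That per-node figure can be cross-checked from the PGS column of ceph osd df: each PG instance counts once per OSD it sits on, so summing the column over one host's OSDs should land near 1921:

ceph osd df   # PGS column = PG instances placed on each OSD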