ceph-pg
Version: mimic
https://192.168.1.5:8006/pve-docs/chapter-pveceph.html#pve_ceph_osds
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. OSD caching will use additional memory.
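Worked through with that rule (a rough estimate only, not a figure from the PVE docs): a node carrying eight 4 TiB OSDs should be planned with at least 8 * 4 = 32 GiB of RAM for the OSD daemons alone, before the BlueStore caches are counted.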
mon_command failed - pg_num 128 size 3 would mean 6147 total pgs, which exceeds max 6000 (mon_max_pg_per_osd 250 * num_in_osds 24)
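The arithmetic behind the error: the monitors cap the cluster at mon_max_pg_per_osd * num_in_osds = 250 * 24 = 6000 PG instances, and a new pool with pg_num 128 and size 3 would add 128 * 3 = 384, bringing the total to 6147. Either pick a smaller pg_num for the new pool or raise the limit. A minimal sketch for raising it on mimic (the option name is real; 300 is only an example value, raise it with care):
ceph config set global mon_max_pg_per_osd 300
ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd=300'    # runtime alternative to the config database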
[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8
[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167
[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]
A PG id has the form {pool-num}.{pg-id}.
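The pool-num maps to a pool name via ceph osd lspools (the next command); judging from the rbdpool prompt and the pool list further down, pool 8 here is presumably rbdpool. The pg-id part is hexadecimal, which is why ids such as 8.3a and 8.3f appear in the dump below.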
ceph osd lspools
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
[root@client mnt]# rm -rf a*
Only after the delete above do the PGs below begin to clean up.
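RADOS reclaims the freed space asynchronously, so ceph df and ceph pg stat lag behind the rm for a while. A simple way to watch the space drain (a plain watch loop, nothing Ceph-specific):
watch -n 5 'ceph pg stat; ceph df'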
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s
[root@ceph1 ~]# ceph pg dump
dumped all
version
stamp -- ::18.312134
last_osdmap_epoch
last_pg_scan
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
8.3f active+clean -- ::27.945410 '0 57:30 [0,1,2] 0 [0,1,2] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3e active+clean -- ::27.967178 '0 57:28 [2,1,0] 2 [2,1,0] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3d active+clean -- ::27.946169 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3c active+clean -- ::27.954775 '0 57:29 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3b active+clean -- ::27.958550 '0 57:28 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3a active+clean -- ::27.968929 '2 57:31 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.39 active+clean -- ::27.966700 '0 57:28 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.38 active+clean -- ::27.946091 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
sum
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
1.3 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
sum 3.5 GiB GiB GiB
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
GiB GiB 3.5 GiB 1.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
mypool B GiB
.rgw.root 1.1 KiB GiB
default.rgw.control B GiB
default.rgw.meta B GiB
default.rgw.log B GiB
cfs_data KiB GiB
cfs_meta 2.4 MiB GiB
rbdpool B GiB
[root@ceph1 ~]# ceph pg 8.1 query
[root@ceph1 ~]# ceph osd map cfs_data secure
osdmap e58 pool 'cfs_data' (6) object 'secure' -> pg 6.a67b1c61 (6.1) -> up ([,,], p2) acting ([,,], p2)
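ceph osd map answers "where would this object live" without reading any data: the object name is hashed, the hash (roughly, a stable modulo against the pool's pg_num) selects the PG, here 6.1, and CRUSH maps that PG to its up and acting OSD sets. The object does not even have to exist. As a cross-check on the same cluster (using the pg map command shown earlier in these notes):
ceph pg map 6.1    # should report the same up/acting set as the osd map output above
===========================================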
root@cu-pve05:/mnt/pve# ceph osd pool stats
pool kyc_block01 id
client io 0B/s rd, 0op/s rd, 0op/s wr
pool cephfs_data id
nothing is going on
pool cephfs_metadata id
nothing is going on
pool system_disks id
client io 0B/s rd, 576B/s wr, 0op/s rd, 0op/s wr
pool data_disks id
nothing is going on
pool fs01 id
nothing is going on
root@cu-pve05:/mnt/pve# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
.4TiB .9TiB 528GiB 0.98
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
kyc_block01 130GiB 0.77 .4TiB
cephfs_data .62GiB 0.04 .4TiB
cephfs_metadata 645KiB .4TiB
system_disks .1GiB 0.19 .4TiB
data_disks 0B .4TiB
fs01 128MiB .4TiB
root@cu-pve05:/mnt/pve# ceph pg dump pgs_brief|grep ^|wc -l
dumped pgs_brief
The count above is the pg_num of that pool in the PVE cluster.
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
9.21 active+clean [,,] [,,]
9.20 active+clean [,,] [,,]
9.27 active+clean [,,] [,,]
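Grepping ceph pg dump pgs_brief works, but the per-pool PG count can also be read directly from the pool. Both commands below are standard; kyc_block01 is just one of the pools listed above:
ceph osd pool get kyc_block01 pg_num
ceph pg ls-by-pool kyc_block01 | wc -l    # one more than pg_num because of the header row
===============================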
root@cu-pve05:/mnt/pve# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data .14GiB .73GiB .9GiB
cephfs_metadata 698KiB 556KiB .01MiB
fs01 128MiB 0B 256MiB
kyc_block01 133GiB 524GiB 223GiB
system_disks .1GiB .3GiB 109GiB
total_objects
total_used 539GiB
total_avail .9TiB
total_space .4TiB
Note: COPIES = OBJECTS × the pool's replica count (3-way replication here).
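To confirm the factor between OBJECTS and COPIES, check the pool's replication size (any of the pools above can be substituted):
ceph osd pool get kyc_block01 size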
===============================
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
cephfs uses 640 PGs in total:
the metadata pool has 128 PGs
the data pool has 512 PGs
placement groups objects object_size
vm--disk- 32GiB 8.19k 4MiB
8.19k * 4MiB = 32.76g, which matches the 32GiB image size
8.19+0.643=8.833 .18t*=17.44
17.44*=52.32
52.39 TiB PGs active+clean:
24 OSDs; each node holds 1921 PGs
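For reference, the usual upstream starting point for choosing pg_num (general guidance, not derived from this cluster's numbers): total PGs ≈ OSDs * 100 / replica size, rounded to a power of two and then split across pools by their expected share of data. With 24 OSDs and size-3 pools that gives 24 * 100 / 3 = 800, so 512 or 1024 PGs over all pools, which also keeps each OSD near the commonly quoted 100-200 PGs.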