I. Cluster and Component Versions

Kubernetes cluster: 1.17.3+
Ceph cluster: Nautilus (stable)
Ceph-CSI: release-v3.1
snapshot-controller: release-2.1
Linux kernel: 3.10.0-1127.19.1.el7.x86_64+

Image versions:

docker pull quay.io/k8scsi/csi-snapshotter:v2.1.1
docker pull quay.io/k8scsi/csi-snapshotter:v2.1.0
docker pull quay.io/k8scsi/csi-resizer:v0.5.0
docker pull quay.io/k8scsi/csi-provisioner:v1.6.0
docker pull quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
docker pull quay.io/k8scsi/csi-attacher:v2.1.1
docker pull quay.io/cephcsi/cephcsi:v3.1-canary
docker pull quay.io/k8scsi/snapshot-controller:v2.0.1
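
A quick sanity check of the environment against the versions above (a sketch; it assumes the kubectl and ceph CLIs are available on the host):

# uname -r
3.10.0-1127.19.1.el7.x86_64
# kubectl version --short
# ceph --version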

II. Deployment

1) Deploy Ceph-CSI

1.1) Clone the repository
# git clone https://github.com/ceph/ceph-csi.git
# cd ceph-csi/deploy/rbd/kubernetes
1.2) Modify the YAML files

1.2.1) In csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml, comment out the ceph-csi-encryption-kms-config entries:

# grep "#" csi-rbdplugin-provisioner.yaml
# for stable functionality replace canary with latest release version
#- name: ceph-csi-encryption-kms-config
# mountPath: /etc/ceph-csi-encryption-kms-config/
#- name: ceph-csi-encryption-kms-config
# configMap:
# name: ceph-csi-encryption-kms-config

1.2.2) Edit csi-config-map.yaml with the connection details of the Ceph cluster; clusterID is the Ceph cluster's fsid, and monitors lists its mon addresses:

# cat csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "c7b4xxf7-c61e-4668-9xx0-82c9xx5e3696",
        "monitors": [
          "xxx.xxx.xxx.xxx:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
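
The clusterID is the Ceph cluster's fsid and can be read directly from the cluster (it also appears in the ceph mon dump output later):

# ceph fsid
c7b43ef7-c61e-4668-9970-82c9775e3696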

1.2.3) Deploy the RBD CSI components

# kubectl apply -f ceph-csi/deploy/rbd/kubernetes/
# kubectl get pods
csi-rbdplugin-9f8kn 3/3 Running 0 39h
csi-rbdplugin-pnjtn 3/3 Running 0 39h
csi-rbdplugin-provisioner-7f469fb84-4qqbd 6/6 Running 0 41h
csi-rbdplugin-provisioner-7f469fb84-hkc9q 6/6 Running 5 41h
csi-rbdplugin-provisioner-7f469fb84-vm7qm 6/6 Running 0 40h
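
As an optional check, list the CSIDriver objects registered with the cluster; rbd.csi.ceph.com should appear (assuming the deployment includes the csidriver.yaml manifest):

# kubectl get csidrivers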

2) The snapshot feature requires the snapshot controller to be installed:

2.1) Clone the repository

# git clone https://github.com/kubernetes-csi/external-snapshotter
# cd external-snapshotter/deploy/kubernetes/snapshot-controller

2.2) Deploy

# kubectl apply -f external-snapshotter/deploy/kubernetes/snapshot-controller/
# kubectl get pods | grep snapshot-controller
snapshot-controller-0 1/1 Running 0 20h

2.3) Deploy the CRDs

# kubectl apply -f external-snapshotter/config/crd/
# kubectl api-versions | grep snapshot
snapshot.storage.k8s.io/v1beta1

At this point, Ceph-CSI and the snapshot-controller are installed. The functional tests follow. Before testing, create the corresponding storage pool in the Ceph cluster:

// Check the cluster status
# ceph -s
  cluster:
    id:     c7b43ef7-c61e-4668-9970-82c9775e3696
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cka-node-01 (age 24h)
    mgr: cka-node-01(active, since 24h), standbys: cka-node-02, cka-node-03
    mds: cephfs:1 {0=cka-node-01=up:active} 2 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 1 daemon active (cka-node-01)

  task status:
    scrub status:
        mds.cka-node-01: idle

  data:
    pools:   7 pools, 184 pgs
    objects: 827 objects, 1.7 GiB
    usage:   8.1 GiB used, 52 GiB / 60 GiB avail
    pgs:     184 active+clean

  io:
    client:   32 KiB/s rd, 0 B/s wr, 31 op/s rd, 21 op/s wr

// Create the storage pool "kubernetes"
# ceph osd pool create kubernetes 8 8
# rbd pool init kubernetes

// Create the user client.kubernetes
# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes'

// Get the cluster info and look up the user's key
# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid c7b43ef7-c61e-4668-9970-82c9775e3696
last_changed 2020-09-11 11:05:25.529648
created 2020-09-10 16:22:52.967856
min_mon_release 14 (nautilus)
0: [v2:10.0.xxx.xxx0:3300/0,v1:10.0.xxx.xxx:6789/0] mon.cka-node-01

# ceph auth get client.kubernetes
exported keyring for client.kubernetes
[client.kubernetes]
key = AQBt5xxxR0DBAAtjxxA+zlqxxxF3shYm8qLQmw==
caps mon = "profile rbd"
caps osd = "profile rbd pool=kubernetes"

III. Verification

The following functionality is verified:

1) Create an RBD-backed PVC and use it in a pod;
2) Snapshot an RBD-backed PVC and verify that restoring from the snapshot works;
3) Expand the PVC;
4) Create repeated snapshots of the same PVC.

1. Create an RBD-backed PVC and use it in a pod:

1.1) Create the secret used to connect to the Ceph cluster

# cat secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQBt51lf9iR0DBAAtjA+zlqxxxYm8qLQmw==
  encryptionPassphrase: test_passphrase

# kubectl apply -f secret.yaml
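
Equivalently, the same secret can be created directly from the command line (a sketch; the userKey below is the redacted placeholder from above):

# kubectl create secret generic csi-rbd-secret --namespace default \
    --from-literal=userID=kubernetes \
    --from-literal=userKey='AQBt51lf9iR0DBAAtjA+zlqxxxYm8qLQmw==' \
    --from-literal=encryptionPassphrase=test_passphrase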

1.2) Create the StorageClass

# cat storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: c7b43xxf7-c61e-4668-9970-82c9e3696
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard

# kubectl apply -f storageclass.yaml
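
Optionally confirm that the class exists and allows expansion (needed for the resize test later):

# kubectl get storageclass csi-rbd-sc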

1.3) Create a PVC from the StorageClass

# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

# kubectl apply -f pvc.yaml
# kubectl get pvc rbd-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-11b931b0-7cb5-40e1-815b-c15659310593 1Gi RWO csi-rbd-sc 17h
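
On the Ceph side, the PVC is backed by an RBD image in the kubernetes pool. The image name below is a placeholder; ceph-csi derives the real name (csi-vol-<uuid>) from the volume handle:

# rbd ls kubernetes
csi-vol-<uuid>
# rbd info kubernetes/csi-vol-<uuid>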

1.4) Create a pod that uses the PVC

# cat pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false

# kubectl apply -f pod.yaml
# kubectl get pods csi-rbd-demo-pod
NAME READY STATUS RESTARTS AGE
csi-rbd-demo-pod 1/1 Running 0 40h

# kubectl exec -ti csi-rbd-demo-pod -- bash
root@csi-rbd-demo-pod:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 199G 7.4G 192G 4% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 199G 7.4G 192G 4% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd0 976M 2.6M 958M 1% /var/lib/www/html
tmpfs 7.8G 12K 7.8G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware

# Write a file to use later for snapshot verification
root@csi-rbd-demo-pod:/# cd /var/lib/www/html;mkdir demo;cd demo;echo "snapshot test" > test.txt
root@csi-rbd-demo-pod:/var/lib/www/html# cat demo/test.txt
snapshot test
2) Snapshot the RBD-backed PVC and verify restoring from the snapshot:

2.1) Create a snapshot of the PVC created above
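
The manifest below references a VolumeSnapshotClass named csi-rbdplugin-snapclass, which must exist beforehand. A minimal sketch, modeled on examples/rbd/snapshotclass.yaml in the ceph-csi repository (the clusterID must match the one used in the StorageClass):

# cat snapshotclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rbd.csi.ceph.com
parameters:
  clusterID: c7b43xxf7-c61e-4668-9970-82c9e3696
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete

# kubectl apply -f snapshotclass.yaml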

# cat snapshot.yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc

# kubectl apply -f snapshot.yaml
# kubectl get VolumeSnapshot rbd-pvc-snapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
rbd-pvc-snapshot true rbd-pvc 1Gi csi-rbdplugin-snapclass snapcontent-48f3e563-d21a-40bb-8e15-ddbf27886c88 19h 19h
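
The bound VolumeSnapshotContent object named in the SNAPSHOTCONTENT column can be inspected as well:

# kubectl get volumesnapshotcontent snapcontent-48f3e563-d21a-40bb-8e15-ddbf27886c88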

2.2) Create a PVC restored from the snapshot

# cat pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: csi-rbd-sc
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# kubectl apply -f pvc-restore.yaml
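
Before attaching the restored claim to a pod, check that it is bound:

# kubectl get pvc rbd-pvc-restore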

2.3) Create a pod that uses the restored PVC

# cat pod-restore.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-restore-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc-restore
        readOnly: false

# kubectl apply -f pod-restore.yaml
# kubectl get pods csi-rbd-restore-demo-pod
NAME READY STATUS RESTARTS AGE
csi-rbd-restore-demo-pod 1/1 Running 0 18h
# kubectl exec -ti csi-rbd-restore-demo-pod -- bash
root@csi-rbd-restore-demo-pod:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 199G 7.4G 192G 4% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 199G 7.4G 192G 4% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd3 976M 2.6M 958M 1% /var/lib/www/html
tmpfs 7.8G 12K 7.8G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware

root@csi-rbd-restore-demo-pod:/# cd /var/lib/www/html
root@csi-rbd-restore-demo-pod:/var/lib/www/html# ls
demo lost+found
root@csi-rbd-restore-demo-pod:/var/lib/www/html# cat demo/test.txt
snapshot test

// Restoring data from a snapshot works as expected.

3) Expand the PVC:

3.1) Increase the requested size of rbd-pvc

# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi    # changed from 1Gi to 100Gi
  storageClassName: csi-rbd-sc

# kubectl apply -f pvc.yaml
# kubectl get pvc rbd-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-11b931b0-7cb5-40e1-815b-c15659310593 100Gi RWO csi-rbd-sc 40h
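
The same resize can also be requested without editing the manifest (a sketch using kubectl patch):

# kubectl patch pvc rbd-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'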
# kubectl exec -ti csi-rbd-demo-pod -- bash
root@csi-rbd-demo-pod:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 199G 7.4G 192G 4% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 199G 7.4G 192G 4% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd0 99G 6.8M 99G 1% /var/lib/www/html // expansion worked
tmpfs 7.8G 12K 7.8G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware

// Write more data for the second snapshot later
root@csi-rbd-demo-pod:# cd /var/lib/www/html;mkdir test;echo "abc" > test/demo.txt;echo "abc" >> /var/lib/www/html/demo/test.txt
root@csi-rbd-demo-pod:/var/lib/www/html# cat test/demo.txt
abc
root@csi-rbd-demo-pod:/var/lib/www/html# cat demo/test.txt
snapshot test
abc
4) Create a second snapshot of the same PVC:

4.1) Snapshot rbd-pvc again

# cat snapshot-1.yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot-1
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc

# kubectl apply -f snapshot-1.yaml
# kubectl get VolumeSnapshot rbd-pvc-snapshot-1
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
rbd-pvc-snapshot-1 true rbd-pvc 100Gi csi-rbdplugin-snapclass snapcontent-b82dceb0-7ba6-4a3e-88ab-2220b729d85f 18h 18h

4.2) Restore a PVC from the rbd-pvc-snapshot-1 snapshot

# cat pvc-restore-1.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore-1
spec:
  storageClassName: csi-rbd-sc
  dataSource:
    name: rbd-pvc-snapshot-1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

# kubectl apply -f pvc-restore-1.yaml

4.3) Create a pod that uses the PVC restored as rbd-pvc-restore-1

# cat pod-restore-1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-restore-demo-pod-1
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc-restore-1
        readOnly: false

# kubectl apply -f pod-restore-1.yaml
# kubectl get pods csi-rbd-restore-demo-pod-1
NAME READY STATUS RESTARTS AGE
csi-rbd-restore-demo-pod-1 1/1 Running 0 18h
# kubectl exec -ti csi-rbd-restore-demo-pod-1 -- bash
root@csi-rbd-restore-demo-pod-1:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 199G 7.4G 192G 4% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 199G 7.4G 192G 4% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd4 99G 6.8M 99G 1% /var/lib/www/html
tmpfs 7.8G 12K 7.8G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware
root@csi-rbd-restore-demo-pod-1:/# cd /var/lib/www/html
root@csi-rbd-restore-demo-pod-1:/var/lib/www/html# cat demo/test.txt
snapshot test
abc
root@csi-rbd-restore-demo-pod-1:/var/lib/www/html# cat test/demo.txt
abc

// This confirms that restoring data from a second snapshot of the expanded PVC works correctly.

// Check whether the first snapshot contains the files added later; as shown below, it still holds only the data present when the first snapshot was taken.
[root@cka-node-01 rbd]# kubectl exec -ti csi-rbd-restore-demo-pod -- bash
root@csi-rbd-restore-demo-pod:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 199G 7.4G 192G 4% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 199G 7.4G 192G 4% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd3 976M 2.6M 958M 1% /var/lib/www/html
tmpfs 7.8G 12K 7.8G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware
root@csi-rbd-restore-demo-pod:/# cd /var/lib/www/html
root@csi-rbd-restore-demo-pod:/var/lib/www/html# cat demo/test.txt
snapshot test
root@csi-rbd-restore-demo-pod:/var/lib/www/html# ls
demo lost+found
