  1. One of the benefits of a StorageClass is support for dynamic PV provisioning; it can effectively be treated as a template for creating PVs. When users need persistent storage they create PVCs that bind to matching PVs. Because this is such a frequent operation, and because manually created PVs often cannot satisfy every PVC's requirements, having the system dynamically create PVs that fit each PVC brings great flexibility to storage management. Note that only PVCs and PVs belonging to the same StorageClass can bind to each other; a PVC that specifies no StorageClass can only bind to a PV that likewise has none.
  2. The name of a StorageClass object matters: it is the identifier users reference. Besides the name, three key fields must be defined when creating a StorageClass: provisioner, parameters, and reclaimPolicy (a minimal sketch follows this list).
  3. Kubernetes therefore provides a dynamic allocation mechanism that can create PVs automatically, built on the StorageClass API. For example, if 1 TiB on a storage node is handed over to Kubernetes and a user requests a 5Gi PVC, a 5Gi PV is automatically carved out of that 1 TiB and bound to the claim.
  4. Enabling dynamic PV provisioning requires creating a StorageClass in advance. The procedure differs from provisioner to provisioner, and not every volume plugin has built-in Kubernetes support for dynamic PV provisioning.
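
A minimal sketch of a StorageClass showing those three fields (the provisioner string and parameter values here are illustrative placeholders, not the NFS setup used later in this article; the concrete NFS StorageClass appears in section 2.4):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                          # the name users reference from their PVCs
provisioner: example.com/some-provisioner   # which (external) provisioner creates PVs for this class
parameters:                                 # passed to the provisioner as-is; valid keys depend on the provisioner
  type: fast
reclaimPolicy: Delete                       # fate of dynamically created PVs once their PVC is released (Delete or Retain)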

2. Dynamic provisioning backed by NFS

Kubernetes does not ship with a built-in NFS provisioner, so an external one is required. nfs-subdir-external-provisioner is an automatic provisioner that uses an existing NFS server to back dynamic provisioning.

An nfs-subdir-external-provisioner instance watches for PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.

GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
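
Besides applying the manifests by hand as done below, the project's README also documents a Helm chart. A hedged sketch of that route (chart repository and value names as given in the upstream README; versions and values may differ in your environment):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.0.0.15 \
    --set nfs.path=/data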

2.1 Prepare the shared directory on the NFS server

This step decides which directory on the NFS server will be handed over to Kubernetes, and exports it.

[root@kn-server-node02-15 ~]# ll /data/
total 0
[root@kn-server-node02-15 ~]# showmount -e 10.0.0.15
Export list for 10.0.0.15:
/data 10.0.0.0/24
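
The export above comes from an entry along these lines on the NFS server (a sketch only; the export options are assumptions, adjust them to your environment):

[root@kn-server-node02-15 ~]# mkdir -p /data
[root@kn-server-node02-15 ~]# echo "/data 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
[root@kn-server-node02-15 ~]# systemctl enable --now nfs-server
[root@kn-server-node02-15 ~]# exportfs -r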

2.2 Create RBAC for the NFS provisioner

First, create the RBAC permissions the provisioner needs.

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

2.3 Deploy the NFS provisioner

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # this k8s.gcr.io image is hard to pull from mainland China; a mirrored copy is
          # published on my Docker Hub as lihuahaitang/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # provisioner name; the StorageClass's provisioner field must match this value
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            # address of the NFS server
            - name: NFS_SERVER
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-provisioner-deploy.yaml
deployment.apps/nfs-client-provisioner created

The Pod is running normally:

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57d6d9d5f6-dcxgq   1/1     Running   0          2m25s

Use describe to inspect the Pod in detail:
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pods nfs-client-provisioner-57d6d9d5f6-dcxgq
Name: nfs-client-provisioner-57d6d9d5f6-dcxgq
Namespace: default
Priority: 0
Node: kn-server-node02-15/10.0.0.15
Start Time: Mon, 28 Nov 2022 11:19:33 +0800
Labels: app=nfs-client-provisioner
pod-template-hash=57d6d9d5f6
Annotations: <none>
Status: Running
IP: 192.168.2.82
IPs:
IP: 192.168.2.82
Controlled By: ReplicaSet/nfs-client-provisioner-57d6d9d5f6
Containers:
nfs-client-provisioner:
Container ID: docker://b5ea240a8693185be681714747f8e0a9f347492a24920dd68e629effb3a7400f
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   (image pulled from k8s.gcr.io)
Image ID: docker-pullable://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 28 Nov 2022 11:20:12 +0800
Ready: True
Restart Count: 0
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: 10.0.0.15
NFS_PATH: /data
Mounts:
/persistentvolumes from nfs-client-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2z8w (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.0.0.15
Path: /data
ReadOnly: false
kube-api-access-q2z8w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m11s default-scheduler Successfully assigned default/nfs-client-provisioner-57d6d9d5f6-dcxgq to kn-server-node02-15
Normal Pulling 3m11s kubelet Pulling image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
Normal Pulled 2m32s kubelet Successfully pulled image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" in 38.965869132s
Normal Created 2m32s kubelet Created container nfs-client-provisioner
Normal Started 2m32s kubelet Started container nfs-client-provisioner
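
If provisioning misbehaves later on, the provisioner's own log is the first place to look; a quick check (the log content will vary):

[root@kn-server-master01-13 nfs-provisioner]# kubectl logs deploy/nfs-client-provisioner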

2.4 Create the StorageClass

Create the StorageClass that points at the NFS dynamic provisioner.

[root@kn-server-master01-13 nfs-provisioner]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # the StorageClass name that PVCs reference explicitly when claiming storage
  name: nfs-provisioner-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
# provisioner name; must match the PROVISIONER_NAME defined in the Deployment above
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  # "false": deleting the PVC also deletes the directory contents; "true": the data is archived and kept
  archiveOnDelete: "false"
  # template for the per-volume directory path; by default a randomly named directory is used
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/nfs-provisioner-storage created

"storageclass" can be abbreviated as "sc":

[root@kn-server-master01-13 nfs-provisioner]# kubectl get sc
NAME                      PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-provisioner-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3s

describe shows the full configuration:
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe sc
Name: nfs-provisioner-storage
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"nfs-provisioner-storage"},"parameters":{"archiveOnDelete":"false","pathPattern":"${.PVC.namespace}/${.PVC.name}"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Parameters: archiveOnDelete=false,pathPattern=${.PVC.namespace}/${.PVC.name}
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
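
Because IsDefaultClass is Yes (set via the is-default-class annotation), this class also serves PVCs that omit storageClassName entirely. A minimal sketch, with a hypothetical claim name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-uses-default-class     # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # no storageClassName here: the default class (nfs-provisioner-storage) is applied automatically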

2.5 Create a PVC and let the PV be created and bound automatically

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: "nfs-provisioner-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 0.5Gi

The generated PV gets a random name, while the directory backing it on the NFS server follows the pathPattern defined in the StorageClass:
[root@kn-server-node02-15 data]# ls
default
[root@kn-server-node02-15 data]# ll default/
total 0
drwxrwxrwx 2 root root 6 Nov 28 13:56 nfs-pvc-test
[root@kn-server-master01-13 pv]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS              REASON   AGE
pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f   512Mi      RWX            Delete           Bound    default/nfs-pvc-test   nfs-provisioner-storage            5m19s
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pv pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Name: pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers: [kubernetes.io/pv-protection]
StorageClass: nfs-provisioner-storage
Status: Bound
Claim: default/nfs-pvc-test
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 512Mi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.0.0.15
Path: /data/default/nfs-pvc-test
ReadOnly: false
Events:            <none>

describe on the PVC shows even more detail:

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name: nfs-pvc-test
Namespace: default
StorageClass: nfs-provisioner-storage
Status: Bound
Volume: pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 512Mi   (the requested storage size)
Access Modes: RWX   (the volume's access modes)
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 13m persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
Normal Provisioning 13m k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778 External provisioner is provisioning volume for claim "default/nfs-pvc-test"
Normal ProvisioningSucceeded 13m k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778 Successfully provisioned volume pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f

2.6 Create a Pod and verify that the data persists

[root@kn-server-master01-13 nfs-provisioner]# cat nginx-pvc-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-page
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nginx-page
      persistentVolumeClaim:
        claimName: nfs-pvc-test
[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nginx-pvc-test.yaml
pod/nginx-sc created
[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name: nfs-pvc-test
Namespace: default
StorageClass: nfs-provisioner-storage
Status: Bound
Volume: pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 512Mi
Access Modes: RWX
VolumeMode: Filesystem
Used By:       nginx-sc

Used By shows that the Pod nginx-sc is consuming this PVC, matching the Pod created above.

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods nginx-sc
NAME READY STATUS RESTARTS AGE
nginx-sc   1/1     Running   0          2m43s

Write some data into the share on the NFS server:

[root@kn-server-node02-15 data]# echo "haitang" > /data/default/nfs-pvc-test/index.html

Access test (curl against the nginx Pod's IP):
[root@kn-server-master01-13 nfs-provisioner]# curl 192.168.2.83
haitang
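
Since the StorageClass uses reclaimPolicy Delete and archiveOnDelete: "false", deleting the PVC should also remove the dynamically created PV and the backing directory on the NFS server. A sketch of that check (not executed in this walkthrough):

[root@kn-server-master01-13 nfs-provisioner]# kubectl delete pod nginx-sc
[root@kn-server-master01-13 nfs-provisioner]# kubectl delete pvc nfs-pvc-test
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pv            # the pvc-... volume should be gone
[root@kn-server-node02-15 ~]# ls /data/default/                         # the nfs-pvc-test directory should have been removed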
