A Deep Dive into K8s Image Cache Management with kube-fledged
This article is shared from the Huawei Cloud community post "K8s Image Cache Management with kube-fledged" (《K8s 镜像缓存管理 kube-fledged 认知》), by 山河已无恙.
As we know, when Kubernetes schedules a container onto a node, that node first has to pull the container's image. In several scenarios this becomes a problem:

- Applications that need to start and/or scale out quickly. For example, an application doing real-time data processing has to scale rapidly when data volume surges.
- Images are large and come in multiple versions, while node storage is limited, so unneeded images have to be cleaned up dynamically.
- Serverless functions usually have to react to incoming events and start containers within a fraction of a second.
- IoT applications running on edge devices have to tolerate intermittent network connectivity between the edge device and the image registry.
- Images have to be pulled from a private registry and not everyone can be granted pull access to it, so the images are made available on the cluster's nodes instead.
- A cluster administrator or operator wants to upgrade an application and verify beforehand that the new image can be pulled successfully.
kube-fledged is a Kubernetes operator that creates and manages caches of container images directly on the worker nodes of a Kubernetes cluster. It lets users define a list of images and the worker nodes onto which those images should be cached (i.e. pulled in advance). As a result, application Pods start almost instantly, since the images no longer need to be fetched from a registry.

kube-fledged provides a CRUD API to manage the lifecycle of the image cache and supports several configurable parameters, so its behavior can be tailored to your needs.
Kubernetes has a built-in image garbage collection mechanism: the kubelet on each node periodically checks whether disk usage has crossed a certain threshold (configurable via flags), and once it has, the kubelet automatically removes all unused images on that node.

The solution therefore needs an automatic, periodic refresh mechanism: if an image in the cache is removed by the kubelet's GC, the next refresh cycle pulls the deleted image back into the cache, which keeps the image cache up to date.
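How often this refresh runs is governed by a controller flag. A minimal sketch of tuning it, assuming the flag name --image-cache-refresh-frequency from the upstream README (verify it against the version you deploy; "0s" is documented there as disabling refresh):

# Hedged example: adjust the refresh interval in the controller Deployment args
kubectl -n kube-fledged edit deployment kubefledged-controller
#   args:
#   - "--image-cache-refresh-frequency=10m"   # assumed flag name; check your release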
Design flow
https://github.com/senthilrch/kube-fledged/blob/master/docs/kubefledged-architecture.png
Deploying kube-fledged
Deploying with Helm
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$mkdir kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$cd kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$export KUBEFLEDGED_NAMESPACE=kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl create namespace ${KUBEFLEDGED_NAMESPACE}
namespace/kube-fledged created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo add kubefledged-charts https://senthilrch.github.io/kubefledged-charts/
"kubefledged-charts" has been added to your repositories
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubefledged-charts" chart repository
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "skm" chart repository
...Successfully got an update from the "openkruise" chart repository
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "botkube" chart repository
Update Complete. ⎈Happy Helming!⎈
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm install --verify kube-fledged kubefledged-charts/kube-fledged -n ${KUBEFLEDGED_NAMESPACE} --wait
In the actual deployment the chart could not be downloaded because of network problems, so the YAML manifests were applied instead via make deploy-using-yaml.
Deploying with YAML files
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$git clone https://github.com/senthilrch/kube-fledged.git
Cloning into 'kube-fledged'...
remote: Enumerating objects: 10613, done.
remote: Counting objects: 100% (1501/1501), done.
remote: Compressing objects: 100% (629/629), done.
remote: Total 10613 (delta 845), reused 1357 (delta 766), pack-reused 9112
Receiving objects: 100% (10613/10613), 34.58 MiB | 7.33 MiB/s, done.
Resolving deltas: 100% (4431/4431), done.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$ls
kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$cd kube-fledged/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
The first deployment attempt fails because the images cannot be pulled:
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl get all -n kube-fledged
NAME READY STATUS RESTARTS AGE
pod/kube-fledged-controller-df69f6565-drrqg 0/1 CrashLoopBackOff 35 (5h59m ago) 21h
pod/kube-fledged-webhook-server-7bcd589bc4-b7kg2 0/1 Init:CrashLoopBackOff 35 (5h58m ago) 21h
pod/kubefledged-controller-55f848cc67-7f4rl 1/1 Running 0 21h
pod/kubefledged-webhook-server-597dbf4ff5-l8fbh 0/1 Init:CrashLoopBackOff 34 (6h ago) 21h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-fledged-webhook-server ClusterIP 10.100.194.199 <none> 3443/TCP 21h
service/kubefledged-webhook-server ClusterIP 10.101.191.206 <none> 3443/TCP 21h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-fledged-controller 0/1 1 0 21h
deployment.apps/kube-fledged-webhook-server 0/1 1 0 21h
deployment.apps/kubefledged-controller 0/1 1 0 21h
deployment.apps/kubefledged-webhook-server 0/1 1 0 21h

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-fledged-controller-df69f6565 1 1 0 21h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4 1 1 0 21h
replicaset.apps/kubefledged-controller-55f848cc67 1 1 0 21h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5 1 1 0 21h
┌──[root@vms100.liruilongs.github.io]-[~]
└─$
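Before chasing images, it is worth confirming from the pod events that this really is an image pull problem; a quick check (pod name taken from the listing above):

kubectl -n kube-fledged describe pod kube-fledged-controller-df69f6565-drrqg | tail -n 20
kubectl -n kube-fledged get events --sort-by=.lastTimestamp | tail -n 20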
Let's find out which images need to be pulled:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat *.yaml | grep image:
- image: senthilrch/kubefledged-controller:v0.10.0
- image: senthilrch/kubefledged-webhook-server:v0.10.0
- image: senthilrch/kubefledged-webhook-server:v0.10.0
Pull them manually for now; Ansible is used here to run the pull on all worker nodes in one batch:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-cri-client:v0.10.0" -i host.yaml
Pull the other related images in the same way.
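For example, the controller and webhook-server images found above can be pre-pulled on every node with the same Ansible pattern (a sketch; the k8s_node group and host.yaml inventory are the ones used in this environment):

ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-controller:v0.10.0" -i host.yaml
ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-webhook-server:v0.10.0" -i host.yaml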
Once that is done, all the containers come up healthy:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl -n kube-fledged get all
NAME READY STATUS RESTARTS AGE
pod/kube-fledged-controller-df69f6565-wdb4g 1/1 Running 0 13h
pod/kube-fledged-webhook-server-7bcd589bc4-j8xxp 1/1 Running 0 13h
pod/kubefledged-controller-55f848cc67-klxlm 1/1 Running 0 13h
pod/kubefledged-webhook-server-597dbf4ff5-ktbsh 1/1 Running 0 13h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-fledged-webhook-server ClusterIP 10.100.194.199 <none> 3443/TCP 36h
service/kubefledged-webhook-server ClusterIP 10.101.191.206 <none> 3443/TCP 36h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-fledged-controller 1/1 1 1 36h
deployment.apps/kube-fledged-webhook-server 1/1 1 1 36h
deployment.apps/kubefledged-controller 1/1 1 1 36h
deployment.apps/kubefledged-webhook-server 1/1 1 1 36h

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-fledged-controller-df69f6565 1 1 1 36h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4 1 1 1 36h
replicaset.apps/kubefledged-controller-55f848cc67 1 1 1 36h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5 1 1 1 36h
Verify that the installation succeeded:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get pods -n kube-fledged -l app=kubefledged
NAME READY STATUS RESTARTS AGE
kubefledged-controller-55f848cc67-klxlm 1/1 Running 0 16h
kubefledged-webhook-server-597dbf4ff5-ktbsh 1/1 Running 0 16h
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
Creating an image cache object with kube-fledged
Create an image cache object based on the demo file:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$cd deploy/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - ghcr.io/jitesoft/nginx:1.23.1
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - us.gcr.io/k8s-artifacts-prod/cassandra:v7
    - us.gcr.io/k8s-artifacts-prod/etcd:3.5.4-0
    nodeSelector:
      tier: backend
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  imagePullSecrets:
  - name: myregistrykey
The images in the official demo cannot be pulled from here, so let's swap them out:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$docker pull us.gcr.io/k8s-artifacts-prod/cassandra:v7
Error response from daemon: Get "https://us.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
To exercise the node selector, pick one node's label so an image can be cached on that node only:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get nodes --show-labels
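If you would rather not rely on the built-in kubernetes.io/hostname label, a dedicated label can be attached to the target node and used in the nodeSelector instead; a purely illustrative sketch reusing the tier=backend label from the upstream demo:

kubectl label node vms105.liruilongs.github.io tier=backend
kubectl get nodes -l tier=backend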
Since the images are pulled directly from a public registry, no imagePullSecrets entry is needed.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$vim kubefledged-imagecache.yaml
The modified YAML file:
- adds a liruilong/my-busybox:latest cache entry for all nodes;
- adds a liruilong/hikvision-sdk-config-ftp:latest cache entry restricted by the node selector kubernetes.io/hostname: vms105.liruilongs.github.io.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
Creating it right away fails with an error:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
Error from server (InternalError): error when creating "kubefledged-imagecache.yaml": Internal error occurred: failed calling webhook "validate-image-cache.kubefledged.io": failed to call webhook: Post "https://kubefledged-webhook-server.kube-fledged.svc:3443/validate-image-cache?timeout=1s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubefledged.io")
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
Fix: delete the corresponding objects and recreate them

I found the solution under one of the project's issues: https://github.com/senthilrch/kube-fledged/issues/76

It appears that the webhook CA is hard-coded in the manifests, but when the webhook server (init-server) starts, it generates a new CA bundle and updates the webhook configuration. When another deployment happens, the original CA bundle is re-applied and webhook requests start failing, until the webhook component is restarted again so that it can patch the bundle.
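Given that explanation, an alternative to a full redeploy is to restart the webhook server so that it regenerates the CA bundle and patches the webhook configuration again; a sketch (the exact ValidatingWebhookConfiguration object name may differ between releases, hence the grep):

kubectl get validatingwebhookconfigurations | grep -i fledged
kubectl -n kube-fledged rollout restart deployment kubefledged-webhook-server
kubectl -n kube-fledged rollout status deployment kubefledged-webhook-server --watch

In this case, though, everything was removed and reapplied with the project's Makefile targets: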
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make remove-kubefledged-and-operator
# Remove kubefledged
kubectl delete -f deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml
error: resource mapping not found for name: "kube-fledged" namespace: "kube-fledged" from "deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml": no matches for kind "KubeFledged" in version "charts.helm.kubefledged.io/v1alpha2"
ensure CRDs are installed first
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
namespace/kube-fledged created
kubectl apply -f deploy/kubefledged-crd.yaml
customresourcedefinition.apiextensions.k8s.io/imagecaches.kubefledged.io unchanged
....................
kubectl rollout status deployment kubefledged-webhook-server -n kube-fledged --watch
Waiting for deployment "kubefledged-webhook-server" rollout to finish: 0 of 1 updated replicas are available...
deployment "kubefledged-webhook-server" successfully rolled out
kubectl get pods -n kube-fledged
NAME READY STATUS RESTARTS AGE
kubefledged-controller-55f848cc67-76c4v 1/1 Running 0 112s
kubefledged-webhook-server-597dbf4ff5-56h6z 1/1 Running 0 66s
Recreate the cache object; this time it succeeds:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
imagecache.kubefledged.io/imagecache1 created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
NAME AGE
imagecache1 10s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
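While the controller pulls the images, progress can be followed through the object's status fields; a small sketch:

kubectl get imagecaches imagecache1 -n kube-fledged -o jsonpath='{.status.status}{"\n"}{.status.message}{"\n"}'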
View the image cache that is now being managed:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl get imagecaches imagecache1 -n kube-fledged -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 83,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20169836",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:06:47Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheRefresh",
        "startTime": "2024-03-02T01:05:33Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$
Verify with Ansible:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/hikvision-sdk-config-ftp" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | CHANGED | rc=0 >>
liruilong/hikvision-sdk-config-ftp latest a02cd03b4342 4 months ago 830MB
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
Enable automatic refresh
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl annotate imagecaches imagecache1 -n kube-fledged kubefledged.io/refresh-imagecache=
imagecache.kubefledged.io/imagecache1 annotated
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
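Besides the refresh annotation above, the upstream README also documents a purge annotation that deletes all the images cached by the object; treat the annotation key below as an assumption and verify it against your release before relying on it:

# Assumed annotation key from the upstream README; double-check for your version
kubectl annotate imagecaches imagecache1 -n kube-fledged kubefledged.io/purge-imagecache=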
Adding images to the cache
Add a new image to the cache.
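The object below is what imagecache1 looks like after liruilong/jdk1.8_191:latest has been added to the first image list. The edit itself can be done with kubectl edit, or with a JSON patch along these lines (a sketch):

kubectl patch imagecaches imagecache1 -n kube-fledged --type=json \
  -p='[{"op":"add","path":"/spec/cacheSpec/0/images/-","value":"liruilong/jdk1.8_191:latest"}]'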
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 92,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175233",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest",
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:43:32Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:40:34Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
Confirm with Ansible. The first run was executed while the pull was still in progress, so it fails; shortly afterwards the image is present on every node:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.102 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
Deleting images from the cache
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
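Here the liruilong/my-busybox:latest entry was removed from the first image list with kubectl edit; a JSON patch along these lines would do the same (a sketch; index 0 because my-busybox was the first entry):

kubectl patch imagecaches imagecache1 -n kube-fledged --type=json \
  -p='[{"op":"remove","path":"/spec/cacheSpec/0/images/0"}]'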
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 94,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175766",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "message": "Image cache is being updated. Please view the status after some time",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:48:03Z",
        "status": "Processing"
    }
}
Confirm with Ansible: on the master nodes as well as the worker nodes, the cached image is cleaned up (the first run catches the deletion mid-flight; the second shows it gone everywhere):
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
Note that to clear all the images in an image list, the array under images has to be written as a single empty string "":
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 98,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20176849",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    ""
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:52:16Z",
        "message": "All cached images succesfully deleted from respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:51:47Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
If instead you try to remove an image list by simply commenting out its entry:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  #- images:
  #- liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
then the following error is reported:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
error: imagecaches.kubefledged.io "imagecache1" could not be patched: admission webhook "validate-image-cache.kubefledged.io" denied the request: Mismatch in no. of image lists
You can run `kubectl replace -f /tmp/kubectl-edit-4113815075.yaml` to try this update again.
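In other words, the validating webhook rejects edits that change the number of image lists under cacheSpec. To drop a whole list, either keep the entry and empty it as shown above, or delete and recreate the ImageCache object, for example:

kubectl delete imagecaches imagecache1 -n kube-fledged
kubectl create -f kubefledged-imagecache.yaml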
References
The linked content referenced in this post is copyrighted by its original authors; please let me know if anything infringes. If you find the project useful, don't be stingy with a star :)
https://github.com/senthilrch/kube-fledged
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; usin ...