K8s Image Cache Management: Getting to Know kube-fledged

Don't agonize too much over the present, and don't worry too much about the future. Once you have lived through certain things, the scenery in front of you is no longer what it used to be. — Haruki Murakami

Preface


  • This post is about getting to know kube-fledged, an image cache manager for K8s
  • It covers:
    • A brief introduction to kube-fledged
    • Deployment and basic usage
  • If my understanding falls short anywhere, corrections are welcome

Don't agonize too much over the present, and don't worry too much about the future. Once you have lived through certain things, the scenery in front of you is no longer what it used to be. — Haruki Murakami


Introduction

As we know, when Kubernetes schedules a container, the target node has to pull the container's image first. In some scenarios this becomes a problem:

  • Applications that need to start and/or scale out quickly. For example, an application doing real-time data processing needs to scale rapidly when data volume surges.
  • Images are large and come in multiple versions, node storage is limited, and unneeded images have to be cleaned up dynamically.
  • Serverless functions usually need to react to incoming events and start containers within a fraction of a second.
  • IoT applications running on edge devices that must tolerate intermittent network connectivity between the edge device and the image registry.
  • Images have to be pulled from a private registry and you cannot grant everyone access to that registry, so the images are made available on the cluster's nodes instead.
  • A cluster administrator or operator needs to upgrade an application and wants to verify beforehand that the new image can be pulled successfully.

kube-fledged is a Kubernetes operator that creates and manages a container image cache directly on the worker nodes of a Kubernetes cluster. It lets users define a list of images and the worker nodes onto which those images should be cached (i.e. pre-pulled). As a result, application Pods can start almost instantly, since the images no longer have to be fetched from a registry.

kube-fledged provides a CRUD API to manage the lifecycle of the image cache and supports several configurable parameters, so its behaviour can be tailored to your needs. The whole lifecycle is driven through the ImageCache custom resource, as sketched below.
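
A minimal sketch of that lifecycle with plain kubectl, assuming an ImageCache manifest like the kubefledged-imagecache.yaml used later in this post:

kubectl create -f kubefledged-imagecache.yaml                 # create the cache
kubectl get imagecaches imagecache1 -n kube-fledged -o json   # read its spec and status
kubectl edit imagecaches imagecache1 -n kube-fledged          # update the cached image lists
kubectl delete imagecaches imagecache1 -n kube-fledged        # delete the cache object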

Kubernetes has a built-in image garbage collection mechanism. The kubelet on each node periodically checks whether disk usage has crossed a configurable threshold (set via flags). Once the threshold is crossed, the kubelet automatically removes unused images from the node.
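
For reference, a sketch of the relevant kubelet settings (standard KubeletConfiguration fields; the 85/80 values below are illustrative, not taken from this cluster):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# image GC starts when disk usage exceeds the high threshold...
imageGCHighThresholdPercent: 85
# ...and removes unused images until usage drops below the low threshold
imageGCLowThresholdPercent: 80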

kube-fledged implements an automatic, periodic refresh mechanism. If an image in the cache gets deleted by the kubelet's GC, the next refresh cycle pulls the deleted image back into the cache, which keeps the image cache up to date.
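
The refresh interval is a controller flag. A sketch of what the controller arguments might look like, assuming the --image-cache-refresh-frequency flag described in the project README (verify the flag name and default against the version you deploy):

# fragment of the kubefledged-controller Deployment spec (illustrative values)
containers:
- name: kubefledged-controller
  image: senthilrch/kubefledged-controller:v0.10.0
  args:
  # refresh the image cache every 15 minutes; per the README, "0s" disables periodic refresh
  - "--image-cache-refresh-frequency=15m"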

Architecture

https://github.com/senthilrch/kube-fledged/blob/master/docs/kubefledged-architecture.png

Deploying kube-fledged

Deploying with Helm

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$mkdir kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$cd kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$export KUBEFLEDGED_NAMESPACE=kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl create namespace ${KUBEFLEDGED_NAMESPACE}
namespace/kube-fledged created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo add kubefledged-charts https://senthilrch.github.io/kubefledged-charts/
"kubefledged-charts" has been added to your repositories
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubefledged-charts" chart repository
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "skm" chart repository
...Successfully got an update from the "openkruise" chart repository
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "botkube" chart repository
Update Complete. ⎈Happy Helming!⎈
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm install --verify kube-fledged kubefledged-charts/kube-fledged -n ${KUBEFLEDGED_NAMESPACE} --wait

In practice, the chart could not be downloaded because of network problems, so I deployed with plain YAML via make deploy-using-yaml instead.

Deploying with YAML

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$git clone https://github.com/senthilrch/kube-fledged.git
Cloning into 'kube-fledged'...
remote: Enumerating objects: 10613, done.
remote: Counting objects: 100% (1501/1501), done.
remote: Compressing objects: 100% (629/629), done.
remote: Total 10613 (delta 845), reused 1357 (delta 766), pack-reused 9112
Receiving objects: 100% (10613/10613), 34.58 MiB | 7.33 MiB/s, done.
Resolving deltas: 100% (4431/4431), done.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$ls
kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$cd kube-fledged/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml

On the first deployment, the images could not be pulled:

┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl get all -n kube-fledged
NAME READY STATUS RESTARTS AGE
pod/kube-fledged-controller-df69f6565-drrqg 0/1 CrashLoopBackOff 35 (5h59m ago) 21h
pod/kube-fledged-webhook-server-7bcd589bc4-b7kg2 0/1 Init:CrashLoopBackOff 35 (5h58m ago) 21h
pod/kubefledged-controller-55f848cc67-7f4rl 1/1 Running 0 21h
pod/kubefledged-webhook-server-597dbf4ff5-l8fbh 0/1 Init:CrashLoopBackOff 34 (6h ago) 21h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-fledged-webhook-server ClusterIP 10.100.194.199 <none> 3443/TCP 21h
service/kubefledged-webhook-server ClusterIP 10.101.191.206 <none> 3443/TCP 21h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-fledged-controller 0/1 1 0 21h
deployment.apps/kube-fledged-webhook-server 0/1 1 0 21h
deployment.apps/kubefledged-controller 0/1 1 0 21h
deployment.apps/kubefledged-webhook-server 0/1 1 0 21h

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-fledged-controller-df69f6565 1 1 0 21h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4 1 1 0 21h
replicaset.apps/kubefledged-controller-55f848cc67 1 1 0 21h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5 1 1 0 21h
┌──[root@vms100.liruilongs.github.io]-[~]
└─$

Let's find out which images need to be pulled:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat *.yaml | grep image:
- image: senthilrch/kubefledged-controller:v0.10.0
- image: senthilrch/kubefledged-webhook-server:v0.10.0
- image: senthilrch/kubefledged-webhook-server:v0.10.0

Pull them manually; here I use Ansible to run the pull in batch on all worker nodes:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-cri-client:v0.10.0" -i host.yaml

Pull the other related images the same way; a batch sketch follows.
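
A sketch of doing it in one go with an Ansible loop, assuming the same host.yaml inventory and k8s_node group used above (the playbook file name is made up; the image tags come from the deploy manifests):

# prepull-kubefledged-images.yaml
- hosts: k8s_node
  gather_facts: false
  tasks:
    - name: Pre-pull the kube-fledged images on every worker node
      ansible.builtin.shell: "docker pull docker.io/{{ item }}"
      loop:
        - senthilrch/kubefledged-controller:v0.10.0
        - senthilrch/kubefledged-webhook-server:v0.10.0
        - senthilrch/kubefledged-cri-client:v0.10.0

Run it with: ansible-playbook -i host.yaml prepull-kubefledged-images.yaml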

Once that is done, all the containers are healthy:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl -n kube-fledged get all
NAME READY STATUS RESTARTS AGE
pod/kube-fledged-controller-df69f6565-wdb4g 1/1 Running 0 13h
pod/kube-fledged-webhook-server-7bcd589bc4-j8xxp 1/1 Running 0 13h
pod/kubefledged-controller-55f848cc67-klxlm 1/1 Running 0 13h
pod/kubefledged-webhook-server-597dbf4ff5-ktbsh 1/1 Running 0 13h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-fledged-webhook-server ClusterIP 10.100.194.199 <none> 3443/TCP 36h
service/kubefledged-webhook-server ClusterIP 10.101.191.206 <none> 3443/TCP 36h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-fledged-controller 1/1 1 1 36h
deployment.apps/kube-fledged-webhook-server 1/1 1 1 36h
deployment.apps/kubefledged-controller 1/1 1 1 36h
deployment.apps/kubefledged-webhook-server 1/1 1 1 36h

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-fledged-controller-df69f6565 1 1 1 36h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4 1 1 1 36h
replicaset.apps/kubefledged-controller-55f848cc67 1 1 1 36h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5 1 1 1 36h

Verify that the installation succeeded:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get pods -n kube-fledged -l app=kubefledged
NAME READY STATUS RESTARTS AGE
kubefledged-controller-55f848cc67-klxlm 1/1 Running 0 16h
kubefledged-webhook-server-597dbf4ff5-ktbsh 1/1 Running 0 16h
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.

Using kube-fledged

Creating an image cache object

Create an image cache object based on the demo file:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$cd deploy/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - ghcr.io/jitesoft/nginx:1.23.1
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - us.gcr.io/k8s-artifacts-prod/cassandra:v7
    - us.gcr.io/k8s-artifacts-prod/etcd:3.5.4-0
    nodeSelector:
      tier: backend
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  imagePullSecrets:
  - name: myregistrykey

The images in the official demo cannot be pulled from my environment, so let's swap them out:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$docker pull us.gcr.io/k8s-artifacts-prod/cassandra:v7
Error response from daemon: Get "https://us.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

To exercise the node selector, pick one node's label and cache an image on that node only:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get nodes --show-labels

We also pull straight from a public registry, so the imagePullSecrets entry is not needed.

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$vim kubefledged-imagecache.yaml

The modified YAML file:

  • adds a liruilong/my-busybox:latest entry with no node selector, so it is cached on every node
  • adds a liruilong/hikvision-sdk-config-ftp:latest entry with the node selector kubernetes.io/hostname: vms105.liruilongs.github.io
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

Creating it directly fails:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
Error from server (InternalError): error when creating "kubefledged-imagecache.yaml": Internal error occurred: failed calling webhook "validate-image-cache.kubefledged.io": failed to call webhook: Post "https://kubefledged-webhook-server.kube-fledged.svc:3443/validate-image-cache?timeout=1s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubefledged.io")
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

The fix: delete the corresponding objects and deploy them again.

I found the solution under one of the project's issues: https://github.com/senthilrch/kube-fledged/issues/76

It appears the webhook CA bundle is hardcoded in the manifests, but when the webhook server starts, its init-server generates a new CA bundle and updates the webhook configuration. When another deployment is applied, the original CA bundle is re-applied, and webhook requests keep failing until the webhook component is restarted again so that the bundle gets patched.
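
Given that explanation, restarting just the webhook server Deployment so that init-server re-patches the CA bundle may also be enough; an untested sketch:

kubectl rollout restart deployment kubefledged-webhook-server -n kube-fledged
kubectl rollout status deployment kubefledged-webhook-server -n kube-fledged --watch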

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make remove-kubefledged-and-operator
# Remove kubefledged
kubectl delete -f deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml
error: resource mapping not found for name: "kube-fledged" namespace: "kube-fledged" from "deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml": no matches for kind "KubeFledged" in version "charts.helm.kubefledged.io/v1alpha2"
ensure CRDs are installed first
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
namespace/kube-fledged created
kubectl apply -f deploy/kubefledged-crd.yaml
customresourcedefinition.apiextensions.k8s.io/imagecaches.kubefledged.io unchanged
....................
kubectl rollout status deployment kubefledged-webhook-server -n kube-fledged --watch
Waiting for deployment "kubefledged-webhook-server" rollout to finish: 0 of 1 updated replicas are available...
deployment "kubefledged-webhook-server" successfully rolled out
kubectl get pods -n kube-fledged
NAME READY STATUS RESTARTS AGE
kubefledged-controller-55f848cc67-76c4v 1/1 Running 0 112s
kubefledged-webhook-server-597dbf4ff5-56h6z 1/1 Running 0 66s

Recreate the cache object; this time it is created successfully:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
imagecache.kubefledged.io/imagecache1 created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
NAME AGE
imagecache1 10s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

Inspect the image cache that is now being managed:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl get imagecaches imagecache1 -n kube-fledged -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 83,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20169836",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:06:47Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheRefresh",
        "startTime": "2024-03-02T01:05:33Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$

Verify with Ansible:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/hikvision-sdk-config-ftp" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | CHANGED | rc=0 >>
liruilong/hikvision-sdk-config-ftp latest a02cd03b4342 4 months ago 830MB
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
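
To actually benefit from the cache, the Pods that use these images should not force a pull on every start. A minimal sketch (the Pod name is made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-cache-test
spec:
  containers:
  - name: busybox
    image: liruilong/my-busybox:latest
    # IfNotPresent lets the kubelet reuse the pre-pulled image from the cache;
    # with a :latest tag the default policy is Always, which would still hit the registry.
    imagePullPolicy: IfNotPresent
    command: ["sleep", "3600"]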

Refreshing the image cache on demand

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl annotate imagecaches imagecache1 -n kube-fledged kubefledged.io/refresh-imagecache=
imagecache.kubefledged.io/imagecache1 annotated
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Adding an image to the cache

Add a new image to the cache by extending the image list in place with kubectl edit; a sketch of the change follows, and the resulting object is shown after it.
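
A sketch of the edit, assuming kubectl edit on the existing object (only the first cacheSpec entry changes):

# kubectl edit imagecaches imagecache1 -n kube-fledged
spec:
  cacheSpec:
  - images:
    - liruilong/my-busybox:latest
    - liruilong/jdk1.8_191:latest       # newly added image, cached on all nodes
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io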

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 92,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175233",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest",
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:43:32Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:40:34Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Confirm with Ansible. The first run was taken while the pull was still in progress, the second after it had finished:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.102 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/jdk1.8_191 latest 17dbd4002a8c 5 years ago 170MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Removing images from the cache
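
To remove an image from the cache, delete it from the corresponding images list with kubectl edit; kube-fledged then removes the cached image from the affected nodes. A sketch of the edit (liruilong/my-busybox:latest is dropped from the first entry):

# kubectl edit imagecaches imagecache1 -n kube-fledged
spec:
  cacheSpec:
  - images:
    - liruilong/jdk1.8_191:latest       # my-busybox removed, so it gets deleted from the nodes
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io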

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 94,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175766",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "message": "Image cache is being updated. Please view the status after some time",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:48:03Z",
        "status": "Processing"
    }
}

Confirm with Ansible: the cached image is removed from master and worker nodes alike. The first run was taken while the deletion was still in progress, the second after it had finished:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox latest 497b83a63aad 11 months ago 1.24MB
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Note that if you want to purge every image of a list from the cache, you have to set the images array to "" rather than deleting the list entry; see the sketch below.
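
A sketch of that edit (the first image list is emptied with "", the node-selector entry is left untouched):

# kubectl edit imagecaches imagecache1 -n kube-fledged
spec:
  cacheSpec:
  - images:
    - ""                                # an empty string purges this list's cached images
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io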

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io -n kube-fledged imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 98,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20176849",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    ""
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:52:16Z",
        "message": "All cached images succesfully deleted from respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:51:47Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

If instead you try to remove an image list by simply commenting out the corresponding lines, as below:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  #- images:
  #  - liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

then the validating webhook rejects the change, because the number of image lists in cacheSpec is not allowed to change:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
error: imagecaches.kubefledged.io "imagecache1" could not be patched: admission webhook "validate-image-cache.kubefledged.io" denied the request: Mismatch in no. of image lists
You can run `kubectl replace -f /tmp/kubectl-edit-4113815075.yaml` to try this update again.

References

© The referenced content is copyrighted by its original authors; please let me know if anything infringes. And if you find the project useful, don't be stingy with a star :)


https://github.com/senthilrch/kube-fledged


© 2018-2024 liruilonger@gmail.com, All rights reserved. Licensed under Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0).

Published: 2024-02-28
Updated: 2024-03-05
