Notes on Deployments in Kubernetes

Affection arises one knows not whence, and runs ever deeper; a pity that most of it turns from deep to shallow, until we forget each other like fish in rivers and lakes. So it is with me. — 烽火戏诸侯, 《雪中悍刀行》

Preface


  • Studying K8s involves these topics; I'm organizing notes to help commit them to memory
  • This post covers:
    • creating a Deployment
    • scaling Pods up and down through a Deployment
    • rolling container-image updates and rollbacks through a Deployment
    • scaling Pods via HPA still has some issues, possibly due to my machine; I'll add more once it's resolved
    • this part is a bit disorganized and needs further tidying



deployment

Deployment is a new concept introduced in Kubernetes v1.2 to better solve the problem of Pod orchestration. Internally, a Deployment uses a ReplicaSet to achieve this. Whether judged by its purpose, its YAML definition, or its command-line operations, a Deployment can be regarded as an upgrade of the RC; the two are more than 90% alike.

The biggest improvement of a Deployment over an RC is that we can check the progress of a Pod "deployment" at any time. Because the full sequence of creating a Pod, scheduling it, binding it to a node, and starting the corresponding containers on the target Node takes time, the target state of N running Pod replicas is really the end state of a continuously changing "deployment process".

Typical use cases for Deployments:

  • Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see whether it succeeded.
  • Declare a new state for the Pods by updating the Deployment's PodTemplateSpec. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
  • Roll back to an earlier Deployment revision if the current state is unstable. Each rollback updates the Deployment's revision.
  • Scale the Deployment up to handle more load.
  • Pause the Deployment to apply multiple changes to its PodTemplateSpec, then resume it to start a new rollout.
  • Use the Deployment status to determine whether a rollout is stuck.
  • Clean up older ReplicaSets that are no longer needed.

ReplicaSet

The purpose of a ReplicaSet is to maintain a stable set of replica Pods that are running at any given time. It is therefore commonly used to guarantee the availability of a specified number of identical Pods.

How a ReplicaSet works
A ReplicaSet is defined by a set of fields, including:

  • a selector that identifies the set of Pods it can acquire,
  • a number indicating how many replicas it should maintain,
  • a Pod template specifying the Pods it should create to satisfy the replica count, and so on.

A ReplicaSet fulfills its purpose by creating and deleting Pods as needed to reach the desired replica count. When it needs to create new Pods, it uses the provided Pod template.

A ReplicaSet is linked to its Pods through the Pods' metadata.ownerReferences field, which names the owning resource of the object. Every Pod acquired by a ReplicaSet carries the owning ReplicaSet's identity in its ownerReferences field. It is through this link that the ReplicaSet knows the state of the Pods it maintains and plans its actions accordingly.

A ReplicaSet uses its selector to identify the set of Pods to acquire. If a Pod has no OwnerReference, or its OwnerReference is not a controller, and it matches a ReplicaSet's selector, that Pod is immediately acquired by the ReplicaSet.
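The acquisition rule just described can be sketched in a few lines of Python (a simplified illustration only, not the real controller code; the function names and dict shapes are hypothetical):

```python
# Sketch of the ReplicaSet acquisition rule: a Pod is adoptable when its
# labels match the selector and no controller already owns it.

def matches_selector(selector: dict, labels: dict) -> bool:
    """True when every key/value pair in the selector appears in the Pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def can_acquire(pod: dict, rs_selector: dict) -> bool:
    """A Pod with no controller ownerReference that matches the selector is acquired."""
    has_controller = any(ref.get("controller") for ref in pod.get("ownerReferences", []))
    return not has_controller and matches_selector(rs_selector, pod.get("labels", {}))

selector = {"tier": "frontend"}
orphan = {"labels": {"tier": "frontend", "app": "guestbook"}, "ownerReferences": []}
owned = {"labels": {"tier": "frontend"},
         "ownerReferences": [{"kind": "ReplicaSet", "name": "frontend", "controller": True}]}

print(can_acquire(orphan, selector))  # True  -> the ReplicaSet adopts this Pod
print(can_acquire(owned, selector))   # False -> already controlled by another owner
```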

When to use a ReplicaSet

A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. A Deployment, however, is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features. We therefore recommend using Deployments instead of ReplicaSets directly, unless you require custom update orchestration or don't need updates at all.

In practice this means you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in its spec section.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: nginx

Preparing the lab environment

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$dir=k8s-deploy-create ;mkdir $dir;cd $dir
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get ns
NAME STATUS AGE
default Active 78m
kube-node-lease Active 79m
kube-public Active 79m
kube-system Active 79m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl create ns liruilong-deploy-create
namespace/liruilong-deploy-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-deploy-create
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl config view | grep namespace
namespace: liruilong-deploy-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Creating a Deployment from a YAML file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$vim ngixndeplog.yaml

ngixndeplog.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 created

Inspect the created Deployment

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
web1 2/3 3 2 37s nginx nginx app=web1

Inspect the created ReplicaSet

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
web1-66b5fd9bc8 3 3 3 4m28s nginx nginx app=web1,pod-template-hash=66b5fd9bc8

Inspect the created Pods

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web1-66b5fd9bc8-2wpkr 1/1 Running 0 3m45s 10.244.171.131 vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-9lxh2 1/1 Running 0 3m45s 10.244.171.130 vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-s9w7g 1/1 Running 0 3m45s 10.244.70.3 vms83.liruilongs.github.io <none> <none>

Scaling Pods up and down

In real production systems we frequently need to scale a service out, and we may also need to shrink the number of service instances when resources are tight or the workload drops. In these cases we can use the Scale mechanism of a Deployment/RC. Kubernetes offers two modes for scaling Pods: manual and automatic.

In manual mode, running the kubectl scale command against a Deployment/RC sets the Pod replica count in a single step.

In automatic mode, the user chooses a performance metric or a custom business metric and specifies a range for the Pod replica count; the system then adjusts automatically within that range as the metric changes.

Manual mode

Modify from the command line: kubectl scale deployment web1 --replicas=2

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl scale deployment web1 --replicas=2
deployment.apps/web1 scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web1-66b5fd9bc8-2wpkr 1/1 Running 0 8m19s 10.244.171.131 vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-s9w7g 1/1 Running 0 8m19s 10.244.70.3 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Modify via edit: kubectl edit deployment web1

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl edit deployment web1
deployment.apps/web1 edited
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web1-66b5fd9bc8-2wpkr 1/1 Running 0 9m56s 10.244.171.131 vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-9lnds 0/1 ContainerCreating 0 6s <none> vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-s9w7g 1/1 Running 0 9m56s 10.244.70.3 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Modify the YAML file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$sed -i 's/replicas: 3/replicas: 2/' ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web1-66b5fd9bc8-2wpkr 1/1 Running 0 12m 10.244.171.131 vms82.liruilongs.github.io <none> <none>
web1-66b5fd9bc8-s9w7g 1/1 Running 0 12m 10.244.70.3 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Automatic mode with HPA

Starting with Kubernetes v1.1, a new controller named the Horizontal Pod Autoscaler (HPA) provides automatic scaling of Pods based on CPU utilization.

At an interval defined by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period (30s by default), the HPA controller periodically checks the CPU utilization of the target Pods and, when the conditions are met, adjusts the replica count of the ReplicationController or Deployment to match the user-defined average Pod CPU utilization. Pod CPU utilization comes from the metrics-server component, so metrics-server must be installed in advance.
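The adjustment the HPA makes each sync period follows the scaling formula desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the configured range. Here is a minimal sketch (the function name and defaults are assumptions; the real controller also applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float, min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """ceil(currentReplicas * currentMetric / targetMetric), clamped to [min, max]."""
    raw = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, raw))

# 2 replicas averaging 200% of their CPU request against an 80% target -> scale to 5
print(desired_replicas(2, 200, 80, min_replicas=2, max_replicas=10))  # 5
# Load drops to 10% -> the raw answer (1) is clamped to the minimum of 2
print(desired_replicas(2, 10, 80, min_replicas=2, max_replicas=10))   # 2
```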

The HPA can scale dynamically based on memory, CPU, or request concurrency.

An HPA can be created quickly with the kubectl autoscale command or from a YAML file. Before creating one, a Deployment/RC object must already exist, and the Pods in that Deployment/RC must define a resources.requests.cpu value; without it, metrics-server cannot collect the Pods' CPU usage and the HPA will not work.

Set up metrics-server monitoring

┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
vms81.liruilongs.github.io 401m 20% 1562Mi 40%
vms82.liruilongs.github.io 228m 11% 743Mi 19%
vms83.liruilongs.github.io 221m 11% 720Mi 18%

Configure the HPA
Keep the replica count between a minimum of 2 and a maximum of 10, scaling when CPU exceeds 80%.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment web1 --min=2 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web1 autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
web1 Deployment/web1 <unknown>/80% 2 10 2 15s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl delete hpa web1
horizontalpodautoscaler.autoscaling "web1" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Fixing the current CPU usage showing as unknown; for now I have no solution for this.
ngixndeplog.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

Testing the HPA

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$cat ngixndeplog.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginxdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web
        resources:
          requests:
            cpu: 100m
      restartPolicy: Always

Set the HPA: kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginxdep 2/2 2 2 8m8s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginxdep autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginxdep-645bf755b9-27hzn 1/1 Running 0 97s 10.244.171.140 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-cb57p 1/1 Running 0 97s 10.244.70.10 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get hpa -o wide
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginxdep Deployment/nginxdep <unknown>/50% 1 5 2 21s

Create a Service, then simulate traffic

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl expose --name=nginxsvc deployment nginxdep --port=80
service/nginxsvc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginxsvc ClusterIP 10.104.147.65 <none> 80/TCP 9s app=nginx

Test calling the Service

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "curl http://10.104.147.65 "
192.168.26.83 | CHANGED | rc=0 >>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html> % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 615 100 615 0 0 304k 0 --:--:-- --:--:-- --:--:-- 600k

Install httpd-tools (an HTTP load-testing toolkit) and simulate load

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "yum install httpd-tools -y"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "ab -t 600 -n 1000000 -c 1000 http://10.104.147.65/ " &
[1] 123433
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Watch how the Pods change

deployment - resilience test

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl scale deployment nginxdep --replicas=3
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginxdep-645bf755b9-27hzn 1/1 Running 1 (3m19s ago) 47m 10.244.171.141 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-4dkpp 1/1 Running 0 30s 10.244.171.144 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-vz5qt 1/1 Running 0 30s 10.244.70.11 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Shut down vms83.liruilongs.github.io; after a while, all Pods are running on vms82.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 47h v1.22.2
vms82.liruilongs.github.io Ready <none> 47h v1.22.2
vms83.liruilongs.github.io NotReady <none> 47h v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginxdep-645bf755b9-27hzn 1/1 Running 1 (20m ago) 64m 10.244.171.141 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-4dkpp 1/1 Running 0 17m 10.244.171.144 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-9hzf2 1/1 Running 0 9m48s 10.244.171.145 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-vz5qt 1/1 Terminating 0 17m 10.244.70.11 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginxdep-645bf755b9-27hzn 1/1 Running 1 (27m ago) 71m 10.244.171.141 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-4dkpp 1/1 Running 0 24m 10.244.171.144 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-9hzf2 1/1 Running 0 16m 10.244.171.145 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginxdep-645bf755b9-27hzn 0m 4Mi
nginxdep-645bf755b9-4dkpp 0m 1Mi
nginxdep-645bf755b9-9hzf2 0m 1Mi
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

After vms83.liruilongs.github.io comes back up, the Pods do not move back to vms83.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 2d v1.22.2
vms82.liruilongs.github.io Ready <none> 2d v1.22.2
vms83.liruilongs.github.io Ready <none> 2d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginxdep-645bf755b9-27hzn 1/1 Running 1 (27m ago) 71m 10.244.171.141 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-4dkpp 1/1 Running 0 24m 10.244.171.144 vms82.liruilongs.github.io <none> <none>
nginxdep-645bf755b9-9hzf2 1/1 Running 0 16m 10.244.171.145 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

deployment - updating and rolling back images

When a service in the cluster needs to be upgraded, we would have to stop all Pods related to that service, pull the new image, and create new Pods. In a large cluster this becomes a real challenge, and stopping everything first and then upgrading step by step causes a long service outage.

Kubernetes provides a rolling-update feature to solve this. If the Pods were created by a Deployment, the user can modify the Deployment's Pod definition (spec.template) or image name at runtime and apply it to the Deployment object; the system then performs the Deployment's update automatically. If an error occurs during the update, a rollback operation can restore the previous Pod version.
Preparing the environment

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl scale deployment nginxdep --replicas=5
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull nginx:1.9"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull nginx:1.7.9"

Rolling update of the Deployment's image

Now the Pod image needs to be updated to nginx 1.9. We can set the new image name on the Deployment with kubectl set image deployment/<deploy-name> <container-name>=nginx:1.9 --record

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl set image deployment/nginxdep web=nginx:1.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-59d7c6b6f-6hdb8 0/1 ContainerCreating 0 26s
nginxdep-59d7c6b6f-bd5z2 0/1 ContainerCreating 0 26s
nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 26s
nginxdep-59d7c6b6f-jd5df 0/1 ContainerCreating 0 4s
nginxdep-645bf755b9-27hzn 1/1 Running 1 (51m ago) 95m
nginxdep-645bf755b9-4dkpp 1/1 Running 0 48m
nginxdep-645bf755b9-hkcqx 1/1 Running 0 18m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-59d7c6b6f-6hdb8 0/1 ContainerCreating 0 51s
nginxdep-59d7c6b6f-bd5z2 1/1 Running 0 51s
nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 51s
nginxdep-59d7c6b6f-jd5df 0/1 ContainerCreating 0 29s
nginxdep-59d7c6b6f-prfzd 0/1 ContainerCreating 0 14s
nginxdep-645bf755b9-27hzn 1/1 Running 1 (51m ago) 96m
nginxdep-645bf755b9-4dkpp 1/1 Running 0 49m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-59d7c6b6f-6hdb8 1/1 Running 0 2m28s
nginxdep-59d7c6b6f-bd5z2 1/1 Running 0 2m28s
nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 2m28s
nginxdep-59d7c6b6f-jd5df 1/1 Running 0 2m6s
nginxdep-59d7c6b6f-prfzd 1/1 Running 0 111s

From the AGE column you can see nginx being rolled from latest to version 1.9, and then to 1.7.9.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl set image deployment/nginxdep web=nginx:1.7.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-66587778f6-9jqfz 1/1 Running 0 4m37s
nginxdep-66587778f6-jbsww 1/1 Running 0 5m2s
nginxdep-66587778f6-lwkpg 1/1 Running 0 5m1s
nginxdep-66587778f6-tmd4l 1/1 Running 0 4m41s
nginxdep-66587778f6-v9f28 1/1 Running 0 5m2s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl describe pods nginxdep-66587778f6-jbsww | grep Image:
Image: nginx:1.7.9

You can pause the rollout with kubectl rollout pause deployment nginxdep to make a set of complex changes, then resume it.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout pause deployment nginxdep
deployment.apps/nginxdep paused
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deployments -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginxdep 5/5 5 5 147m web nginx:1.7.9 app=nginx
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl set image deployment/nginxdep web=nginx
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout history deployment nginxdep
deployment.apps/nginxdep
REVISION CHANGE-CAUSE
4 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
5 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
6 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout resume deployment nginxdep
deployment.apps/nginxdep resumed

deployment - rolling back images

This works much like git: you can roll back to any revision ID.

View the revision history

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout history deployment nginxdep
deployment.apps/nginxdep
REVISION CHANGE-CAUSE
1 kubectl set image deployment/nginxdep nginxdep=nginx:1.9 --record=true
2 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
3 kubectl set image deployment/nginxdep web=nginx:1.7.9 --record=true
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deployments nginxdep -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginxdep 5/5 5 5 128m web nginx:1.7.9 app=nginx

Roll back to a revision

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout undo deployment nginxdep --to-revision=2
deployment.apps/nginxdep rolled back
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-59d7c6b6f-ctdh2 0/1 ContainerCreating 0 6s
nginxdep-59d7c6b6f-dk67c 0/1 ContainerCreating 0 6s
nginxdep-59d7c6b6f-kr74k 0/1 ContainerCreating 0 6s
nginxdep-66587778f6-9jqfz 1/1 Running 0 23m
nginxdep-66587778f6-jbsww 1/1 Running 0 23m
nginxdep-66587778f6-lwkpg 1/1 Running 0 23m
nginxdep-66587778f6-v9f28 1/1 Running 0 23m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxdep-59d7c6b6f-7j9z7 0/1 ContainerCreating 0 37s
nginxdep-59d7c6b6f-ctdh2 1/1 Running 0 59s
nginxdep-59d7c6b6f-dk67c 1/1 Running 0 59s
nginxdep-59d7c6b6f-f2sb4 0/1 ContainerCreating 0 21s
nginxdep-59d7c6b6f-kr74k 1/1 Running 0 59s
nginxdep-66587778f6-jbsww 1/1 Running 0 24m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

View the details of a revision

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl rollout history deployment nginxdep --revision=4
deployment.apps/nginxdep with revision #4
Pod Template:
  Labels:       app=nginx
                pod-template-hash=59d7c6b6f
  Annotations:  kubernetes.io/change-cause: kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
  Containers:
   web:
    Image:        nginx:1.9
    Port:         <none>
    Host Port:    <none>
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

Rolling-update parameters

maxSurge: how many Pods may be brought up at once during an update; the total of old and new replicas will not exceed (100% + the configured value) of the desired count.

maxUnavailable: how many Pods may be unavailable during an update, i.e. how many Pods may be deleted at once.
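The percentage handling can be sketched as follows (an illustrative helper, not kubectl code; the name `rolling_update_bounds` is made up). The Deployment controller resolves maxSurge by rounding up and maxUnavailable by rounding down:

```python
import math

def rolling_update_bounds(replicas: int, max_surge="25%", max_unavailable="25%"):
    """Return (max total pods, min available pods) during a rolling update.
    Percentages resolve against the desired replica count: maxSurge rounds up,
    maxUnavailable rounds down."""
    def resolve(value, round_up: bool) -> int:
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)  # absolute pod counts are used as-is
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas + surge, replicas - unavailable

# 5 replicas with the 25% defaults: at most 7 pods exist, at least 4 stay available
print(rolling_update_bounds(5))  # (7, 4)
```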

Both can be changed with:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl edit deployments nginxdep

Default values

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deployments nginxdep -o yaml | grep -A 5 strategy:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
  • type

Recreate: setting spec.strategy.type: Recreate means the Deployment kills all running Pods first, then creates new ones.
RollingUpdate: setting spec.strategy.type: RollingUpdate means the Deployment updates Pods one batch at a time in a rolling fashion; the process can be tuned with the two parameters under spec.strategy.rollingUpdate (maxUnavailable and maxSurge).

Published on 2021-12-15, updated on 2023-06-21.
