Notes on Pods in Kubernetes

Money matters far more than you think; once you are past 20, stop living in a dream all day — for an ordinary person like you, money is your dignity.

Preface


  • Having just finished studying Pods in K8s, I'm organizing these notes to help them stick
  • The notes cover basic Pod operations; they are hands-on, with little theory:
  • Contents include:
    • The two ways to create a Pod, pulling the related images, and the restart policy
    • Pod details, logs, running commands, lifecycle, and so on
    • Init containers and static Pods
    • Pod scheduling (selectors, pinning to a node, node affinity)
    • Node cordon and drain
    • Node taints and Pod tolerations
    • Ansible is used in places, but that does not affect readability

Pod lab environment tests

ansible ping test

┌──[root@vms81.liruilongs.github.io]-[~]
└─$cd ansible/
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m ping
192.168.26.82 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.26.83 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

docker environment test

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "systemctl enable docker --now"
192.168.26.83 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 7d23h v1.21.1
vms82.liruilongs.github.io Ready <none> 7d23h v1.21.1
vms83.liruilongs.github.io Ready <none> 7d23h v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

1. Using the help documentation

kubectl explain --help

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain --help

Viewing the Pod schema

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain pods
KIND:     Pod
VERSION:  v1
DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.
FIELDS:
   apiVersion   <string>
     ....
   kind         <string>
     .....
   metadata     <Object>
     .....
   spec         <Object>
     .....
   status       <Object>
     ....
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain pods.metadata
KIND: Pod
VERSION: v1
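
explain can also drill into nested fields with dot notation, and --recursive dumps the whole field tree; a quick sketch:

kubectl explain pod.spec.containers.imagePullPolicy
kubectl explain pod.spec --recursive | less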

2. Ways to create a Pod

Since this is for practice, we first create a dedicated namespace.

Create the namespace:

kubectl config set-context context1 --namespace=liruilong-pod-create

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir k8s-pod-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cd k8s-pod-create/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl create ns liruilong-pod-create
namespace/liruilong-pod-create created

View the current cluster config

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.81:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    namespace: kube-system
    user: kubernetes-admin1
  name: context1
current-context: context1
kind: Config
preferences: {}
users:
- name: kubernetes-admin1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

List the namespaces

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get ns
NAME STATUS AGE
default Active 8d
kube-node-lease Active 8d
kube-public Active 8d
kube-system Active 8d
liruilong Active 7d10h
liruilong-pod-create Active 4m18s

Set the newly created namespace as the current one

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl config set-context context1 --namespace=liruilong-pod-create
Context "context1" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
b380bbd43752: Pull complete
fca7e12d1754: Pull complete
745ab57616cb: Pull complete
a4723e260b6f: Pull complete
1c84ebdff681: Pull complete
858292fd2e56: Pull complete
Digest: sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

Creating a Pod from the command line

kubectl run podcommon --image=nginx --image-pull-policy=IfNotPresent --labels="name=liruilong" --env="name=liruilong"

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run podcommon --image=nginx --image-pull-policy=IfNotPresent --labels="name=liruilong" --env="name=liruilong"
pod/podcommon created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
podcommon 0/1 ContainerCreating 0 12s

Check which node the Pod was scheduled to

kubectl get pods -o wide

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run pod-demo --image=nginx --labels=name=nginx --env="user=liruilong" --port=8888 --image-pull-policy=IfNotPresent
pod/pod-demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 1/1 Running 0 94s 10.244.171.149 vms82.liruilongs.github.io <none> <none>
poddemo 1/1 Running 0 8m22s 10.244.70.41 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Delete a Pod

kubectl delete pod pod-demo --force

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod pod-demo --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod-demo" force deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods | grep pod-
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Every Pod carries a pause container (the pause image)

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep podcomm"
192.168.26.83 | CHANGED | rc=0 >>
c04e155aa25d nginx "/docker-entrypoint.…" 21 minutes ago Up 21 minutes k8s_podcommon_podcommon_liruilong-pod-create_dbfc4fcd-d62b-4339-9f15-0a48802f60ad_0
309925812d42 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 21 minutes ago Up 21 minutes k8s_POD_podcommon_liruilong-pod-create_dbfc4fcd-d62b-4339-9f15-0a48802f60ad_0
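
The pause container owns the Pod's network namespace and the app containers join it. One way to see this on the node (a sketch; the container ID is the nginx container from the transcript above, and the exact output format may vary):

docker inspect -f '{{.HostConfig.NetworkMode}}' c04e155aa25d
# expected (illustrative): container:<id-of-the-pause-container>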

Creating a Pod by generating a YAML file: -o yaml

kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml >pod-demo.yaml

How to obtain the YAML: -o yaml

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]  # how to obtain the YAML:
└─$kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-demo
  name: pod-demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Write the YAML to a file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml >pod-demo.yaml

pod-demo.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-demo
  name: pod-demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the Pod from the YAML file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-demo.yaml
pod/pod-demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 1/1 Running 0 27s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
podcommon 1/1 Running 0 13m 10.244.70.3 vms83.liruilongs.github.io <none> <none>

Deleting a Pod: kubectl delete pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod pod-demo
pod "pod-demo" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podcommon 1/1 Running 0 14m 10.244.70.3 vms83.liruilongs.github.io <none> <none>

Specifying a Pod command / deleting Pods / creating Pods in bulk

Specify the command to run when creating the Pod; it replaces the image's CMD.

  • Option 1: pass the command as a single argument (no shell):
    ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
    └─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- "echo liruilong"
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: comm-pod
      name: comm-pod
    spec:
      containers:
      - args:
        - echo liruilong
        image: nginx
        imagePullPolicy: IfNotPresent
        name: comm-pod
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
  • Option 2: run the command through a shell:
    ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
    └─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- sh -c "echo liruilong"
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: comm-pod
      name: comm-pod
    spec:
      containers:
      - args:
        - sh
        - -c
        - echo liruilong
        image: nginx
        imagePullPolicy: IfNotPresent
        name: comm-pod
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}

Deleting the Pod with kubectl delete -f comm-pod.yaml

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- sh -c "echo liruilong" > comm-pod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f comm-pod.yaml
pod/comm-pod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete -f comm-pod.yaml
pod "comm-pod" deleted

Creating Pods in bulk

By rewriting the Pod name with sed: sed 's/demo/demo1/' demo.yaml | kubectl apply -f -

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed 's/demo/demo1/' demo.yaml | kubectl apply -f -
pod/demo1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed 's/demo/demo2/' demo.yaml | kubectl create -f -
pod/demo2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo1 1/1 Running 0 3m29s 10.244.70.32 vms83.liruilongs.github.io <none> <none>
demo2 1/1 Running 0 3m6s 10.244.70.33 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
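
The same trick scales to any number of Pods with a shell loop (a sketch, following the demo.yaml naming above):

for i in 1 2 3 4 5; do
  sed "s/demo/demo$i/" demo.yaml | kubectl apply -f -
done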

Containers share the Pod's network namespace, i.e. they use the same IP address: the Pod IP

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep demo1"
192.168.26.83 | CHANGED | rc=0 >>
0d644ad550f5 87a94228f133 "/docker-entrypoint.…" 8 minutes ago Up 8 minutes k8s_demo1_demo1_liruilong-pod-create_b721b109-a656-4379-9d3c-26710dadbf70_0
0bcffe0f8e2d registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 8 minutes ago Up 8 minutes k8s_POD_demo1_liruilong-pod-create_b721b109-a656-4379-9d3c-26710dadbf70_0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect 0d644ad550f5 | grep -i ipaddress "
192.168.26.83 | CHANGED | rc=0 >>
"SecondaryIPAddresses": null,
"IPAddress": "",

Creating a multi-container Pod

Run multiple containers inside a single Pod

Writing the comm-pod.yaml file

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: comm-pod
  name: comm-pod
spec:
  containers:
  - args:
    - sh
    - -c
    - echo liruilong;sleep 10000
    image: nginx
    imagePullPolicy: IfNotPresent
    name: comm-pod0
    resources: {}
  - name: comm-pod1
    image: nginx
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the multi-container Pod


┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f comm-pod.yaml
pod/comm-pod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
comm-pod 2/2 Running 0 20s
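
A quick check that both containers really share the Pod's network namespace (a sketch; assumes the hostname utility is present in the nginx image):

# both containers should print the same Pod IP
kubectl exec comm-pod -c comm-pod0 -- hostname -i
kubectl exec comm-pod -c comm-pod1 -- hostname -i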

Image pull policy

--image-pull-policy

  • Always: pull the image on every Pod start
  • Never: only use a local image, never pull
  • IfNotPresent: pull only if the image is not present locally

Pod restart policy

restartPolicy — what happens when a single container exits

  • Always: always restart
  • OnFailure: restart only on abnormal exit
  • Never: never restart
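
A quick way to watch the policies diverge (a sketch; the rp-* Pod names are made up for this demo):

# Always: the container is restarted even after a clean exit
kubectl run rp-always --image=busybox --restart=Always -- sh -c "sleep 5"
# Never: the Pod simply goes to Completed
kubectl run rp-never --image=busybox --restart=Never -- sh -c "sleep 5"
kubectl get pods -w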

Labels

Every resource object in K8s can carry labels

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
podcommon 1/1 Running 0 87s name=liruilong

View labels

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
comm-pod 2/2 Running 0 4m43s run=comm-pod
mysql-577h7 1/1 Running 0 93m app=mysql
myweb-4xlc5 1/1 Running 0 92m app=myweb
myweb-ltqdt 1/1 Running 0 91m app=myweb

Filter by label

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -l run=comm-pod
NAME READY STATUS RESTARTS AGE
comm-pod 2/2 Running 0 5m12s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
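
Besides plain equality, kubectl also accepts set-based selectors (a sketch using the labels above):

# match any Pod whose run label is in the given set
kubectl get pods -l 'run in (comm-pod,pod-demo)'
# match Pods that carry the label key at all
kubectl get pods -l run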

Pod status

Common Pod statuses:
  • Pending: the Pod has been accepted but, for some reason, creation has not gone ahead yet (it is stuck)
  • Running: the Pod has been scheduled to a node and its containers are working normally
  • Completed: all containers in the Pod exited normally
  • Error/CrashLoopBackOff: the Pod failed at creation time, for reasons internal to it
  • ImagePullBackOff: the image pull failed while creating the Pod

3. Basic Pod operations

Run commands inside a Pod, view Pod details, and view Pod logs

kubectl exec commands
kubectl exec -it pod sh            # with multiple containers, the command runs in the first one
kubectl exec -it demo -c demo1 sh  # use -c to pick a container
kubectl describe pod <pod-name>
kubectl logs <pod-name> -c <container-name>  # view logs; -c is needed when there are multiple containers
kubectl edit pod <pod-name>        # some fields can be edited live, others cannot

View Pod details

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe pod demo1
Name:         demo1
Namespace:    liruilong-pod-create
Priority:     0
Node:         vms83.liruilongs.github.io/192.168.26.83
Start Time:   Wed, 20 Oct 2021 22:27:15 +0800
Labels:       run=demo1
Annotations:  cni.projectcalico.org/podIP: 10.244.70.32/32
              cni.projectcalico.org/podIPs: 10.244.70.32/32
Status:       Running
IP:           10.244.70.32
IPs:
  IP:  10.244.70.32
Containers:
  demo1:
    Container ID:   docker://0d644ad550f59029036fd73d420d4d2c651801dd12814bb26ad8e979dc0b59c1
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 20 Oct 2021 22:27:20 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scc89 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-scc89:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned liruilong-pod-create/demo1 to vms83.liruilongs.github.io
  Normal  Pulled     13m   kubelet            Container image "nginx" already present on machine
  Normal  Created    13m   kubelet            Created container demo1
  Normal  Started    13m   kubelet            Started container demo1

Run a command inside the Pod

kubectl exec -it demo1 -- ls /tmp

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it demo1 -- sh
# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
# exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it demo1 -- bash
root@demo1:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@demo1:/# exit
exit

With multiple containers in the Pod, use -c to pick one

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec comm-pod -c comm-pod1 -- echo liruilong
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it comm-pod -c comm-pod1 -- sh
# ls
bin boot dev docker-entrypoint.d docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$#

View logs

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl logs demo1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/10/20 14:27:21 [notice] 1#1: using the "epoll" event method
2021/10/20 14:27:21 [notice] 1#1: nginx/1.21.3
2021/10/20 14:27:21 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/10/20 14:27:21 [notice] 1#1: OS: Linux 3.10.0-693.el7.x86_64
2021/10/20 14:27:21 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/10/20 14:27:21 [notice] 1#1: start worker processes
2021/10/20 14:27:21 [notice] 1#1: start worker process 32
2021/10/20 14:27:21 [notice] 1#1: start worker process 33
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Copy files

Like docker cp, files can be copied in both directions

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl cp /etc/hosts comm-pod:/usr/share/nginx/html -c comm-pod1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec comm-pod -c comm-pod1 -- ls /usr/share/nginx/html
50x.html
hosts
index.html
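
Copying works in the other direction too (a sketch; same container as above):

# pod -> local: pull back the file we just pushed
kubectl cp comm-pod:/usr/share/nginx/html/hosts ./hosts-from-pod -c comm-pod1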

Running commands in the Pod spec

command style 1:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo OK! && sleep 60']

command style 2:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command:
    - sh
    - -c
    - echo OK! && sleep 60
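
For reference: command replaces the image's ENTRYPOINT while args replaces its CMD, and the two can be combined (a minimal sketch):

containers:
- name: myapp-container
  image: busybox
  command: ["sh", "-c"]            # replaces ENTRYPOINT
  args: ["echo OK! && sleep 60"]   # replaces CMD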

Graceful Pod shutdown: deferred deletion

K8s gives Pod deletion a deferral window, the grace period, which defaults to 30s; deleting with the --force option skips it and removes the Pod immediately.

During the grace period the Pod is marked Terminating; once the period ends the Pod is removed. The grace period is set with the terminationGracePeriodSeconds parameter.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl explain pod.spec
....
terminationGracePeriodSeconds <integer>
  Optional duration in seconds the pod needs to terminate gracefully. May be decreased in the delete request. The value must be a non-negative integer; zero means stop immediately via the kill signal (no opportunity to shut down), and null means use the default grace period.
  The grace period is the duration in seconds between the processes in the pod receiving the termination signal and being force-stopped with the kill signal.
  Set this value longer than your process's expected cleanup time. Defaults to 30 seconds.
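
The grace period can also be overridden per delete call (a sketch):

kubectl delete pod demo --grace-period=100  # override just for this delete
kubectl delete pod demo --grace-period=0 --force  # skip the wait entirely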

Nginx is an exception: it handles signals differently from what K8s expects, so when a Pod runs the Nginx image, the Nginx process quits very quickly on the shutdown signal and the Pod is deleted well before the grace period is used up.

terminationGracePeriodSeconds: 600

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo
  name: demo
spec:
  terminationGracePeriodSeconds: 600
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

If a Pod that is in active use gets shut down suddenly and we still want to do some processing at that moment, we can use Pod hooks.

Pod lifecycle

Pod hooks

A hook, sometimes called a callback, is a very common facility: an action triggered when a given event occurs — think of the lifecycle callbacks in the Vue front-end framework, or the JVM's shutdown hook threads.

Two callbacks are available over the Pod's lifecycle:

The two callbacks:
  • postStart: invoked when the Pod is created; it runs in parallel with the Pod's main process, with no ordering guarantee between them
  • preStop: invoked when the Pod is deleted; the program in preStop runs first and the Pod shuts down afterwards; preStop must finish within the Pod's grace period, otherwise the Pod is force-deleted anyway

Modify the YAML file: demo.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo
  name: demo
spec:
  terminationGracePeriodSeconds: 600
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: demo
    resources: {}
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo liruilong`date` >> /liruilong"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/usr/sbin/nginx -s quit"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Now we create a Pod with the hooks in place

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f demo.yaml
pod/demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 0 21s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it demo -- bin/bash
root@demo:/# ls
bin dev docker-entrypoint.sh home lib64 media opt root sbin sys usr
boot docker-entrypoint.d etc lib liruilong mnt proc run srv tmp var
root@demo:/# cat liruilong
liruilongSun Nov 14 05:10:51 UTC 2021
root@demo:/#

On shutdown here, the main process does not wait for the grace period to end: Nginx closes as soon as it receives the shutdown signal.
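
A common workaround is to hold Nginx open inside preStop so in-flight requests can drain first (a hedged sketch; the 10s pause is an arbitrary choice):

lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10; /usr/sbin/nginx -s quit"]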

4. Init containers

Init containers are analogous to initializer blocks in Java: if creating the Pod is the constructor, then the init container is the initializer block, the statements that run before the constructor. An init container performs some setup before the main containers are built.

Init container rules:
  • They always run to completion.
  • Each must complete successfully before the next one starts.
  • If a Pod's init container fails, Kubernetes restarts the Pod repeatedly until the init container succeeds — unless the Pod's restartPolicy is Never, in which case it is not restarted.
  • Init containers support all the fields and features of app containers except readiness probes, because they must run to completion before the Pod can become ready.
  • If a Pod specifies multiple init containers, they run sequentially, one at a time; each must succeed before the next may run.
  • Because init containers can be restarted, retried, or re-executed, their code should be idempotent; in particular, code that writes files into emptyDirs should be prepared for the output file to already exist.
  • Use activeDeadlineSeconds on the Pod and livenessProbe on the containers to keep init containers from failing forever; this places an upper bound on how long init containers may stay active.
  • The name of every app and init container in a Pod must be unique; sharing a name with any other container raises a validation error.
  • Changes to an init container spec are limited to the image field; changing an init container's image is equivalent to restarting the Pod.

Init containers are defined in the Pod resource file under initContainers, a sibling of containers.

Tuning a kernel parameter with an init container

We create an init container that sets the swap kernel parameter to 0, i.e. the frequency of using the swap partition becomes 0.

Alpine is a security-oriented, lightweight Linux distribution. Unlike typical distributions, Alpine is built on musl libc and busybox to shrink the system's size and runtime resource usage, yet it is far more complete than busybox alone, which is why it keeps gaining traction in the open-source community. While staying slim, Alpine also ships its own package manager, apk; packages can be browsed at https://pkgs.alpinelinux.org/packages or queried and installed directly with the apk command.

Writing the YAML file

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-init
  name: pod-init
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1-init
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  initContainers:
  - image: alpine
    name: init
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sbin/sysctl -w vm.swappiness=0"]
    securityContext:
      privileged: true
status: {}

Check the system default, then run the Pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat /proc/sys/vm/swappiness
30

Create the Pod with the init container

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod_init.yaml
pod/pod-init created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-init 1/1 Running 0 11m 10.244.70.54 vms83.liruilongs.github.io <none> <none>

The Pod came up; verify the result

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cd ..
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cat /proc/sys/vm/swappiness"
192.168.26.83 | CHANGED | rc=0 >>
0

Sharing data between an init container and an app container

Writing the config file

Here we configure a shared volume, and the init container seeds data into it for the app container.

pod_init1.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-init1
  name: pod-init1
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1-init
    resources: {}
    volumeMounts:
    - name: workdir
      mountPath: "/2021"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  initContainers:
  - image: busybox
    name: init
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /work-dir/liruilong.txt"]
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
status: {}

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod_init1.yaml
pod/pod-init1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods pod-init1
NAME READY STATUS RESTARTS AGE
pod-init1 1/1 Running 0 30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it pod-init1 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "pod1-init" out of: pod1-init, init (init)
# ls
2021 boot docker-entrypoint.d etc lib media opt root sbin sys usr
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
# cd 2021;ls
liruilong.txt
#

5. Static Pods

Normally, Pods are managed centrally from the master. A static Pod is one that is not created or scheduled by the master but belongs to the node itself: as soon as kubelet starts on the node, the Pod is created automatically. By analogy with static members and static methods in Java, think of these as Pods the node needs to create as part of its own initialization.

For example, with a kubeadm install of K8s, all the services run as containers, which is much more convenient than the binary install. But then how do the master components run, and how is the master built, before a K8s environment exists? That is exactly where static Pods come in.

Creating a static Pod on a worker node

On the worker node, inspect the kubelet startup drop-in config

/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
--pod-manifest-path=/etc/kubernetes/kubelet.d
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/kubelet.d"
mkdir -p /etc/kubernetes/kubelet.d

First, add the location of the static Pod YAML files to the kubelet config file.
Edit the config file locally first, then push it to the nodes with Ansible.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/kubelet.d"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir -p /etc/kubernetes/kubelet.d

After changing the config, reload the unit files and restart kubelet

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a "src=/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf dest=/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf force=yes"
192.168.26.82 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "13994d828e831f4aa8760c2de36e100e7e255526",
    "dest": "/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf",
    "gid": 0,
    "group": "root",
    "md5sum": "0cfe0f899ea24596f95aa2e175f0dd08",
    "mode": "0644",
    "owner": "root",
    "size": 946,
    "src": "/root/.ansible/tmp/ansible-tmp-1637403640.92-32296-63660481173900/source",
    "state": "file",
    "uid": 0
}
192.168.26.83 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "13994d828e831f4aa8760c2de36e100e7e255526",
    "dest": "/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf",
    "gid": 0,
    "group": "root",
    "md5sum": "0cfe0f899ea24596f95aa2e175f0dd08",
    "mode": "0644",
    "owner": "root",
    "size": 946,
    "src": "/root/.ansible/tmp/ansible-tmp-1637403640.89-32297-164984088437265/source",
    "state": "file",
    "uid": 0
}

Create the manifest directory

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "mkdir -p /etc/kubernetes/kubelet.d"
192.168.26.83 | CHANGED | rc=0 >>

192.168.26.82 | CHANGED | rc=0 >>

Reload the config

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl daemon-reload"
192.168.26.82 | CHANGED | rc=0 >>

192.168.26.83 | CHANGED | rc=0 >>

Restart kubelet

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl restart kubelet"
192.168.26.83 | CHANGED | rc=0 >>

192.168.26.82 | CHANGED | rc=0 >>

Now we create a YAML file in /etc/kubernetes/kubelet.d on each node; kubelet builds a Pod from that file, and a Pod created this way is not managed by the master. Again we handle this with Ansible.

Writing the manifest

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create two static Pods in the default namespace

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a "src=./static-pod.yaml dest=/etc/kubernetes/kubelet.d/static-pod.yaml"
192.168.26.83 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "9b059b0acb4cd99272809d1785926092816f8771",
    "dest": "/etc/kubernetes/kubelet.d/static-pod.yaml",
    "gid": 0,
    "group": "root",
    "md5sum": "41515d4c5c116404cff9289690cdcc20",
    "mode": "0644",
    "owner": "root",
    "size": 302,
    "src": "/root/.ansible/tmp/ansible-tmp-1637474358.05-72240-139405051351544/source",
    "state": "file",
    "uid": 0
}
192.168.26.82 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "9b059b0acb4cd99272809d1785926092816f8771",
    "dest": "/etc/kubernetes/kubelet.d/static-pod.yaml",
    "gid": 0,
    "group": "root",
    "md5sum": "41515d4c5c116404cff9289690cdcc20",
    "mode": "0644",
    "owner": "root",
    "size": 302,
    "src": "/root/.ansible/tmp/ansible-tmp-1637474357.94-72238-185516913523170/source",
    "state": "file",
    "uid": 0
}

Check the manifest on the nodes

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a " cat /etc/kubernetes/kubelet.d/static-pod.yaml"
192.168.26.83 | CHANGED | rc=0 >>
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
192.168.26.82 | CHANGED | rc=0 >>
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

View the static Pods

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
pod-static-vms82.liruilongs.github.io 1/1 Running 0 8m17s
pod-static-vms83.liruilongs.github.io 1/1 Running 0 9m3s
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "rm -rf /etc/kubernetes/kubelet.d/static-pod.yaml"
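
These are mirror Pods: deleting one through the API server does not remove it, kubelet immediately recreates it from the manifest, which is why the cleanup above removes the manifest file itself (a sketch):

kubectl delete pod pod-static-vms82.liruilongs.github.io  # kubelet recreates it right away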

Creating a static Pod on the master node

Here we create a Pod a different way: through the static Pod path defined in the file referenced by KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml.

Note that if the master node used the --pod-manifest-path=/etc/kubernetes/kubelet.d style, K8s would fail to start, because --pod-manifest-path overrides staticPodPath: /etc/kubernetes/manifests.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf "
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$grep static /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

/etc/kubernetes/manifests/ holds the static Pod manifests for the components the K8s environment itself needs

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ls -l /etc/kubernetes/manifests/
总用量 16
-rw------- 1 root root 2284 10月 19 00:09 etcd.yaml
-rw------- 1 root root 3372 10月 19 00:10 kube-apiserver.yaml
-rw------- 1 root root 2893 10月 19 00:10 kube-controller-manager.yaml
-rw------- 1 root root 1479 10月 19 00:10 kube-scheduler.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Simply copy the earlier manifest into the master's manifests directory to create a static Pod there, then check

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cp static-pod.yaml /etc/kubernetes/manifests/static-pod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pod-static-vms81.liruilongs.github.io 1/1 Running 0 13s
pod-static-vms82.liruilongs.github.io 1/1 Running 0 34m
pod-static-vms83.liruilongs.github.io 1/1 Running 0 35m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$rm -rf /etc/kubernetes/manifests/static-pod.yaml

6. Pod scheduling

The three inputs to scheduling

The list of Pods to schedule: the Pods waiting to be placed, i.e. the Pods being created

The list of candidate nodes: the nodes eligible to participate in scheduling, after excluding nodes ruled out by taints, port conflicts, and the like

The scheduling algorithm

  • Node filtering (predicates):
    + `NoDiskConflict`
    + `PodFitsResources`
    + `PodFitsPorts`
    + `MatchNodeSelector`
    + `HostName`
    + `NoVolumeZoneConflict`
    + `PodToleratesNodeTaints`
    + `CheckNodeMemoryPressure`
    + `CheckNodeDiskPressure`
    + `MaxEBSVolumeCount`
    + `MaxGCEPDVolumeCount`
    + `MaxAzureDiskVolumeCount`
    + `MatchInterPodAffinity`
    + `GeneralPredicates`
    + `NodeVolumeNodeConflic`
  • Node scoring (priorities)

Score item | Formula
LeastRequestedPriority | score = (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2
BalancedResourceAllocation | score = 10 - abs(cpuFraction - memoryFraction) * 10
CalculateSpreadPriority | score = 10 * ((maxCount - counts) / maxCount)
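
A worked example of LeastRequestedPriority with hypothetical numbers: a node with 4 CPUs of which 2 are requested, and 8Gi memory of which 2Gi is requested, scores

score = ((4-2)*10/4 + (8-2)*10/8) / 2 = (5 + 7.5) / 2 = 6.25

so the emptier a node is, the higher it scores and the more new Pods it attracts.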

Manually pinning where a Pod runs:

We can label the nodes, then reference the node label when creating the Pod.

Label operations
  • View: kubectl get nodes --show-labels
  • Set: kubectl label node node2 disktype=ssd
  • Remove: kubectl label node node2 disktype-
  • Set on all nodes: kubectl label node --all key=value

View node labels: kubectl get node --show-labels

Label the nodes

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms82.liruilongs.github.io disktype=node1
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms83.liruilongs.github.io disktype=node2
node/vms83.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node1,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node2,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

The special built-in labels node-role.kubernetes.io/control-plane= and node-role.kubernetes.io/master= feed the ROLES column

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready <none> 45d v1.22.2
vms83.liruilongs.github.io Ready <none> 45d v1.22.2

We can set role labels on the worker nodes too

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms82.liruilongs.github.io node-role.kubernetes.io/worker1=
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms83.liruilongs.github.io node-role.kubernetes.io/worker2=
node/vms83.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready worker1 45d v1.22.2
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

The selector (nodeSelector) approach

Run a Pod on particular nodes

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes -l disktype=node2
NAME STATUS ROLES AGE VERSION
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node2.yaml
pod/podnode2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>

pod-node2.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnode2
  name: podnode2
spec:
  nodeSelector:
    disktype: node2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnode2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The node name (nodeName) approach

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node1.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node1.yaml
pod/podnode1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode1 1/1 Running 0 36s 10.244.171.165 vms82.liruilongs.github.io <none> <none>
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>

pod-node1.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnode1
  name: podnode1
spec:
  nodeName: vms82.liruilongs.github.io
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnode1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

If the node label or the node name referenced in the Pod resource file does not exist, the Pod cannot be created successfully.
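
kubectl describe shows why such a Pod is stuck (a sketch; the exact event text varies by version):

kubectl describe pod podnode2 | grep -A3 Events
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.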

Node affinity

Node affinity means running on nodes that satisfy given conditions. It comes in a hard policy (must be satisfied) and a soft policy (satisfied if possible).

Hard policy (requiredDuringSchedulingIgnoredDuringExecution)

pod-node-a.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnodea
  name: podnodea
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnodea
    resources: {}
  affinity:
    nodeAffinity: # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard policy
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - vms85.liruilongs.github.io
            - vms84.liruilongs.github.io
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The condition cannot be satisfied, so the Pod stays Pending

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
podnodea 0/1 Pending 0 8s

Let's modify it

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed -i 's/vms84.liruilongs.github.io/vms83.liruilongs.github.io/' pod-node-a.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 13s 10.244.70.61 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Soft policy (preferredDuringSchedulingIgnoredDuringExecution)

pod-node-a-r.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnodea
  name: podnodea
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnodea
    resources: {}
  affinity:
    nodeAffinity: # node affinity
      preferredDuringSchedulingIgnoredDuringExecution: # soft policy
      - weight: 2
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - vms85.liruilongs.github.io
            - vms84.liruilongs.github.io
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Check the result

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node-a-r.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a-r.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 28s 10.244.70.62 vms83.liruilongs.github.io <none> <none>

Operator | Description
In | the label value is in the given set; e.g. a hard affinity listing env_role=dev and env_role=test matches either label
NotIn | the opposite: nodes whose label value is in the set are excluded
Exists | similar to In, but matches any node that carries the label key at all; with operator Exists, values must be left empty
Gt | greater than: matches nodes whose label value is greater than the given value
Lt | less than: matches nodes whose label value is less than the given value
DoesNotExist | matches nodes that do not carry the label

Setting annotations

Annotations are free-form notes; setting and viewing them is simple

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl annotate nodes vms82.liruilongs.github.io "dest=这是一个工作节点"
node/vms82.liruilongs.github.io annotated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms82.liruilongs.github.io
Name: vms82.liruilongs.github.io
Roles: worker1
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
disktype=node1
kubernetes.io/arch=amd64
kubernetes.io/hostname=vms82.liruilongs.github.io
kubernetes.io/os=linux
node-role.kubernetes.io/worker1=
Annotations: dest: 这是一个工作节点
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.26.82/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.171.128
volumes.kubernetes.io/controller-managed-attach-detach: true
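
Removing an annotation uses the same trailing-dash convention as labels (a sketch):

kubectl annotate nodes vms82.liruilongs.github.io dest-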

7. Node cordon and drain

To take a node out of service, apply cordon or drain to it.

If a node is marked cordoned, newly created Pods are not scheduled to it, but Pods already scheduled there are not moved away.

Cordon is used for node maintenance: when you no longer want Pods placed on a node, cordon marks it unschedulable.

For convenience we create a Deployment controller to demonstrate with. A Deployment can be understood simply as keeping your Pods at a fixed count: when a Pod dies, a replacement is created.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl create deployment nginx --image=nginx --dry-run=client -o yaml >nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cp nginx-dep.yaml ./k8s-pod-create/nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cd k8s-pod-create/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim nginx-dep.yaml

nginx-dep.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
status: {}

Create the Deployment

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f nginx-dep.yaml
deployment.apps/nginx created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 2m16s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 2m16s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 2m16s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Cordoning a node

kubectl cordon vms83.liruilongs.github.io    # mark the node unschedulable
kubectl uncordon vms83.liruilongs.github.io  # remove the mark

Mark vms83.liruilongs.github.io unschedulable with cordon

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl cordon vms83.liruilongs.github.io # mark 83 unschedulable via cordon
node/vms83.liruilongs.github.io cordoned

Check node status: vms83.liruilongs.github.io becomes SchedulingDisabled

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready,SchedulingDisabled worker2 48d v1.22.2

Change the Deployment replica count: --replicas=6

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled

All the new Pods were scheduled to the vms82.liruilongs.github.io node

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 64s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 63s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 7m30s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 63s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 7m30s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 7m30s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Kill the Nginx Pods on vms83.liruilongs.github.io: the replacements are all scheduled onto vms82.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-wshxp
pod "nginx-7cf7d6dbc8-wshxp" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 2m42s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 10s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 2m41s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m8s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 2m41s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 9m8s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-x78x4
pod "nginx-7cf7d6dbc8-x78x4" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 3m31s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 59s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 3m30s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m57s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 3m30s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-m8ltr 1/1 Running 0 30s 10.244.171.172 vms82.liruilongs.github.io <none> <none>

Restore vms83.liruilongs.github.io with uncordon

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms83.liruilongs.github.io # restore the node
node/vms83.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

Delete all the Pods by scaling to 0

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
No resources found in liruilong-pod-create namespace.

Draining a node

If a node is drained, Pods are no longer scheduled to it, and the Pods already running on it are evicted to other nodes.

drain combines two states: cordon (unschedulable) and evicted (all Pods on the node evicted)

kubectl drain vms83.liruilongs.github.io   --ignore-daemonsets
kubectl uncordon vms83.liruilongs.github.io

Scale the Deployment up to 4 nginx replicas: --replicas=4

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2clnb 1/1 Running 0 22s 10.244.171.174 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 22s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-ptqxm 1/1 Running 0 22s 10.244.171.173 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 22s 10.244.70.4 vms83.liruilongs.github.io <none> <none>

Now drain the node vms82.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl drain vms82.liruilongs.github.io --ignore-daemonsets --delete-emptydir-data
node/vms82.liruilongs.github.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-ptqxm
evicting pod kube-system/metrics-server-bcfb98c76-wxv5l
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-2clnb
pod/nginx-7cf7d6dbc8-2clnb evicted
pod/nginx-7cf7d6dbc8-ptqxm evicted
pod/metrics-server-bcfb98c76-wxv5l evicted
node/vms82.liruilongs.github.io evicted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

Check the placement: all Pods have been scheduled to vms83.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 4m20s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hkflr 1/1 Running 0 25s 10.244.70.5 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-qt48k 1/1 Running 0 26s 10.244.70.7 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 4m20s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Undo it with: kubectl uncordon vms82.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

When drain errors out

Drain vms82.liruilongs.github.io without the extra flags

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl drain vms82.liruilongs.github.io
node/vms82.liruilongs.github.io cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "vms82.liruilongs.github.io", aborting command...

There are pending nodes to be drained:
vms82.liruilongs.github.io
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-wxv5l
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

uncordon the node we just drained

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

8. Node taints and Pod tolerations

Set and remove node taints; use operator: Equal in one case and operator: Exists in the other

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible master -m shell -a "kubectl describe nodes vms81.liruilongs.github.io | grep -E '(Roles|Taints)'"
192.168.26.81 | CHANGED | rc=0 >>
Roles: control-plane,master
Taints: node-role.kubernetes.io/master:NoSchedule

No Pod is ever scheduled to the master, because the master carries a taint. To schedule a Pod onto a tainted machine, the Pod must define tolerations before it can run there.

Setting and viewing taints

# check the node role and whether a taint is set
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms82.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker1
Taints: <none>
# taint the vms83.liruilongs.github.io node with key key83
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl taint node vms83.liruilongs.github.io key83=:NoSchedule
node/vms83.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)' # check the taint again
Roles: worker2
Taints: key83:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
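
In general a taint has the form key[=value]:effect, and appending - removes it. A removal can also name the effect explicitly; a sketch against the same node and key as above:

kubectl taint node vms83.liruilongs.github.io key83:NoSchedule-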

Recreate the pods through the Deployment: they are all scheduled onto node 82, because 83 now carries a taint.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-dhst5 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-j6g25 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wpnhr 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zkww8 0/1 ContainerCreating 0 11s <none> vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete deployment nginx
deployment.apps "nginx" deleted

Remove the taint:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint node vms83.liruilongs.github.io key83-
node/vms83.liruilongs.github.io untainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Setting operator to Equal

To run a pod on a tainted node, the pod definition must specify a tolerations attribute.

When a taint was created with a non-empty value, a toleration with operator: Equal must supply exactly the same value to match it. (Strictly speaking, operator: Exists would also match a taint regardless of its value, but an Exists toleration must not carry a value field itself.)

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint nodes vms82.liruilongs.github.io key82=val82:NoSchedule
node/vms82.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms82.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker1
Taints: key82=val82:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Edit the YAML file pod-taint3.yaml:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat pod-taint2.yaml > pod-taint3.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-taint3.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat pod-taint3.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  nodeSelector:
    disktype: node2
  tolerations:
  - key: "key82"
    operator: "Equal"
    value: "val82"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint3.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 1/1 Running 0 11s 10.244.171.180 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

Setting operator to Exists

With operator: Exists, the toleration must not specify a value.

Taint the vms83.liruilongs.github.io node:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl taint node vms83.liruilongs.github.io key83=:NoSchedule
node/vms83.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: key83:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready worker1 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node1,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=
vms83.liruilongs.github.io Ready worker2 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node2,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=

pod-taint.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  nodeSelector:
    disktype: node2
  tolerations:
  - key: "key83"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The pod is scheduled onto the tainted node vms83.liruilongs.github.io:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 1/1 Running 0 3m4s 10.244.70.8 vms83.liruilongs.github.io <none> <none>
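
As an aside (not demonstrated here), a toleration that specifies only operator: Exists, with no key and no effect, matches every taint; a minimal sketch with a hypothetical pod name:

apiVersion: v1
kind: Pod
metadata:
  name: pod-tolerate-all   # hypothetical name, for illustration
spec:
  tolerations:
  - operator: "Exists"   # no key and no effect: tolerates all taints
  containers:
  - image: nginx
    name: pod-tolerate-all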

Of course, when the taint has no value you can also use operator: Equal with an empty value string:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cp pod-taint.yaml pod-taint2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-taint2.yaml

pod-taint2.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  nodeSelector:
    disktype: node2
  tolerations:
  - key: "key83"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The pod is again scheduled onto the tainted vms83.liruilongs.github.io node; the taint is removed again at the end:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete -f pod-taint.yaml
pod "pod1" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint2.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 0/1 ContainerCreating 0 8s <none> vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint nodes vms83.liruilongs.github.io key83-
node/vms83.liruilongs.github.io untainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
Published 2021-11-25 · Updated 2023-06-21
