The meaning of life is learning to live authentically; the meaning of living is searching for the meaning of life. -- 山河已无恙
Preface

Well, I'm preparing for the CKA certification. I signed up for a training course, spent quite a bit of money, and absolutely have to pass. This post is the set of notes I organized after the course lectures; it works well for review.
The post covers:
- Creating and using Secret and ConfigMap
- Creating Deployments, manual and automatic scaling, rolling image updates and rollbacks, etc.
- Creating and using DaemonSet, ReplicationController, and ReplicaSet
- Pod health probes and service availability probes
- Creating Services, service discovery, and publishing services with Ingress, etc.
- Using Calico for cross-host container communication in a K8s cluster
- Using NetworkPolicy to implement K8s network policies
Password and configuration management: managing passwords across multiple images

Environment preparation: pull the required images
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m ping
192.168.26.83 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.26.82 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull hub.c.163.com/library/mysql:latest"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull hub.c.163.com/library/wordpress:latest"
```
Prepare the lab environment and create a new namespace
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$dir=k8s-secret-create;mkdir $dir;cd $dir
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get ns
NAME                      STATUS   AGE
default                   Active   66d
kube-node-lease           Active   66d
kube-public               Active   66d
kube-system               Active   66d
liruilong                 Active   65d
liruilong-pod-create      Active   58d
liruilong-volume-create   Active   16d
```
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create ns liruilong-secret-create
namespace/liruilong-secret-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-secret-create
Context "context1" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl config view | grep namespace
    namespace: default
    namespace: liruilong-secret-create
    namespace: kube-system
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
          cluster1              default
*         context1   cluster1   kubernetes-admin1   liruilong-secret-create
          context2                                  kube-system
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Create a Pod named mysqlpod from the MySQL image
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl run mysqlpod --image=hub.c.163.com/library/mysql:latest --image-pull-policy=IfNotPresent --dry-run=client -o yaml > mysqlpod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$vim mysqlpod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$cat mysqlpod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mysqlpod
  name: mysqlpod
spec:
  containers:
  - image: hub.c.163.com/library/mysql:latest
    imagePullPolicy: IfNotPresent
    name: mysqlpod
    resources: {}
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: liruilong
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f mysqlpod.yaml
pod/mysqlpod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
mysqlpod   1/1     Running   0          19s   10.244.171.190   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Test from a client
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$yum -y install mariadb
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$mysql -uroot -pliruilong -h10.244.171.190
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.18 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> quit
Bye
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Secret

Creating a Secret

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl describe pod mysqlpod | grep -A 2 Env
    Environment:
      MYSQL_ROOT_PASSWORD:  liruilong
    Mounts:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Above, the password is in plain text. In a real production environment that is dangerous, so the value needs to be protected.

A Secret is mainly used to store passwords and is created from key-value pairs: you can specify the pairs directly on the command line, or load them from a file into the Secret.
Creating a Secret on the command line

List the existing Secrets
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get sa
NAME      SECRETS   AGE
default   1         46m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-7q2qj   kubernetes.io/service-account-token   3      46m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Create a Secret
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create secret generic mysecl --from-literal=mysqlpassword=liruilong --from-literal=rqpassword=rq
secret/mysecl created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-7q2qj   kubernetes.io/service-account-token   3      49m
mysecl                Opaque                                2      9s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
There are three types of Secret:

- Opaque
- kubernetes.io/dockerconfigjson
- kubernetes.io/service-account-token
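For reference, an Opaque Secret such as mysecl can also be declared in a manifest. A minimal sketch: with the `data` field the values must be base64-encoded by hand, while the alternative `stringData` field accepts plain text and the API server encodes it for you.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecl
type: Opaque
stringData:                  # plain-text convenience field; stored base64-encoded
  mysqlpassword: liruilong
  rqpassword: rq
```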
View the details
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl describe secrets mysecl
Name:         mysecl
Namespace:    liruilong-secret-create
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
mysqlpassword:  9 bytes
rqpassword:     2 bytes
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get secrets mysecl -o yaml
apiVersion: v1
data:
  mysqlpassword: bGlydWlsb25n
  rqpassword: cnE=
kind: Secret
metadata:
  creationTimestamp: "2021-12-12T02:45:20Z"
  name: mysecl
  namespace: liruilong-secret-create
  resourceVersion: "1594980"
  selfLink: /api/v1/namespaces/liruilong-secret-create/secrets/mysecl
  uid: 05a99a7c-c7f0-48ac-9f67-32eb52ed1558
type: Opaque
```
The original password can also be recovered by decoding the stored value
```
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo bGlydWlsb25n | base64 -d
liruilong
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo cnE= | base64 -d
rq
```
Decode it directly with jsonpath
```
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get secrets mysecl -o jsonpath='{.data.mysqlpassword}' | base64 -d
liruilong
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
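Note that base64 is an encoding, not encryption: anyone who can read the Secret can recover the values. The round-trip can be checked in any shell, no cluster needed (the `-n` matters, since a trailing newline would change the encoded value):

```shell
# Encode the way kubectl does when building the Secret's data field
echo -n liruilong | base64    # -> bGlydWlsb25n
echo -n rq | base64           # -> cnE=

# Decode back to the original values
echo bGlydWlsb25n | base64 -d # -> liruilong
echo cnE= | base64 -d         # -> rq
```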
Creating a Secret from a file

Secrets are usually created on the command line; creating them from a file is less common. The credentials file:
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$tee env.txt <<-'EOF'
> user=liruilong
> password1=redhat
> password2=redhat
> EOF
user=liruilong
password1=redhat
password2=redhat
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$ls
env.txt  mysqlpod.yaml
```
Create it from the file with `--from-env-file`; each key-value pair in the file becomes an entry in the Secret
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create secret generic mysecret1 --from-env-file=env.txt
secret/mysecret1 created
```
Check what was created
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-7q2qj   kubernetes.io/service-account-token   3      6h34m
mysecl                Opaque                                2      5h45m
mysecret1             Opaque                                3      32s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl describe secrets mysecret1
Name:         mysecret1
Namespace:    liruilong-secret-create
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password1:  6 bytes
password2:  6 bytes
user:       9 bytes
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
You can also create one with `--from-file`; the file name becomes the key and the file content the value
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create secret generic mysecret2 --from-file=/etc/hosts
secret/mysecret2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get secrets mysecret2 -o jsonpath='{.data.hosts}' | base64 -d
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.81 vms81.liruilongs.github.io vms81
192.168.26.82 vms82.liruilongs.github.io vms82
192.168.26.83 vms83.liruilongs.github.io vms83
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Using a Secret

A Secret can be consumed as a volume or through environment variables.
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create secret generic mysecl --from-literal=mysqlpassword=liruilong --from-literal=rqpassword=rq
secret/mysecl created
```
Here we use the Secret created earlier.
Consuming a Secret through environment variables

In the YAML file the password variable is set from the Secret: mysqlpodargs.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mysqlpod
  name: mysqlpod
spec:
  containers:
  - image: hub.c.163.com/library/mysql:latest
    imagePullPolicy: IfNotPresent
    name: mysqlpod
    resources: {}
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecl
          key: mysqlpassword
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Create the Pod
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f mysqlpodargs.yaml
pod/mysqlpod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods -o wide
NAME       READY   STATUS              RESTARTS   AGE   IP       NODE                         NOMINATED NODE   READINESS GATES
mysqlpod   0/1     ContainerCreating   0          15s   <none>   vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE                         NOMINATED NODE   READINESS GATES
mysqlpod   1/1     Running   0          21s   10.244.70.19   vms83.liruilongs.github.io   <none>           <none>
```
Test logging in
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$mysql -uroot -h10.244.70.19 -pliruilong
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.18 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
```
Consuming a Secret as a volume

Pod file nginxsecret.yaml (this approach is less commonly used)
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginxsecret
  name: nginxsecret
spec:
  volumes:
  - name: v1
    secret:
      secretName: mysecl
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginxsecret
    resources: {}
    volumeMounts:
    - name: v1
      mountPath: /data
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
After the Pod is created, each key of the Secret is written as a file under the /data directory inside the Pod
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f nginxsecret.yaml
pod/nginxsecret created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginxsecret   1/1     Running   0          41s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl exec -it nginxsecret -- bash
root@nginxsecret:/# ls
bin   data  docker-entrypoint.d   etc   lib    media  opt   root  sbin  sys  usr
boot  dev   docker-entrypoint.sh  home  lib64  mnt    proc  run   srv   tmp  var
root@nginxsecret:/# cd /data
root@nginxsecret:/data# ls
mysqlpassword  rqpassword
root@nginxsecret:/data# exit
```
If a subPath is added, only the specified key is written to the given file: nginxsecretsubPth.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginxsecret
  name: nginxsecret
spec:
  volumes:
  - name: v1
    secret:
      secretName: mysecl
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginxsecret
    resources: {}
    volumeMounts:
    - name: v1
      mountPath: /data/mysql
      subPath: mysqlpassword
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Create the Pod and test
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f nginxsecretsubPth.yaml
pod/nginxsecret created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginxsecret   1/1     Running   0          16s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl exec -it nginxsecret -- bash
root@nginxsecret:/# cat /data/mysql
liruilong
root@nginxsecret:/# exit
```
configmap (cm)

A ConfigMap is also used as key-value pairs; it is generally created on the command line, and it too can be consumed as a volume or through environment variables. The main difference between a ConfigMap and a Secret is that a Secret's values are base64-encoded, while a ConfigMap stores them in plain text.
Creating a configmap (cm)

On the command line:

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create configmap myconfig1 --from-literal=user=liruilong --from-literal=password=liruilong
configmap/myconfig1 created
```
Check what was created
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      7h32m
myconfig1          2      81s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl describe configmaps myconfig1
Name:         myconfig1
Namespace:    liruilong-secret-create
Labels:       <none>
Annotations:  <none>

Data
====
password:
----
liruilong
user:
----
liruilong

BinaryData
====

Events:  <none>
```
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get cm myconfig1 -o jsonpath='{.data.password}'
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get cm myconfig1 -o jsonpath='{.data.user}'
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Creating from a file

Configuration file content of the kind commonly used in microservices:
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$cat application.properties
server.port=8081
server.servlet.session.timeout=30m
server.servlet.context-path=/
server.tomcat.uri-encoding=utf-8
server.tomcat.threads.max=500
server.tomcat.basedir=/home/sang/tmp
spring.freemarker.allow-request-override=false
spring.freemarker.allow-session-override=true
spring.freemarker.cache=false
spring.freemarker.charset=UTF-8
spring.freemarker.check-template-location=true
spring.freemarker.content-type=text/html
spring.freemarker.expose-request-attributes=false
spring.freemarker.expose-session-attributes=true
spring.freemarker.suffix=.ftl
spring.freemarker.template-loader-path=classpath:/templates/
spring.thymeleaf.cache=true
spring.thymeleaf.check-template=true
spring.thymeleaf.check-template-location=true
spring.thymeleaf.encoding=UTF-8
spring.thymeleaf.prefix=classpath:/templates/
spring.thymeleaf.servlet.content-type=text/html
spring.thymeleaf.suffix=.html
spring.redis.database=0
spring.redis.host=192.168.66.130
spring.redis.port=6379
spring.redis.password=123@456
spring.redis.lettuce.pool.max-active=
spring.redis.lettuce.pool.max-idle=
spring.redis.lettuce.pool.max-wait=
spring.redis.lettuce.pool.min-idle=
spring.redis.lettuce.shutdown-timeout=
spring.redis.jedis.pool.max-active=8
spring.redis.jedis.pool.max-idle=8
spring.redis.jedis.pool.max-wait=-1ms
spring.redis.jedis.pool.min-idle=0
```
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create configmap myconfig2 --from-file=./application.properties
configmap/myconfig2 created
```
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$cat env.txt
user=liruilong
password1=redhat
password2=redhat
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl create configmap myconfig3 --from-env-file=./env.txt
configmap/myconfig3 created
```
List all the ConfigMaps created
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      8h
myconfig1          2      37m
myconfig2          1      9m16s
myconfig3          3      18s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
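The same key-value data can also be declared in a manifest instead of created with kubectl. A sketch matching the myconfig1 contents above (ConfigMap values are plain text, so no encoding is needed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig1
data:
  user: liruilong
  password: liruilong
```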
Using a configmap (cm)

Consuming a ConfigMap as a volume

A ConfigMap is usually consumed as a volume; this is a common way to externalize configuration files from microservices: ngingconfig.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginxsecret
  name: nginxsecret
spec:
  volumes:
  - name: config
    configMap:
      name: myconfig2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginxsecret
    resources: {}
    volumeMounts:
    - name: config
      mountPath: /app/java
      readOnly: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Test: check the configuration file
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f ngingconfig.yaml
pod/nginxsecret created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginxsecret   1/1     Running   0          40s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl exec -it nginxsecret -- bash
root@nginxsecret:/# ls /app/java
application.properties
root@nginxsecret:/# cat /app/java/application.properties
server.port=8081
server.servlet.session.timeout=30m
server.servlet.context-path=/
server.tomcat.uri-encoding=utf-8
.........
```
A practical example: change the kube-proxy load-balancing strategy by editing its ConfigMap and setting `mode: "iptables"` or `mode: "ipvs"`; after the change, the corresponding Pods must be restarted.
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get cm -n kube-system
NAME                                 DATA   AGE
calico-config                        4      66d
coredns                              1      66d
extension-apiserver-authentication   6      66d
kube-proxy                           2      66d
kube-root-ca.crt                     1      66d
kubeadm-config                       2      66d
kubelet-config-1.21                  1      66d
kubelet-config-1.22                  1      54d
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl edit cm kube-proxy -n kube-system
```
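Inside the editor, the relevant fragment of the config.conf key looks roughly like this (a sketch; most of the surrounding fields are omitted):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # "" or "iptables" is the default; set "ipvs" to switch
```

After saving, restart the kube-proxy Pods so they pick up the change; in a kubeadm cluster something like `kubectl delete pod -n kube-system -l k8s-app=kube-proxy` works, since the DaemonSet recreates them.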
Consuming a ConfigMap through environment variables

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get configmaps myconfig3 -o yaml
apiVersion: v1
data:
  password1: redhat
  password2: redhat
  user: liruilong
kind: ConfigMap
metadata:
  creationTimestamp: "2021-12-12T10:04:42Z"
  name: myconfig3
  namespace: liruilong-secret-create
  resourceVersion: "1645816"
  selfLink: /api/v1/namespaces/liruilong-secret-create/configmaps/myconfig3
  uid: b75bef31-05a8-4d67-8d5c-dea42aedea67
```
Write the Pod resource file
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mysqlpod
  name: mysqlpod
spec:
  containers:
  - image: hub.c.163.com/library/mysql:latest
    imagePullPolicy: IfNotPresent
    name: mysqlpod
    resources: {}
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: myconfig3
          key: user
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Create the Pod
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl apply -f mysqlpodconfig.yaml
pod/mysqlpod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
mysqlpod   1/1     Running   0          3m19s   10.244.171.130   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$
```
Test it
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-secret-create]
└─$mysql -uroot -h10.244.171.130 -pliruilong
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.18 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
```
deployment

Deployment is a concept introduced in Kubernetes v1.2 to better solve the problem of Pod orchestration. Internally, a Deployment uses a ReplicaSet to accomplish this. Whether judged by its role and purpose, its YAML definition, or its concrete command-line operations, a Deployment can be viewed as an upgrade of the RC; the two are more than 90% similar.

The biggest improvement of Deployment over RC is that we can always know the current progress of a Pod "deployment". Since creating a Pod, scheduling it, binding it to a node, and starting its containers on the target Node is a complete process that takes time, the target state of N running Pod replicas is really the final state of a continuously changing deployment process.
Typical use cases for Deployments:
- Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background.
- Check the rollout status of the ReplicaSet to see whether it succeeded.
- Declare the new state of the Pods by updating the Deployment's PodTemplateSpec. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
- Roll back to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the Deployment's revision.
- Scale up the Deployment to take on more load.
- Pause the Deployment to apply multiple fixes to its PodTemplateSpec, then resume it to start a new rollout.
- Use the status of the Deployment to determine whether a rollout is stuck.
- Clean up older ReplicaSets that are no longer needed.
ReplicaSet

The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time. It is therefore often used to guarantee the availability of a specified number of identical Pods.
How a ReplicaSet works

A ReplicaSet is defined by a set of fields, including:

- a selector that identifies the set of Pods it can acquire,
- a number indicating how many replicas it should maintain,
- a Pod template specifying the Pods it should create to satisfy the replica count, and so on.
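The three fields above map directly onto a manifest. A minimal sketch (the name nginx-rs is purely illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3              # how many replicas to maintain
  selector:                # which Pods this ReplicaSet may acquire
    matchLabels:
      app: nginx-rs
  template:                # Pod template used when new Pods are needed
    metadata:
      labels:
        app: nginx-rs      # must satisfy the selector above
    spec:
      containers:
      - name: nginx
        image: nginx
```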
Prepare the lab environment
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$dir=k8s-deploy-create;mkdir $dir;cd $dir
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get ns
NAME              STATUS   AGE
default           Active   78m
kube-node-lease   Active   79m
kube-public       Active   79m
kube-system       Active   79m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl create ns liruilong-deploy-create
namespace/liruilong-deploy-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-deploy-create
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl config view | grep namespace
    namespace: liruilong-deploy-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
```
Create a Deployment from a YAML file

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$vim ngixndeplog.yaml
```
ngixndeplog.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
```
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 created
```
View the Deployment
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deploy -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   2/3     3            2           37s   nginx        nginx    app=web1
```
View the ReplicaSet it created
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR
web1-66b5fd9bc8   3         3         3       4m28s   nginx        nginx    app=web1,pod-template-hash=66b5fd9bc8
```
View the Pods it created
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          3m45s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lxh2   1/1     Running   0          3m45s   10.244.171.130   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          3m45s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
```
Scaling Pods up and down

In real production systems we often need to scale a service out, and sometimes, because resources are tight or the workload has dropped, to reduce the number of service instances. We can use the Scale mechanism of a Deployment/RC for this. Kubernetes supports two modes for scaling Pods, manual and automatic:

- In manual mode, running the `kubectl scale` command against a Deployment/RC sets the Pod replica count in a single step.
- In automatic mode, the user specifies a performance metric or custom business metric together with a range for the Pod replica count, and the system automatically adjusts within that range as the metric changes.
Manual mode

Change it on the command line: `kubectl scale deployment web1 --replicas=2`
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl scale deployment web1 --replicas=2
deployment.apps/web1 scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          8m19s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          8m19s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
```
Change it with edit: `kubectl edit deployment web1`
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl edit deployment web1
deployment.apps/web1 edited
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS              RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running             0          9m56s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lnds   0/1     ContainerCreating   0          6s      <none>           vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running             0          9m56s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
```
Change it by modifying the YAML file
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$sed -i 's/replicas: 3/replicas: 2/' ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          12m   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          12m   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
```
HPA automatic scaling

Starting with Kubernetes v1.1, a controller named Horizontal Pod Autoscaler (HPA) automatically scales Pods up and down based on CPU utilization.

The HPA controller periodically monitors the CPU utilization of the target Pods, at the interval defined by the kube-controller-manager flag --horizontal-pod-autoscaler-sync-period (default 30s), and when the conditions are met it adjusts the number of Pod replicas in a ReplicationController or Deployment to match the user-defined average Pod CPU utilization. Pod CPU utilization comes from the metrics-server component, so metrics-server must be installed in advance.

HPA can scale dynamically based on memory, CPU, or concurrency.

An HPA can be created quickly with the `kubectl autoscale` command or from a YAML file. Before creating the HPA, a Deployment/RC object must already exist, and the Pods in that Deployment/RC must define a resources.requests.cpu value; without it, metrics-server cannot collect the Pod's CPU usage, and the HPA will not work properly.
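The `kubectl autoscale` command has a declarative equivalent. A sketch in the autoscaling/v1 API, matching the 2-10 replica / 80% CPU settings used for web1 below:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web1
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # target average across Pods
```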
Set up metrics-server monitoring
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl top nodes
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vms81.liruilongs.github.io   401m         20%    1562Mi          40%
vms82.liruilongs.github.io   228m         11%    743Mi           19%
vms83.liruilongs.github.io   221m         11%    720Mi           18%
```
Configure the HPA: minimum 2 replicas, maximum 10, target CPU utilization 80%
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment web1 --min=2 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web1 autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   2         10        2          15s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl delete hpa web1
horizontalpodautoscaler.autoscaling "web1" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$
```
The current CPU usage shows as `<unknown>`; I have not found a complete fix for this yet, but defining CPU requests and limits in ngixndeplog.yaml is the prerequisite:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
```
Testing the HPA

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$cat ngixndeplog.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginxdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web
        resources:
          requests:
            cpu: 100m
      restartPolicy: Always
```
Set the HPA: `kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50`
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deployments.apps
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginxdep   2/2     2            2           8m8s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginxdep autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   0          97s   10.244.171.140   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-cb57p   1/1     Running   0          97s   10.244.70.10     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get hpa -o wide
NAME       REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginxdep   Deployment/nginxdep   <unknown>/50%   1         5         2          21s
```
Create a Service, then simulate calls to it
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl expose --name=nginxsvc deployment nginxdep --port=80
service/nginxsvc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get svc -o wide
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginxsvc   ClusterIP   10.104.147.65   <none>        80/TCP    9s    app=nginx
```
Test calling the Service
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "curl http://10.104.147.65"
192.168.26.83 | CHANGED | rc=0 >>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0   304k      0 --:--:-- --:--:-- --:--:--  600k
```
Install httpd-tools (an HTTP stress-testing toolkit) and generate load
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "yum install httpd-tools -y"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "ab -t 600 -n 1000000 -c 1000 http://10.104.147.65/" &
[1] 123433
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
Then watch how the Pods change.
Deployment robustness test

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl scale deployment nginxdep --replicas=3
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS        AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (3m19s ago)   47m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0               30s   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Running   0               30s   10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
Shut down vms83.liruilongs.github.io; after a while, all the Pods end up running on vms82.liruilongs.github.io
```
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready      control-plane,master   47h   v1.22.2
vms82.liruilongs.github.io   Ready      <none>                 47h   v1.22.2
vms83.liruilongs.github.io   NotReady   <none>                 47h   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME                        READY   STATUS        RESTARTS      AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running       1 (20m ago)   64m     10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running       0             17m     10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running       0             9m48s   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Terminating   0             17m     10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS      AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (27m ago)   71m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0             24m   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running   0             16m   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl top pods
NAME                        CPU(cores)   MEMORY(bytes)
nginxdep-645bf755b9-27hzn   0m           4Mi
nginxdep-645bf755b9-4dkpp   0m           1Mi
nginxdep-645bf755b9-9hzf2   0m           1Mi
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
When vms83.liruilongs.github.io comes back up, the pods do not move back onto it.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl get nodes NAME STATUS ROLES AGE VERSION vms81.liruilongs.github.io Ready control-plane,master 2d v1.22.2 vms82.liruilongs.github.io Ready <none> 2d v1.22.2 vms83.liruilongs.github.io Ready <none> 2d v1.22.2 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginxdep-645bf755b9-27hzn 1/1 Running 1 (27m ago) 71m 10.244.171.141 vms82.liruilongs.github.io <none> <none> nginxdep-645bf755b9-4dkpp 1/1 Running 0 24m 10.244.171.144 vms82.liruilongs.github.io <none> <none> nginxdep-645bf755b9-9hzf2 1/1 Running 0 16m 10.244.171.145 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~] └─$
Deployment: updating and rolling back images
When a service in the cluster needs upgrading, we have to stop all the Pods related to that service, pull the new image, and create new Pods. In a large cluster this becomes a real challenge, and stopping everything first and then upgrading step by step means a long window of service unavailability.
Kubernetes provides rolling updates to solve this. If the Pods were created by a Deployment, you can modify the Deployment's Pod definition (spec.template) or image name at runtime and apply it to the Deployment object; the system then performs the update automatically. If an error occurs during the update, the Pod version can be restored with a rollback. Environment setup
1 2 3 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl scale deployment nginxdep --replicas=5 deployment.apps/nginxdep scaled
1 2 3 4 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker pull nginx:1.9" ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker pull nginx:1.7.9"
Updating the image via the Deployment
Now the pod image needs to be updated to nginx:1.9. We can set the new image name on the Deployment with kubectl set image deployment/<deployment-name> <container-name>=nginx:1.9 --record (note: --record is deprecated, as the output below shows).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl set image deployment/nginxdep web=nginx:1.9 --record Flag --record has been deprecated, --record will be removed in the future deployment.apps/nginxdep image updated ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-59d7c6b6f-6hdb8 0/1 ContainerCreating 0 26s nginxdep-59d7c6b6f-bd5z2 0/1 ContainerCreating 0 26s nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 26s nginxdep-59d7c6b6f-jd5df 0/1 ContainerCreating 0 4s nginxdep-645bf755b9-27hzn 1/1 Running 1 (51m ago) 95m nginxdep-645bf755b9-4dkpp 1/1 Running 0 48m nginxdep-645bf755b9-hkcqx 1/1 Running 0 18m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-59d7c6b6f-6hdb8 0/1 ContainerCreating 0 51s nginxdep-59d7c6b6f-bd5z2 1/1 Running 0 51s nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 51s nginxdep-59d7c6b6f-jd5df 0/1 ContainerCreating 0 29s nginxdep-59d7c6b6f-prfzd 0/1 ContainerCreating 0 14s nginxdep-645bf755b9-27hzn 1/1 Running 1 (51m ago) 96m nginxdep-645bf755b9-4dkpp 1/1 Running 0 49m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-59d7c6b6f-6hdb8 1/1 Running 0 2m28s nginxdep-59d7c6b6f-bd5z2 1/1 Running 0 2m28s nginxdep-59d7c6b6f-jb2j7 1/1 Running 0 2m28s nginxdep-59d7c6b6f-jd5df 1/1 Running 0 2m6s nginxdep-59d7c6b6f-prfzd 1/1 Running 0 111s
From the AGE column you can watch the nginx version being rolling-upgraded from latest to 1.9, and then on to 1.7.9.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl set image deployment/nginxdep web=nginx:1.7.9 --record Flag --record has been deprecated, --record will be removed in the future deployment.apps/nginxdep image updated ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-66587778f6-9jqfz 1/1 Running 0 4m37s nginxdep-66587778f6-jbsww 1/1 Running 0 5m2s nginxdep-66587778f6-lwkpg 1/1 Running 0 5m1s nginxdep-66587778f6-tmd4l 1/1 Running 0 4m41s nginxdep-66587778f6-v9f28 1/1 Running 0 5m2s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl describe pods nginxdep-66587778f6-jbsww | grep Image: Image: nginx:1.7.9
You can pause the rollout with kubectl rollout pause deployment nginxdep to batch several changes into one complex update, then resume it.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout pause deployment nginxdep deployment.apps/nginxdep paused ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get deployments -o wide NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginxdep 5/5 5 5 147m web nginx:1.7.9 app=nginx ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl set image deployment/nginxdep web=nginx deployment.apps/nginxdep image updated ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout history deployment nginxdep deployment.apps/nginxdep REVISION CHANGE-CAUSE 4 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true 5 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true 6 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout resume deployment nginxdep deployment.apps/nginxdep resumed
Deployment rollback
This works much like git: you can roll back to any revision ID.
View the revision history
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout history deployment nginxdep deployment.apps/nginxdep REVISION CHANGE-CAUSE 1 kubectl set image deployment/nginxdep nginxdep=nginx:1.9 --record=true 2 kubectl set image deployment/nginxdep web=nginx:1.9 --record=true 3 kubectl set image deployment/nginxdep web=nginx:1.7.9 --record=true ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get deployments nginxdep -o wide NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginxdep 5/5 5 5 128m web nginx:1.7.9 app=nginx
Roll back to a given revision
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout undo deployment nginxdep --to-revision=2 deployment.apps/nginxdep rolled back ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-59d7c6b6f-ctdh2 0/1 ContainerCreating 0 6s nginxdep-59d7c6b6f-dk67c 0/1 ContainerCreating 0 6s nginxdep-59d7c6b6f-kr74k 0/1 ContainerCreating 0 6s nginxdep-66587778f6-9jqfz 1/1 Running 0 23m nginxdep-66587778f6-jbsww 1/1 Running 0 23m nginxdep-66587778f6-lwkpg 1/1 Running 0 23m nginxdep-66587778f6-v9f28 1/1 Running 0 23m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxdep-59d7c6b6f-7j9z7 0/1 ContainerCreating 0 37s nginxdep-59d7c6b6f-ctdh2 1/1 Running 0 59s nginxdep-59d7c6b6f-dk67c 1/1 Running 0 59s nginxdep-59d7c6b6f-f2sb4 0/1 ContainerCreating 0 21s nginxdep-59d7c6b6f-kr74k 1/1 Running 0 59s nginxdep-66587778f6-jbsww 1/1 Running 0 24m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$
View the details of a revision
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl rollout history deployment nginxdep --revision=4 deployment.apps/nginxdep with revision Pod Template: Labels: app=nginx pod-template-hash=59d7c6b6f Annotations: kubernetes.io/change-cause: kubectl set image deployment/nginxdep web=nginx:1.9 --record=true Containers: web: Image: nginx:1.9 Port: <none> Host Port: <none> Requests: cpu: 100m Environment: <none> Mounts: <none> Volumes: <none>
Rolling update parameters
maxSurge: how many Pods can be created above the desired replica count during an update, i.e. old plus new replicas will not exceed (100% + maxSurge) of the desired count.
maxUnavailable: how many Pods may be unavailable during the update, i.e. how many old Pods can be taken down at once.
Both can be changed by editing the Deployment:
1 2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl edit deployments nginxdep
The defaults:
1 2 3 4 5 6 7 8 9 10 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$kubectl get deployments nginxdep -o yaml | grep -A 5 strategy: strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type : RollingUpdate template: ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create] └─$
Recreate: setting spec.strategy.type: Recreate means that when updating Pods, the Deployment first kills all running Pods and then creates new ones.
RollingUpdate: setting spec.strategy.type: RollingUpdate means the Deployment updates Pods one by one in a rolling fashion; the process is controlled by the two parameters under spec.strategy.rollingUpdate (maxUnavailable and maxSurge).
daemonset
A DaemonSet ensures that all nodes run one copy of a Pod. When nodes join the cluster, a Pod is added for them; when nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet deletes all the Pods it created. In short: a per-node singleton — each node runs exactly one pod.
DaemonSet use cases
Some typical uses of a DaemonSet:
Running a GlusterFS or Ceph storage daemon on every node.
Running a log-collection agent on every node, e.g. Fluentd or Logstash.
Running a performance-monitoring agent on every node to collect that node's runtime metrics, e.g. Prometheus Node Exporter, collectd, New Relic agent, or Ganglia gmond.
A simple pattern is to run one DaemonSet per daemon type across all nodes. A slightly more complex pattern is to run multiple DaemonSets for the same daemon type, each with different flags and with different memory and CPU requests for different hardware types. (I don't quite get this sentence yet; will study it later.)
DaemonSet Pod scheduling
Similar to RC: besides the system's built-in algorithm scheduling a Pod onto each node, you can use a nodeSelector or nodeAffinity in the Pod definition to restrict scheduling to the range of nodes that match the given conditions.
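As a hedged sketch of the nodeSelector approach mentioned above, a DaemonSet can be restricted to a labeled subset of nodes like this; the disktype=ssd label and the myds-ssd name are assumed examples, not part of this lab:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds-ssd
spec:
  selector:
    matchLabels:
      app: myds-ssd
  template:
    metadata:
      labels:
        app: myds-ssd
    spec:
      nodeSelector:
        disktype: ssd      # only nodes labeled disktype=ssd get a pod
      containers:
      - name: nginx
        image: nginx
```

After labeling one node, e.g. kubectl label nodes vms82.liruilongs.github.io disktype=ssd, only that node would run the DaemonSet's pod.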
Environment setup
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$dir =k8s-daemonset-create;mkdir $dir ;cd $dir ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl config current-context kubernetes-admin@kubernetes ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl create ns liruilong-dameonset-create namespace/liruilong-dameonset-create created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-daemonset-create Context "kubernetes-admin@kubernetes" modified.
DaemonSets in kubeadm
A kubeadm-installed cluster itself uses DaemonSets: calico is the networking component and needs to run on every node, and kube-proxy is the proxy component, used for load balancing and similar.
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get ds -A NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system calico-node 3 3 3 3 3 kubernetes.io/os=linux 4d23h kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 4d23h ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$
Creating a DaemonSet
Note that a DaemonSet manifest differs from a Deployment essentially only in the kind field (and a DaemonSet has no replicas field), so you can copy a Deployment template and modify it.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: myds1
  name: myds1
spec:
  selector:
    matchLabels:
      app: myds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myds1
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
1 2 3 4 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl create deployment myds1 --image=nginx --dry-run=client -o yaml > deamonset.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$vim deamonset.yaml
We create the DaemonSet. At the moment only the master node and one worker node are up; because the master node has a taint, only a single DaemonSet pod is running.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl apply -f deamonset.yaml daemonset.apps/myds1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get nodes NAME STATUS ROLES AGE VERSION vms81.liruilongs.github.io Ready control-plane,master 4d22h v1.22.2 vms82.liruilongs.github.io Ready <none> 4d22h v1.22.2 vms83.liruilongs.github.io NotReady <none> 4d22h v1.22.2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE myds1-fbmhp 1/1 Running 0 35s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$
Pods are added automatically when a node joins the cluster: after we power the machine back on, the newly joined vms83.liruilongs.github.io node automatically runs a DaemonSet pod.
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get nodes NAME STATUS ROLES AGE VERSION vms81.liruilongs.github.io Ready control-plane,master 4d22h v1.22.2 vms82.liruilongs.github.io Ready <none> 4d22h v1.22.2 vms83.liruilongs.github.io Ready <none> 4d22h v1.22.2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE myds1-prldj 1/1 Running 0 6m13s myds1-pvwm4 1/1 Running 0 10m
Running DaemonSet pods on tainted nodes
Next we modify the DaemonSet resource file to tolerate tainted nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: myds1
  name: myds1
spec:
  selector:
    matchLabels:
      app: myds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myds1
    spec:
      terminationGracePeriodSeconds: 0
      tolerations:
      - operator: Exists
      containers:
      - image: nginx
        name: nginx
        resources: {}
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl apply -f deamonsettaint.yaml daemonset.apps/myds1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE myds1-8tsnz 0/1 ContainerCreating 0 3s myds1-9l6d9 0/1 ContainerCreating 0 3s myds1-wz44b 0/1 ContainerCreating 0 3s
Now every node runs one pod of the DaemonSet.
1 2 3 4 5 6 7 8 9 10 11 12 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl describe nodes vms81.liruilongs.github.io | grep Taint Taints: node-role.kubernetes.io/master:NoSchedule ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl run pod1 --image=nginx --dry-run=server -o yaml | grep -A 6 terminationGracePeriodSeconds terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute
Of course, if we don't want the DaemonSet pods on every tainted node, we can instead tolerate only a specific key:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: myds1
  name: myds1
spec:
  selector:
    matchLabels:
      app: myds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myds1
    spec:
      terminationGracePeriodSeconds: 0
      tolerations:
      - operator: Exists
        key: node-role.kubernetes.io/master
        effect: "NoSchedule"
      containers:
      - image: nginx
        name: nginx
        resources: {}
Now the DaemonSet pods run on both the master and the worker nodes.
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl apply -f deamonsetaint.yaml daemonset.apps/myds1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-daemonset-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE myds1-f7hbb 0/1 ContainerCreating 0 4s myds1-hksp9 0/1 ContainerCreating 0 4s myds1-nnmzp 0/1 ContainerCreating 0 4s
How Daemon Pods are scheduled
A DaemonSet ensures that every eligible node runs one copy of the Pod. Normally, the node a Pod runs on is selected by the Kubernetes scheduler; DaemonSet Pods, however, are created and scheduled by the DaemonSet controller. That brings some problems: inconsistent Pod behavior (normal Pods sit in Pending after creation while waiting to be scheduled, but DaemonSet Pods are never created in the Pending state), and Pod preemption is handled by the default scheduler — with preemption enabled, the DaemonSet controller makes scheduling decisions without considering Pod priority and preemption. (The default scheduler here is the Kubernetes scheduler.)
ScheduleDaemonSetPods lets DaemonSets be scheduled by the default scheduler instead of the DaemonSet controller, by adding a NodeAffinity term to the DaemonSet Pods rather than a .spec.nodeName term; the default scheduler then binds the Pod to the target host.
If the DaemonSet Pod already has a node affinity, it is replaced (the original node affinity is taken into account before selecting the target host). The DaemonSet controller performs these operations only when creating or modifying DaemonSet Pods, and makes no change to the DaemonSet's spec.template.
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name
Communicating with Daemon Pods
Some possible patterns for communicating with the Pods of a DaemonSet:
Push: the Pods in the DaemonSet are configured to push updates to another service, such as a stats database. They have no clients.
NodeIP and known port: the Pods in the DaemonSet can use a hostPort, so they are reachable via the node IPs. Clients learn the node IP list by some means, and from that also know the port.
DNS: create a headless Service with the same Pod selector, then discover the DaemonSet through the endpoints resource, or retrieve multiple A records from DNS.
Service: create a Service with the same Pod selector, and use it to reach the daemon on a random node (there is no way to reach a specific node).
Updating a DaemonSet
If node labels change, the DaemonSet immediately adds Pods to newly matching nodes and deletes Pods from nodes that no longer match. You can modify the Pods a DaemonSet creates, but not all Pod fields are updatable; and the next time a node (even one with the same name) is created, the DaemonSet controller still uses the original template.
You can delete a DaemonSet. With kubectl and the --cascade=orphan option, the Pods are left on the nodes; a new DaemonSet created afterwards with the same selector adopts the existing Pods. If Pods need to be replaced, the DaemonSet replaces them according to its updateStrategy.
Alternatives to DaemonSet
Init scripts
It is certainly possible to start daemons directly on nodes (e.g. with init, upstartd, or systemd). However, running them via a DaemonSet has some advantages:
The DaemonSet provides monitoring and log management for the daemons, just like any other application it runs.
The daemons use the same configuration language and tooling (Pod templates, kubectl) as your applications.
Running daemons in resource-limited containers increases isolation between the daemons and application containers. That said, this can also be achieved by running the daemons in containers but outside Pods — for example, started directly via Docker.
Bare Pods
You can create Pods directly and pin them to particular nodes. However, a DaemonSet replaces Pods that are deleted or terminated for any reason (node failure, routine node maintenance, kernel upgrades), so you should use a DaemonSet rather than creating individual Pods.
Static Pods
Pods can also be created by writing files into a directory watched by the kubelet; these are called static Pods. Unlike DaemonSet Pods, static Pods are not managed by kubectl or other Kubernetes API clients. Static Pods do not depend on the API server, which makes them useful for bootstrapping a new cluster. Also, static Pods may be deprecated in the future.
Deployments
A DaemonSet is very similar to a Deployment: both create Pods, and those Pods run processes that are not expected to terminate (e.g. web servers, storage servers). Use a Deployment for stateless services such as frontends, where scaling the replica count and rolling out updates smoothly matter more than controlling exactly which host a Pod runs on. Use a DaemonSet when a copy of the Pod must always run on all or certain hosts, and when the DaemonSet provides node-level functionality that allows other Pods to run correctly on that particular node.
For example, network plugins often include a component that runs as a DaemonSet; that component makes sure the cluster networking works on the node it runs on.
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get ds -A NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system calico-node 3 3 3 3 3 kubernetes.io/os=linux 4d23h kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 4d23h ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$
ReplicationController (RC)
A ReplicationController ensures that a specified number of Pod replicas are running at any one time; in other words, it ensures that a Pod, or a homogeneous set of Pods, is always up and available.
The recommended way to set up replication is a Deployment that configures a ReplicaSet. RC is a very old controller and is rarely used nowadays; it is covered here for completeness, and it is very similar to a Deployment.
How a ReplicationController works
When there are too many Pods, the ReplicationController terminates the extras; when there are too few, it starts more. Unlike manually created Pods, Pods maintained by a ReplicationController are automatically replaced when they fail, are deleted, or are terminated — for example, after disruptive maintenance such as a kernel upgrade, your Pods are recreated on a node. For this reason you should use a ReplicationController even if your application needs only a single Pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, it supervises multiple Pods across multiple nodes.
Alternatives to ReplicationController
ReplicaSet
ReplicaSet is the next-generation ReplicationController and supports the new set-based label selectors. It is mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion, and updates. Note that we recommend using Deployments rather than ReplicaSets directly, unless you need custom update orchestration or don't need updates at all.
Deployment is a higher-level API object that updates its underlying ReplicaSets and their Pods in a way similar to kubectl rolling-update. If you want that rolling-update functionality, Deployments are recommended because, unlike kubectl rolling-update, they are declarative, server-side, and have additional features.
Create an RC

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginxrc
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web
        resources:
          requests:
            cpu: 100m
      restartPolicy: Always
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl apply -f rc.yaml replicationcontroller/nginxrc created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxrc-5szqd 0/1 ContainerCreating 0 15s nginxrc-tstxl 1/1 Running 0 15s
Scale the RC
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl scale rc nginxrc --replicas=5 replicationcontroller/nginxrc scaled ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxrc-5szqd 1/1 Running 0 84s nginxrc-6ptpt 0/1 ContainerCreating 0 3s nginxrc-pd6qw 0/1 ContainerCreating 0 3s nginxrc-tntbd 0/1 ContainerCreating 0 3s nginxrc-tstxl 1/1 Running 0 84s
Delete the RC
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl delete -f rc.yaml replicationcontroller "nginxrc" deleted ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get pods NAME READY STATUS RESTARTS AGE nginxrc-5szqd 1/1 Terminating 0 110s nginxrc-6ptpt 1/1 Terminating 0 29s nginxrc-pd6qw 1/1 Terminating 0 29s nginxrc-tntbd 1/1 Terminating 0 29s nginxrc-tstxl 1/1 Terminating 0 110s
ReplicaSet (RS)
The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time; it is often used to guarantee the availability of a specified number of identical Pods.
How a ReplicaSet works
A ReplicaSet is defined by several fields, including:
a selector that identifies the set of Pods it can acquire,
a number indicating how many replicas it should maintain,
and a Pod template specifying the Pods it should create to meet the replica count.
A ReplicaSet fulfills its purpose by creating and deleting Pods as needed to reach the desired replica count; when it needs to create new Pods, it uses the provided Pod template.
A ReplicaSet is linked to its Pods via the Pods' metadata.ownerReferences field, which names the resource owning the current object. All Pods acquired by a ReplicaSet carry the owning ReplicaSet's identity in their ownerReferences field. It is through this link that the ReplicaSet knows the state of the Pods it maintains and plans its actions accordingly.
A ReplicaSet identifies Pods to acquire using its selector. If a Pod has no OwnerReference, or its OwnerReference is not a controller, and it matches a ReplicaSet's selector, the Pod is immediately acquired by that ReplicaSet.
When to use a ReplicaSet
A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features. We therefore recommend using Deployments rather than ReplicaSets directly, unless you need custom update orchestration or don't need updates at all.
Create a ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
1 2 3 4 5 6 7 8 9 10 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl apply -f rs.yaml replicaset.apps/frontend created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get pods NAME READY STATUS RESTARTS AGE frontend-8r27p 1/1 Running 0 33s frontend-lk46p 0/1 ContainerCreating 0 33s frontend-njjt2 0/1 ContainerCreating 0 33s
Scale the RS
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl scale rs frontend --replicas=1 replicaset.apps/frontend scaled ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$kubectl get pods NAME READY STATUS RESTARTS AGE frontend-8r27p 1/1 Running 0 60s frontend-lk46p 1/1 Terminating 0 60s frontend-njjt2 1/1 Terminating 0 60s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-ReplicationController] └─$
Differences among the three in their manifests
Changing the replica count
1 2 3 kubectl scale deployment nginx --replicas=20 kubectl scale rs rs1 --replicas=4 kubectl scale dc nginx --replicas=20
Pod health checks and service availability checks
Purpose of health checks
Probing exists to keep Pods robust. When a Pod dies, the Deployment creates a new one; but if the Pod itself is running normally while the application inside it has gone wrong, the Deployment cannot detect that. Hence we need probes to check whether the Pod is actually serving properly.
Probe types
Kubernetes can check a Pod's health with two kinds of probes, LivenessProbe and ReadinessProbe; the kubelet runs these probes periodically to diagnose container health. Both are configured on the containers in the Pod spec (and thus in a Deployment's pod template).
LivenessProbe: determines whether the container is alive (Running). If the liveness probe detects that the container is unhealthy, the kubelet kills the container and handles it according to the container's restart policy. If a container has no liveness probe, the kubelet behaves as if the probe always returned Success.
ReadinessProbe: determines whether the container's service is available (Ready); only Pods that reach the Ready state can receive requests. For Pods managed by a Service, the Service-to-Endpoint association is set based on whether the Pod is Ready. If Ready becomes False at runtime, the Pod is automatically removed from the Service's endpoint list, and added back once it returns to Ready. This guarantees that clients accessing the Service are never routed to a Pod instance whose service is unavailable.
Probe implementations and parameters
Both LivenessProbe and ReadinessProbe can be implemented in any of three ways:
ExecAction: run a command inside the container; a return code of 0 means the container is healthy.
TCPSocketAction: perform a TCP check against the container's IP address and port; if a TCP connection can be established, the container is healthy.
HTTPGetAction: perform an HTTP GET against the container's IP address, port, and path; a response status code >= 200 and < 400 means the container is healthy.
For each probe type, parameters such as initialDelaySeconds and timeoutSeconds can be set; their meanings are as follows:
initialDelaySeconds: how long to wait after the container starts before the first health check, in seconds.
timeoutSeconds: how long to wait for a response after the check is sent, in seconds; on timeout the kubelet considers the container unable to provide service and restarts it.
periodSeconds: how often to probe; default 10 seconds, minimum 1 second.
successThreshold: after a failure, the number of consecutive successes required before the probe is considered successful again; default 1, minimum 1, and must be 1 for a liveness probe.
failureThreshold: the number of retries Kubernetes makes once a probe starts failing before giving up. For a liveness probe, giving up means restarting the container; for a readiness probe, it means marking the Pod not ready. Default 3, minimum 1.
The ReadinessProbe mechanism may not satisfy some complex applications' criteria for judging service availability, so starting with version 1.11 Kubernetes introduced the Pod Ready++ feature to extend readiness detection; it reached GA in version 1.14 under the name Pod Readiness Gates.
With readiness gates, users can attach custom readiness conditions to a Pod to help Kubernetes decide when the Pod has reached the service-available state (Ready). For the custom condition to take effect, the user must provide an external controller that sets the corresponding condition status.
A Pod's readiness gates are set via the readinessGates field in the Pod definition. The following example sets a new ReadinessGate of type www.example.com/feature-1:
–
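A minimal sketch of that fragment, matching the www.example.com/feature-1 condition type named in the text:

```yaml
kind: Pod
spec:
  readinessGates:
  - conditionType: "www.example.com/feature-1"
```

The external controller would then patch a condition of type www.example.com/feature-1 into the Pod's status.conditions to flip it to "True".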
The status of the added custom condition is set by the user's external controller and defaults to False. Kubernetes marks the Pod as service-available (Ready is True) only when all readinessGates conditions are True.
I don't fully understand this yet; something to study later.
Environment setup
1 2 3 4 5 6 7 8 9 10 11 12 13 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$mkdir liveness-probe ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$cd liveness-probe/ ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl create ns liveness-probe namespace/liveness-probe created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl config current-context kubernetes-admin@kubernetes ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl config set-context $(kubectl config current-context) --namespace=liveness-probe Context "kubernetes-admin@kubernetes" modified.
LivenessProbe
Determines whether the container is alive (Running). If the liveness probe detects that the container is unhealthy, the kubelet kills it and handles it according to the container's restart policy.
ExecAction (command)
Runs a command inside the container; a return code of 0 means the container is healthy.
Resource file definition
┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe]
└─$cat liveness-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-liveness
  name: pod-liveness
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 10
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
    image: busybox
    imagePullPolicy: IfNotPresent
    name: pod-liveness
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Run this Pod. Once the container starts, it creates a file, sleeps 30s, removes the file, and sleeps again; the liveness probe checks that the file exists.
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl apply -f liveness-probe.yaml pod/pod-liveness created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-liveness 1/1 Running 1 (8s ago) 41s
After more than 30s the file has been deleted, so the health check fails and the Pod is restarted according to its restart policy.
1 2 3 4 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-liveness 1/1 Running 2 (34s ago) 99s
By 99s it has already restarted for the second time.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "docker ps | grep pod-liveness" 192.168.26.83 | CHANGED | rc=0 >> 00f4182c014e 7138284460ff "/bin/sh -c 'touch /…" 6 seconds ago Up 5 seconds k8s_pod-liveness_pod-liveness_liveness-probe_81b4b086-fb28-4657-93d0-bd23e67f980a_0 01c5cfa02d8c registry.aliyuncs.com/google_containers/pause:3.5 "/pause" 7 seconds ago Up 6 seconds k8s_POD_pod-liveness_liveness-probe_81b4b086-fb28-4657-93d0-bd23e67f980a_0 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-liveness 1/1 Running 0 25s ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-liveness 1/1 Running 1 (12s ago) 44s ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "docker ps | grep pod-liveness" 192.168.26.83 | CHANGED | rc=0 >> 1eafd7e8a12a 7138284460ff "/bin/sh -c 'touch /…" 15 seconds ago Up 14 seconds k8s_pod-liveness_pod-liveness_liveness-probe_81b4b086-fb28-4657-93d0-bd23e67f980a_1 01c5cfa02d8c registry.aliyuncs.com/google_containers/pause:3.5 "/pause" 47 seconds ago Up 47 seconds k8s_POD_pod-liveness_liveness-probe_81b4b086-fb28-4657-93d0-bd23e67f980a_0 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
The container IDs in docker on the node differ before and after, confirming that the container was killed and restarted.
HTTPGetAction
Performs an HTTP GET against the container's IP address, port, and path; a response status code >= 200 and < 400 means the container is healthy. Create the resource file; note the related parameter usage.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe]
└─$cat liveness-probe-http.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-livenss-probe
  name: pod-livenss-probe
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-livenss-probe
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /index.html
        port: 80
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Run it; this probe fetches nginx's default welcome page.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$vim liveness-probe-http.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl apply -f liveness-probe-http.yaml pod/pod-livenss-probe created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-livenss-probe 1/1 Running 0 15s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it pod-livenss-probe -- rm /usr/share/nginx/html/index.html ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-livenss-probe 1/1 Running 1 (1s ago) 2m31s
When the welcome page is deleted, the GET fails, the probe fires, and the Pod restarts.
TCPSocketAction
Performs a TCP check against the container's IP address and port; if a TCP connection can be established, the container is healthy. Resource file definition
┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe]
└─$cat liveness-probe-tcp.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-livenss-probe
  name: pod-livenss-probe
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-livenss-probe
    livenessProbe:
      failureThreshold: 3
      tcpSocket:
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
The probe connects to port 8080, but 8080 is not open, so no connection can be established; the probe fires and the Pod is restarted.
1 2 3 4 5 6 7 8 9 10 11 12 13 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl apply -f liveness-probe-tcp.yaml pod/pod-livenss-probe created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-livenss-probe 1/1 Running 0 8s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods NAME READY STATUS RESTARTS AGE pod-livenss-probe 1/1 Running 1 (4s ago) 44s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$
ReadinessProbe
Determines whether the container's service is available (Ready). Only Pods in the Ready state receive requests; Pods that are not Ready are removed from the Service's endpoints and get no traffic.
ExecAction (command)
Resource file definition — a postStart hook creates the file that the probe then checks.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe]
└─$cat readiness-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-liveness
  name: pod-liveness
spec:
  containers:
  - readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
    image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-liveness
    resources: {}
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "touch /tmp/healthy"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Create three nginx Pods, then create a Service over them for testing.
1 2 3 4 5 6 7 8 9 10 11 12 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$sed 's/pod-liveness/pod-liveness-1/' readiness-probe.yaml | kubectl apply -f - pod/pod-liveness-1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$sed 's/pod-liveness/pod-liveness-2/' readiness-probe.yaml | kubectl apply -f - pod/pod-liveness-2 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-liveness 1/1 Running 0 3m1s 10.244.70.50 vms83.liruilongs.github.io <none> <none> pod-liveness-1 1/1 Running 0 2m 10.244.70.51 vms83.liruilongs.github.io <none> <none> pod-liveness-2 1/1 Running 0 111s 10.244.70.52 vms83.liruilongs.github.io <none> <none>
Write each pod's name into its index page
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$serve =pod-liveness ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it $serve -- sh -c "cat /usr/share/nginx/html/index.html" pod-liveness ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$serve =pod-liveness-1 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$serve =pod-liveness-2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$
Change the labels so all three pods carry run=pod-liveness
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS pod-liveness 1/1 Running 0 15m run=pod-liveness pod-liveness-1 1/1 Running 0 14m run=pod-liveness-1 pod-liveness-2 1/1 Running 0 14m run=pod-liveness-2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl edit pods pod-liveness-1 pod/pod-liveness-1 edited ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl edit pods pod-liveness-2 pod/pod-liveness-2 edited ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS pod-liveness 1/1 Running 0 17m run=pod-liveness pod-liveness-1 1/1 Running 0 16m run=pod-liveness pod-liveness-2 1/1 Running 0 16m run=pod-liveness ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$
Confirm the probe file exists before deleting it
1 2 3 4 5 6 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it pod-liveness -- ls /tmp/ healthy ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl exec -it pod-liveness-1 -- ls /tmp/ healthy
Create a SVC from the pods
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl expose --name=svc pod pod-liveness --port=80 service/svc exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get ep NAME ENDPOINTS AGE svc 10.244.70.50:80,10.244.70.51:80,10.244.70.52:80 16s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc ClusterIP 10.104.246.121 <none> 80/TCP 36s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-liveness 1/1 Running 0 24m 10.244.70.50 vms83.liruilongs.github.io <none> <none> pod-liveness-1 1/1 Running 0 23m 10.244.70.51 vms83.liruilongs.github.io <none> <none> pod-liveness-2 1/1 Running 0 23m 10.244.70.52 vms83.liruilongs.github.io <none> <none>
Test that the SVC works: requests are load-balanced across the three pods
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$while true ; do curl 10.104.246.121 ; sleep 1 > done pod-liveness pod-liveness-2 pod-liveness pod-liveness-1 pod-liveness-2 ^C
Delete the probe file and test again
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl exec -it pod-liveness -- rm -rf /tmp/ ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl exec -it pod-liveness -- ls /tmp/ ls: cannot access '/tmp/' : No such file or directory command terminated with exit code 2┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$while true ; do curl 10.104.246.121 ; sleep 1; done pod-liveness-2 pod-liveness-2 pod-liveness-2 pod-liveness-1 pod-liveness-2 pod-liveness-2 pod-liveness-1 ^C
You will find that the pod-liveness pod no longer serves traffic: its readiness probe fails, so it is removed from the Service endpoints.
Health checks in kubeadm: kube-apiserver.yaml uses both kinds of probes at the same time
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -A 8 readi readinessProbe: failureThreshold: 3 httpGet: host: 192.168.26.81 path: /readyz port: 6443 scheme: HTTPS periodSeconds: 1 timeoutSeconds: 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -A 9 liveness livenessProbe: failureThreshold: 8 httpGet: host: 192.168.26.81 path: /livez port: 6443 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
job & cronjob. Job: batch scheduling. Kubernetes has supported batch-style workloads since version 1.2; a batch task is defined and launched through the Kubernetes Job resource object. A batch task usually starts multiple compute processes, in parallel or serially, to work through a batch of work items, and the task ends once they have all been processed.
The K8s docs describe it this way: a Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them terminate successfully. As Pods complete successfully, the Job tracks the number of successful completions. When that number reaches the specified threshold, the task (i.e. the Job) is complete. Deleting a Job cleans up all the Pods it created. Suspending a Job deletes all of its active Pods until the Job is resumed.
In the simplest case you create one Job object to reliably run a Pod to completion. If the first Pod fails or is deleted (for example due to a node hardware failure or reboot), the Job object starts a new Pod. You can also use a Job to run several Pods in parallel.
To handle parallelism in batch processing, Kubernetes divides Jobs into the following three types.
Non-parallel Jobs: normally the Job starts only one Pod; the Pod is restarted only if it fails, and as soon as the Pod terminates successfully, the Job is complete.
Parallel Jobs with a fixed completion count: a parallel Job starts multiple Pods. Set the Job's .spec.completions to a positive number; the Job completes once that many Pods have terminated successfully. The Job's .spec.parallelism controls the degree of parallelism, i.e. how many Pods process work items at the same time.
Parallel Jobs with a work queue: a queue-driven parallel Job needs an external Queue that holds all the work items, and the Job's .spec.completions must not be set. Such a Job has these properties: each Pod can independently determine whether there are still work items to process; if a Pod terminates successfully, the Job does not start a new Pod; once one Pod has succeeded, no other Pods should still be working, they should all be finishing up and exiting; when all Pods have terminated and at least one succeeded, the whole Job succeeds.
We will demo the first and second types here; the third can wait for another day, since it really comes down to the resource's configuration parameters. Environment preparation
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl config set-context $(kubectl config current-context) --namespace=liruiling-job-create Context "kubernetes-admin@kubernetes" modified. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl create ns liruiling-job-create namespace/liruiling-job-create created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$vim myjob.yaml
Create a Job. Create a Job that runs echo "hello jobs"
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$cat myjob.yaml apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null name: my-job spec: template: metadata: creationTimestamp: null spec: containers: - command : - sh - -c - echo "hello jobs" - sleep 15 image: busybox name: my-job resources: {} restartPolicy: Never status: {}
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl apply -f myjob.yaml job.batch/my-job created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-jdzqd 0/1 ContainerCreating 0 7s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 0/1 17s 17s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-jdzqd 0/1 Completed 0 24s
When STATUS changes to Completed the Job ran successfully; check the logs
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 1/1 19s 46s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl logs my-job--1-jdzqd hello jobs ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$
Job configuration parameters. Job restart policies:
Never: as long as the task has not completed, a new pod is created to run it until the Job completes, so multiple pods may be produced.
OnFailure: as long as the pod has not completed, the pod itself is restarted until the Job completes.
activeDeadlineSeconds: the maximum time the Job may run.
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl explain jobs.spec | grep act activeDeadlineSeconds <integer > may be continuously active before the system tries to terminate it; value given time. The actual number of pods running in steady state will be less false to true ), the Job controller will delete all active Pods associated ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$vim myjobact.yaml
Create a Job that uses activeDeadlineSeconds, the maximum run time
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$cat myjobact.yaml apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null name: my-job spec: template: metadata: creationTimestamp: null spec: activeDeadlineSeconds: 5 containers: - command : - sh - -c - echo "hello jobs" - sleep 15 image: busybox name: my-job resources: {} restartPolicy: Never status: {}
The task is not finished within 5 seconds, so a new pod is created to run it
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl delete -f myjob.yaml job.batch "my-job" deleted ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl apply -f myjobact.yaml job.batch/my-job created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-ddhbj 0/1 ContainerCreating 0 7s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 0/1 16s 16s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-ddhbj 0/1 Completed 0 23s my-job--1-mzw2p 0/1 ContainerCreating 0 3s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-ddhbj 0/1 Completed 0 48s my-job--1-mzw2p 0/1 Completed 0 28s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 0/1 55s 55s
Some other parameters:
parallelism: N, run N pods at a time. completions: M, the number of pods that must finish successfully (i.e. reach status Completed) for the Job to complete. backoffLimit: N, how many times to retry if the Job fails. parallelism never exceeds the value of completions.
Create a parallel multi-task Job 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null name: my-job spec: backoffLimit: 6 completions: 6 parallelism: 2 template: metadata: creationTimestamp: null spec: containers: - command: - sh - -c - echo "hello jobs" - sleep 15 image: busybox name: my-job resources: {} restartPolicy: Never status: {}
Create the Job with these parameters set
1 2 3 4 5 6 7 8 9 10 11 12 13 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl apply -f myjob-parma.yaml job.batch/my-job created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods jobs Error from server (NotFound): pods "jobs" not found ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods job Error from server (NotFound): pods "job" not found ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 0/6 19s 19s
Watch the effect of the parameter settings: 6 pods run to completion, two at a time
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-9vvst 0/1 Completed 0 25s my-job--1-h24cw 0/1 ContainerCreating 0 5s my-job--1-jgq2j 0/1 Completed 0 24s my-job--1-mbmg6 0/1 ContainerCreating 0 1s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 2/6 35s 35s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 3/6 48s 48s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$ └─$kubectl get pods NAME READY STATUS RESTARTS AGE my-job--1-9vvst 0/1 Completed 0 91s my-job--1-b95qv 0/1 Completed 0 35s my-job--1-h24cw 0/1 Completed 0 71s my-job--1-jgq2j 0/1 Completed 0 90s my-job--1-mbmg6 0/1 Completed 0 67s my-job--1-njbfj 0/1 Completed 0 49s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs NAME COMPLETIONS DURATION AGE my-job 6/6 76s 93s
Hands-on: compute π to 500 digits. Create a Job from the command line
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl create job job3 --image=perl --dry-run=client -o yaml -- perl -Mbignum=bpi -wle 'print bpi(500)' apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null name: job3 spec: template: metadata: creationTimestamp: null spec: containers: - command : - perl - -Mbignum=bpi - -wle - print bpi(500) image: perl name: job3 resources: {} restartPolicy: Never status: {}
Pull the image, then create the Job from the command line
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker pull perl" ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl create job job2 --image=perl -- perl -Mbignum=bpi -wle 'print bpi(500)' job.batch/job2 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE job2--1-5jlbl 0/1 Completed 0 2m4s
Check the output of the Job run
1 2 3 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl logs job2--1-5jlbl 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491
CronJob: scheduled tasks. In a cronjob's YAML, the .spec.jobTemplate.spec field can carry an activeDeadlineSeconds parameter, which limits how long the pods generated by the cronjob may run
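The placement described above can be sketched as a config fragment (a sketch only; the name limited-job and the 30-second limit are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: limited-job            # illustrative name
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 30   # each generated Job may run at most 30s
      template:
        spec:
          containers:
          - name: main
            image: busybox
            command: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
```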
Kubernetes added this new type of Job, the Cron Job (a scheduled task similar to Linux cron), in version 1.5. Before defining and using this Job type, first make sure the Kubernetes version is 1.8 or later.
Since Kubernetes 1.9, kubectl accepts the alias cj for cronjob, and the kubectl set image/env commands also work on CronJob objects.
Create a CronJob that starts a pod every minute to run the date command
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl create cronjob test-job --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml -- /bin/sh -c "date" apiVersion: batch/v1 kind: CronJob metadata: creationTimestamp: null name: test-job spec: jobTemplate: metadata: creationTimestamp: null name: test-job spec: template: metadata: creationTimestamp: null spec: containers: - command : - /bin/sh - -c - date image: busybox name: test-job resources: {} restartPolicy: OnFailure schedule: '*/1 * * * *' status: {}
It can be created either from a YAML file or on the command line
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods No resources found in liruiling-job-create namespace. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl apply -f jobcron.yaml cronjob.batch/test-job configured ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get job NAME COMPLETIONS DURATION AGE test-job-27330246 0/1 0s 0s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE test-job-27330246--1-xn5r6 1/1 Running 0 4s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE test-job-27330246--1-xn5r6 0/1 Completed 0 100s test-job-27330247--1-9blnp 0/1 Completed 0 40s
Running with --watch gives a more direct view of the history and current state of the Cron Job's periodically triggered runs:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl apply -f jobcron.yaml cronjob.batch/test-job created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get cronjobs NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE test-job */1 * * * * False 0 <none> 12s ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs --watch NAME COMPLETIONS DURATION AGE test-job-27336917 0/1 0s test-job-27336917 0/1 0s 0s test-job-27336917 1/1 25s 25s test-job-27336918 0/1 0s test-job-27336918 0/1 0s 0s test-job-27336918 1/1 26s 26s ^C┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$kubectl get jobs -o wide NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR test-job-27336917 1/1 25s 105s test-job busybox controller-uid=35e43bbc-5869-4bda-97db-c027e9a36b97 test-job-27336918 1/1 26s 45s test-job busybox controller-uid=82d2e4a5-716c-42bf-bc7d-3137dd0e50e8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-jobs-create] └─$
Service. Service is a core Kubernetes concept: it provides a single entry address for a group of container applications with the same function, and distributes request load across the backend container instances. The topic covers the Service load-balancing mechanism, how to access a Service, Headless Services, the DNS service mechanism and practice, Ingress layer-7 routing, and so on.
Here we study Services from three hands-on angles: creating, publishing, and discovering services. The theory behind Headless Services, the DNS service mechanism and practice, Ingress layer-7 routing, and so on will be covered in later posts
Through the Service definition, Kubernetes implements a unified entry point and a load-balancing mechanism for distributed applications. A Service can also be configured in other ways, for example with multiple ports, as a service outside the cluster, or in Headless Service mode.
A Kubernetes Service defines an access entry address for a service; frontend applications (Pods) reach the backing cluster of Pod replicas through this address, and the Service is "seamlessly" wired to its backend Pods via a Label Selector. An RC or deploy, in turn, ensures that the Service's capacity and quality of service stay at the expected level.
Service creation. Why do we need a Service? Prepare the environment by creating a new liruilong-svc-create namespace
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$d =k8s-svc-create ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$mkdir $d ;cd $d ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-svc-create Context "kubernetes-admin@kubernetes" modified. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl create ns liruilong-svc-create namespace/liruilong-svc-create created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc No resources found in liruilong-svc-create namespace. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Create a service from a plain pod. First we build a plain service, i.e. without the Service resource, using just a pod. Generate a pod yaml from the command line, then edit it
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl run pod-svc --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-svc name: pod-svc spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: pod-svc resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {}
Now modify it so the pod can serve traffic externally by setting a container-level hostPort, which maps the container application's port onto the host
1 2 3 ports: - containerPort: 80 hostPort: 800
With host-port mapping, once the pod is rescheduled to another node, the original node can no longer serve
By setting hostNetwork=true at the Pod level, the ports of all containers in the Pod are mapped directly onto the physical machine. Note that with hostNetwork=true, if hostPort is not specified in a container's ports definition it defaults to containerPort, and if hostPort is specified it must equal containerPort:
1 2 3 4 5 6 7 8 spec: hostNetwork: true containers: - name: webapp image: tomcat imagePullPolicy: Never ports: - containerPort: 8080
The pod generated the following way serves traffic externally on port 800; the IP here is the host IP
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$cat pod-svc.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-svc name: pod-svc spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: pod-svc ports: - containerPort: 80 hostPort: 800 resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
This is a standalone pod resource; the generated pod serves traffic through its current Node
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f pod-svc.yaml pod/pod-svc created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-svc 1/1 Running 0 3s 10.244.70.50 vms83.liruilongs.github.io <none> <none>
For pod-svc we can access it via pod IP + port; essentially, just as with docker, the port is mapped onto the host
Then we generate a few more pods the same way, each serving through its own Node. We have only two nodes, so when we create the third pod it goes straight to Pending: the ports conflict, because with host-port mapping each node can schedule only one such pod
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$sed 's/pod-svc/pod-svc-1/' pod-svc.yaml | kubectl apply -f - pod/pod-svc-1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-svc 1/1 Running 0 2m46s 10.244.70.50 vms83.liruilongs.github.io <none> <none> pod-svc-1 1/1 Running 0 13s 10.244.171.176 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$sed 's/pod-svc/pod-svc-2/' pod-svc.yaml | kubectl apply -f - pod/pod-svc-2 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-svc 1/1 Running 0 4m18s 10.244.70.50 vms83.liruilongs.github.io <none> <none> pod-svc-1 1/1 Running 0 105s 10.244.171.176 vms82.liruilongs.github.io <none> <none> pod-svc-2 0/1 Pending 0 2s <none> <none> <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
At this point, if we want to create more pods to provide capacity, or to load-balance, we need a Service
Creating a Service. In general, an application that serves external clients needs some mechanism to do so; for container applications the simplest is TCP/IP with a listening IP and port, i.e. Pod IP + container port. The service inside the container can be reached directly via the Pod's IP address and port, but Pod IP addresses are not reliable; when the application itself is deployed as multiple distributed instances that jointly provide the service, a load balancer must sit in front of those instances to spread requests across them.
The Kubernetes Service is the core component that solves these problems. Create one with the kubectl expose command: the newly created Service is assigned a virtual IP address (ClusterIP) by the system, and the Service's port is copied from the Pod's containerPort. Below we use multiple pods and a deploy to back a Service, and create the Service
Create a SVC from a deployment. Create a deployment with three nginx replicas
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl create deployment web1 --image=nginx --replicas=3 --dry-run=client -o yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: web1 name: web1 spec: replicas: 3 selector: matchLabels: app: web1 strategy: {} template: metadata: creationTimestamp: null labels: app: web1 spec: containers: - image: nginx name: nginx resources: {} status: {} ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl create deployment web1 --image=nginx --replicas=3 --dry-run=client -o yaml > web1.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -o wide No resources found in liruilong-svc-create namespace. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f web1.yaml deployment.apps/web1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES web1-6fbb48567f-2zfkm 0/1 ContainerCreating 0 2s <none> vms83.liruilongs.github.io <none> <none> web1-6fbb48567f-krj4j 0/1 ContainerCreating 0 2s <none> vms83.liruilongs.github.io <none> <none> web1-6fbb48567f-mzvtk 0/1 ContainerCreating 0 2s <none> vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Using deploy web1 as the capacity provider, create a Service
Besides creating a Service with kubectl expose, we can also define the Service in a config file and create it with kubectl create
The key fields in a Service definition are ports and selector. The ports section declares the Service's virtual port; when it differs from the Pod's container port, targetPort must also be given to name the backend Pod port. The selector section gives the labels carried by the backend Pods:
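A file-based equivalent of the expose command in the next transcript might look like this (a sketch; the name svc1 and the label app=web1 match the demo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  selector:
    app: web1          # matches the label on the web1 deployment's pods
  ports:
  - port: 80           # the Service's virtual port
    targetPort: 80     # backend Pod container port; required when it differs from port
```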
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl expose --name=svc1 deployment web1 --port=80 service/svc1 exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR svc1 ClusterIP 10.110.53.142 <none> 80/TCP 23s app=web1
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE web1-6fbb48567f-2zfkm 1/1 Running 0 14m web1-6fbb48567f-krj4j 1/1 Running 0 14m web1-6fbb48567f-mzvtk 1/1 Running 0 14m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get ep -owide NAME ENDPOINTS AGE svc1 10.244.171.177:80,10.244.70.60:80,10.244.70.61:80 13m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS web1-6fbb48567f-2zfkm 1/1 Running 0 18m app=web1,pod-template-hash=6fbb48567f web1-6fbb48567f-krj4j 1/1 Running 0 18m app=web1,pod-template-hash=6fbb48567f web1-6fbb48567f-mzvtk 1/1 Running 0 18m app=web1,pod-template-hash=6fbb48567f ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Create a Service from pods. Every Pod is assigned its own IP address, and each Pod exposes an independent Endpoint (Pod IP + ContainerPort) for clients to access. Now that multiple Pod replicas form a cluster providing the service, how do clients reach them? The usual answer is to deploy a load balancer (software or hardware),
and the kube-proxy process running on every Kubernetes Node is exactly such an intelligent software load balancer: it forwards requests for a Service to one of the backend Pod instances, and internally implements the service's load balancing and session affinity. Resource file definition
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/liveness-probe] └─$cat readiness-probe.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-liveness name: pod-liveness spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: pod-liveness resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {}
Two load-distribution policies based on the ClusterIP. Kubernetes currently offers two: RoundRobin and SessionAffinity
RoundRobin: round-robin mode; requests are forwarded to the backend Pods in turn.
SessionAffinity: session-affinity mode based on the client IP address.
By default, Kubernetes uses RoundRobin to distribute client requests, but we can enable the SessionAffinity policy by setting service.spec.sessionAffinity=ClientIP.
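The setting above can be sketched in a Service manifest (a minimal sketch; the name svc-sticky is illustrative, the run=pod-svc selector matches the demo pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-sticky          # illustrative name
spec:
  sessionAffinity: ClientIP   # pin each client IP to one backend Pod
  selector:
    run: pod-svc
  ports:
  - port: 80
```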
List the Pods behind a svc 1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -owide | grep -v NAME | awk '{print $NF}' | xargs kubectl get pods -l NAME READY STATUS RESTARTS AGE pod-svc 1/1 Running 0 18m pod-svc-1 1/1 Running 0 17m pod-svc-2 1/1 Running 0 16m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Port forwarding and multi-port setup. One container application may also serve on several ports, and the Service definition can be set up accordingly to forward multiple ports to multiple application services. A Kubernetes Service supports multiple endpoints (ports); when there are several, each endpoint must be given a name to tell them apart. Below is a sample multi-port Service definition for Tomcat:
1 2 3 4 5 6 - port: 8080 targetPort: 80 name: web1 - port: 8008 targetPort: 90 name: web2
Why must each port be named in a multi-port Service? This ties into Kubernetes' service-discovery mechanism (service publishing is implemented through DNS)
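Concretely, named ports are what make per-port DNS records possible: the cluster DNS publishes an SRV record for each named port following the pattern below (per the Kubernetes DNS-based service discovery convention; the example name is illustrative):

```
_<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local
# e.g. _web1._tcp.svc1.liruilong-svc-create.svc.cluster.local
```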
The command-line way
1 2 3 4 5 6 7 8 9 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl expose --name=svc pod pod-svc --port=808 --target-port=80 --selector=run=pod-svc service/svc exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -owide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR svc ClusterIP 10.102.223.233 <none> 808/TCP 4s run=pod-svc ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
How the Service IP responds depends on kube-proxy's routing rules:
iptables: the Service (CLUSTER-IP) address cannot be pinged
ipvs: the Service (CLUSTER-IP) address can be pinged
Service discovery. Service discovery means: inside a pod, or rather inside a container, how do we obtain the IP and port of the service we want to call? It is similar to the registry concept in microservices
Kubernetes' service-discovery mechanisms differ in scope:
Linux environment variables (namespace-isolated): the earliest approach Kubernetes took; for each Service it generates a set of Linux environment variables (ENV) and automatically injects them into every Pod's containers at startup.
DNS (visible across namespaces): later, Kubernetes introduced a DNS system through an add-on, using the service name as a DNS domain name, so that programs can establish connections directly by service name. Most applications on Kubernetes today use this newer DNS-based service discovery.
Environment prep: we reuse the pod-backed service from earlier
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-svc 1/1 Running 0 69m 10.244.70.35 vms83.liruilongs.github.io <none> <none> pod-svc-1 1/1 Running 0 68m 10.244.70.39 vms83.liruilongs.github.io <none> <none> pod-svc-2 1/1 Running 0 68m 10.244.171.153 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~] └─$s =pod-svc ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl exec -it $s -- sh -c "echo $s > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~] └─$s =pod-svc-1 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl exec -it $s -- sh -c "echo $s > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~] └─$s =pod-svc-2 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl exec -it $s -- sh -c "echo $s > /usr/share/nginx/html/index.html" ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl get svc -owide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR svc ClusterIP 10.102.223.233 <none> 808/TCP 46m run=pod-svc ┌──[root@vms81.liruilongs.github.io]-[~] └─$while true ;do curl 10.102.223.233:808;sleep 2 ; done pod-svc-2 pod-svc-1 pod-svc pod-svc pod-svc pod-svc-2 ^C ┌──[root@vms81.liruilongs.github.io]-[~] └─$
Prepare the test image
1 2 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker pull yauritux/busybox-curl"
Discovery through Linux environment variables: namespace-isolated. Every pod created carries variables describing the SVCs that already exist; these variables are isolated per namespace, and other namespaces do not get them
1 2 3 4 5 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl run testpod -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent -n default If you don't see a command prompt, try pressing enter. /home # env | grep ^SVC /home #
The variables exist only in the current namespace: a Pod can read only the variables of Services in the same namespace. In other words, within the same namespace a container can reach an existing Service through these variables.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl run testpod -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent If you don't see a command prompt, try pressing enter. /home # env | grep ^SVC SVC_PORT_808_TCP_ADDR=10.102.223.233 SVC_PORT_808_TCP_PORT=808 SVC_PORT_808_TCP_PROTO=tcp SVC_SERVICE_HOST=10.102.223.233 SVC_PORT_808_TCP=tcp://10.102.223.233:808 SVC_SERVICE_PORT=808 SVC_PORT=tcp://10.102.223.233:808 /home # /home # while true ;do curl $SVC_SERVICE_HOST:$SVC_PORT_808_TCP_PORT ;sleep 2 ; done pod-svc-2 pod-svc-2 pod-svc pod-svc ^C /home #
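The variable names follow a fixed pattern: the Service name is upper-cased, dashes become underscores, and suffixes such as `_SERVICE_HOST` / `_SERVICE_PORT` are appended. A minimal sketch of the naming rule (the service names below are just the ones used in this post):

```shell
# Kubernetes derives the env-var prefix from the Service name:
# upper-case it and replace '-' with '_'.
for svc in svc dbsvc pod-svc-1; do
  prefix=$(echo "$svc" | tr 'a-z-' 'A-Z_')
  echo "${prefix}_SERVICE_HOST"
done
# → SVC_SERVICE_HOST, DBSVC_SERVICE_HOST, POD_SVC_1_SERVICE_HOST
```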
Service discovery via DNS (visible across namespaces). Kubernetes made a clever design decision with far-reaching consequences: rather than having all Services share a single load balancer IP, every Service is assigned its own globally unique virtual IP, called the Cluster IP. Each service thereby becomes a "communication endpoint" with a unique IP address, and calling a service reduces to plain TCP networking.
Once a Service is created, Kubernetes automatically assigns it an available Cluster IP, and this Cluster IP does not change for the Service's entire lifetime. Service discovery, a thorny problem elsewhere, is thus solved neatly in the Kubernetes architecture: simply map the Service's name to its Cluster IP as a DNS record.
1 2 3 4 5 6 7 8 9 10 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl get svc -n kube-system -owide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d20h k8s-app=kube-dns metrics-server ClusterIP 10.111.104.173 <none> 443/TCP 6d18h k8s-app=metrics-server ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl get pods -n kube-system -l k8s-app=kube-dns NAME READY STATUS RESTARTS AGE coredns-7f6cbbb7b8-ncd2s 1/1 Running 2 (23h ago) 3d22h coredns-7f6cbbb7b8-pjnct 1/1 Running 2 (23h ago) 3d22h
With this DNS service in place, every SVC that is created automatically registers a DNS record.
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~] └─$kubectl run testpod -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent If you don't see a command prompt, try pressing enter. /home # cat /etc/resolv.conf nameserver 10.96.0.10 search liruilong-svc-create.svc.cluster.local svc.cluster.local cluster.local localdomain 168.26.131 options ndots:5 /home #
The DNS running in kube-system automatically discovers the clusterIP of the services in every namespace. So, within the same namespace, one service can reach another directly by its service name; any service that is created, in whichever ns, registers itself with the DNS in kube-system.
To reach a service in a different namespace, use the form service-name.namespace-name.
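The short name resolves through the search domains in the Pod's /etc/resolv.conf; the two longer forms work from anywhere in the cluster. A small sketch of how the names compose (dbsvc and liruilong-svc-create are the names used later in this post):

```shell
svc=dbsvc
ns=liruilong-svc-create
echo "$svc"                          # same namespace only
echo "$svc.$ns"                      # from any namespace
echo "$svc.$ns.svc.cluster.local"    # fully qualified form
```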
1 2 3 4 5 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$kubectl config view | grep namesp namespace: liruilong-svc-create ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
Now let's create a Pod in another namespace and use it to access the service offered in the current namespace.
1 2 3 4 5 6 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl run testpod -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent -n default If you don't see a command prompt, try pressing enter.' /home pod-svc-2 /home
Access via ClusterIP. This is a comparatively simple approach: reach the service directly through its ClusterIP, which also works across namespaces. A test Pod in a different namespace:
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl run testpod -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent -n default If you don't see a command prompt, try pressing enter. /home # while true ;do curl 10.102.223.233:808;sleep 2 ; done pod-svc pod-svc-1 pod-svc pod-svc-2 pod-svc ^C /home #
Hands-on: building a WordPress blog
Environment preparation; install anything that is missing.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker images | grep mysql" 192.168.26.82 | CHANGED | rc=0 >> mysql latest ecac195d15af 2 months ago 516MB mysql <none> 9da615fced53 2 months ago 514MB hub.c.163.com/library/mysql latest 9e64176cd8a2 4 years ago 407MB 192.168.26.83 | CHANGED | rc=0 >> mysql latest ecac195d15af 2 months ago 516MB mysql <none> 9da615fced53 2 months ago 514MB hub.c.163.com/library/mysql latest 9e64176cd8a2 4 years ago 407MB ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker images | grep wordpress" 192.168.26.82 | CHANGED | rc=0 >> hub.c.163.com/library/wordpress latest dccaeccfba36 4 years ago 406MB 192.168.26.83 | CHANGED | rc=0 >> hub.c.163.com/library/wordpress latest dccaeccfba36 4 years ago 406MB ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
Create a MySQL database Pod
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$cat db-pod-mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: dbpod
  name: dbpod
spec:
  containers:
  - image: hub.c.163.com/library/mysql
    imagePullPolicy: IfNotPresent
    name: dbpod
    resources: {}
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: liruilong
    - name: MYSQL_USER
      value: root
    - name: MYSQL_DATABASE
      value: blog
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f db-pod-mysql.yaml pod/dbpod created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods NAME READY STATUS RESTARTS AGE dbpod 1/1 Running 0 5s
Create a Service in front of the mysql Pod — in other words, publish the MySQL service. The default type is ClusterIP.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
dbpod   1/1     Running   0          80s   run=dbpod
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=dbsvc pod dbpod --port=3306
service/dbsvc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get svc dbsvc -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-12-21T15:31:19Z"
  labels:
    run: dbpod
  name: dbsvc
  namespace: liruilong-svc-create
  resourceVersion: "310763"
  uid: 05ccb22d-19c4-443a-ba86-f17d63159144
spec:
  clusterIP: 10.102.137.59
  clusterIPs:
  - 10.102.137.59
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    run: dbpod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Create a WordPress blog Pod
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get svc
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dbsvc   ClusterIP   10.102.137.59   <none>        3306/TCP   3m12s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$cat blog-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: blog
  name: blog
spec:
  containers:
  - image: hub.c.163.com/library/wordpress
    imagePullPolicy: IfNotPresent
    name: blog
    resources: {}
    env:
    - name: WORDPRESS_DB_USER
      value: root
    - name: WORDPRESS_DB_PASSWORD
      value: liruilong
    - name: WORDPRESS_DB_NAME
      value: blog
    - name: WORDPRESS_DB_HOST
      value: 10.102.137.59
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Create an SVC that publishes the blog service
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=blogsvc pod blog --port=80 --type=NodePort
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get svc blogsvc -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-12-20T17:11:03Z"
  labels:
    run: blog
  name: blogsvc
  namespace: liruilong-svc-create
  resourceVersion: "294057"
  uid: 4d350715-0210-441d-9c55-af0f31b7a090
spec:
  clusterIP: 10.110.28.191
  clusterIPs:
  - 10.110.28.191
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31158
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: blog
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Check the service status and test
1 2 3 4 5 6 7 8 9 10 11 12 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR blogsvc NodePort 10.110.28.191 <none> 80:31158/TCP 22h run=blog dbsvc ClusterIP 10.102.137.59 <none> 3306/TCP 15m run=dbpod ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blog 1/1 Running 0 14m 10.244.171.159 vms82.liruilongs.github.io <none> <none> dbpod 1/1 Running 0 21m 10.244.171.163 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Access it in a browser.
Since everything here lives in the same namespace, the WordPress Pod can read the Service IP of the published database service from its environment variables.
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl exec -it blog -- bash root@blog:/var/www/html DBSVC_PORT_3306_TCP_ADDR=10.102.137.59 DBSVC_SERVICE_PORT=3306 DBSVC_PORT_3306_TCP_PORT=3306 DBSVC_PORT_3306_TCP=tcp://10.102.137.59:3306 DBSVC_SERVICE_HOST=10.102.137.59 DBSVC_PORT=tcp://10.102.137.59:3306 DBSVC_PORT_3306_TCP_PROTO=tcp root@blog:/var/www/html
That is, the blog Pod could equally be configured like this:
env:
- name: WORDPRESS_DB_USER
  value: root
- name: WORDPRESS_DB_PASSWORD
  value: liruilong
- name: WORDPRESS_DB_NAME
  value: blog
- name: WORDPRESS_DB_HOST
  value: $(DBSVC_SERVICE_HOST)
Or like this:
env:
- name: WORDPRESS_DB_USER
  value: root
- name: WORDPRESS_DB_PASSWORD
  value: liruilong
- name: WORDPRESS_DB_NAME
  value: blog
- name: WORDPRESS_DB_HOST
  value: dbsvc.liruilong-svc-create
Publishing services. "Publishing" here means making a service reachable from hosts outside the cluster.
The "three kinds of IP" in Kubernetes:
Node IP: the IP address of a Node, i.e. of the physical NIC of each node in the cluster. This is a real, physical network; all servers on it can communicate directly over it, whether or not some of them belong to the Kubernetes cluster. It also means that any node outside the cluster must go through a Node IP when it accesses a node, or a TCP/IP service, inside the cluster.
Pod IP: the IP address of each Pod, allocated by the Docker Engine from the address range of the docker0 bridge; it is usually a virtual layer-2 network. As noted earlier, Kubernetes requires Pods on different Nodes to communicate with each other directly, so when a container in one Pod talks to a container in another Pod, the traffic travels the virtual layer-2 network the Pod IPs live on, while the real TCP/IP traffic leaves through the physical NIC that carries the Node IP.
Cluster IP: the IP address of a Service. A Cluster IP applies only to the Kubernetes Service object and is managed and allocated by Kubernetes (from the Cluster IP address pool). A Cluster IP cannot be pinged, because there is no "physical network object" to answer for it. It forms a concrete communication endpoint only together with the Service port; a Cluster IP alone lacks the basis for TCP/IP communication. Cluster IPs also belong to the closed space of the Kubernetes cluster, and nodes outside the cluster need extra work to reach such an endpoint. Inside the cluster, communication between the Node IP, Pod IP, and Cluster IP networks follows special routing rules programmed by Kubernetes, quite different from the IP routing we are familiar with.
NodePort is the most direct, most effective, and most common way to let external systems access a Service. Concretely, extend the Service definition as follows:
...
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31002
  selector:
    tier: frontend
...
The Service is then reachable via nodePort 31002. NodePort works by opening a corresponding TCP listening port on every Node in the cluster for each Service that needs external access; an external system can reach the service via any Node's IP address plus that NodePort. Running netstat on any Node shows the NodePort being listened on.
Let's walk through a concrete example.
NodePort
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=blogsvc pod blog --port=80 --type=NodePort
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-12-20T17:11:03Z"
  labels:
    run: blog
  name: blogsvc
  namespace: liruilong-svc-create
  resourceVersion: "294057"
  uid: 4d350715-0210-441d-9c55-af0f31b7a090
spec:
  clusterIP: 10.110.28.191
  clusterIPs:
  - 10.110.28.191
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31158
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: blog
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
All the earlier examples published services through NodePort: the port is mapped on every worker node, so every node can serve the traffic, and external clients access node-IP:31158. The drawback is that once there are many services, the ports become hard to manage.
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE blogsvc NodePort 10.110.28.191 <none> 80:31158/TCP 23h dbsvc ClusterIP 10.102.137.59 <none> 3306/TCP 49m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
hostPort
With hostPort, the container port is mapped to the host only on the node where the Pod runs. This is generally not recommended, although it can be acceptable for static Pods.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$cat pod-svc.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-svc
  name: pod-svc
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-svc
    ports:
    - containerPort: 80
      hostPort: 800
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
pod-svc   1/1     Running   0          69s   10.244.171.172   vms82.liruilongs.github.io   <none>           <none>
Changing the Service type to ClusterIP: since v1.20 it can be edited in place; earlier versions required removing the nodePort first.
LoadBalancer
Service load balancing: NodePort does not yet solve every problem of accessing a Service externally — load balancing among them. If the cluster has, say, 10 Nodes, it is best to have a load balancer in front: external requests only need to hit the load balancer's IP address, and the load balancer forwards the traffic to a NodePort on one of the Nodes behind it. As shown in the figure: NodePort load balancing.
The Load balancer component sits outside the Kubernetes cluster; it is usually a hardware load balancer, or implemented in software, e.g. HAProxy or Nginx. Normally we would have to configure a Load balancer instance per Service to forward traffic to the backend Nodes, so Kubernetes offers an automated solution: if the cluster runs on a public cloud such as Google's GCE, then simply changing the Service's type from NodePort to LoadBalancer makes Kubernetes automatically create a corresponding Load balancer instance and return its IP address for external clients to use. Of course, the same can be achieved with add-ons such as metallb.
LoadBalancer needs an address pool set up outside the services; each Service is then assigned an IP from it.
If we create a Service of type LoadBalancer directly, it stays pending forever, because there is no cloud load balancer backing it:
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl expose --name=blogsvc pod blog --port=80 --type =LoadBalancer service/blogsvc exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -o wide | grep blogsvc blogsvc LoadBalancer 10.106.28.175 <pending> 80:32745/TCP 26s run=blog ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Metallb provides LB-type Service support in a Kubernetes-native way.
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl create ns metallb-system namespace/metallb-system created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl config set-context $(kubectl config current-context) --namespace=metallb-system Context "kubernetes-admin@kubernetes" modified. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
(Use :set paste in vim to avoid mangled pastes.) Deploy metallb:
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl apply -f metallb.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES controller-66d9554cc-8rxq8 1/1 Running 0 3m36s 10.244.171.170 vms82.liruilongs.github.io <none> <none> speaker-bbl94 1/1 Running 0 3m36s 192.168.26.83 vms83.liruilongs.github.io <none> <none> speaker-ckbzj 1/1 Running 0 3m36s 192.168.26.81 vms81.liruilongs.github.io <none> <none> speaker-djmpr 1/1 Running 0 3m36s 192.168.26.82 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$
Create the address pool
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld]
└─$vim pool.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld]
└─$kubectl apply -f pool.yaml
configmap/config created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld]
└─$cat pool.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.26.240-192.168.26.250
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld]
└─$
With type=LoadBalancer, metallb assigns 192.168.26.240 from the pool to blogsvc.
1 2 3 4 5 6 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl get svc No resources found in metallb-system namespace. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-svc-create Context "kubernetes-admin@kubernetes" modified.
1 2 3 4 5 6 7 8 9 10 11 12 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dbsvc ClusterIP 10.102.137.59 <none> 3306/TCP 101m ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl expose --name=blogsvc pod blog --port=80 --type =LoadBalancer service/blogsvc exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR blogsvc LoadBalancer 10.108.117.197 192.168.26.240 80:30230/TCP 9s run=blog dbsvc ClusterIP 10.102.137.59 <none> 3306/TCP 101m run=dbpod
Accessing 192.168.26.240 directly now works.
Creating another one is reachable as well:
1 2 3 4 5 6 7 8 9 10 11 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl expose --name=blogsvc-1 pod blog --port=80 --type =LoadBalancer service/blogsvc-1 exposed ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR blogsvc LoadBalancer 10.108.117.197 192.168.26.240 80:30230/TCP 11m run=blog blogsvc-1 LoadBalancer 10.110.58.143 192.168.26.241 80:31827/TCP 3s run=blog dbsvc ClusterIP 10.102.137.59 <none> 3306/TCP 113m run=dbpod ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create/metalld] └─$
It is reachable too.
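If a specific address from the pool is wanted instead of the next free one, the metallb of this era also honors the `spec.loadBalancerIP` field (deprecated in later Kubernetes releases); a sketch, with a hypothetical Service name and an address assumed to lie in the pool configured above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blogsvc-fixed              # hypothetical name, for illustration
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.26.245   # must fall inside the metallb pool
  selector:
    run: blog
  ports:
  - port: 80
    targetPort: 80
```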
Ingress (recommended)
An Ingress is an API object that manages external access to the services in a cluster, typically over HTTP.
Ingress can provide load balancing, SSL termination, and name-based virtual hosting.
An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster; traffic routing is controlled by rules defined on the Ingress resource.
My own understanding: it essentially implements Nginx-style functionality, distributing traffic according to routing rules and so on.
Ingress rules are configured per namespace and embedded into the controller, an Nginx reverse proxy (ingress-nginx-controller).
An Ingress can be configured to give services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller, usually via a load balancer, is responsible for fulfilling the Ingress, though it may also configure an edge router or other frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols; exposing services other than HTTP and HTTPS to the internet typically uses a Service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Deploying the ingress-nginx-controller. Images required:
1 2 3 4 5 6 7 8 9 10 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$grep image nginx-controller.yaml image: docker.io/liangjw/ingress-nginx-controller:v1.0.1 imagePullPolicy: IfNotPresent image: docker.io/liangjw/kube-webhook-certgen:v1.1.1 imagePullPolicy: IfNotPresent image: docker.io/liangjw/kube-webhook-certgen:v1.1.1 imagePullPolicy: IfNotPresent ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Preparation: copy the image archive to the nodes and load it.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m copy -a "dest=/root/ src=./../ingress-nginx-controller-img.tar" 192.168.26.82 | CHANGED => { "ansible_facts" : { "discovered_interpreter_python" : "/usr/bin/python" }, "changed" : true , "checksum" : "a3c2f87fd640c0bfecebeab24369c7ca8d6f0fa0" , "dest" : "/root/ingress-nginx-controller-img.tar" , "gid" : 0, "group" : "root" , "md5sum" : "d5bf7924cb3c61104f7a07189a2e6ebd" , "mode" : "0644" , "owner" : "root" , "size" : 334879744, "src" : "/root/.ansible/tmp/ansible-tmp-1640207772.53-9140-99388332454846/source" , "state" : "file" , "uid" : 0 } 192.168.26.83 | CHANGED => { "ansible_facts" : { "discovered_interpreter_python" : "/usr/bin/python" }, "changed" : true , "checksum" : "a3c2f87fd640c0bfecebeab24369c7ca8d6f0fa0" , "dest" : "/root/ingress-nginx-controller-img.tar" , "gid" : 0, "group" : "root" , "md5sum" : "d5bf7924cb3c61104f7a07189a2e6ebd" , "mode" : "0644" , "owner" : "root" , "size" : 334879744, "src" : "/root/.ansible/tmp/ansible-tmp-1640207772.55-9142-78097462005167/source" , "state" : "file" , "uid" : 0 } ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible node -m shell -a "docker load -i /root/ingress-nginx-controller-img.tar"
Create the Ingress controller, ingress-nginx-controller
1 2 3 4 5 6 7 8 9 10 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f nginx-controller.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -n ingress-nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx-admission-create--1-hvvxd 0/1 Completed 0 89s 10.244.171.171 vms82.liruilongs.github.io <none> <none> ingress-nginx-admission-patch--1-g4ffs 0/1 Completed 0 89s 10.244.70.7 vms83.liruilongs.github.io <none> <none> ingress-nginx-controller-744d4fc6b7-7fcfj 1/1 Running 0 90s 192.168.26.83 vms83.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Configure DNS: create the domain-name-to-service mappings
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "echo -e '192.168.26.83 liruilongs.nginx1\n192.168.26.83 liruilongs.nginx2\n192.168.26.83 liruilongs.nginx3' >> /etc/hosts" 192.168.26.83 | CHANGED | rc=0 >> ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "cat /etc/hosts" 192.168.26.83 | CHANGED | rc=0 >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.26.81 vms81.liruilongs.github.io vms81 192.168.26.82 vms82.liruilongs.github.io vms82 192.168.26.83 vms83.liruilongs.github.io vms83 192.168.26.83 liruilongs.nginx1 192.168.26.83 liruilongs.nginx2 192.168.26.83 liruilongs.nginx3 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
Simulate the backends: create three Pods to serve
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$cat pod.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-svc name: pod-svc spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: pod-svc resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f pod.yaml pod/pod-svc created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$sed 's/pod-svc/pod-svc-1/' pod.yaml > pod-1.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$sed 's/pod-svc/pod-svc-2/' pod.yaml > pod-2.yaml ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f pod-1.yaml pod/pod-svc-1 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f pod-2.yaml pod/pod-svc-2 created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-svc 1/1 Running 0 2m42s 10.244.171.174 vms82.liruilongs.github.io <none> <none> pod-svc-1 1/1 Running 0 80s 10.244.171.175 vms82.liruilongs.github.io <none> <none> pod-svc-2 1/1 Running 0 70s 10.244.171.176 vms82.liruilongs.github.io <none> <none> ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
Change each Nginx home page, and create one Service (SVC) per Pod
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl get pods --show-labels
NAME        READY   STATUS    RESTARTS   AGE    LABELS
pod-svc     1/1     Running   0          3m7s   run=pod-svc
pod-svc-1   1/1     Running   0          105s   run=pod-svc-1
pod-svc-2   1/1     Running   0          95s    run=pod-svc-2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$serve=pod-svc
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=${serve}-svc pod $serve --port=80
service/pod-svc-svc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$serve=pod-svc-1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=${serve}-svc pod $serve --port=80
service/pod-svc-1-svc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$serve=pod-svc-2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl exec -it $serve -- sh -c "echo $serve > /usr/share/nginx/html/index.html"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$kubectl expose --name=${serve}-svc pod $serve --port=80
service/pod-svc-2-svc exposed
Three SVCs created to simulate the backends:
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR pod-svc-1-svc ClusterIP 10.99.80.121 <none> 80/TCP 94s run=pod-svc-1 pod-svc-2-svc ClusterIP 10.110.40.30 <none> 80/TCP 107s run=pod-svc-2 pod-svc-svc ClusterIP 10.96.152.5 <none> 80/TCP 85s run=pod-svc ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$
1 2 3 4 5 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get ing No resources found in liruilong-svc-create namespace. ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$vim ingress.yaml
Create the Ingress. This is only a simple test; more complex routing policies can be configured for real business needs. ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: liruilongs.nginx1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod-svc-svc
            port:
              number: 80
  - host: liruilongs.nginx2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod-svc-1-svc
            port:
              number: 80
  - host: liruilongs.nginx3
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod-svc-2-svc
            port:
              number: 80
1 2 3 4 5 6 7 ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl apply -f ingress.yaml ingress.networking.k8s.io/my-ingress created ┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create] └─$kubectl get ing NAME CLASS HOSTS ADDRESS PORTS AGE my-ingress <none> liruilongs.nginx1,liruilongs.nginx2,liruilongs.nginx3 80 17s
Load test
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "curl liruilongs.nginx1" 192.168.26.83 | CHANGED | rc=0 >> pod-svc ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.83 -m shell -a "curl liruilongs.nginx2" 192.168.26.83 | CHANGED | rc=0 >> pod-svc-1
The DNS entries resolve to the controller's address; here the controller uses the host network, i.e. its ports are bound directly on the host:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-svc-create]
└─$grep -i hostN nginx-controller.yaml
      hostNetwork: true
Networking
Cross-host Docker network communication. Common cross-host communication schemes include:
Host mode: the container uses the host's network directly, which natively supports cross-host communication. It solves the problem, but the applicable scenarios are limited: port conflicts are easy to hit, the network environment cannot be isolated, and a crashing container may well take the whole host down.
Port binding: container ports are bound to host ports, and cross-host communication uses host-IP:port to reach the service in the container. Obviously this supports only layer 4 of the network stack and above, tightly couples container and host, makes problems hard to handle flexibly, and scales poorly.
Custom container network: use a third-party SDN tool such as Open vSwitch or Flannel to build a network over which containers can communicate across hosts. These schemes generally require the docker0 bridge CIDR to differ per host to avoid IP conflicts, which limits the IP range containers can obtain on a host; and when containers must serve clients outside the cluster, the configuration becomes fairly complex, demanding solid network skills from the people deploying it.
Container networking has since settled into two camps: Docker's CNM, and CNI, led by Google, CoreOS, and Kubernetes. CNM and CNI are network specifications, or network frameworks — not network implementations; they do not care how the container network is implemented (Flannel, Calico, etc.), only about network management.
CNM (Container Network Model): its advantage is being native, with the container network tied closely to the Docker container life cycle; its drawback is being bound to Docker. Container networks implementing the CNM spec include Docker Swarm overlay, Macvlan & IP network drivers, Calico, Contiv, Weave, and others.
CNI (Container Network Interface): its advantage is compatibility with other container technologies (such as rkt) and upper-layer orchestration systems (Kubernetes & Mesos), plus strong community momentum; its drawback is not being Docker-native. Container networks implementing the CNI spec include Kubernetes, Weave, Macvlan, Calico, Flannel, Contiv, Mesos CNI, and others.
From the implementation angle, the schemes divide again:
Tunnel schemes: also common in IaaS-layer networks. Their main drawbacks are that complexity rises as the number of nodes grows, and network problems are awkward to trace — something to consider for large clusters.
Routing schemes: generally implement isolation and cross-host container connectivity at layer 3 or layer 2, and problems are easy to diagnose. Calico: a routing scheme based on the BGP protocol, supporting very fine-grained ACL control, with good affinity for hybrid clouds. Macvlan: from the logical and kernel perspective the best scheme for isolation and performance; but since it isolates at layer 2, it needs support from a layer of routers, which most cloud providers do not offer, so it is hard to realize in hybrid clouds.
How Calico communicates. Calico treats each host's protocol stack as a router and regards all containers as network endpoints attached to that router; the routers run a standard routing protocol — BGP — among themselves and learn on their own how the network topology should forward.
The Calico scheme is in fact purely layer 3: layer 3 of each machine's protocol stack guarantees layer-3 connectivity between any two containers, including containers on different hosts. Its network model is shown in the figure.
Network model
On the control plane, each Calico node runs two main programs:
Felix: it watches etcd and receives events from it, such as a new container or a new IP address on the node. When a container is created on the node and its NIC, IP, and MAC are all set up, Felix writes an entry into the kernel routing table stating that this IP is configured on that NIC.
A standard routing program (BGP): it learns from the kernel which IP routes have changed, then propagates them to all the other hosts via the standard BGP routing protocol, announcing to the outside world that this IP lives here.
Because Calico is a pure layer-3 (network-layer) implementation, it avoids the packet-encapsulation operations of layer-2 schemes: there is no NAT and no overlay in the middle, so its forwarding efficiency is probably the highest of all the schemes. Packets travel the native TCP/IP protocol stack, which also makes isolation easy to do: the TCP/IP stack provides a full set of firewall rules, so fairly complex isolation logic can be achieved through iptables rules.
Calico deployment scheme
Topology
Environment preparation. Here we use Calico to demonstrate the cross-host container communication process; first an ansible connectivity check.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m ping 192.168.26.101 | SUCCESS => { "ansible_facts" : { "discovered_interpreter_python" : "/usr/bin/python" }, "changed" : false , "ping" : "pong" } 192.168.26.102 | SUCCESS => { "ansible_facts" : { "discovered_interpreter_python" : "/usr/bin/python" }, "changed" : false , "ping" : "pong" } 192.168.26.100 | SUCCESS => { "ansible_facts" : { "discovered_interpreter_python" : "/usr/bin/python" }, "changed" : false , "ping" : "pong" } ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
Testing the etcd cluster: an etcd cluster has already been set up; etcdctl member list shows its members.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m shell -a "etcdctl member list" 192.168.26.102 | CHANGED | rc=0 >> 6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379 bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379 fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379 192.168.26.101 | CHANGED | rc=0 >> 6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379 bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379 fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379 192.168.26.100 | CHANGED | rc=0 >> 6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379 bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379 fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
Install and start Docker; we will then change where it stores cluster data.
1 2 3 4 5 6 7 8 9 10 11 12 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m shell -a "yum -y install docker-ce" ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m shell -a "systemctl enable docker --now" ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m shell -a "systemctl status docker" 192.168.26.100 | CHANGED | rc=0 >> ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since Sat 2022-01-01 20:27:17 CST; 10min ago Docs: https://docs.docker.com ...
Modify Docker's startup parameter for the cluster data store, --cluster-store=
1 2 3 4 5 6 7 8 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible etcd -m shell -a "cat /usr/lib/systemd/system/docker.service | grep containerd.sock" 192.168.26.100 | CHANGED | rc=0 >> ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock 192.168.26.102 | CHANGED | rc=0 >> ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock 192.168.26.101 | CHANGED | rc=0 >> ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Here we make the change directly with sed.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.100 -m shell -a "sed -i 's#containerd\.sock#containerd.sock --cluster-store=etcd ://192.168.26.100:2379#' /usr/lib/systemd/system/docker.service " 192.168.26.100 | CHANGED | rc=0 >> ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.101 -m shell -a "sed -i 's#containerd\.sock#containerd.sock --cluster-store=etcd://192.168.26.101:2379#' /usr/lib/systemd/system/docker.service " 192.168.26.101 | CHANGED | rc=0 >> ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$ansible 192.168.26.102 -m shell -a "sed -i 's#containerd\.sock#containerd.sock --cluster-store=etcd ://192.168.26.102:2379#' /usr/lib/systemd/system/docker.service " 192.168.26.102 | CHANGED | rc=0 >> ┌──[root@vms81.liruilongs.github.io]-[~/ansible] └─$
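The substitution can be checked locally before touching the nodes; a sketch of what the sed expression does to a sample ExecStart line (the etcd endpoint IP varies per host):

```shell
# Append --cluster-store right after the --containerd flag,
# exactly as the ansible sed above does on each node.
line='ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock'
echo "$line" | sed 's#containerd\.sock#containerd.sock --cluster-store=etcd://192.168.26.100:2379#'
# → ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cluster-store=etcd://192.168.26.100:2379
```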
Reload the systemd unit files and restart Docker:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "systemctl daemon-reload; systemctl restart docker"
192.168.26.100 | CHANGED | rc=0 >>
192.168.26.102 | CHANGED | rc=0 >>
192.168.26.101 | CHANGED | rc=0 >>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "systemctl status docker"
Then we need a Calico configuration file. Use Ansible's file module to create the /etc/calico directory on each node:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m file -a "path=/etc/calico/ state=directory force=yes"
Create the configuration file with the template module.
Create the template. It is a Jinja2 (.j2) template that uses the `inventory_hostname` magic variable:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat calicoctl.j2
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://{{inventory_hostname}}:2379"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
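At render time the template module substitutes `{{inventory_hostname}}` with each host's inventory name, so every node gets a config pointing at its own local etcd endpoint. As a rough local illustration only (a sed stand-in, not how Ansible actually renders Jinja2; the host IP is taken from the inventory above):

```shell
# Simulate what the template module produces for host 192.168.26.100:
# write the .j2 template, then substitute the magic variable with sed.
cat > /tmp/calicoctl.j2 <<'EOF'
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://{{inventory_hostname}}:2379"
EOF
sed 's/{{inventory_hostname}}/192.168.26.100/' /tmp/calicoctl.j2
```

The last line prints the same rendered config that lands in /etc/calico/calicoctl.cfg on that host.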
Render the configuration file on every etcd node:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m template -a "src=calicoctl.j2 dest=/etc/calico/calicoctl.cfg force=yes"
Verify the rendered configuration files:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "cat /etc/calico/calicoctl.cfg"
192.168.26.100 | CHANGED | rc=0 >>
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.26.100:2379"
192.168.26.102 | CHANGED | rc=0 >>
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.26.102:2379"
192.168.26.101 | CHANGED | rc=0 >>
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.26.101:2379"
Import the images used in this lab:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m copy -a "src=/root/calico-node-v2.tar dest=/root/"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker load -i /root/calico-node-v2.tar"
192.168.26.100 | CHANGED | rc=0 >>
Loaded image: quay.io/calico/node:v2.6.12
192.168.26.102 | CHANGED | rc=0 >>
Loaded image: quay.io/calico/node:v2.6.12
192.168.26.101 | CHANGED | rc=0 >>
Loaded image: quay.io/calico/node:v2.6.12
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Check the images:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker images"
192.168.26.102 | CHANGED | rc=0 >>
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
quay.io/calico/node   v2.6.12   401cc3e56a1a   3 years ago   281MB
192.168.26.100 | CHANGED | rc=0 >>
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
quay.io/calico/node   v2.6.12   401cc3e56a1a   3 years ago   281MB
192.168.26.101 | CHANGED | rc=0 >>
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
quay.io/calico/node   v2.6.12   401cc3e56a1a   3 years ago   281MB
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Distribute the calicoctl binary:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m copy -a "src=/root/calicoctl dest=/bin/ mode=+x"
Building the Calico network. Start the Calico node on every host; each host runs Calico/Node as a virtual router:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg"
Check the node status. Calico can organize the hosts into an arbitrary topology:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "calicoctl node status"
192.168.26.102 | CHANGED | rc=0 >>
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS   | PEER TYPE         | STATE | SINCE    | INFO        |
+----------------+-------------------+-------+----------+-------------+
| 192.168.26.100 | node-to-node mesh | up    | 14:46:35 | Established |
| 192.168.26.101 | node-to-node mesh | up    | 14:46:34 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

192.168.26.101 | CHANGED | rc=0 >>
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS   | PEER TYPE         | STATE | SINCE    | INFO        |
+----------------+-------------------+-------+----------+-------------+
| 192.168.26.100 | node-to-node mesh | up    | 14:46:31 | Established |
| 192.168.26.102 | node-to-node mesh | up    | 14:46:34 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

192.168.26.100 | CHANGED | rc=0 >>
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS   | PEER TYPE         | STATE | SINCE    | INFO        |
+----------------+-------------------+-------+----------+-------------+
| 192.168.26.101 | node-to-node mesh | up    | 14:46:31 | Established |
| 192.168.26.102 | node-to-node mesh | up    | 14:46:35 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
When containers in the cluster need to talk to the outside world, a physical gateway router can be added to the cluster over BGP, so external hosts can reach container IPs directly, without any NAT.
Cross-host container communication over Calico. On one of the nodes, create a Docker network backed by the calico driver:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100 -m shell -a "docker network create --driver calico --ipam-driver calico-ipam calnet1"
192.168.26.100 | CHANGED | rc=0 >>
58121f89bcddec441770aa207ef662d09e4413625b0827ce4d8f601fb10650d0
Note that this network has global scope: it is visible on all nodes, with the same ID, 58121f89bcdd.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker network list"
192.168.26.100 | CHANGED | rc=0 >>
NETWORK ID     NAME      DRIVER    SCOPE
caa87ba3dd86   bridge    bridge    local
58121f89bcdd   calnet1   calico    global
1d63e3ad385f   host      host      local
adc94f172d5f   none      null      local
192.168.26.102 | CHANGED | rc=0 >>
NETWORK ID     NAME      DRIVER    SCOPE
cc37d3c66e2f   bridge    bridge    local
58121f89bcdd   calnet1   calico    global
3b138015d4ab   host      host      local
7481614a7084   none      null      local
192.168.26.101 | CHANGED | rc=0 >>
NETWORK ID     NAME      DRIVER    SCOPE
d0cb224ed111   bridge    bridge    local
58121f89bcdd   calnet1   calico    global
106e1c9fb3d3   host      host      local
f983021e2a02   none      null      local
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Check the NICs on each node. No containers are running yet, so there are no cali* interfaces:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "ip a"
192.168.26.102 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0f:98:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.102/24 brd 192.168.26.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe0f:98f1/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:c3:28:19:78 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
192.168.26.100 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:e8:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.100/24 brd 192.168.26.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8c:e81a/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:f7:1a:2e:30 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
192.168.26.101 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3b:6e:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.101/24 brd 192.168.26.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3b:6eef/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:70:a7:4e:7e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Run one container on each node, attached to calnet1:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker run --name {{inventory_hostname}} -itd --net=calnet1 --restart=always busybox"
192.168.26.101 | CHANGED | rc=0 >>
cf2ff4b65e6343fa6e9afba6e75376b97ac47ea59c35f3c492bb7051c15627f0
192.168.26.100 | CHANGED | rc=0 >>
065724c073ded04d6df41d295be3cd5585f8683664fd42a3953dc8067195c58e
192.168.26.102 | CHANGED | rc=0 >>
82e4d6dfde5a6e51f9a4d4f86909678a42e8d1e2d9bfa6edd9cc258b37dfc2db
List the containers on each node:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker ps"
192.168.26.102 | CHANGED | rc=0 >>
CONTAINER ID   IMAGE                         COMMAND         CREATED              STATUS              PORTS   NAMES
82e4d6dfde5a   busybox                       "sh"            About a minute ago   Up About a minute           192.168.26.102
c2d2ab904d6d   quay.io/calico/node:v2.6.12   "start_runit"   2 hours ago          Up 2 hours                  calico-node
192.168.26.100 | CHANGED | rc=0 >>
CONTAINER ID   IMAGE                         COMMAND         CREATED              STATUS              PORTS   NAMES
065724c073de   busybox                       "sh"            About a minute ago   Up About a minute           192.168.26.100
f0b150a924d9   quay.io/calico/node:v2.6.12   "start_runit"   2 hours ago          Up 2 hours                  calico-node
192.168.26.101 | CHANGED | rc=0 >>
CONTAINER ID   IMAGE                         COMMAND         CREATED              STATUS              PORTS   NAMES
cf2ff4b65e63   busybox                       "sh"            About a minute ago   Up About a minute           192.168.26.101
0e4e6f005797   quay.io/calico/node:v2.6.12   "start_runit"   2 hours ago          Up 2 hours                  calico-node
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Inspect each container's NIC and IP:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker exec -it {{inventory_hostname}} ip a | grep cali0 -A 4"
192.168.26.100 | CHANGED | rc=0 >>
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.239.128/32 scope global cali0
       valid_lft forever preferred_lft forever
192.168.26.102 | CHANGED | rc=0 >>
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.63.64/32 scope global cali0
       valid_lft forever preferred_lft forever
192.168.26.101 | CHANGED | rc=0 >>
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.198.0/32 scope global cali0
       valid_lft forever preferred_lft forever
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Check the routes inside the containers: all egress traffic goes out through the cali0 interface:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "docker exec -it {{inventory_hostname}} ip route | grep cali0"
192.168.26.101 | CHANGED | rc=0 >>
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link
192.168.26.102 | CHANGED | rc=0 >>
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link
192.168.26.100 | CHANGED | rc=0 >>
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link
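The 169.254.1.1 next hop is not a real address assigned anywhere: Calico answers ARP for it via proxy ARP on the host side of each veth pair, so every container can use the same link-local gateway. As a hedged sketch of how to check that kernel switch (reading the aggregate `all` knob here as a stand-in, since the actual per-interface path would use a cali* NIC name):

```shell
# proxy_arp is a per-interface kernel toggle under
# /proc/sys/net/ipv4/conf/<iface>/proxy_arp; on a Calico host you would
# read it for the host-side cali* interface and expect 1.
cat /proc/sys/net/ipv4/conf/all/proxy_arp
```

On a host from this lab, substituting the cali* interface name shown later (e.g. cali0b7f49da20a) for `all` should print 1.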
Every time a container is created, a virtual NIC appears on the host that corresponds to the NIC inside the container. Here you can see that cali0 inside the container and cali6f956c2ada9 on the host form a veth pair.
If veth pairs are new to you, a quick summary: their job is simply to forward packets from one network namespace to another. veth devices always come in pairs, one end inside the container and one end outside it on the host, i.e. the one visible on the real machine. Data sent into one end of the pair always comes out of the other end as received data: once the pair is created and configured, feeding data into one end makes veth redirect it into the kernel network subsystem, where it can be read from the peer. (Across the namespaces, anything received (RX) on one end is transmitted (TX) out of the other.) veth operates at L2, the data-link layer, and a veth pair does not modify packet contents while forwarding.
For more detail, see: https://blog.csdn.net/sld880311/article/details/77650937
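The veth behavior described above can be demonstrated without Calico at all. A minimal sketch (assumes root and the iproute2 tools; the names ns1/ns2/v1/v2 are made up for this demo):

```shell
# Create two network namespaces joined by a veth pair, then ping across.
ip netns add ns1
ip netns add ns2
ip link add v1 type veth peer name v2            # the pair: v1 <-> v2
ip link set v1 netns ns1                         # one end per namespace
ip link set v2 netns ns2
ip netns exec ns1 ip addr add 10.0.0.1/24 dev v1
ip netns exec ns2 ip addr add 10.0.0.2/24 dev v2
ip netns exec ns1 ip link set v1 up
ip netns exec ns2 ip link set v2 up
# A packet sent into v1 in ns1 emerges from v2 in ns2:
ip netns exec ns1 ping -c 1 10.0.0.2
# Cleanup (deleting a namespace removes its end of the pair,
# which destroys the peer too)
ip netns del ns1
ip netns del ns2
```

This is exactly the mechanism Calico uses, except that Calico leaves the host-side end (the cali* NIC) in the root namespace instead of a second container namespace.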
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "ip a | grep -A 4 cali"
192.168.26.102 | CHANGED | rc=0 >>
5: cali6f956c2ada9@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 6a:65:54:1a:19:e6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::6865:54ff:fe1a:19e6/64 scope link
       valid_lft forever preferred_lft forever
192.168.26.100 | CHANGED | rc=0 >>
5: cali0b7f49da20a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 9e:da:0e:cc:b3:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9cda:eff:fecc:b37e/64 scope link
       valid_lft forever preferred_lft forever
192.168.26.101 | CHANGED | rc=0 >>
5: calib6f7ddae7e3@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 1e:e6:16:ae:f0:91 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1ce6:16ff:feae:f091/64 scope link
       valid_lft forever preferred_lft forever
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Check the routing table on the hosts:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "ip route"
192.168.26.101 | CHANGED | rc=0 >>
default via 192.168.26.2 dev ens32
169.254.0.0/16 dev ens32 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.26.0/24 dev ens32 proto kernel scope link src 192.168.26.101
192.168.63.64/26 via 192.168.26.102 dev ens32 proto bird
blackhole 192.168.198.0/26 proto bird
192.168.198.1 dev cali2f9e2c68bad scope link
192.168.239.128/26 via 192.168.26.100 dev ens32 proto bird
192.168.26.100 | CHANGED | rc=0 >>
default via 192.168.26.2 dev ens32
169.254.0.0/16 dev ens32 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.26.0/24 dev ens32 proto kernel scope link src 192.168.26.100
192.168.63.64/26 via 192.168.26.102 dev ens32 proto bird
192.168.198.0/26 via 192.168.26.101 dev ens32 proto bird
192.168.239.128 dev cali0b7f49da20a scope link
blackhole 192.168.239.128/26 proto bird
192.168.26.102 | CHANGED | rc=0 >>
default via 192.168.26.2 dev ens32
169.254.0.0/16 dev ens32 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.26.0/24 dev ens32 proto kernel scope link src 192.168.26.102
192.168.63.64 dev cali6f956c2ada9 scope link
blackhole 192.168.63.64/26 proto bird
192.168.198.0/26 via 192.168.26.101 dev ens32 proto bird
192.168.239.128/26 via 192.168.26.100 dev ens32 proto bird
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Take one of the hosts, 192.168.26.100, as an example.

192.168.239.128 dev cali0b7f49da20a scope link

Inbound: packets from this host to its local container IP (192.168.239.128) go out through cali0b7f49da20a, the newly created virtual NIC.
192.168.63.64/26 via 192.168.26.102 dev ens32 proto bird
192.168.198.0/26 via 192.168.26.101 dev ens32 proto bird

Outbound: packets from this host to the container subnets 192.168.63.64/26 and 192.168.198.0/26 are sent out of ens32 to the other two hosts.
Every host knows which host each container lives on, so routes are set up dynamically.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "route -n"
192.168.26.101 | CHANGED | rc=0 >>
Kernel IP routing table
Destination       Gateway          Genmask           Flags Metric Ref  Use Iface
0.0.0.0           192.168.26.2     0.0.0.0           UG    0      0      0 ens32
169.254.0.0       0.0.0.0          255.255.0.0       U     1002   0      0 ens32
172.17.0.0        0.0.0.0          255.255.0.0       U     0      0      0 docker0
192.168.26.0      0.0.0.0          255.255.255.0     U     0      0      0 ens32
192.168.63.64     192.168.26.102   255.255.255.192   UG    0      0      0 ens32
192.168.198.0     0.0.0.0          255.255.255.192   U     0      0      0 *
192.168.198.1     0.0.0.0          255.255.255.255   UH    0      0      0 cali2f9e2c68bad
192.168.239.128   192.168.26.100   255.255.255.192   UG    0      0      0 ens32
192.168.26.100 | CHANGED | rc=0 >>
Kernel IP routing table
Destination       Gateway          Genmask           Flags Metric Ref  Use Iface
0.0.0.0           192.168.26.2     0.0.0.0           UG    0      0      0 ens32
169.254.0.0       0.0.0.0          255.255.0.0       U     1002   0      0 ens32
172.17.0.0        0.0.0.0          255.255.0.0       U     0      0      0 docker0
192.168.26.0      0.0.0.0          255.255.255.0     U     0      0      0 ens32
192.168.63.64     192.168.26.102   255.255.255.192   UG    0      0      0 ens32
192.168.198.0     192.168.26.101   255.255.255.192   UG    0      0      0 ens32
192.168.239.128   0.0.0.0          255.255.255.255   UH    0      0      0 cali0b7f49da20a
192.168.239.128   0.0.0.0          255.255.255.192   U     0      0      0 *
192.168.26.102 | CHANGED | rc=0 >>
Kernel IP routing table
Destination       Gateway          Genmask           Flags Metric Ref  Use Iface
0.0.0.0           192.168.26.2     0.0.0.0           UG    0      0      0 ens32
169.254.0.0       0.0.0.0          255.255.0.0       U     1002   0      0 ens32
172.17.0.0        0.0.0.0          255.255.0.0       U     0      0      0 docker0
192.168.26.0      0.0.0.0          255.255.255.0     U     0      0      0 ens32
192.168.63.64     0.0.0.0          255.255.255.255   UH    0      0      0 cali6f956c2ada9
192.168.63.64     0.0.0.0          255.255.255.192   U     0      0      0 *
192.168.198.0     192.168.26.101   255.255.255.192   UG    0      0      0 ens32
192.168.239.128   192.168.26.100   255.255.255.192   UG    0      0      0 ens32
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
A quick test: from the container on host 192.168.26.100 (IP 192.168.239.128), ping 192.168.63.64 (the container on 192.168.26.102) to confirm cross-host connectivity.
┌──[root@vms100.liruilongs.github.io]-[~]
└─$ docker exec -it 192.168.26.100 /bin/sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.239.128/32 scope global cali0
       valid_lft forever preferred_lft forever
/ # ping 192.168.63.64
PING 192.168.63.64 (192.168.63.64): 56 data bytes
64 bytes from 192.168.63.64: seq=0 ttl=62 time=18.519 ms
64 bytes from 192.168.63.64: seq=1 ttl=62 time=0.950 ms
64 bytes from 192.168.63.64: seq=2 ttl=62 time=1.086 ms
64 bytes from 192.168.63.64: seq=3 ttl=62 time=0.846 ms
64 bytes from 192.168.63.64: seq=4 ttl=62 time=0.840 ms
64 bytes from 192.168.63.64: seq=5 ttl=62 time=1.151 ms
64 bytes from 192.168.63.64: seq=6 ttl=62 time=0.888 ms
^C
--- 192.168.63.64 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.840/3.468/18.519 ms
/ #
In a Kubernetes cluster running Calico, each container likewise gets a corresponding cali* NIC on its host:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ad:e3:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.81/24 brd 192.168.26.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fead:e393/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:0a:9e:7d:44 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.88.64/32 scope global tunl0
       valid_lft forever preferred_lft forever
5: cali12cf25006b5@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
6: cali5a282a7bbb0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
7: calicb34164ec79@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
How Calico is implemented in K8s
Calico's core components are Felix, etcd, the BGP client (BIRD), and the BGP Route Reflector.
Felix, the Calico agent, runs on every Kubernetes node and is mainly responsible for programming routes and ACLs to keep Endpoints reachable.
etcd, a distributed key-value store, keeps the network metadata consistent so that the Calico network state stays accurate; it can be shared with Kubernetes.
BGP Client (BIRD) distributes the routes that Felix writes into the kernel to the rest of the Calico network, keeping workload-to-workload communication working.
BGP Route Reflector is used in large deployments: it replaces the full node-to-node mesh with one or more route reflectors that perform centralized route distribution.
Calico scales the Internet's IP-routing principles down to the data-center level. On every compute node it uses the Linux kernel as an efficient vRouter for data forwarding, and each vRouter advertises the routes of the containers running on it to the whole Calico network over BGP. Small deployments can peer directly in a full mesh; large ones can use designated BGP Route Reflectors. The end result is that all inter-container traffic flows as plain routed IP packets.
Communication is based on layer 3, with no encapsulation or encryption at layer 2, so it can only be used on a trusted private network.
Traffic isolation is implemented with iptables, and the isolation rules to generate are fetched from etcd, so there are some potential performance concerns.
Every host runs Calico-Node as a virtual router, and Calico can organize the hosts into an arbitrary topology. When containers in the cluster need to communicate with the outside world, a physical gateway router can be added over BGP, letting external hosts reach container IPs directly without any NAT-like machinery.
Overall K8s flow diagram
Kubernetes network policy
To provide fine-grained network access isolation between containers (a firewall), Kubernetes introduced the Network Policy mechanism starting in version 1.3, led by the SIG-Network group; it has since graduated to the stable networking.k8s.io/v1 API.
The main function of Network Policy is to restrict and control admission of network traffic between Pods. Policies use Pod labels as the query condition, building allow or deny lists of client Pods; the query condition can operate at both the Pod and the Namespace level.
To support Network Policy, Kubernetes introduced a new resource object, NetworkPolicy, for users to express inter-Pod access rules. Merely defining a network policy does not achieve any actual network isolation, however: a policy controller (PolicyController) is needed to enforce it.
The policy controller is provided by a third-party network component; open-source projects including Calico, Cilium, Kube-router, Romana, and Weave Net all implement network policy. The diagram shows how Network Policy works.
The policy controller implements an API listener that watches user-defined NetworkPolicy objects and pushes the access rules down to an agent on each Node for actual enforcement (the agent is implemented by the CNI network plugin).
Network policy configuration. A network policy restricts network access to the target Pods. By default every Pod accepts all traffic; only once a NetworkPolicy selects a Pod does access to that Pod become restricted. Note that network policies are Pod-scoped.
A NetworkPolicy is namespaced, i.e. it only applies within its own namespace, and its rules come in two kinds:
ingress: a whitelist of inbound clients allowed to reach the target Pods
egress: a whitelist of outbound destinations the target Pods may reach
There are three kinds of rule selectors (ipBlock, namespaceSelector, podSelector). Note that separate entries in a `from`/`to` list are combined with OR logic; to combine conditions with AND logic, the selectors must be written as a single list entry in the YAML.
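For example (a hypothetical fragment, not part of this lab's manifests), the first `from` list below ORs two independent rules, while the second puts both selectors in one entry, so a client must match both:

```yaml
# OR: traffic is allowed if it matches EITHER selector
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend
---
# AND: traffic must come from a Pod labeled role=frontend
# that lives in a namespace labeled project=myproject
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
    podSelector:
      matchLabels:
        role: frontend
```

The only difference is the `-` in front of `podSelector`: with it, `podSelector` starts a new list entry (OR); without it, it joins the previous entry (AND).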
Below is a demo resource file:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Default network policies at the Namespace level
Namespace-wide default policies can also be set, which makes it convenient for administrators to manage network policy for an entire Namespace uniformly.
Default: deny all ingress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Default: allow all ingress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
Default: deny all egress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
Default: allow all egress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
Default: deny all ingress and all egress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
The evolution of NetworkPolicy
As a stable feature, SCTP support is enabled by default. To disable SCTP cluster-wide, you (or your cluster administrator) pass --feature-gates=SCTPSupport=false to the API server, … to turn off the SCTPSupport feature gate. With the gate enabled, the protocol field of a NetworkPolicy can be set to SCTP (details vary slightly between versions).
NetworkPolicy in practice. Environment preparation: first create two Services with no policy attached.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$d=k8s-network-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir $d;cd $d
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl create ns liruilong-network-create
namespace/liruilong-network-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-network-create
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl config view | grep namespace
    namespace: liruilong-network-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$
First create two Pods to back the two Services:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run pod2 --image=nginx --image-pull-policy=IfNotPresent
pod/pod2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          35s   10.244.70.31     vms83.liruilongs.github.io   <none>           <none>
pod2   1/1     Running   0          21s   10.244.171.181   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$
Then change the home page of the Nginx container in each Pod:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get pods --show-labels
NAME   READY   STATUS    RESTARTS   AGE    LABELS
pod1   1/1     Running   0          100s   run=pod1
pod2   1/1     Running   0          86s    run=pod2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl exec -it pod1 -- sh -c "echo pod1 >/usr/share/nginx/html/index.html"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl exec -it pod2 -- sh -c "echo pod2 >/usr/share/nginx/html/index.html"
Create the two Services:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl expose --name=svc1 pod pod1 --port=80 --type=LoadBalancer
service/svc1 exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl expose --name=svc2 pod pod2 --port=80 --type=LoadBalancer
service/svc2 exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get svc
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
svc1   LoadBalancer   10.106.61.84     192.168.26.240   80:30735/TCP   14s
svc2   LoadBalancer   10.111.123.194   192.168.26.241   80:31034/TCP   5s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$
Access test: whether from the current namespace or another namespace, everything is reachable.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod1 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent
If you don't see a command prompt, try pressing enter.
/home # curl svc1
pod1
/home # curl svc2
pod2
/home #
From another namespace:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod2 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent -n default
If you don't see a command prompt, try pressing enter.
/home # curl svc1.liruilong-network-create
pod1
/home # curl svc2.liruilong-network-create
pod2
/home #
Since the Services are of type LoadBalancer, the physical host can reach them too:
PS E:\docker> curl 192.168.26.240

StatusCode        : 200
StatusDescription : OK
Content           : pod1
RawContent        : HTTP/1.1 200 OK
                    Connection: keep-alive
                    Accept-Ranges: bytes
                    Content-Length: 5
                    Content-Type: text/html
                    Date: Mon, 03 Jan 2022 12:29:32 GMT
                    ETag: "61d27744-5"
                    Last-Modified: Mon, 03 Jan 2022 04:1...
Forms             : {}
Headers           : {[Connection, keep-alive], [Accept-Ranges, bytes], [Content-Length, 5], [Content-Type, text/html]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : System.__ComObject
RawContentLength  : 5
PS E:\docker> curl 192.168.26.241

StatusCode        : 200
StatusDescription : OK
Content           : pod2
RawContent        : HTTP/1.1 200 OK
                    Connection: keep-alive
                    Accept-Ranges: bytes
                    Content-Length: 5
                    Content-Type: text/html
                    Date: Mon, 03 Jan 2022 12:29:49 GMT
                    ETag: "61d27752-5"
                    Last-Modified: Mon, 03 Jan 2022 04:1...
Forms             : {}
Headers           : {[Connection, keep-alive], [Accept-Ranges, bytes], [Content-Length, 5], [Content-Type, text/html]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : System.__ComObject
RawContentLength  : 5

PS E:\docker>
Ingress policies. Let's look at the ingress rules next.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get pods --show-labels
NAME   READY   STATUS    RESTARTS       AGE    LABELS
pod1   1/1     Running   2 (3d12h ago)  5d9h   run=pod1
pod2   1/1     Running   2 (3d12h ago)  5d9h   run=pod2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get svc -o wide
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE    SELECTOR
svc1   LoadBalancer   10.106.61.84     192.168.26.240   80:30735/TCP   5d9h   run=pod1
svc2   LoadBalancer   10.111.123.194   192.168.26.241   80:31034/TCP   5d9h   run=pod2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$
IP of the external physical test machine:
PS E:\docker> ipconfig

Windows IP Configuration
..........
Ethernet adapter VMware Network Adapter VMnet8:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::f9c8:e941:4deb:698f%24
   IPv4 Address. . . . . . . . . . . : 192.168.26.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :
IP-based restriction. We demonstrate network policy by changing the allowed IP range and testing from the physical host the VMs run on: when a specific subnet is whitelisted, clients outside that subnet cannot connect.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$vim networkpolicy.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/test-network-policy configured
Write the resource file, allowing access only from the 172.17.0.0/16 subnet:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
    ports:
    - protocol: TCP
      port: 80
Machines outside the cluster can no longer connect:
PS E:\docker> curl 192.168.26.240
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.26.240
+ ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest],WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Change the config to allow the current subnet:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.26.0/24
    ports:
    - protocol: TCP
      port: 80
After switching the subnet, external machines can connect again:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$sed -i 's#172.17.0.0/16#192.168.26.0/24#' networkpolicy.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy.yaml
Test: the external machine can connect:
PS E:\docker> curl 192.168.26.240

StatusCode        : 200
StatusDescription : OK
Content           : pod1
RawContent        : HTTP/1.1 200 OK
                    Connection: keep-alive
                    Accept-Ranges: bytes
                    Content-Length: 5
                    Content-Type: text/html
                    Date: Sat, 08 Jan 2022 14:59:13 GMT
                    ETag: "61d9a663-5"
                    Last-Modified: Sat, 08 Jan 2022 14:5...
Forms             : {}
Headers           : {[Connection, keep-alive], [Accept-Ranges, bytes], [Content-Length, 5], [Content-Type, text/html]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : System.__ComObject
RawContentLength  : 5
Namespace restriction: allow traffic only from the default namespace
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get ns --show-labels | grep default
default   Active   26d   kubernetes.io/metadata.name=default
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$vim networkpolicy-name.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy-name.yaml
networkpolicy.networking.k8s.io/test-network-policy configured
```
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
    ports:
    - protocol: TCP
      port: 80
```
The physical host machine can no longer access it:
```powershell
PS E:\docker> curl 192.168.26.240
curl : 无法连接到远程服务器
所在位置 行:1 字符: 1
+ curl 192.168.26.240
+ ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest],WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
PS E:\docker>
```
A pod in the current namespace cannot access it either:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod1 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent
/home #
curl: (28) Connection timed out after 10413 milliseconds
```
A pod in the default namespace can access it:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod1 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent --namespace=default
/home #
pod1
/home #
```
Pod selector restriction

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testpod
    ports:
    - protocol: TCP
      port: 80
```
Create a policy that only allows pods labeled `run=testpod` to access:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy-pod.yaml
networkpolicy.networking.k8s.io/test-network-policy created
```
Create two test pods, both with `--labels=run=testpod`. Because a bare `podSelector` in `from` only matches pods in the policy's own namespace, only the pod in the current namespace can access:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod1 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent --labels=run=testpod --namespace=default
/home #
curl: (28) Connection timed out after 10697 milliseconds
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl run testpod1 -it --rm --image=yauritux/busybox-curl --image-pull-policy=IfNotPresent --labels=run=testpod
/home #
pod1
/home #
```
The setting below allows pods labeled `run=testpod` in any namespace to access:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          run: testpod
    ports:
    - protocol: TCP
      port: 80
```
Allow access from both the default namespace and the current namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: liruilong-network-create
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          run: testpod
    - podSelector:
        matchLabels:
          run: testpod
    ports:
    - protocol: TCP
      port: 80
```
Locating the network policies applied to a pod: match a pod's labels against each policy's POD-SELECTOR.

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get networkpolicies
NAME                  POD-SELECTOR   AGE
test-network-policy   run=pod1       13m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get pods --show-labels
NAME   READY   STATUS    RESTARTS        AGE     LABELS
pod1   1/1     Running   2 (3d15h ago)   5d12h   run=pod1
pod2   1/1     Running   2 (3d15h ago)   5d12h   run=pod2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get networkpolicies | grep run=pod1
test-network-policy   run=pod1       15m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$
```
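The matching rule behind this lookup is simple: a policy selects a pod when every key/value pair in its `podSelector.matchLabels` is present in the pod's labels. A minimal sketch of that logic (the policy and pod data mirror the transcript above; `selector_matches` is a hypothetical helper, not a kubectl feature):

```python
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    """A matchLabels selector matches when every selector pair is in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

# policy name -> its podSelector.matchLabels, as shown by `kubectl get networkpolicies`
policies = {"test-network-policy": {"run": "pod1"}}
# pod name -> its labels, as shown by `kubectl get pods --show-labels`
pods = {"pod1": {"run": "pod1"}, "pod2": {"run": "pod2"}}

for name, sel in policies.items():
    hit = [p for p, labels in pods.items() if selector_matches(sel, labels)]
    print(name, "->", hit)  # test-network-policy -> ['pod1']
```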
Egress policy

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: pod2
    ports:
    - protocol: TCP
      port: 80
```
With this policy, pod1 can only access pod2, and only on TCP port 80:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy1.yaml
```
Accessing pod2 by IP works normally:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl exec -it pod1 -- bash
root@pod1:/#
pod2
```
Because the DNS pods run in a different namespace (kube-system) and pod1's egress is limited to pod2, name resolution fails and pod2 cannot be reached by domain name. An egress rule to that namespace has to be added:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl exec -it pod1 -- bash
root@pod1:/#
^C
```
Gather the relevant namespace and pod labels:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get ns --show-labels | grep kube-system
kube-system   Active   27d   kubernetes.io/metadata.name=kube-system
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get pods --show-labels -n kube-system | grep dns
coredns-7f6cbbb7b8-ncd2s   1/1   Running   13 (3d19h ago)   24d   k8s-app=kube-dns,pod-template-hash=7f6cbbb7b8
coredns-7f6cbbb7b8-pjnct   1/1   Running   13 (3d19h ago)   24d   k8s-app=kube-dns,pod-template-hash=7f6cbbb7b8
```
Configure two egress rules: one to pod2 and one to kube-dns, each with its own protocol and port:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: pod2
    ports:
    - protocol: TCP
      port: 80
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
```
Test: pod2 can now be reached by domain name:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$vim networkpolicy2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl apply -f networkpolicy2.yaml
networkpolicy.networking.k8s.io/test-network-policy configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl get networkpolicies
NAME                  POD-SELECTOR   AGE
test-network-policy   run=pod1       3h38m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-network-create]
└─$kubectl exec -it pod1 -- bash
root@pod1:/#
pod2
root@pod1:/#
```
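The way these two egress rules combine can be sketched in a few lines: rules in a policy are OR'd together, so a connection is allowed if any single rule matches its destination, protocol, and port. A minimal model, assuming the policy lives in the liruilong-network-create namespace (so the bare `podSelector` rule only matches destinations there; `egress_allowed` is a hypothetical helper for illustration):

```python
# Each entry models one egress rule from the policy above.
RULES = [
    {"labels": {"run": "pod2"}, "namespace": "liruilong-network-create",
     "protocol": "TCP", "port": 80},
    {"labels": {"k8s-app": "kube-dns"}, "namespace": "kube-system",
     "protocol": "UDP", "port": 53},
]

def egress_allowed(dst_labels: dict, dst_ns: str, protocol: str, port: int) -> bool:
    """Allowed if ANY rule matches the destination labels, namespace, protocol, and port."""
    return any(
        all(dst_labels.get(k) == v for k, v in r["labels"].items())
        and dst_ns == r["namespace"]
        and (protocol, port) == (r["protocol"], r["port"])
        for r in RULES
    )

print(egress_allowed({"run": "pod2"}, "liruilong-network-create", "TCP", 80))   # True
print(egress_allowed({"k8s-app": "kube-dns"}, "kube-system", "UDP", 53))        # True
print(egress_allowed({"run": "pod2"}, "liruilong-network-create", "TCP", 443))  # False
```

This is why adding the second rule fixed name resolution without loosening the pod2 rule: each rule stands alone.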