Certified Kubernetes Administrator (CKA) Exam Notes (Part 1)

The meaning of life is learning to live honestly; the meaning of living is searching for the meaning of life. —– 山河已无恙

Preface


  • I'm preparing for the CKA certification. I signed up for a training course and spent quite a bit of money, so I have to pass.
  • This post is the notes I compiled after the classes; it works well for review.
  • Topics covered: docker, k8s, pod, etcd, volume.



Part 1: Docker basics

1. Container ?= Docker

What is a container, and what is Docker? Everyone knows boot disks: when a computer's OS is broken and it won't start, we plug in a boot disk that carries some basic software. **The boot disk can be thought of as something like an image, and the Win PE system that the boot disk runs on the computer is a container.** The physical memory and CPU this system needs all come from the physical machine, i.e., the computer that wouldn't boot.

In real-world scenarios, how do we manage many containers and images? We can hardly put one image on each USB stick. Here we **need a runtime, i.e., a piece of software that manages containers**, such as runc, lxc, gVisor, or Kata. These can only manage containers, not images, and are called **low-level runtimes**.

Low-level runtimes have a narrow feature set and cannot manage images, so we also need high-level runtimes such as docker, podman, or containerd, which call and drive the low-level runtimes (runc and friends) and can manage both containers and images. Kubernetes, in turn, manages high-level runtimes.
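This layering can be checked on any host with Docker installed — a minimal sketch, assuming the docker CLI is available:

# docker hands work to containerd (high-level runtime), which invokes runc (low-level runtime)
docker info --format '{{.DefaultRuntime}}'      # usually prints: runc
docker info | grep -iE 'containerd|runc'        # shows the containerd and runc versions in use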

Disable screen blanking

setterm -blank 0

Configure the yum repositories

rm -rf /etc/yum.repos.d/
wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/

Configure a Docker registry mirror (accelerator)

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Using a domestic registry

Huawei Cloud, NetEase Cloud, and Alibaba Cloud all provide container registries (screenshots omitted).

2. Docker image management

┌──(liruilong㉿Liruilong)-[/mnt/c/Users/lenovo]
└─$ ssh root@192.168.26.55
Last login: Fri Oct 1 16:39:16 2021 from 192.168.26.1
┌──[root@liruilongs.github.io]-[~]
└─$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-09-26 02:07:56 CST; 1 weeks 0 days ago
Docs: https://docs.docker.com
Main PID: 1004 (dockerd)
Memory: 136.1M
CGroup: /system.slice/docker.service
└─1004 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
。。。。。。。
┌──[root@liruilongs.github.io]-[~]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
┌──[root@liruilongs.github.io]-[~]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
┌──[root@liruilongs.github.io]-[~]
└─$

Docker image management
Image naming: the default registry prefix is docker.io
docker pull <image>                         pull an image
docker tag <image> <new-name>               tag (rename) an image, similar to a hard link in Linux (sketched below)
docker rmi <image>                          delete an image
docker save <image> > filename.tar          save (back up) an image
docker load -i filename.tar                 load an image from a tar file
docker export <container> > filename.tar    export a container as an image
cat filename.tar | docker import - <image>  import an exported container as an image
docker history <image> --no-trunc           show the full, untruncated build history
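To make the "hard link" comparison for docker tag concrete, here is a minimal sketch (the nginx image is only an example): tagging adds a second name pointing at the same image ID, and removing one name leaves the other usable.

docker pull nginx:latest
docker tag nginx:latest mynginx:v1          # same IMAGE ID, two names
docker images | grep -E 'nginx|mynginx'
docker rmi nginx:latest                     # only removes that name; mynginx:v1 still works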
┌──[root@liruilongs.github.io]-[~]
└─$ docker images | grep -v TAG | awk '{print $1":"$2}'
nginx:latest
mysql:latest

Back up all images
docker images | grep -v TAG | awk '{print $1":"$2}' | xargs docker save >all.tar

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images | grep -v TAG | awk '{print $1":"$2}' | xargs docker save >all.tar
┌──[root@liruilongs.github.io]-[~/docker]
└─$ ls
all.tar docker_images_util_202110032229_UCPY4C5k.sh

Delete all images
docker images | grep -v TAG | awk '{print $1":"$2}' | xargs docker rmi

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest f8f4ffc8092c 5 days ago 133MB
mysql latest 2fe463762680 5 days ago 514MB
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images | grep -v TAG | awk '{print $1":"$2}' | xargs docker rmi
Untagged: nginx:latest
Untagged: nginx@sha256:765e51caa9e739220d59c7f7a75508e77361b441dccf128483b7f5cce8306652
Deleted: sha256:f8f4ffc8092c956ddd6a3a64814f36882798065799b8aedeebedf2855af3395b
Deleted: sha256:f208904eecb00a0769d263e81b8234f741519fefa262482d106c321ddc9773df
Deleted: sha256:ed6dd2b44338215d30a589d7d36cb4ffd05eb28d2e663a23108d03b3ac273d27
Deleted: sha256:c9958d4f33715556566095ccc716e49175a1fded2fa759dbd747750a89453490
Deleted: sha256:c47815d475f74f82afb68ef7347b036957e7e1a1b0d71c300bdb4f5975163d6a
Deleted: sha256:3b06b30cf952c2f24b6eabdff61b633aa03e1367f1ace996260fc3e236991eec
Untagged: mysql:latest
Untagged: mysql@sha256:4fcf5df6c46c80db19675a5c067e737c1bc8b0e78e94e816a778ae2c6577213d
Deleted: sha256:2fe4637626805dc6df98d3dc17fa9b5035802dcbd3832ead172e3145cd7c07c2
Deleted: sha256:e00bdaa10222919253848d65585d53278a2f494ce8c6a445e5af0ebfe239b3b5
Deleted: sha256:83411745a5928b2a3c2b6510363218fb390329f824e04bab13573e7a752afd50
Deleted: sha256:e8e521a71a92aad623b250b0a192a22d54ad8bbeb943f7111026041dce20d94f
Deleted: sha256:024ee0ef78b28663bc07df401ae3a258ae012bd5f37c2960cf638ab4bc04fafd
Deleted: sha256:597139ec344c8cb622127618ae21345b96dd23e36b5d04b071a3fd92d207a2c0
Deleted: sha256:28909b85bd680fc47702edb647a06183ae5f3e3020f44ec0d125bf75936aa923
Deleted: sha256:4e007ef1e2a3e1e0ffb7c0ad8c9ea86d3d3064e360eaa16e7c8e10f514f68339
Deleted: sha256:b01d7bbbd5c0e2e5ae10de108aba7cd2d059bdd890814931f6192c97fc8aa984
Deleted: sha256:d98a368fc2299bfa2c34cc634fa9ca34bf1d035e0cca02e8c9f0a07700f18103
Deleted: sha256:95968d83b58ae5eec87e4c9027baa628d0e24e4acebea5d0f35eb1b957dd4672
Deleted: sha256:425adb901baf7d6686271d2ce9d42b8ca67e53cffa1bc05622fd0226ae40e9d8
Deleted: sha256:476baebdfbf7a68c50e979971fcd47d799d1b194bcf1f03c1c979e9262bcd364
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
┌──[root@liruilongs.github.io]-[~/docker]

Load all the images back
docker load -i all.tar

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker load -i all.tar
476baebdfbf7: Loading layer 72.53MB/72.53MB
525950111558: Loading layer 64.97MB/64.97MB
0772cb25d5ca: Loading layer 3.072kB/3.072kB
6e109f6c2f99: Loading layer 4.096kB/4.096kB
88891187bdd7: Loading layer 3.584kB/3.584kB
65e1ea1dc98c: Loading layer 7.168kB/7.168kB
Loaded image: nginx:latest
f2f5bad82361: Loading layer 338.4kB/338.4kB
96fe563c6126: Loading layer 9.557MB/9.557MB
44bc6574c36f: Loading layer 4.202MB/4.202MB
e333ff907af7: Loading layer 2.048kB/2.048kB
4cffbf4e4fe3: Loading layer 53.77MB/53.77MB
42417c6d26fc: Loading layer 5.632kB/5.632kB
c786189c417d: Loading layer 3.584kB/3.584kB
2265f824a3a8: Loading layer 378.8MB/378.8MB
6eac57c056e6: Loading layer 5.632kB/5.632kB
92b76bd444bf: Loading layer 17.92kB/17.92kB
0b282e0f658a: Loading layer 1.536kB/1.536kB
Loaded image: mysql:latest
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest f8f4ffc8092c 5 days ago 133MB
mysql latest 2fe463762680 5 days ago 514MB
┌──[root@liruilongs.github.io]-[~/docker]
└─$

A MySQL image runs a single mysqld process: CMD ["mysqld"]

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker history mysql
IMAGE CREATED CREATED BY SIZE COMMENT
2fe463762680 5 days ago /bin/sh -c #(nop) CMD ["mysqld"] 0B
<missing> 5 days ago /bin/sh -c #(nop) EXPOSE 3306 33060 0B
<missing> 5 days ago /bin/sh -c #(nop) ENTRYPOINT ["docker-entry… 0B
<missing> 5 days ago /bin/sh -c ln -s usr/local/bin/docker-entryp… 34B
<missing> 5 days ago /bin/sh -c #(nop) COPY file:345a22fe55d3e678… 14.5kB
<missing> 5 days ago /bin/sh -c #(nop) COPY dir:2e040acc386ebd23b… 1.12kB
<missing> 5 days ago /bin/sh -c #(nop) VOLUME [/var/lib/mysql] 0B
<missing> 5 days ago /bin/sh -c { echo mysql-community-server m… 378MB
<missing> 5 days ago /bin/sh -c echo 'deb http://repo.mysql.com/a… 55B
<missing> 5 days ago /bin/sh -c #(nop) ENV MYSQL_VERSION=8.0.26-… 0B
<missing> 5 days ago /bin/sh -c #(nop) ENV MYSQL_MAJOR=8.0 0B
<missing> 5 days ago /bin/sh -c set -ex; key='A4A9406876FCBD3C45… 1.84kB
<missing> 5 days ago /bin/sh -c apt-get update && apt-get install… 52.2MB
<missing> 5 days ago /bin/sh -c mkdir /docker-entrypoint-initdb.d 0B
<missing> 5 days ago /bin/sh -c set -eux; savedAptMark="$(apt-ma… 4.17MB
<missing> 5 days ago /bin/sh -c #(nop) ENV GOSU_VERSION=1.12 0B
<missing> 5 days ago /bin/sh -c apt-get update && apt-get install… 9.34MB
<missing> 5 days ago /bin/sh -c groupadd -r mysql && useradd -r -… 329kB
<missing> 5 days ago /bin/sh -c #(nop) CMD ["bash"] 0B
<missing> 5 days ago /bin/sh -c #(nop) ADD file:99db7cfe7952a1c7a… 69.3MB
┌──[root@liruilongs.github.io]-[~/docker]
└─$

3. Managing containers with Docker

Command                                                              Description
docker run <image>                                                   run the simplest possible container
docker run -it --rm hub.c.163.com/library/centos /bin/bash           interactive, with a terminal
docker run -dit -h node --name=c1 <image> <cmd>                      set a hostname and a name; create without entering it (--attach to attach, --detach to stay detached), daemon style
docker run -dit --restart=always <image> <cmd>                       the container stays alive and restarts automatically when it exits
docker run -it --rm <image> <cmd>                                    delete the container when its process ends
docker run -dit --restart=always -e VAR1=val1 -e VAR2=val2 <image>   pass environment variables
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker pull centos
Using default tag: latest
latest: Pulling from library/centos
a1d0c7532777: Pull complete
Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
Status: Downloaded newer image for centos:latest
docker.io/library/centos:latest
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -it --name=c1 centos # -t allocates a terminal for bash, -i keeps the session interactive
WARNING: IPv4 forwarding is disabled. Networking will not work.
[root@f418f094e0d8 /]# ls
bin etc lib lost+found mnt proc run srv tmp var
dev home lib64 media opt root sbin sys usr
[root@f418f094e0d8 /]# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f418f094e0d8 centos "/bin/bash" 51 seconds ago Exited (0) 4 seconds ago c1
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -it --restart=always --name=c2 centos
WARNING: IPv4 forwarding is disabled. Networking will not work.
[root@ecec30685687 /]# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ecec30685687 centos "/bin/bash" 5 seconds ago Up 1 second c2
f418f094e0d8 centos "/bin/bash" About a minute ago Exited (0) About a minute ago c1
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker rm c1
c1
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker rm c2
Error response from daemon: You cannot remove a running container ecec30685687c9f0af08ea721f6293a3fb635c8290bee3347bb54f11ff3e32fa. Stop the container before attempting removal or force remove
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -itd --restart=always --name=c2 centos
docker: Error response from daemon: Conflict. The container name "/c2" is already in use by container "ecec30685687c9f0af08ea721f6293a3fb635c8290bee3347bb54f11ff3e32fa". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -itd --restart=always --name=c3 centos
WARNING: IPv4 forwarding is disabled. Networking will not work.
97ffd93370d4e23e6a3d2e6a0c68030d482cabb8ab71b5ceffb4d703de3a6b0c
┌──[root@liruilongs.github.io]-[~/docker]
└─$

Create a MySQL container

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -dit --name=db --restart=always -e MYSQL_ROOT_PASSWORD=liruilong -e MYSQL_DATABASE=blog mysql
WARNING: IPv4 forwarding is disabled. Networking will not work.
0a79be3ed7dbd9bdf19202cda74aa3b3db818bd23deca23248404c673c7e1ff7
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
0a79be3ed7db mysql "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 17 minutes ago Up 17 minutes c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker logs db
2021-10-03 16:49:41+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-03 16:49:41+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-10-03 16:49:41+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-03 16:49:41+00:00 [Note] [Entrypoint]: Initializing database files
2021-10-03T16:49:41.391137Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.26) initializing of server in progress as process 41
2021-10-03T16:49:41.400419Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-03T16:49:42.345302Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-03T16:49:46.187521Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-03T16:49:46.188871Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-03T16:49:46.312124Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-10-03 16:49:55+00:00 [Note] [Entrypoint]: Database files initialized
2021-10-03 16:49:55+00:00 [Note] [Entrypoint]: Starting temporary server
mysqld will log errors to /var/lib/mysql/0a79be3ed7db.err
┌──[root@liruilongs.github.io]-[~/docker]
└─$

Run an Nginx container

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -dit --restart=always -p 80 nginx
WARNING: IPv4 forwarding is disabled. Networking will not work.
c7570bd68368f3e4c9a4c8fdce67845bcb5fee12d1cc785d6e448979592a691e
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 4 seconds ago Up 2 seconds 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 20 minutes ago Up 20 minutes c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$

4. Common container-management commands

Command                          Description
docker exec <container> <cmd>    start a new process inside the container
docker start <container>         start the container
docker stop <container>          stop the container; its IP address is released after stop
docker restart <container>       restart the container, e.g. when its service needs a restart
docker top <container>           show the container's processes
docker logs -f <container>       follow the logs
docker inspect <container>       detailed information, IP address, etc. (see the --format sketch below)
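docker inspect prints a large JSON document; its --format flag pulls out a single field, which is handy for scripting — a small sketch, assuming a running container named db:

# IP address on the default bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
# restart policy
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' db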
┌──[root@liruilongs.github.io]-[~/docker]
└─$ mysql -uroot -pliruilong -h172.17.0.2 -P3306
ERROR 2059 (HY000): Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker exec -it db /bin/bash
root@0a79be3ed7db:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.26 MySQL Community Server - GPL

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> ALTER USER 'root'@'%' IDENTIFIED BY 'password' PASSWORD EXPIRE NEVER;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'liruilong';
Query OK, 0 rows affected (0.01 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

mysql> exit
Bye
root@0a79be3ed7db:/# eixt
bash: eixt: command not found
root@0a79be3ed7db:/# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ mysql -uroot -pliruilong -h172.17.0.2 -P3306
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.26 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use blog
Database changed
MySQL [blog]>
1
2
3
4
5
6
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker top db
UID PID PPID C STIME TTY TIME CMD
polkitd 15911 15893 1 00:49 ? 00:00:45 mysqld
┌──[root@liruilongs.github.io]-[~/docker]
└─$
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 43 minutes ago Up 43 minutes 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 46 minutes ago Up 46 minutes 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" About an hour ago Up About an hour c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker stop db
db
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 43 minutes ago Up 43 minutes 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
97ffd93370d4 centos "/bin/bash" About an hour ago Up About an hour c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker start db
db
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 44 minutes ago Up 44 minutes 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 47 minutes ago Up 2 seconds 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" About an hour ago Up About an hour c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker restart db
db
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 44 minutes ago Up 44 minutes 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 47 minutes ago Up 2 seconds 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" About an hour ago Up About an hour c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$

Delete all containers

┌──[root@liruilongs.github.io]-[~]
└─$ docker ps | grep -v IMAGE
5b3557283314 nginx "/docker-entrypoint.…" About an hour ago Up About an hour 80/tcp web
c7570bd68368 nginx "/docker-entrypoint.…" 9 hours ago Up 9 hours 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 9 hours ago Up 8 hours 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 9 hours ago Up 9 hours c3
┌──[root@liruilongs.github.io]-[~]
└─$ docker ps | grep -v IMAGE | awk '{print $1}'
5b3557283314
c7570bd68368
0a79be3ed7db
97ffd93370d4

┌──[root@liruilongs.github.io]-[~]
└─$ docker ps | grep -v IMAGE | awk '{print $1}'| xargs docker rm -f
5b3557283314
c7570bd68368
0a79be3ed7db
97ffd93370d4
┌──[root@liruilongs.github.io]-[~]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
┌──[root@liruilongs.github.io]-[~]
└─$

5. Using data volumes

Command                                                                      Description
docker run -dit --restart=always -v host_path:container_path <image> <cmd>   like port mapping, bind-mount a host directory directly
docker run -dit --restart=always -v container_path <image> <cmd>             with only one path given, check the mapping via docker inspect (the Mounts field)
docker volume create v1                                                      create a named volume, then mount it

By default data is written to the container layer, so deleting the container deletes that data as well.

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 44 minutes ago Up 44 minutes 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 47 minutes ago Up 2 seconds 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" About an hour ago Up About an hour c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ find / -name liruilong.html
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker exec -it c7570bd68368 /bin/bash
root@c7570bd68368:/# echo "liruilong" > liruilong.html
root@c7570bd68368:/# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ find / -name liruilong.html
/var/lib/docker/overlay2/56de0e042c7c5b9704df156b6473b528ca7468d8b1085cb43294f9111b270540/diff/liruilong.html
/var/lib/docker/overlay2/56de0e042c7c5b9704df156b6473b528ca7468d8b1085cb43294f9111b270540/merged/liruilong.html
┌──[root@liruilongs.github.io]-[~/docker]
└─$

docker run -itd --name=web -v /root/docker/liruilong:/liruilong:rw nginx

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7570bd68368 nginx "/docker-entrypoint.…" 8 hours ago Up 8 hours 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 8 hours ago Up 7 hours 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 8 hours ago Up 8 hours
c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker rm -f web
web
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -itd --name=web -v /root/docker/liruilong:/liruilong:rw nginx
WARNING: IPv4 forwarding is disabled. Networking will not work.
5949fba8c9c810ed3a06fcf1bc8148aef22893ec99450cec2443534b2f9eb063
┌──[root@liruilongs.github.io]-[~/docker]
└─$ ls
all.tar docker_images_util_202110032229_UCPY4C5k.sh liruilong
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5949fba8c9c8 nginx "/docker-entrypoint.…" 57 seconds ago Up 4 seconds 80/tcp web
c7570bd68368 nginx "/docker-entrypoint.…" 8 hours ago Up 8 hours 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 8 hours ago Up 7 hours 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 8 hours ago Up 8 hours c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker exec -it web /bin/bash
root@5949fba8c9c8:/# ls
bin docker-entrypoint.d home liruilong opt run sys var
boot docker-entrypoint.sh lib media proc sbin tmp
dev etc lib64 mnt root srv usr
root@5949fba8c9c8:/#

docker volume create v1

┌──[root@liruilongs.github.io]-[~/docker]
└─$
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker volume list
DRIVER VOLUME NAME
local 9e939eda6c4d8c574737905857d57014a1c4dda10eef77520e99804c7c67ac39
local 34f699eb0535315b651090afd90768f4e4cfa42acf920753de9015261424812c
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker volume create v1
v1
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker volume list
DRIVER VOLUME NAME
local 9e939eda6c4d8c574737905857d57014a1c4dda10eef77520e99804c7c67ac39
local 34f699eb0535315b651090afd90768f4e4cfa42acf920753de9015261424812c
local v1
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker volume inspect v1
[
{
"CreatedAt": "2021-10-04T08:46:55+08:00",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/v1/_data",
"Name": "v1",
"Options": {},
"Scope": "local"
}
]
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -itd --name=web -v v1:/liruilong:ro nginx
WARNING: IPv4 forwarding is disabled. Networking will not work.
5b3557283314d5ab745855f3827d070559cd3340f6a2d5a420941e717dc2145b
┌──[root@liruilongs.github.io]-[~/docker]
└─$ ls
all.tar docker_images_util_202110032229_UCPY4C5k.sh liruilong
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker exec -it web bash
root@5b3557283314:/# touch /liruilong/liruilong.sql
touch: cannot touch '/liruilong/liruilong.sql': Read-only file system
root@5b3557283314:/# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ touch /var/lib/docker/volumes/v1/_data/liruilong.sql
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker exec -it web bash
root@5b3557283314:/# ls /liruilong/
liruilong.sql
root@5b3557283314:/#

The host can see the processes running inside containers

┌──[root@liruilongs.github.io]-[~/docker]
└─$ ps aux | grep -v grep | grep mysqld
polkitd 16727 1.6 9.6 1732724 388964 pts/0 Ssl+ 06:48 2:10 mysqld
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b3557283314 nginx "/docker-entrypoint.…" 12 minutes ago Up 12 minutes 80/tcp web
c7570bd68368 nginx "/docker-entrypoint.…" 8 hours ago Up 8 hours 0.0.0.0:49153->80/tcp, :::49153->80/tcp jovial_solomon
0a79be3ed7db mysql "docker-entrypoint.s…" 8 hours ago Up 7 hours 3306/tcp, 33060/tcp db
97ffd93370d4 centos "/bin/bash" 8 hours ago Up 8 hours c3
┌──[root@liruilongs.github.io]-[~/docker]
└─$

6. Docker network management

Command                                                          Description
docker network list                                              list all networks
docker network inspect 6f70229c85f0                              inspect a network
man -k docker                                                    help pages
man docker-network-create                                        manual page for creating a network
docker network create -d bridge --subnet=10.0.0.0/24 mynet       create a network
docker run --net=mynet --rm -it centos /bin/bash                 run on a specific network
docker run -dit -p host_port:container_port <image>              publish a port
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf; sysctl -p    NAT mode needs IP forwarding enabled
echo 1 > /proc/sys/net/ipv4/ip_forward                           NAT mode needs IP forwarding enabled; either way works
┌──[root@liruilongs.github.io]-[~]
└─$ ifconfig docker0 # the bridge interface
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:38ff:fee1:6cb2 prefixlen 64 scopeid 0x20<link>
ether 02:42:38:e1:6c:b2 txqueuelen 0 (Ethernet)
RX packets 54 bytes 4305 (4.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 74 bytes 5306 (5.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

┌──[root@liruilongs.github.io]-[~]
└─$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "ebc5c96c853aa5271006387393b3b2dddcbfbc3b6f1f9ecba44bf87f550ed134",
"Created": "2021-09-26T02:07:56.019076931+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0a79be3ed7dbd9bdf19202cda74aa3b3db818bd23deca23248404c673c7e1ff7": {
"Name": "db",
"EndpointID": "8fe3dbabc838c14a6e23990abd860824d505d49bd437d47c45a85eed06de2aba",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"5b3557283314d5ab745855f3827d070559cd3340f6a2d5a420941e717dc2145b": {
"Name": "web",
"EndpointID": "3f52014a93e20c1f71fff7bda51a169648db932a72101e06d2c33633ac778c5b",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"97ffd93370d4e23e6a3d2e6a0c68030d482cabb8ab71b5ceffb4d703de3a6b0c": {
"Name": "c3",
"EndpointID": "3dca7f002ebf82520ecc0b28ef4e19cd3bc867d1af9763b9a4969423b4e2a5f6",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"c7570bd68368f3e4c9a4c8fdce67845bcb5fee12d1cc785d6e448979592a691e": {
"Name": "jovial_solomon",
"EndpointID": "56be0daa5a7355201a0625259585561243a4ce1f37736874396a3fb0467f26fe",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
┌──[root@liruilongs.github.io]-[~]
└─$

Create a network

┌──[root@liruilongs.github.io]-[~]
└─$ docker network create -d bridge --subnet=10.0.0.0/24 mynet
4b3da203747c7885a7942ace7c72a2fdefd2f538256cfac1a545f7fd3a070dc5
┌──[root@liruilongs.github.io]-[~]
└─$ ifconfig
br-4b3da203747c: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:42:f4:31:01:9f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Run a container on a specified network

┌──[root@liruilongs.github.io]-[~]
└─$ docker history busybox:latest
IMAGE CREATED CREATED BY SIZE COMMENT
16ea53ea7c65 2 weeks ago /bin/sh -c #(nop) CMD ["sh"] 0B
<missing> 2 weeks ago /bin/sh -c #(nop) ADD file:c9e0c3d3badfd458c… 1.24MB
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=c1 busybox
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

/ # exit
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=c2 --network=mynet busybox
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:00:02
inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1086 (1.0 KiB) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

/ # exit
┌──[root@liruilongs.github.io]-[~]
└─$

Enable IP forwarding

┌──[root@liruilongs.github.io]-[~]
└─$ cat /proc/sys/net/ipv4/ip_forward
0
┌──[root@liruilongs.github.io]-[~]
└─$ cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
┌──[root@liruilongs.github.io]-[~]
└─$ echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf;sysctl -p
net.ipv4.ip_forward = 1
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=c2 --network=mynet busybox
/ # ping www.baidu.com
PING www.baidu.com (220.181.38.150): 56 data bytes
64 bytes from 220.181.38.150: seq=0 ttl=127 time=34.047 ms
64 bytes from 220.181.38.150: seq=1 ttl=127 time=20.363 ms
64 bytes from 220.181.38.150: seq=2 ttl=127 time=112.075 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 20.363/55.495/112.075 ms
/ # exit
┌──[root@liruilongs.github.io]-[~]
└─$ cat /proc/sys/net/ipv4/ip_forward
1
┌──[root@liruilongs.github.io]-[~]
└─$

Build a WordPress blog with containers

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps | grep -v IMAGE | awk '{print $1}'| xargs docker rm -f
1ce97e8dc071
0d435b696a7e
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -dit --name=db --restart=always -v $PWD/db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=liruilong -e WORDPRESS_DATABASE=wordpress hub.c.163.com/library/mysql
8605e77f8d50223f52619e6e349085566bc53a7e74470ac0a44340620f32abe8
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8605e77f8d50 hub.c.163.com/library/mysql "docker-entrypoint.s…" 6 seconds ago Up 4 seconds 3306/tcp db
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -itd --name=blog --restart=always -v $PWD/blog:/var/www/html -p 80 -e WORDPRESS_DB_HOST=172.17.0.2 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=liruilong -e WORDPRESS_DB_NAME=wordpress hub.c.163.com/library/wordpress
a90951cdac418db85e9dfd0e0890ec1590765c5770faf9893927a96ea93da9f5
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a90951cdac41 hub.c.163.com/library/wordpress "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 0.0.0.0:49271->80/tcp, :::49271->80/tcp blog
8605e77f8d50 hub.c.163.com/library/mysql "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3306/tcp db
┌──[root@liruilongs.github.io]-[~/docker]
└─$
┌──[root@liruilongs.github.io]-[~/docker]
└─$

Container network configuration

Mode     Description
bridge   bridged mode
host     host mode
none     isolated mode

docker network list

┌──[root@liruilongs.github.io]-[~]
└─$ docker network list
NETWORK ID NAME DRIVER SCOPE
ebc5c96c853a bridge bridge local
25037835956b host host local
ba07e9427974 none null local

bridge: bridged mode

┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name c1 centos /bin/bash
[root@62043df180e4 /]# ifconfig
bash: ifconfig: command not found
[root@62043df180e4 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@62043df180e4 /]# exit
exit

host: shares the host's network namespace

┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name c1 --network host centos /bin/bash
[root@liruilongs /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:c9:6f:ae brd ff:ff:ff:ff:ff:ff
inet 192.168.26.55/24 brd 192.168.26.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fec9:6fae/64 scope link
valid_lft forever preferred_lft forever
3: br-4b3da203747c: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:8e:25:1b:19 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global br-4b3da203747c
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0a:63:cf:de brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe63:cfde/64 scope link
valid_lft forever preferred_lft forever
14: veth9f0ef36@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 16:2f:a6:23:3b:88 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::142f:a6ff:fe23:3b88/64 scope link
valid_lft forever preferred_lft forever
16: veth37a0e67@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 56:b4:1b:74:cf:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::54b4:1bff:fe74:cf3f/64 scope link
valid_lft forever preferred_lft forever
[root@liruilongs /]# exit
exit

none: isolated from the host, in its own separate network namespace

┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name c1 --network none centos /bin/bash
[root@7f955d36625e /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
[root@7f955d36625e /]# exit
exit
┌──[root@liruilongs.github.io]-[~]
└─$

Container linking

docker run -it --rm --name=h1 centos /bin/bash          create a container h1
Then create a second container h2; there are two ways for it to reach h1:
docker inspect h1 | grep -i ipaddr
docker run -it --rm --name=h2 centos ping 172.17.0.4
docker run -it --rm --name=h2 --link h1:h1 centos ping h1
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=h1 centos /bin/bash
[root@207dbbda59af /]#
┌──[root@liruilongs.github.io]-[~]
└─$ docker inspect h1 | grep -i ipaddr
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.4",
"IPAddress": "172.17.0.4",
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=h2 centos ping -c 3 172.17.0.4
PING 172.17.0.4 (172.17.0.4) 56(84) bytes of data.
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.284 ms
64 bytes from 172.17.0.4: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 172.17.0.4: icmp_seq=3 ttl=64 time=0.142 ms

--- 172.17.0.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.098/0.174/0.284/0.080 ms
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=h2 --link h1:h1 centos ping -c 3 h1
PING h1 (172.17.0.4) 56(84) bytes of data.
64 bytes from h1 (172.17.0.4): icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from h1 (172.17.0.4): icmp_seq=2 ttl=64 time=0.089 ms
64 bytes from h1 (172.17.0.4): icmp_seq=3 ttl=64 time=0.082 ms

--- h1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.082/0.098/0.124/0.020 ms
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -it --rm --name=h2 --link h1 centos ping -c 3 h1
PING h1 (172.17.0.4) 56(84) bytes of data.
64 bytes from h1 (172.17.0.4): icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from h1 (172.17.0.4): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from h1 (172.17.0.4): icmp_seq=3 ttl=64 time=0.117 ms

--- h1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.079/0.108/0.129/0.022 ms
┌──[root@liruilongs.github.io]-[~]
└─$

Build a WordPress blog with containers: the simple way

┌──[root@liruilongs.github.io]-[~]
└─$ docker run -dit --name=db --restart=always -v $PWD/db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=liruilong -e WORDPRESS_DATABASE=wordpress hub.c.163.com/library/mysql
c4a88590cb21977fc68022501fde1912d0bb248dcccc970ad839d17420b8b08d
┌──[root@liruilongs.github.io]-[~]
└─$ docker run -dit --name blog --link=db:mysql -p 80:80 hub.c.163.com/library/wordpress
8a91caa1f9fef1575cc38788b0e8739b7260729193cf18b094509dcd661f544b
┌──[root@liruilongs.github.io]-[~]
└─$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a91caa1f9fe hub.c.163.com/library/wordpress "docker-entrypoint.s…" 6 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp blog
c4a88590cb21 hub.c.163.com/library/mysql "docker-entrypoint.s…" About a minute ago Up About a minute 3306/tcp db
┌──[root@liruilongs.github.io]-[~]

This uses container linking; the default alias is mysql — see the image's documentation for details.
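To see what --link actually sets up, both the injected environment variables and the hosts entry can be checked from inside the web container — a quick sketch, assuming the db and blog containers above are still running (the exact IDs will differ):

docker exec blog cat /etc/hosts          # contains a line like: 172.17.0.2  mysql <container-id> db
docker exec blog env | grep MYSQL_PORT   # the MYSQL_* variables shown in the env dump below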

┌──[root@liruilongs.github.io]-[~]
└─$ docker exec -it db env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c4a88590cb21
TERM=xterm
MYSQL_ROOT_PASSWORD=liruilong
WORDPRESS_DATABASE=wordpress
GOSU_VERSION=1.7
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.18-1debian8
HOME=/root
┌──[root@liruilongs.github.io]-[~]
└─$ docker exec -it blog env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=8a91caa1f9fe
TERM=xterm
MYSQL_PORT=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_NAME=/blog/mysql
MYSQL_ENV_MYSQL_ROOT_PASSWORD=liruilong
MYSQL_ENV_WORDPRESS_DATABASE=wordpress
MYSQL_ENV_GOSU_VERSION=1.7
MYSQL_ENV_MYSQL_MAJOR=5.7
MYSQL_ENV_MYSQL_VERSION=5.7.18-1debian8
PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev libpcre3-dev make pkg-config re2c
PHP_INI_DIR=/usr/local/etc/php
APACHE_CONFDIR=/etc/apache2
APACHE_ENVVARS=/etc/apache2/envvars
PHP_EXTRA_BUILD_DEPS=apache2-dev
PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2
PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
PHP_VERSION=5.6.31
PHP_URL=https://secure.php.net/get/php-5.6.31.tar.xz/from/this/mirror
PHP_ASC_URL=https://secure.php.net/get/php-5.6.31.tar.xz.asc/from/this/mirror
PHP_SHA256=c464af61240a9b7729fabe0314cdbdd5a000a4f0c9bd201f89f8628732fe4ae4
PHP_MD5=
WORDPRESS_VERSION=4.8.1
WORDPRESS_SHA1=5376cf41403ae26d51ca55c32666ef68b10e35a4
HOME=/root
┌──[root@liruilongs.github.io]-[~]
└─$

7. Custom images

A Docker image is built from stacked filesystems. At the bottom is a boot filesystem (bootfs); Docker users will almost never interact with it. In fact, once a container has started, it is moved into memory and the boot filesystem is unmounted to free memory for the initrd disk image.

So far this still looks much like a typical Linux virtualization stack. The second layer of a Docker image is the root filesystem (rootfs), which sits on top of the boot filesystem.

The rootfs can be one or more operating-system filesystems (such as Debian or Ubuntu). In a traditional Linux boot, the root filesystem is first mounted read-only and only switched to read-write after boot finishes and an integrity check passes. In Docker, however, the root filesystem always stays read-only, and Docker uses union mounts to stack additional read-only filesystems on top of it.

A union mount loads several filesystems at once but presents them to the outside as a single filesystem, overlaying the layers on top of one another.

Docker calls such a filesystem an image. One image can be placed on top of another; the image below is called the parent image, and so on down the stack, with the lowest image called the base image. Finally, when a container is started from an image, Docker mounts a read-write filesystem on the very top of the stack, and the program we want to run in Docker executes in this read-write layer.


When Docker first starts a container, the initial read-write layer is empty. Whenever the filesystem changes, the changes are applied to that layer. For example, to modify a file:

  • The file is first copied from the read-only layer below into the read-write layer. The read-only version still exists, but it is hidden by the copy in the read-write layer. This mechanism is called copy-on-write and is one of the techniques that makes Docker so powerful (see the docker diff sketch below).
  • Each read-only image layer never changes afterwards. When a new container is created, Docker builds up the image stack and adds a read-write layer on top. That read-write layer, plus the image layers below it and some configuration data, together make up a container.
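A minimal way to watch the read-write layer at work, assuming any CentOS-based container (the name c1 is just an example):

docker run -dit --name c1 centos
docker exec c1 touch /tmp/hello.txt   # the write lands in the container's read-write layer
docker diff c1                        # shows entries like: A /tmp/hello.txt  (A=added, C=changed, D=deleted)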

Commands
docker build -t v4 . -f filename
docker build -t name .

What CMD does

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -it --rm --name c1 centos_ip_2
[root@4683bca411ec /]# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -it --rm --name c1 centos_ip_2 /bin/bash
[root@08e12bb46bcd /]# exit
exit
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker run -it --rm --name c1 centos_ip_2 echo liruilong
liruilong

**The fewer the layers, the less space the image takes; every RUN instruction creates a layer, so combine commands into a single layer wherever possible.**

┌──[root@liruilongs.github.io]-[~/docker]
└─$ cat Dockerfile
FROM hub.c.163.com/library/centos
MAINTAINER liruilong

RUN yum -y install net-tools && \
yum -y install iproute -y
CMD ["/bin/bash"]
┌──[root@liruilongs.github.io]-[~/docker]
└─$

When using yum in a Dockerfile, it is best to finish with yum clean all to clear the cache.

┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker images | grep centos_
centos_ip_3 latest 93e0d06f7dd5 3 minutes ago 216MB
centos_ip_2 latest 8eea343337d7 6 minutes ago 330MB
┌──[root@liruilongs.github.io]-[~/docker]
└─$ cat Dockerfile
FROM hub.c.163.com/library/centos
MAINTAINER liruilong

RUN yum -y install net-tools && \
yum -y install iproute -y && \
yum clean all

CMD ["/bin/bash"]



┌──[root@liruilongs.github.io]-[~/docker]
└─$

COPY and ADD do essentially the same thing, except that ADD automatically extracts archives while COPY does not.
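A tiny illustrative Dockerfile fragment (the archive name app.tar.gz is hypothetical) showing the difference:

FROM centos
COPY app.tar.gz /opt/     # /opt/app.tar.gz stays a tar archive
ADD  app.tar.gz /srv/     # ADD unpacks it, so /srv/ holds the extracted files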
Build an Nginx image

FROM centos
MAINTAINER liruilong
RUN yum -y install nginx && \
yum clean all
EXPOSE 80
CMD ["nginx", "-g","daemon off;"]

Build an image with SSH enabled

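A rough sketch of such a Dockerfile, assuming a CentOS base image and a throw-away demo root password (redhat) — not necessarily the course's exact version:

FROM centos
MAINTAINER liruilong
# install sshd, generate host keys, and set an assumed demo root password (redhat)
RUN yum -y install openssh-server openssh-clients && \
    yum clean all && \
    ssh-keygen -A && \
    echo 'root:redhat' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]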

8. Setting up a local Docker registry

Set up a local Docker registry
docker pull registry
docker run -d --name registry -p 5000:5000 --restart=always -v /myreg:/var/lib/registry registry

Install the registry image

┌──[root@vms56.liruilongs.github.io]-[~]
└─#yum -y install docker-ce
Loaded plugins: fastestmirror
kubernetes/signature | 844 B 00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x307EA071:
Userid : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
Fingerprint: 7f92 e05b 3109 3bef 5a3c 2d38 feea 9169 307e a071
From : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/
.................
Complete!
┌──[root@vms56.liruilongs.github.io]-[~]
└─#sudo tee /etc/docker/daemon.json <<-'EOF'
> {
> "registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"]
> }
> EOF
{
"registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"]
}
┌──[root@vms56.liruilongs.github.io]-[~]
└─#sudo systemctl daemon-reload
┌──[root@vms56.liruilongs.github.io]-[~]
└─#sudo systemctl restart docker
┌──[root@vms56.liruilongs.github.io]-[~]
└─#docker pull hub.c.163.com/library/registry:latest
latest: Pulling from library/registry
25728a036091: Pull complete
0da5d1919042: Pull complete
e27a85fd6357: Pull complete
d9253dc430fe: Pull complete
916886b856db: Pull complete
Digest: sha256:fce8e7e1569d2f9193f75e9b42efb07a7557fc1e9d2c7154b23da591e324f3d1
Status: Downloaded newer image for hub.c.163.com/library/registry:latest
hub.c.163.com/library/registry:latest
┌──[root@vms56.liruilongs.github.io]-[~]
└─#docker run -dit --name=myreg -p 5000:5000 -v $PWD/myreg:^Cr
┌──[root@vms56.liruilongs.github.io]-[~]
└─#docker history hub.c.163.com/library/registry:latest
IMAGE CREATED CREATED BY SIZE COMMENT
751f286bc25e 4 years ago /bin/sh -c #(nop) CMD ["/etc/docker/registr… 0B
<missing> 4 years ago /bin/sh -c #(nop) ENTRYPOINT ["/entrypoint.… 0B
<missing> 4 years ago /bin/sh -c #(nop) COPY file:7b57f7ab1a8cf85c… 155B
<missing> 4 years ago /bin/sh -c #(nop) EXPOSE 5000/tcp 0B
<missing> 4 years ago /bin/sh -c #(nop) VOLUME [/var/lib/registry] 0B
<missing> 4 years ago /bin/sh -c #(nop) COPY file:6c4758d509045dc4… 295B
<missing> 4 years ago /bin/sh -c #(nop) COPY file:b99d4fe47ad1addf… 22.8MB
<missing> 4 years ago /bin/sh -c set -ex && apk add --no-cache… 5.61MB
<missing> 4 years ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 4 years ago /bin/sh -c #(nop) ADD file:89e72bfc19e81624b… 4.81MB
┌──[root@vms56.liruilongs.github.io]-[~]
└─#docker run -dit --name=myreg -p 5000:5000 -v $PWD/myreg:/var/lib/registry hub.c.163.com/library/registry
317bcc7bd882fd0d29cf9a2898e5cec4378431f029a796b9f9f643762679a14d
┌──[root@vms56.liruilongs.github.io]-[~]
└─#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
317bcc7bd882 hub.c.163.com/library/registry "/entrypoint.sh /etc…" 5 seconds ago Up 3 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp myreg
└─#
└─#

SELinux and firewall settings

┌──[root@vms56.liruilongs.github.io]-[~]
└─#getenforce
Disabled
┌──[root@vms56.liruilongs.github.io]-[~]
└─#systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-10-06 12:57:44 CST; 15min ago
Docs: man:firewalld(1)
Main PID: 608 (firewalld)
Memory: 1.7M
CGroup: /system.slice/firewalld.service
└─608 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER' fa...that name.
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst...that name.
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -j DOCKER' failed: iptables: No...that name.
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,...t chain?).
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -j DOCKER-ISOLATION-STAGE-1' failed: iptab...that name.
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -...that name.
Oct 06 13:05:18 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP' faile...t chain?).
Oct 06 13:08:01 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 0/0 --dport 5000 -j DNAT --to-destin...that name.
Oct 06 13:08:01 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0....t chain?).
Oct 06 13:08:01 vms56.liruilongs.github.io firewalld[608]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 172.17.0.2 -d 172.17.0.2 --dpor...that name.
Hint: Some lines were ellipsized, use -l to show in full.
┌──[root@vms56.liruilongs.github.io]-[~]
└─#systemctl disable firewalld.service --now
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
┌──[root@vms56.liruilongs.github.io]-[~]
└─#

Allow pushing over HTTP (insecure registry)

┌──[root@liruilongs.github.io]-[~]
└─$ cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"]
}
┌──[root@liruilongs.github.io]-[~]
└─$ vim /etc/docker/daemon.json
┌──[root@liruilongs.github.io]-[~]
└─$ cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"],
"insecure-registries": ["192.168.26.56:5000"]

}
┌──[root@liruilongs.github.io]-[~]
└─$
┌──[root@liruilongs.github.io]-[~]
└─$ systemctl restart docker
┌──[root@liruilongs.github.io]-[~]

Using the registry API; a listing script

┌──[root@liruilongs.github.io]-[~/docker]
└─$ vim dockerimages.sh
┌──[root@liruilongs.github.io]-[~/docker]
└─$ sh dockerimages.sh 192.168.26.56
192.168.26.56:5000/db/mysql:v1
192.168.26.56:5000/os/centos:latest
┌──[root@liruilongs.github.io]-[~/docker]
└─$ curl http://192.168.26.56:5000/v2/_catalog
{"repositories":["db/mysql","os/centos"]}
┌──[root@liruilongs.github.io]-[~/docker]
└─$ curl -XGET http://192.168.26.56:5000/v2/_catalog
{"repositories":["db/mysql","os/centos"]}
┌──[root@liruilongs.github.io]-[~/docker]
└─$ curl -XGET http://192.168.26.56:5000/v2/os/centos/tags/list
{"name":"os/centos","tags":["latest"]}
┌──[root@liruilongs.github.io]-[~/docker]
└─$

┌──[root@liruilongs.github.io]-[~/docker]
└─$ cat dockerimages.sh
#!/bin/bash
file=$(mktemp)
curl -s $1:5000/v2/_catalog | jq | egrep -v '\{|\}|\[|]' | awk -F\" '{print $2}' > $file
while read aa ; do
tag=($(curl -s $1:5000/v2/$aa/tags/list | jq | egrep -v '\{|\}|\[|]|name' | awk -F\" '{print $2}'))
for i in ${tag[*]} ; do
echo $1:5000/${aa}:$i
done
done < $file
rm -rf $file
┌──[root@liruilongs.github.io]-[~/docker]
└─$ yum -y install jq

Delete images from the local registry

curl https://raw.githubusercontent.com/burnettk/delete-docker-registry-image/master/delete_docker_registry_image.py | sudo tee /usr/local/bin/delete_docker_registry_image >/dev/null
sudo chmod a+x /usr/local/bin/delete_docker_registry_image
export REGISTRY_DATA_DIR=/opt/data/registry/docker/registry/v2

delete_docker_registry_image --image testrepo/awesomeimage --dry-run
delete_docker_registry_image --image testrepo/awesomeimage
delete_docker_registry_image --image testrepo/awesomeimage:supertag

9. Using Harbor

Using Harbor
Install and start Docker, and install docker-compose
Upload the Harbor offline installer
Load the Harbor images
Edit harbor.yml
Change hostname to your own host name; if you are not using certificates, comment out the https section
harbor_admin_password is the web login password
Install compose
Run the script ./install.sh
Open the IP in a browser
docker login <IP> — a .docker directory will appear in your home directory
┌──[root@vms56.liruilongs.github.io]-[~]
└─#yum install -y docker-compose
┌──[root@vms56.liruilongs.github.io]-[/]
└─#ls
bin dev harbor-offline-installer-v2.0.6.tgz lib machine-id mnt proc run srv tmp var
boot etc home lib64 media opt root sbin sys usr
┌──[root@vms56.liruilongs.github.io]-[/]
└─#tar zxvf harbor-offline-installer-v2.0.6.tgz
harbor/harbor.v2.0.6.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
┌──[root@vms56.liruilongs.github.io]-[/]
└─#docker load -i harbor/harbor.v2.0.6.tar.gz

Edit the configuration file

┌──[root@vms56.liruilongs.github.io]-[/]
└─#cd harbor/
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#ls
common.sh harbor.v2.0.6.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#cp harbor.yml.tmpl harbor.yml
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#ls
common.sh harbor.v2.0.6.tar.gz harbor.yml harbor.yml.tmpl install.sh LICENSE prepare
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#vim harbor.yml
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#

harbor.yml

4 # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
5 hostname: 192.168.26.56
6
7 # http related config
.......
12 # https related config
13 #https:
14 # https port for harbor, default is 443
15 # port: 443
16 # The path of cert and key files for nginx
17 # certificate: /your/certificate/path
18 # private_key: /your/private/key/path
....
33 # Remember Change the admin password from UI after launching Harbor.
34 harbor_admin_password: Harbor12345
35
36 # Harbor DB configuration

./prepare && ./install.sh

┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#./prepare
prepare base dir is set to /harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
┌──[root@vms56.liruilongs.github.io]-[/harbor]
└─#./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.9

[Step 1]: checking docker-compose is installed ...
Harbor web UI (screenshots omitted)
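Before the push below, the client logs in and tags the image for a Harbor project — a sketch assuming the default library project, the admin account from harbor.yml, and that 192.168.26.56 is also listed in the client's insecure-registries:

docker login 192.168.26.56        # admin / Harbor12345 by default
docker tag mysql:latest 192.168.26.56/library/mysql:latest
docker push 192.168.26.56/library/mysql:latest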
┌──[root@liruilongs.github.io]-[~/docker]
└─$ docker push 192.168.26.56/library/mysql
Using default tag: latest
The push refers to repository [192.168.26.56/library/mysql]
8129a85b4056: Pushed
3c376267ac82: Pushed
fa9efdcb088a: Pushed
9e615ff77b4f: Pushed
e5de8ba20fae: Pushed
2bee3420217b: Pushed
904af8e2b2d5: Pushed
daf31ec3573d: Pushed
da4155a7d640: Pushed
3b7c5f5acc82: Pushed
295d6a056bfd: Pushed
latest: digest: sha256:c0806ac73235043de2a6cb4738bb2f6a74f71d9c7aa0f19c8e7530fd6c299e75 size: 2617
┌──[root@liruilongs.github.io]-[~/docker]
└─$
Harbor web UI (screenshot omitted)

10. Limiting container resources

Limit resources with cgroups
docker run -itd --name=c3 --cpuset-cpus 0 -m 200M centos
docker run -itd --name=c2 -m 200M centos

Understanding cgroups

  • Memory limits
    /etc/systemd/system/memload.service.d
    cat 00-aa.conf
    [Service]
    MemoryLimit=512M
  • CPU affinity limits
    ps mo pid,comm,psr $(pgrep httpd)
    /etc/systemd/system/httpd.service.d
    cat 00-aa.conf
    [Service]
    CPUAffinity=0
    How to limit a container
    ┌──[root@liruilongs.github.io]-[/]
    └─$ docker exec -it c1 bash
    [root@55e45b34d93d /]# ls
    bin etc lib lost+found mnt proc run srv tmp var
    dev home lib64 media opt root sbin sys usr
    [root@55e45b34d93d /]# cd opt/
    [root@55e45b34d93d opt]# ls
    memload-7.0-1.r29766.x86_64.rpm
    [root@55e45b34d93d opt]# rpm -ivh memload-7.0-1.r29766.x86_64.rpm
    Verifying... ################################# [100%]
    Preparing... ################################# [100%]
    Updating / installing...
    1:memload-7.0-1.r29766 ################################# [100%]
    [root@55e45b34d93d opt]# exit
    exit
    ┌──[root@liruilongs.github.io]-[/]
    └─$ docker stats
    CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
    55e45b34d93d c1 0.00% 8.129MiB / 3.843GiB 0.21% 648B / 0B 30.4MB / 11.5MB 1
[root@55e45b34d93d /]# memload 1000
Attempting to allocate 1000 Mebibytes of resident memory...
^C
[root@55e45b34d93d /]#
┌──[root@liruilongs.github.io]-[/]
└─$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
55e45b34d93d c1 0.02% 165.7MiB / 3.843GiB 4.21% 648B / 0B 30.5MB / 11.5MB 3

内存限制

┌──[root@liruilongs.github.io]-[/]
└─$ docker run -itd --name=c2 -m 200M centos
3b2df1738e84159f4fa02dadbfc285f6da8ddde4d94cb449bc775c9a70eaa4ea
┌──[root@liruilongs.github.io]-[/]
└─$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
3b2df1738e84 c2 0.00% 528KiB / 200MiB 0.26% 648B / 0B 0B / 0B 1
55e45b34d93d c1 0.00% 8.684MiB / 3.843GiB 0.22% 648B / 0B 30.5MB / 11.5MB 2

对容器CPU的限制

┌──[root@liruilongs.github.io]-[/]
└─$ ps mo pid,psr $(pgrep cat)
┌──[root@liruilongs.github.io]-[/]
└─$ docker run -itd --name=c3 --cpuset-cpus 0 -m 200M centos
a771eed8c7c39cd410bd6f43909a67bfcf181d87fcafffe57001f17f3fdff408

11.监控容器

cadvisor,读取宿主机信息

docker pull hub.c.163.com/xbingo/cadvisor:latest

docker run \
-v /var/run:/var/run \
-v /sys:/sys:ro \
-v /var/lib/docker:/var/lib/docker:ro \
-d -p 8080:8080 --name=mon \
hub.c.163.com/xbingo/cadvisor:latest
cAdvisor 的 Web 监控界面(截图略)。
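可以先用 curl 简单确认 cAdvisor 已经在 8080 端口提供服务(示意,cAdvisor 同时暴露了 Prometheus 格式的指标接口):

curl -s http://localhost:8080/metrics | head -5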

weavescope

┌──[root@liruilongs.github.io]-[/]
└─$ chmod +x ./scope
┌──[root@liruilongs.github.io]-[/]
└─$ ./scope launch
Unable to find image 'weaveworks/scope:1.13.1' locally
1.13.1: Pulling from weaveworks/scope
c9b1b535fdd9: Pull complete
550073704c23: Pull complete
8738e5bbaf1d: Pull complete
0a8826d26027: Pull complete
387c1aa951b4: Pull complete
e72d45461bb9: Pull complete
75cc44b65e98: Pull complete
11f7584a6ade: Pull complete
a5aa3ebbe1c2: Pull complete
7cdbc028c8d2: Pull complete
Digest: sha256:4342f1c799aba244b975dcf12317eb11858f9879a3699818e2bf4c37887584dc
Status: Downloaded newer image for weaveworks/scope:1.13.1
3254bcd54a7b2b1a5ece2ca873ab18c3215484e6b4f83617a522afe4e853c378
Scope probe started
The Scope App is not responding. Consult the container logs for further details.
┌──[root@liruilongs.github.io]-[/]
└─$
Weave Scope 的 Web 界面(截图略)。
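如果出现上面 Scope App is not responding 的提示,可以稍等片刻或查看 scope 容器日志;正常情况下 Scope 的 Web UI 监听在 4040 端口,可以简单验证一下(示意):

curl -sI http://localhost:4040 | head -3
# 浏览器访问 http://宿主机IP:4040 即可看到容器拓扑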

二、kubernetes安装


1.ansible配置

这里我们用ansible来安装

  1. 配置控制机到受控机的ssh免密(见下方示意)
  2. 配置 ansible配置文件,主机清单
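其中第 1 步的 ssh 免密可以参考下面的示意(IP 按实际主机清单调整):

# 在控制机上生成密钥对,并把公钥分发到各受控机
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.26.81 192.168.26.82 192.168.26.83; do ssh-copy-id root@$ip; done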
[root@vms81 ~]# ls
anaconda-ks.cfg calico_3_14.tar calico.yaml one-client-install.sh set.sh
[root@vms81 ~]# mkdir ansible
[root@vms81 ~]# cd ansible/
[root@vms81 ansible]# ls
[root@vms81 ansible]# vim ansible.cfg
[root@vms81 ansible]# cat ansible.cfg
[defaults]
# 主机清单文件,就是要控制的主机列表
inventory=inventory
# 连接受管机器的远程的用户名
remote_user=root
# 角色目录
roles_path=roles
# 设置用户提权方式为 sudo
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[root@vms81 ansible]# vim inventory
[root@vms81 ansible]# cat inventory
[node]
192.168.26.82
192.168.26.83
[master]
192.168.26.81

[root@vms81 ansible]#
[root@vms81 ansible]# ansible all --list-hosts
hosts (3):
192.168.26.82
192.168.26.83
192.168.26.81
[root@vms81 ansible]# ansible all -m ping
192.168.26.81 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.26.83 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.26.82 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[root@vms81 ansible]#

2.所有节点操作

需要在所有节点上完成以下操作:
关闭防火墙、selinux,设置 hosts
关闭 swap
配置 yum 源
安装 docker-ce,导入缺少的镜像
设置内核参数
安装 k8s 相关软件包
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim init_k8s_playbook.yml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ls
ansible.cfg init_k8s_playbook.yml inventory
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim daemon.json
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat daemon.json
{
"registry-mirrors": ["https://2tefyfv7.mirror.aliyuncs.com"]
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim hosts
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$car hosts
-bash: car: command not found
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.81 vms81.liruilongs.github.io vms81
192.168.26.82 vms82.liruilongs.github.io vms82
192.168.26.83 vms83.liruilongs.github.io vms83
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim k8s.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat init_k8s_playbook.yml
- name: init k8s
hosts: all
tasks:
# 关闭防火墙
- shell: firewall-cmd --set-default-zone=trusted
# 关闭selinux
- shell: getenforce
register: out
- debug: msg="{{out}}"
- shell: setenforce 0
when: out.stdout != "Disabled"
- replace:
path: /etc/selinux/config
regexp: "SELINUX=enforcing"
replace: "SELINUX=disabled"
- shell: cat /etc/selinux/config
register: out
- debug: msg="{{out}}"
- copy:
src: ./hosts
dest: /etc/hosts
force: yes
# 关闭交换分区
- shell: swapoff -a
- shell: sed -i '/swap/d' /etc/fstab
- shell: cat /etc/fstab
register: out
- debug: msg="{{out}}"
# 配置yum源
- shell: tar -cvf /etc/yum.tar /etc/yum.repos.d/
- shell: rm -rf /etc/yum.repos.d/*
- shell: wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
# 安装docker-ce
- yum:
name: docker-ce
state: present
# 配置docker加速
- shell: mkdir /etc/docker
- copy:
src: ./daemon.json
dest: /etc/docker/daemon.json
- shell: systemctl daemon-reload
- shell: systemctl restart docker
# 配置属性,安装k8s相关包
- copy:
src: ./k8s.conf
dest: /etc/sysctl.d/k8s.conf
- shell: yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
# 缺少镜像导入
- copy:
src: ./coredns-1.21.tar
dest: /root/coredns-1.21.tar
- shell: docker load -i /root/coredns-1.21.tar
# 启动服务
- shell: systemctl restart kubelet
- shell: systemctl enable kubelet
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ls
ansible.cfg coredns-1.21.tar daemon.json hosts init_k8s_playbook.yml inventory k8s.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
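playbook 和相关文件准备好之后,在控制机上执行即可(示意,文件名即上面的 init_k8s_playbook.yml):

ansible-playbook init_k8s_playbook.yml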
执行 playbook 的结果(截图略)。

init_k8s_playbook.yml


- name: init k8s
hosts: all
tasks:
# 关闭防火墙
- shell: firewall-cmd --set-default-zone=trusted
# 关闭selinux
- shell: getenforce
register: out
- debug: msg="{{out}}"
- shell: setenforce 0
when: out.stdout != "Disabled"
- replace:
path: /etc/selinux/config
regexp: "SELINUX=enforcing"
replace: "SELINUX=disabled"
- shell: cat /etc/selinux/config
register: out
- debug: msg="{{out}}"
- copy:
src: ./hosts
dest: /etc/hosts
force: yes
# 关闭交换分区
- shell: swapoff -a
- shell: sed -i '/swap/d' /etc/fstab
- shell: cat /etc/fstab
register: out
- debug: msg="{{out}}"
# 配置yum源
- shell: tar -cvf /etc/yum.tar /etc/yum.repos.d/
- shell: rm -rf /etc/yum.repos.d/*
- shell: wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
# 安装docker-ce
- yum:
name: docker-ce
state: present
# 配置docker加速
- shell: mkdir /etc/docker
- copy:
src: ./daemon.json
dest: /etc/docker/daemon.json
- shell: systemctl daemon-reload
- shell: systemctl restart docker
- shell: systemctl enable docker --now
# 配置属性,安装k8s相关包
- copy:
src: ./k8s.conf
dest: /etc/sysctl.d/k8s.conf
- shell: yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
# 缺少镜像导入
- copy:
src: ./coredns-1.21.tar
dest: /root/coredns-1.21.tar
- shell: docker load -i /root/coredns-1.21.tar
# 启动服务
- shell: systemctl restart kubelet
- shell: systemctl enable kubelet
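需要注意,playbook 只是把 k8s.conf 拷贝到了 /etc/sysctl.d/,内核参数还需要加载一次才会生效,可以在 playbook 里补一条任务,或者用 ad-hoc 命令执行(示意,必要时先加载 br_netfilter 模块):

ansible all -m shell -a "modprobe br_netfilter; sysctl --system"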

较新版本的环境需要把 docker 的 cgroup 驱动改为 systemd,与 kubelet 保持一致(kubeadm init 的 WARNING 中也会提示):

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible master,node -m shell -a "sed -i '3i ,\"exec-opts\": [\"native.cgroupdriver=systemd\"]' /etc/docker/daemon.json"
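daemon.json 修改之后,还需要重启 docker 才能让新的 cgroup 驱动生效(示意):

ansible master,node -m shell -a "systemctl restart docker"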

检查一下

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images"
192.168.26.83 | CHANGED | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 11 months ago 42.5MB
192.168.26.81 | CHANGED | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 11 months ago 42.5MB
192.168.26.82 | CHANGED | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 11 months ago 42.5MB
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

3.master和node节点操作

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible master -m shell -a "kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.1 --pod-network-cidr=10.244.0.0/16"
192.168.26.81 | CHANGED | rc=0 >>
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vms81.liruilongs.github.io] and IPs [10.96.0.1 192.168.26.81]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vms81.liruilongs.github.io] and IPs [192.168.26.81 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vms81.liruilongs.github.io] and IPs [192.168.26.81 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.005092 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vms81.liruilongs.github.io as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vms81.liruilongs.github.io as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8e0tvh.1n0oqtp4lzwauqh0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.26.81:6443 --token 8e0tvh.1n0oqtp4lzwauqh0 \
--discovery-token-ca-cert-hash sha256:7cdcd562e1f4d9a00a07e7b2c938ea3fbc81b8c42e475fe2b314863a764afe43 [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir -p $HOME/.kube
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$export KUBECONFIG=/etc/kubernetes/admin.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io NotReady control-plane,master 6m25s v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

把node加入到集群
kubeadm join IP:6443 --token TOKEN 这个命令在上面 kubeadm init 的输出里有提示;如果后期忘记了,可以通过 kubeadm token create --print-join-command 重新查看。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm token create --print-join-command
kubeadm join 192.168.26.81:6443 --token j8poau.7praw6cppmvttbpa --discovery-token-ca-cert-hash sha256:7cdcd562e1f4d9a00a07e7b2c938ea3fbc81b8c42e475fe2b314863a764afe43
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "kubeadm join 192.168.26.81:6443 --token j8poau.7praw6cppmvttbpa --discovery-token-ca-cert-hash sha256:7cdcd562e1f4d9a00a07e7b2c938ea3fbc81b8c42e475fe2b314863a764afe43"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io NotReady control-plane,master 11m v1.21.1
vms82.liruilongs.github.io NotReady <none> 12s v1.21.1
vms83.liruilongs.github.io NotReady <none> 11s v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

配置网络

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m copy -a "src=./calico-3.19-img.tar dest=/root/calico-3.19-img.tar "
192.168.26.81 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "d150c7938f45a4c4dba3985a3a507a4d3ac025a0",
"dest": "/root/calico-3.19-img.tar",
"gid": 0,
"group": "root",
"md5sum": "ab25fc92d9156e8c28119b0d66d44f3a",
"mode": "0644",
"owner": "root",
"size": 399186944,
"src": "/root/.ansible/tmp/ansible-tmp-1633540967.78-26777-3922197447943/source",
"state": "file",
"uid": 0
}
192.168.26.82 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "d150c7938f45a4c4dba3985a3a507a4d3ac025a0",
"dest": "/root/calico-3.19-img.tar",
"gid": 0,
"group": "root",
"md5sum": "ab25fc92d9156e8c28119b0d66d44f3a",
"mode": "0644",
"owner": "root",
"size": 399186944,
"src": "/root/.ansible/tmp/ansible-tmp-1633540967.78-26773-26339453791576/source",
"state": "file",
"uid": 0
}
192.168.26.83 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "d150c7938f45a4c4dba3985a3a507a4d3ac025a0",
"dest": "/root/calico-3.19-img.tar",
"gid": 0,
"group": "root",
"md5sum": "ab25fc92d9156e8c28119b0d66d44f3a",
"mode": "0644",
"owner": "root",
"size": 399186944,
"src": "/root/.ansible/tmp/ansible-tmp-1633540967.79-26775-207298273694843/source",
"state": "file",
"uid": 0
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker load -i /root/calico-3.19-img.tar"
192.168.26.81 | CHANGED | rc=0 >>
Loaded image: calico/cni:v3.19.1
Loaded image: calico/pod2daemon-flexvol:v3.19.1
Loaded image: calico/node:v3.19.1
Loaded image: calico/kube-controllers:v3.19.1
192.168.26.83 | CHANGED | rc=0 >>
Loaded image: calico/cni:v3.19.1
Loaded image: calico/pod2daemon-flexvol:v3.19.1
Loaded image: calico/node:v3.19.1
Loaded image: calico/kube-controllers:v3.19.1
192.168.26.82 | CHANGED | rc=0 >>
Loaded image: calico/cni:v3.19.1
Loaded image: calico/pod2daemon-flexvol:v3.19.1
Loaded image: calico/node:v3.19.1
Loaded image: calico/kube-controllers:v3.19.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

修改配置文件

vim calico.yaml

### 修改为 kubeadm init 时通过 --pod-network-cidr 定义的 Pod 网段
3683 - name: CALICO_IPV4POOL_CIDR
3684 value: "10.244.0.0/16"
3685 # Disable file logging so `kubectl logs` works.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io NotReady control-plane,master 30m v1.21.1
vms82.liruilongs.github.io NotReady <none> 19m v1.21.1
vms83.liruilongs.github.io Ready <none> 19m v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 30m v1.21.1
vms82.liruilongs.github.io Ready <none> 19m v1.21.1
vms83.liruilongs.github.io Ready <none> 19m v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

设置 kubectl 命令可以用 tab 键补齐,编辑 vim /etc/profile:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim /etc/profile
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$source /etc/profile
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$head -10 /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.
source <(kubectl completion bash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

把 source <(kubectl completion bash) 添加到 /etc/profile,前提是 bash-completion.noarch 必须已经安装。
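如果 bash-completion 还没有安装,可以先安装再重新加载 profile(示意):

yum install -y bash-completion
source /etc/profile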

基本命令

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vms81.liruilongs.github.io Ready control-plane,master 39m v1.21.1 192.168.26.81 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://20.10.9
vms82.liruilongs.github.io Ready <none> 28m v1.21.1 192.168.26.82 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://20.10.9
vms83.liruilongs.github.io Ready <none> 28m v1.21.1 192.168.26.83 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://20.10.9
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm config view
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
apiServer:
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.1
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.26.81:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:12:29Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl cluster-info
Kubernetes control plane is running at https://192.168.26.81:6443
CoreDNS is running at https://192.168.26.81:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
crd.projectcalico.org/v1
discovery.k8s.io/v1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
extensions/v1beta1
flowcontrol.apiserver.k8s.io/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1
node.k8s.io/v1beta1
policy/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

删除节点

删除节点:
kubectl drain vms81.liruilongs.github.io --delete-local-data --force --ignore-daemonsets 先驱逐节点上的 Pod,并把节点标记为不可调度
kubectl delete node vms81.liruilongs.github.io 再把节点从集群中删除
重新添加节点:
kubeadm reset 在该节点上执行重置
kubeadm join 192.168.26.81:6443 --token j8poau.7praw6cppmvttbpa --discovery-token-ca-cert-hash sha256:7cdcd562e1f4d9a00a07e7b2c938ea3fbc81b8c42e475fe2b314863a764afe43 重新加入集群

master 节点如果被删除,需要重新执行 kubeadm init 初始化,并重新配置网络(安装 calico)。

4.设置 metrics-server 监控 Pod 及节点负载

使用 docker 时可以通过 docker stats 查看容器的资源占用;在 k8s 里则可以部署 metrics-server,通过 kubectl top 查看节点和 Pod 的负载。

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
781c898eea19 k8s_kube-scheduler_kube-scheduler-vms81.liruilongs.github.io_kube-system_5bd71ffab3a1f1d18cb589aa74fe082b_18 0.15% 23.22MiB / 3.843GiB 0.59% 0B / 0B 0B / 0B 7
acac8b21bb57 k8s_kube-controller-manager_kube-controller-manager-vms81.liruilongs.github.io_kube-system_93d9ae7b5a4ccec4429381d493b5d475_18 1.18% 59.16MiB / 3.843GiB 1.50% 0B / 0B 0B / 0B 6
fe97754d3dab k8s_calico-node_calico-node-skzjp_kube-system_a211c8be-3ee1-44a0-a4ce-3573922b65b2_14 4.89% 94.25MiB / 3.843GiB 2.39% 0B / 0B 0B / 4.1kB 40

相关镜像

curl -Ls https://api.github.com/repos/kubernetes-sigs/metrics-server/tarball/v0.3.6 -o metrics-server-v0.3.6.tar.gz
docker pull mirrorgooglecontainers/metrics-server-amd64:v0.3.6
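如果镜像是从 mirrorgooglecontainers 拉取的,还需要重新打成 deployment 里引用的 k8s.gcr.io 名字(示意):

docker tag mirrorgooglecontainers/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6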

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m copy -a "src=./metrics-img.tar dest=/root/metrics-img.tar"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "systemctl restart docker "
192.168.26.82 | CHANGED | rc=0 >>

192.168.26.83 | CHANGED | rc=0 >>

192.168.26.81 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker load -i /root/metrics-img.tar"
192.168.26.83 | CHANGED | rc=0 >>
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
192.168.26.81 | CHANGED | rc=0 >>
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
192.168.26.82 | CHANGED | rc=0 >>
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

修改metrics-server-deployment.yaml

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mv kubernetes-sigs-metrics-server-d1f4f6f/ metrics
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cd metrics/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics]
└─$ls
cmd deploy hack OWNERS README.md version
code-of-conduct.md Gopkg.lock LICENSE OWNERS_ALIASES SECURITY_CONTACTS
CONTRIBUTING.md Gopkg.toml Makefile pkg vendor
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics]
└─$cd deploy/1.8+/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$ls
aggregated-metrics-reader.yaml metrics-apiservice.yaml resource-reader.yaml
auth-delegator.yaml metrics-server-deployment.yaml
auth-reader.yaml metrics-server-service.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$vim metrics-server-deployment.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl apply -f .
31       - name: metrics-server
32 image: k8s.gcr.io/metrics-server-amd64:v0.3.6
33 #imagePullPolicy: Always
34 imagePullPolicy: IfNotPresent
35 command:
36 - /metrics-server
37 - --metric-resolution=30s
38 - --kubelet-insecure-tls
39 - --kubelet-preferred-address-types=InternalIP
40 volumeMounts:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-78d6f96c7b-79xx4 1/1 Running 2 3h15m
calico-node-ntm7v 1/1 Running 1 12h
calico-node-skzjp 1/1 Running 4 12h
calico-node-v7pj5 1/1 Running 1 12h
coredns-545d6fc579-9h2z4 1/1 Running 2 3h15m
coredns-545d6fc579-xgn8x 1/1 Running 2 3h16m
etcd-vms81.liruilongs.github.io 1/1 Running 1 13h
kube-apiserver-vms81.liruilongs.github.io 1/1 Running 2 13h
kube-controller-manager-vms81.liruilongs.github.io 1/1 Running 4 13h
kube-proxy-rbhgf 1/1 Running 1 13h
kube-proxy-vm2sf 1/1 Running 1 13h
kube-proxy-zzbh9 1/1 Running 1 13h
kube-scheduler-vms81.liruilongs.github.io 1/1 Running 5 13h
metrics-server-bcfb98c76-gttkh 1/1 Running 0 70m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl top nodes
W1007 14:23:06.102605 102831 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
vms81.liruilongs.github.io 555m 27% 2025Mi 52%
vms82.liruilongs.github.io 204m 10% 595Mi 15%
vms83.liruilongs.github.io 214m 10% 553Mi 14%
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$
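除了节点,也可以查看 Pod 级别的资源占用(示意):

kubectl top pods -n kube-system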

5.了解namespace

不同的 namespace 之间互相隔离,常用命令:
kubectl get ns
kubectl config get-contexts
kubectl config set-context 集群名 --namespace=命名空间
kubectl config set-context --current --namespace=命名空间

kube-system 是 kubeadm 默认使用的命名空间,集群本身的各种 Pod 都运行在这里;业务 Pod 可以通过命名空间相互隔离。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get namespaces
NAME STATUS AGE
default Active 13h
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get ns
NAME STATUS AGE
default Active 13h
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

命名空间基本命令

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl create ns liruilong
namespace/liruilong created
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get ns
NAME STATUS AGE
default Active 13h
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
liruilong Active 4s
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl create ns k8s-demo
namespace/k8s-demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get ns
NAME STATUS AGE
default Active 13h
k8s-demo Active 3s
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
liruilong Active 20s
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl delete ns k8s-demo
namespace "k8s-demo" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get ns
NAME STATUS AGE
default Active 13h
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
liruilong Active 54s
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

命名空间切换

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$vim config
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* context1 cluster1 kubernetes-admin1
context2 cluster2 kubernetes-admin2
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get ns
NAME STATUS AGE
default Active 23h
kube-node-lease Active 23h
kube-public Active 23h
kube-system Active 23h
liruilong Active 10h
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config set-context context2 --namespace=kube-system
Context "context2" modified.
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* context1 cluster1 kubernetes-admin1
context2 cluster2 kubernetes-admin2 kube-system
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config set-context context1 --namespace=kube-public
Context "context1" modified.

切换名称空间

kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
kubectl config view | grep namespace
kubectl get pods

k8s多集群切换

再创建一个新的集群:配置 ssh 免密,把新节点加入主机清单,然后复用之前的 playbook 和配置文件,稍作修改即可。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat inventory
[node]
192.168.26.82
192.168.26.83
[master]
192.168.26.81
[temp]
192.168.26.91
192.168.26.92
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat init_c2_playbook.yml
- name: init k8s
hosts: temp
tasks:
# 关闭防火墙
- shell: firewall-cmd --set-default-zone=trusted
# 关闭selinux
- shell: getenforce
register: out
- debug: msg="{{out}}"
- shell: setenforce 0
when: out.stdout != "Disabled"
- replace:
path: /etc/selinux/config
regexp: "SELINUX=enforcing"
replace: "SELINUX=disabled"
- shell: cat /etc/selinux/config
register: out
- debug: msg="{{out}}"
- copy:
src: ./hosts_c2
dest: /etc/hosts
force: yes
# 关闭交换分区
- shell: swapoff -a
- shell: sed -i '/swap/d' /etc/fstab
- shell: cat /etc/fstab
register: out
- debug: msg="{{out}}"
# 配置yum源
- shell: tar -cvf /etc/yum.tar /etc/yum.repos.d/
- shell: rm -rf /etc/yum.repos.d/*
- shell: wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
# 安装docker-ce
- yum:
name: docker-ce
state: present
# 配置docker加速
- shell: mkdir /etc/docker
- copy:
src: ./daemon.json
dest: /etc/docker/daemon.json
- shell: systemctl daemon-reload
- shell: systemctl restart docker
# 配置属性,安装k8s相关包
- copy:
src: ./k8s.conf
dest: /etc/sysctl.d/k8s.conf
- shell: yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
# 缺少镜像导入
- copy:
src: ./coredns-1.21.tar
dest: /root/coredns-1.21.tar
- shell: docker load -i /root/coredns-1.21.tar
# 启动服务
- shell: systemctl restart kubelet
- shell: systemctl enable kubelet
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

第二个集群,一个node节点,一个master节点

[root@vms91 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms91.liruilongs.github.io Ready control-plane,master 139m v1.21.1
vms92.liruilongs.github.io Ready <none> 131m v1.21.1
[root@vms91 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.26.91:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
[root@vms91 ~]#
一个控制台管理多个集群(多集群切换):
对于一个 kubeconfig 文件来说,包含 3 个部分:
cluster:集群信息
context:关联 cluster、user 以及默认的命名空间
user:用户密钥

需要配置config,多个集群配置文件合并为一个

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$pwd;ls
/root/.kube
cache config

config

apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0.........0tCg==
server: https://192.168.26.81:6443
name: cluster1
- cluster:
certificate-authority-data: LS0.........0tCg==
server: https://192.168.26.91:6443
name: cluster2
contexts:
- context:
cluster: cluster1
namespace: kube-public
user: kubernetes-admin1
name: context1
- context:
cluster: cluster2
namespace: kube-system
user: kubernetes-admin2
name: context2
current-context: context2
kind: Config
preferences: {}
users:
- name: kubernetes-admin1
user:
client-certificate-data: LS0.......0tCg==
client-key-data: LS0......LQo=
- name: kubernetes-admin2
user:
client-certificate-data: LS0.......0tCg==
client-key-data: LS0......0tCg==
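除了手工编辑合并,也可以利用 KUBECONFIG 环境变量让 kubectl 自动合并多个配置文件(示意,假设两个集群的配置分别保存为 ~/.kube/config-c1 和 ~/.kube/config-c2;合并前需要先把两份文件里重名的 cluster/context/user 改成不同的名字,例如上面的 cluster1/cluster2):

KUBECONFIG=~/.kube/config-c1:~/.kube/config-c2 kubectl config view --flatten > ~/.kube/config
kubectl config get-contexts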

多集群切换:kubectl config use-context context2

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* context1 cluster1 kubernetes-admin1 kube-public
context2 cluster2 kubernetes-admin2 kube-system
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 23h v1.21.1
vms82.liruilongs.github.io Ready <none> 23h v1.21.1
vms83.liruilongs.github.io Ready <none> 23h v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config use-context context2
Switched to context "context2".
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
context1 cluster1 kubernetes-admin1 kube-public
* context2 cluster2 kubernetes-admin2 kube-system
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms91.liruilongs.github.io Ready control-plane,master 8h v1.21.1
vms92.liruilongs.github.io Ready <none> 8h v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$

三、ETCD

单节点ETCD

┌──[root@liruilongs.github.io]-[~]
└─$ yum -y install etcd
┌──[root@liruilongs.github.io]-[~]
└─$ rpm -qc etcd
/etc/etcd/etcd.conf
┌──[root@liruilongs.github.io]-[~]
└─$ vim $(rpm -qc etcd)
┌──[root@liruilongs.github.io]-[~]
└─$
#[Member]
# 数据位置
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# 数据同步端口
ETCD_LISTEN_PEER_URLS="http://192.168.26.91:2380,http://localhost:2380"
# 读写端口
ETCD_LISTEN_CLIENT_URLS="http://192.168.26.91:2379,http://localhost:2379"
ETCD_NAME="default"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
┌──[root@liruilongs.github.io]-[~]
└─$ systemctl enable etcd --now
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://localhost:2379 isLeader=true
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl ls /
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl mkdir cka
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl ls /
/cka
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl rmdir /cka
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl ls /
┌──[root@liruilongs.github.io]-[~]
└─$

etcdctl v2 和 v3 API 版本切换

┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl -v
etcdctl version: 3.3.11
API version: 2
┌──[root@liruilongs.github.io]-[~]
└─$ export ETCDCTL_API=3
┌──[root@liruilongs.github.io]-[~]
└─$ etcdctl version
etcdctl version: 3.3.11
API version: 3.3
┌──[root@liruilongs.github.io]-[~]
└─$

etcd集群

ETCD 集群是一个分布式系统,使用 Raft 协议来维护集群内各个节点状态的一致性。
主机状态有 Leader、Follower、Candidate 三种:
集群初始化时,每个节点都是 Follower 角色;
Leader 通过心跳与其他节点同步数据;
当 Follower 在一定时间内没有收到来自主节点的心跳,会把自己的角色改为 Candidate,并发起一次选主投票。
配置 etcd 集群时,建议节点数尽可能为奇数,而不要是偶数。

创建集群

环境准备

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat inventory
......
[etcd]
192.168.26.100
192.168.26.101
192.168.26.102
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m ping
192.168.26.100 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.26.102 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.26.101 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m yum -a "name=etcd state=installed"

配置文件修改

这里用前两台(192.168.26.100,192.168.26.101)初始化集群,第三台(192.168.26.102 )以添加的方式加入集群

本机编写配置文件。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.26.100:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.26.100:2379,http://localhost:2379"

ETCD_NAME="etcd-100"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.100:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.100:2379"

ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

把配置文件拷贝到192.168.26.100,192.168.26.101

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100,192.168.26.101 -m copy -a "src=./etcd.conf dest=/etc/etcd/etcd.conf force=yes"
192.168.26.101 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "bae3b8bc6636bf7304cce647b7068aa45ced859b",
"dest": "/etc/etcd/etcd.conf",
"gid": 0,
"group": "root",
"md5sum": "5f2a3fbe27515f85b7f9ed42a206c2a6",
"mode": "0644",
"owner": "root",
"size": 533,
"src": "/root/.ansible/tmp/ansible-tmp-1633800905.88-59602-39965601417441/source",
"state": "file",
"uid": 0
}
192.168.26.100 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "bae3b8bc6636bf7304cce647b7068aa45ced859b",
"dest": "/etc/etcd/etcd.conf",
"gid": 0,
"group": "root",
"md5sum": "5f2a3fbe27515f85b7f9ed42a206c2a6",
"mode": "0644",
"owner": "root",
"size": 533,
"src": "/root/.ansible/tmp/ansible-tmp-1633800905.9-59600-209338664801782/source",
"state": "file",
"uid": 0
}

检查配置文件

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100,192.168.26.101 -m shell -a "cat /etc/etcd/etcd.conf"
192.168.26.101 | CHANGED | rc=0 >>
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.26.100:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.26.100:2379,http://localhost:2379"

ETCD_NAME="etcd-100"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.100:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.100:2379"

ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
192.168.26.100 | CHANGED | rc=0 >>
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.26.100:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.26.100:2379,http://localhost:2379"

ETCD_NAME="etcd-100"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.100:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.100:2379"

ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

修改101的配置文件

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.101 -m shell -a "sed -i '1,9s/100/101/g' /etc/etcd/etcd.conf"
192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100,192.168.26.101 -m shell -a "cat -n /etc/etcd/etcd.conf"
192.168.26.100 | CHANGED | rc=0 >>
1 ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
2
3 ETCD_LISTEN_PEER_URLS="http://192.168.26.100:2380,http://localhost:2380"
4 ETCD_LISTEN_CLIENT_URLS="http://192.168.26.100:2379,http://localhost:2379"
5
6 ETCD_NAME="etcd-100"
7 ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.100:2380"
8
9 ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.100:2379"
10
11 ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
12 ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
13 ETCD_INITIAL_CLUSTER_STATE="new"
192.168.26.101 | CHANGED | rc=0 >>
1 ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
2
3 ETCD_LISTEN_PEER_URLS="http://192.168.26.101:2380,http://localhost:2380"
4 ETCD_LISTEN_CLIENT_URLS="http://192.168.26.101:2379,http://localhost:2379"
5
6 ETCD_NAME="etcd-101"
7 ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.101:2380"
8
9 ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.101:2379"
10
11 ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
12 ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
13 ETCD_INITIAL_CLUSTER_STATE="new"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

查看etcd集群

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100,192.168.26.101 -m shell -a "etcdctl member list"
192.168.26.100 | CHANGED | rc=0 >>
6f2038a018db1103: name=etcd-100 peerURLs=http://192.168.26.100:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
bd330576bb637f25: name=etcd-101 peerURLs=http://192.168.26.101:2380 clientURLs=http://192.168.26.101:2379,http://localhost:2379 isLeader=true
192.168.26.101 | CHANGED | rc=0 >>
6f2038a018db1103: name=etcd-100 peerURLs=http://192.168.26.100:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
bd330576bb637f25: name=etcd-101 peerURLs=http://192.168.26.101:2380 clientURLs=http://192.168.26.101:2379,http://localhost:2379 isLeader=true
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

添加etcd 192.168.26.102

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100 -m shell -a "etcdctl member add etcd-102 http://192.168.26.102:2380"
192.168.26.100 | CHANGED | rc=0 >>
Added member named etcd-102 with ID 2fd4f9ba70a04579 to cluster

ETCD_NAME="etcd-102"
ETCD_INITIAL_CLUSTER="etcd-102=http://192.168.26.102:2380,etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

修改之前写好的配置文件给 102(注意:下面的 sed 只替换了前 8 行中的 100,第 9 行 ETCD_ADVERTISE_CLIENT_URLS 仍指向 192.168.26.100,严格来说也应改成 102,这也是后面 member list 里 etcd-102 的 clientURLs 显示为 .100 的原因)

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$sed -i '1,8s/100/102/g' etcd.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$sed -i '13s/new/existing/' etcd.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$sed -i 's#ETCD_INITIAL_CLUSTER="#ETCD_INITIAL_CLUSTER="etcd-102=http://192.168.26.102:2380,#' etcd.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat -n etcd.conf
1 ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
2
3 ETCD_LISTEN_PEER_URLS="http://192.168.26.102:2380,http://localhost:2380"
4 ETCD_LISTEN_CLIENT_URLS="http://192.168.26.102:2379,http://localhost:2379"
5
6 ETCD_NAME="etcd-102"
7 ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.102:2380"
8
9 ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.26.100:2379"
10
11 ETCD_INITIAL_CLUSTER="etcd-102=http://192.168.26.102:2380,etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380"
12 ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
13 ETCD_INITIAL_CLUSTER_STATE="existing"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

配置文件拷贝替换,启动etcd

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.102 -m copy -a "src=./etcd.conf dest=/etc/etcd/etcd.conf force=yes"
192.168.26.102 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "2d8fa163150e32da563f5e591134b38cc356d237",
"dest": "/etc/etcd/etcd.conf",
"gid": 0,
"group": "root",
"md5sum": "389c2850d434478e2d4d57a7798196de",
"mode": "0644",
"owner": "root",
"size": 574,
"src": "/root/.ansible/tmp/ansible-tmp-1633803533.57-102177-227527368141930/source",
"state": "file",
"uid": 0
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.102 -m shell -a "systemctl enable etcd --now"
192.168.26.102 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

检查集群是否添加成功

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "etcdctl member list"
192.168.26.101 | CHANGED | rc=0 >>
2fd4f9ba70a04579: name=etcd-102 peerURLs=http://192.168.26.102:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
6f2038a018db1103: name=etcd-100 peerURLs=http://192.168.26.100:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
bd330576bb637f25: name=etcd-101 peerURLs=http://192.168.26.101:2380 clientURLs=http://192.168.26.101:2379,http://localhost:2379 isLeader=true
192.168.26.102 | CHANGED | rc=0 >>
2fd4f9ba70a04579: name=etcd-102 peerURLs=http://192.168.26.102:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
6f2038a018db1103: name=etcd-100 peerURLs=http://192.168.26.100:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
bd330576bb637f25: name=etcd-101 peerURLs=http://192.168.26.101:2380 clientURLs=http://192.168.26.101:2379,http://localhost:2379 isLeader=true
192.168.26.100 | CHANGED | rc=0 >>
2fd4f9ba70a04579: name=etcd-102 peerURLs=http://192.168.26.102:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
6f2038a018db1103: name=etcd-100 peerURLs=http://192.168.26.100:2380 clientURLs=http://192.168.26.100:2379,http://localhost:2379 isLeader=false
bd330576bb637f25: name=etcd-101 peerURLs=http://192.168.26.101:2380 clientURLs=http://192.168.26.101:2379,http://localhost:2379 isLeader=true
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

设置 ETCDCTL_API=3 环境变量,让 etcdctl 默认使用 v3 API;需要写进每台节点的 ~/.bashrc,这一步稍微有点麻烦。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "echo 'export ETCDCTL_API=3' >> ~/.bashrc"
192.168.26.100 | CHANGED | rc=0 >>

192.168.26.102 | CHANGED | rc=0 >>

192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "cat ~/.bashrc"
192.168.26.100 | CHANGED | rc=0 >>
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export ETCDCTL_API=3
192.168.26.102 | CHANGED | rc=0 >>
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export ETCDCTL_API=3
192.168.26.101 | CHANGED | rc=0 >>
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export ETCDCTL_API=3
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "etcdctl version"
192.168.26.100 | CHANGED | rc=0 >>
etcdctl version: 3.3.11
API version: 3.3
192.168.26.102 | CHANGED | rc=0 >>
etcdctl version: 3.3.11
API version: 3.3
192.168.26.101 | CHANGED | rc=0 >>
etcdctl version: 3.3.11
API version: 3.3
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

同步性测试

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$# 同步性测试
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100 -a "etcdctl put name liruilong"
192.168.26.100 | CHANGED | rc=0 >>
OK
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "etcdctl get name"
192.168.26.100 | CHANGED | rc=0 >>
name
liruilong
192.168.26.101 | CHANGED | rc=0 >>
name
liruilong
192.168.26.102 | CHANGED | rc=0 >>
name
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

etcd集群备份,恢复

准备数据

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100 -a "etcdctl put name liruilong"
192.168.26.100 | CHANGED | rc=0 >>
OK
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "etcdctl get name"
192.168.26.102 | CHANGED | rc=0 >>
name
liruilong
192.168.26.100 | CHANGED | rc=0 >>
name
liruilong
192.168.26.101 | CHANGED | rc=0 >>
name
liruilong

在任何一台主机上对 etcd 做快照

#在任何一台主机上对 etcd 做快照
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.101 -a "etcdctl snapshot save snap20211010.db"
192.168.26.101 | CHANGED | rc=0 >>
Snapshot saved at snap20211010.db
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$# 此快照里包含了刚刚写的数据 name=liruilong,然后把快照文件拷贝到所有节点
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.101 -a "scp /root/snap20211010.db root@192.168.26.100:/root/"
192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.101 -a "scp /root/snap20211010.db root@192.168.26.102:/root/"
192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
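也可以用 etcdctl 确认一下快照文件的状态(示意,输出包含 hash、revision、key 总数和文件大小):

ansible 192.168.26.101 -a "etcdctl snapshot status /root/snap20211010.db"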

清空数据

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "etcdctl del name"
192.168.26.101 | CHANGED | rc=0 >>
1
192.168.26.102 | CHANGED | rc=0 >>
0
192.168.26.100 | CHANGED | rc=0 >>
0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

在所有节点上关闭 etcd,并删除/var/lib/etcd/里所有数据:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$# 在所有节点上关闭 etcd,并删除/var/lib/etcd/里所有数据:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "systemctl stop etcd"
192.168.26.100 | CHANGED | rc=0 >>

192.168.26.102 | CHANGED | rc=0 >>

192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "rm -rf /var/lib/etcd/*"
[WARNING]: Consider using the file module with state=absent rather than running 'rm'. If you need to
use command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
192.168.26.102 | CHANGED | rc=0 >>

192.168.26.100 | CHANGED | rc=0 >>

192.168.26.101 | CHANGED | rc=0 >>

在所有节点上把快照文件的所有者和所属组设置为 etcd:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "chown etcd.etcd /root/snap20211010.db"
[WARNING]: Consider using the file module with owner rather than running 'chown'. If you need to use
command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
192.168.26.100 | CHANGED | rc=0 >>

192.168.26.102 | CHANGED | rc=0 >>

192.168.26.101 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$# 在每台节点上开始恢复数据:

在每台节点上开始恢复数据:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.100 -m script -a "./snapshot_restore.sh"
192.168.26.100 | CHANGED => {
"changed": true,
"rc": 0,
"stderr": "Shared connection to 192.168.26.100 closed.\r\n",
"stderr_lines": [
"Shared connection to 192.168.26.100 closed."
],
"stdout": "2021-10-10 12:14:30.726021 I | etcdserver/membership: added member 6f2038a018db1103 [http://192.168.26.100:2380] to cluster af623437f584d792\r\n2021-10-10 12:14:30.726234 I | etcdserver/membership: added member bd330576bb637f25 [http://192.168.26.101:2380] to cluster af623437f584d792\r\n",
"stdout_lines": [
"2021-10-10 12:14:30.726021 I | etcdserver/membership: added member 6f2038a018db1103 [http://192.168.26.100:2380] to cluster af623437f584d792",
"2021-10-10 12:14:30.726234 I | etcdserver/membership: added member bd330576bb637f25 [http://192.168.26.101:2380] to cluster af623437f584d792"
]
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat -n ./snapshot_restore.sh
1 #!/bin/bash
2
3 # 每台节点恢复镜像
4
5 etcdctl snapshot restore /root/snap20211010.db \
6 --name etcd-100 \
7 --initial-advertise-peer-urls="http://192.168.26.100:2380" \
8 --initial-cluster="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380" \
9 --data-dir="/var/lib/etcd/cluster.etcd"
10
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$sed '6,7s/100/101/g' ./snapshot_restore.sh
#!/bin/bash

# 每台节点恢复镜像

etcdctl snapshot restore /root/snap20211010.db \
--name etcd-101 \
--initial-advertise-peer-urls="http://192.168.26.101:2380" \
--initial-cluster="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380" \
--data-dir="/var/lib/etcd/cluster.etcd"

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$sed -i '6,7s/100/101/g' ./snapshot_restore.sh
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat ./snapshot_restore.sh
#!/bin/bash

# 每台节点恢复镜像

etcdctl snapshot restore /root/snap20211010.db \
--name etcd-101 \
--initial-advertise-peer-urls="http://192.168.26.101:2380" \
--initial-cluster="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380" \
--data-dir="/var/lib/etcd/cluster.etcd"

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.101 -m script -a "./snapshot_restore.sh"
192.168.26.101 | CHANGED => {
"changed": true,
"rc": 0,
"stderr": "Shared connection to 192.168.26.101 closed.\r\n",
"stderr_lines": [
"Shared connection to 192.168.26.101 closed."
],
"stdout": "2021-10-10 12:20:26.032754 I | etcdserver/membership: added member 6f2038a018db1103 [http://192.168.26.100:2380] to cluster af623437f584d792\r\n2021-10-10 12:20:26.032930 I | etcdserver/membership: added member bd330576bb637f25 [http://192.168.26.101:2380] to cluster af623437f584d792\r\n",
"stdout_lines": [
"2021-10-10 12:20:26.032754 I | etcdserver/membership: added member 6f2038a018db1103 [http://192.168.26.100:2380] to cluster af623437f584d792",
"2021-10-10 12:20:26.032930 I | etcdserver/membership: added member bd330576bb637f25 [http://192.168.26.101:2380] to cluster af623437f584d792"
]
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

所有节点把 /var/lib/etcd 及里面内容的所有者和所属组改为 etcd,然后分别启动 etcd

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "chown -R etcd.etcd /var/lib/etcd/"
[WARNING]: Consider using the file module with owner rather than running 'chown'. If you need to use
command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
192.168.26.100 | CHANGED | rc=0 >>

192.168.26.101 | CHANGED | rc=0 >>

192.168.26.102 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "systemctl start etcd"
192.168.26.102 | FAILED | rc=1 >>
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.non-zero return code
192.168.26.101 | CHANGED | rc=0 >>

192.168.26.100 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

把剩下的节点添加进集群

# etcdctl member add etcd_name --peer-urls="https://peerURLs"
[root@vms100 cluster.etcd]# etcdctl member add etcd-102 --peer-urls="http://192.168.26.102:2380"
Member fbd8a96cbf1c004d added to cluster af623437f584d792

ETCD_NAME="etcd-102"
ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380,etcd-102=http://192.168.26.102:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.102:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
[root@vms100 cluster.etcd]#
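member add 输出里给出的这几个变量,需要写进新成员的 /etc/etcd/etcd.conf 再启动 etcd。下面是按上文集群信息拼出来的一个最小示意(仅供参考,监听地址等以实际环境中的配置文件为准):

ETCD_NAME="etcd-102"
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.26.102:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.26.102:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.26.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.26.102:2379,http://localhost:2379"
ETCD_INITIAL_CLUSTER="etcd-100=http://192.168.26.100:2380,etcd-101=http://192.168.26.101:2380,etcd-102=http://192.168.26.102:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"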

测试恢复结果

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.102 -m copy -a "src=./etcd.conf dest=/etc/etcd/etcd.conf force=yes"
192.168.26.102 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"checksum": "2d8fa163150e32da563f5e591134b38cc356d237",
"dest": "/etc/etcd/etcd.conf",
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/etc/etcd/etcd.conf",
"size": 574,
"state": "file",
"uid": 0
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.102 -m shell -a "systemctl enable etcd --now"
192.168.26.102 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -m shell -a "etcdctl member list"
192.168.26.101 | CHANGED | rc=0 >>
6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379
bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379
fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379
192.168.26.100 | CHANGED | rc=0 >>
6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379
bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379
fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379
192.168.26.102 | CHANGED | rc=0 >>
6f2038a018db1103, started, etcd-100, http://192.168.26.100:2380, http://192.168.26.100:2379,http://localhost:2379
bd330576bb637f25, started, etcd-101, http://192.168.26.101:2380, http://192.168.26.101:2379,http://localhost:2379
fbd8a96cbf1c004d, started, etcd-102, http://192.168.26.102:2380, http://192.168.26.100:2379,http://localhost:2379
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible etcd -a "etcdctl get name"
192.168.26.102 | CHANGED | rc=0 >>
name
liruilong
192.168.26.101 | CHANGED | rc=0 >>
name
liruilong
192.168.26.100 | CHANGED | rc=0 >>
name
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

k8s中etcd以pod的方式设置

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-78d6f96c7b-79xx4 1/1 Running 57 3d1h
calico-node-ntm7v 1/1 Running 16 3d11h
calico-node-skzjp 0/1 Running 42 3d11h
calico-node-v7pj5 1/1 Running 9 3d11h
coredns-545d6fc579-9h2z4 1/1 Running 9 3d1h
coredns-545d6fc579-xgn8x 1/1 Running 10 3d1h
etcd-vms81.liruilongs.github.io 1/1 Running 8 3d11h
kube-apiserver-vms81.liruilongs.github.io 1/1 Running 20 3d11h
kube-controller-manager-vms81.liruilongs.github.io 1/1 Running 26 3d11h
kube-proxy-rbhgf 1/1 Running 4 3d11h
kube-proxy-vm2sf 1/1 Running 3 3d11h
kube-proxy-zzbh9 1/1 Running 2 3d11h
kube-scheduler-vms81.liruilongs.github.io 1/1 Running 24 3d11h
metrics-server-bcfb98c76-6q5mb 1/1 Terminating 0 43h
metrics-server-bcfb98c76-9ptf4 1/1 Terminating 0 27h
metrics-server-bcfb98c76-bbr6n 0/1 Pending 0 12h
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
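这套实验环境里的 etcd 集群是单独部署、没有开启 TLS 的,所以 etcdctl 可以直接用;而 kubeadm 集群里的 etcd 是静态 pod 且启用了证书认证,对它做快照时需要带上证书参数。常见写法大致如下(证书路径是 kubeadm 的默认位置,属于假设,请按实际环境调整):

# 在 master 节点上对 kubeadm 管理的 etcd 做快照(示例)
ETCDCTL_API=3 etcdctl snapshot save /root/snap-k8s.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key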

四、升级K8S

不能跨多个次版本升级,只能逐个次版本升级(例如先从 1.21 升到 1.22,再从 1.22 升到 1.23)

升级工作的基本流程如下:
  • 先升级主控制节点
  • 再升级工作节点

确定要升级到哪个版本

┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum list --showduplicates kubeadm --disableexcludes=kubernetes
# 在列表中查找最新的 1.22 版本
# 它看起来应该是 1.22.x-0,其中 x 是最新的补丁版本

现有环境

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io NotReady control-plane,master 11m v1.21.1
vms82.liruilongs.github.io NotReady <none> 12s v1.21.1
vms83.liruilongs.github.io NotReady <none> 11s v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

升级master

控制节点上的升级过程应该每次处理一个节点。 首先选择一个要先行升级的控制面节点。该节点上必须拥有 /etc/kubernetes/admin.conf 文件。

执行 “kubeadm upgrade”

升级 kubeadm:

# 用最新的补丁版本号替换 1.22.x-0 中的 x
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum install -y kubeadm-1.22.2-0 --disableexcludes=kubernetes

验证下载操作正常,并且 kubeadm 版本正确:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

验证升级计划:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.1
[upgrade/versions] kubeadm version: v1.22.2
[upgrade/versions] Target version: v1.22.2
[upgrade/versions] Latest version in the v1.21 series: v1.21.5
................

选择要升级到的目标版本,运行合适的命令

┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo kubeadm upgrade apply v1.22.2
............
upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

设置进入维护模式

将节点标记为不可调度并腾空节点,为节点升级作准备:

# 将 <node-to-drain> 替换为你要腾空的控制面节点名称
#kubectl drain <node-to-drain> --ignore-daemonsets
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl drain vms81.liruilongs.github.io --ignore-daemonsets
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

升级 kubelet 和 kubectl

# 用最新的补丁版本号替换 1.22.x-00 中的 x
#yum install -y kubelet-1.22.x-0 kubectl-1.22.x-0 --disableexcludes=kubernetes
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0 --disableexcludes=kubernetes

重启 kubelet

┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo systemctl daemon-reload
┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo systemctl restart kubelet

解除节点的保护

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms81.liruilongs.github.io
node/vms81.liruilongs.github.io uncordoned

master 节点的版本已经升级为 v1.22.2

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 11d v1.22.2
vms82.liruilongs.github.io NotReady <none> 11d v1.21.1
vms83.liruilongs.github.io Ready <none> 11d v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

升级工作节点Node

工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点, 以不影响运行工作负载所需的最小容量。

升级 kubeadm

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -a "yum install -y kubeadm-1.22.2-0 --disableexcludes=kubernetes"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -a "sudo kubeadm upgrade node" # 执行 "kubeadm upgrade" 对于工作节点,下面的命令会升级本地的 kubelet 配置:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 12d v1.22.2
vms82.liruilongs.github.io Ready <none> 12d v1.21.1
vms83.liruilongs.github.io Ready,SchedulingDisabled <none> 12d v1.22.2

腾空节点,设置维护状态

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl drain vms82.liruilongs.github.io --ignore-daemonsets
node/vms82.liruilongs.github.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
node/vms82.liruilongs.github.io drained

升级 kubelet 和 kubectl

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0 --disableexcludes=kubernetes"

重启 kubelet

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "systemctl daemon-reload"
192.168.26.82 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "systemctl restart kubelet"
192.168.26.82 | CHANGED | rc=0 >>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 13d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled <none> 13d v1.22.2
vms83.liruilongs.github.io Ready,SchedulingDisabled <none> 13d v1.22.2

取消对节点的保护

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms83.liruilongs.github.io
node/vms83.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 13d v1.22.2
vms82.liruilongs.github.io Ready <none> 13d v1.22.2
vms83.liruilongs.github.io Ready <none> 13d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

kubeadm upgrade apply 做了以下工作:

  • 检查你的集群是否处于可升级状态:
    • API 服务器是可访问的
    • 所有节点处于 Ready 状态
    • 控制面是健康的
  • 强制执行版本偏差策略。
  • 确保控制面的镜像是可用的或可拉取到服务器上。
  • 如果组件配置要求版本升级,则生成替代配置与/或使用用户提供的覆盖版本配置。
  • 升级控制面组件或回滚(如果其中任何一个组件无法启动)。
  • 应用新的 CoreDNS 和 kube-proxy 清单,并强制创建所有必需的 RBAC 规则。
  • 如果旧文件在 180 天后过期,将创建 API 服务器的新证书和密钥文件并备份旧文件。

kubeadm upgrade node 在其他控制平面节点上执行以下操作:

  • 从集群中获取 kubeadm ClusterConfiguration。
  • (可选操作)备份 kube-apiserver 证书。
  • 升级控制平面组件的静态 Pod 清单。
  • 为本节点升级 kubelet 配置

kubeadm upgrade node 在工作节点上完成以下工作:

  • 从集群取回 kubeadm ClusterConfiguration。
  • 为本节点升级 kubelet 配置。

五、Pod

环境测试

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 7d23h v1.21.1
vms82.liruilongs.github.io Ready <none> 7d23h v1.21.1
vms83.liruilongs.github.io NotReady <none> 7d23h v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~]
└─$cd ansible/
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m ping
192.168.26.82 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.26.83 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl is-active docker"
192.168.26.83 | FAILED | rc=3 >>
unknownnon-zero return code
192.168.26.82 | CHANGED | rc=0 >>
active
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "systemctl enable docker --now"
192.168.26.83 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 7d23h v1.21.1
vms82.liruilongs.github.io Ready <none> 7d23h v1.21.1
vms83.liruilongs.github.io Ready <none> 7d23h v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

帮助文档的使用

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain --help
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain pods
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion <string>
....
kind <string>
.....
metadata <Object>
.....
spec <Object>
.....
status <Object>
....
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain pods.metadata
KIND: Pod
VERSION: v1

创建Pod的方式

新建命名空间:

kubectl config set-context context1 --namespace=liruilong-pod-create

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir k8s-pod-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cd k8s-pod-create/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl create ns liruilong-pod-create
namespace/liruilong-pod-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.26.81:6443
name: cluster1
contexts:
- context:
cluster: cluster1
namespace: kube-system
user: kubernetes-admin1
name: context1
current-context: context1
kind: Config
preferences: {}
users:
- name: kubernetes-admin1
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get ns
NAME STATUS AGE
default Active 8d
kube-node-lease Active 8d
kube-public Active 8d
kube-system Active 8d
liruilong Active 7d10h
liruilong-pod-create Active 4m18s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl config set-context context1 --namespace=liruilong-pod-create
Context "context1" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
b380bbd43752: Pull complete
fca7e12d1754: Pull complete
745ab57616cb: Pull complete
a4723e260b6f: Pull complete
1c84ebdff681: Pull complete
858292fd2e56: Pull complete
Digest: sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

命令行的方式创建pod

kubectl run podcommon --image=nginx --image-pull-policy=IfNotPresent --labels="name=liruilong" --env="name=liruilong"

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run podcommon --image=nginx --image-pull-policy=IfNotPresent --labels="name=liruilong" --env="name=liruilong"
pod/podcommon created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
podcommon 0/1 ContainerCreating 0 12s

查看调度节点

kubectl get pods -o wide

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run pod-demo --image=nginx --labels=name=nginx --env="user=liruilong" --port=8888 --image-pull-policy=IfNotPresent
pod/pod-demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods | grep pod-
pod-demo 1/1 Running 0 73s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-577h7 1/1 Running 0 19m 10.244.70.39 vms83.liruilongs.github.io <none> <none>
myweb-4xlc5 1/1 Running 0 18m 10.244.70.40 vms83.liruilongs.github.io <none> <none>
myweb-ltqdt 1/1 Running 0 18m 10.244.171.148 vms82.liruilongs.github.io <none> <none>
pod-demo 1/1 Running 0 94s 10.244.171.149 vms82.liruilongs.github.io <none> <none>
poddemo 1/1 Running 0 8m22s 10.244.70.41 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

删除pod

kubectl delete pod pod-demo --force

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod pod-demo --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod-demo" force deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods | grep pod-
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

生成yaml文件的方式创建pod

kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml >pod-demo.yaml

yaml文件的获取方法:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]  # yaml文件的获取方法:
└─$kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-demo
  name: pod-demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

yaml文件创建pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run pod-demo --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml >pod-demo.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-demo
  name: pod-demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-demo.yaml
pod/pod-demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-demo 1/1 Running 0 12s
podcommon 1/1 Running 0 13m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 1/1 Running 0 27s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
podcommon 1/1 Running 0 13m 10.244.70.3 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod pod-demo
pod "pod-demo" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podcommon 1/1 Running 0 14m 10.244.70.3 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

创建pod时指定运行命令。替换镜像中CMD的命令

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$# 创建pod时指定运行命令。替换镜像中CMD的命令
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- "echo liruilong"
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: comm-pod
  name: comm-pod
spec:
  containers:
  - args:
    - echo liruilong
    image: nginx
    imagePullPolicy: IfNotPresent
    name: comm-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- sh -c "echo liruilong"
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: comm-pod
  name: comm-pod
spec:
  containers:
  - args:
    - sh
    - -c
    - echo liruilong
    image: nginx
    imagePullPolicy: IfNotPresent
    name: comm-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

通过 kubectl delete -f comm-pod.yaml 删除 pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl run comm-pod --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml -- sh c "echo liruilong" > comm-pod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f comm-pod.yaml
pod/comm-pod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
comm-pod 0/1 CrashLoopBackOff 3 (27s ago) 72s
mysql-577h7 1/1 Running 0 54m
myweb-4xlc5 1/1 Running 0 53m
myweb-ltqdt 1/1 Running 0 52m
poddemo 1/1 Running 0 42m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete -f comm-pod.yaml
pod "comm-pod" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

批量创建pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed 's/demo/demo1/' demo.yaml | kubectl apply -f -
pod/demo1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed 's/demo/demo2/' demo.yaml | kubectl create -f -
pod/demo2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
demo 0/1 CrashLoopBackOff 7 (4m28s ago) 18m
demo1 1/1 Running 0 49s
demo2 1/1 Running 0 26s
mysql-d4n6j 1/1 Running 0 23m
myweb-85kf8 1/1 Running 0 22m
myweb-z4qnz 1/1 Running 0 22m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo1 1/1 Running 0 3m29s 10.244.70.32 vms83.liruilongs.github.io <none> <none>
demo2 1/1 Running 0 3m6s 10.244.70.33 vms83.liruilongs.github.io <none> <none>
mysql-d4n6j 1/1 Running 0 25m 10.244.171.137 vms82.liruilongs.github.io <none> <none>
myweb-85kf8 1/1 Running 0 25m 10.244.171.138 vms82.liruilongs.github.io <none> <none>
myweb-z4qnz 1/1 Running 0 25m 10.244.171.139 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

容器共享 pod 的网络空间,即使用同一个 IP 地址

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep demo1"
192.168.26.83 | CHANGED | rc=0 >>
0d644ad550f5 87a94228f133 "/docker-entrypoint.…" 8 minutes ago Up 8 minutes k8s_demo1_demo1_liruilong-pod-create_b721b109-a656-4379-9d3c-26710dadbf70_0
0bcffe0f8e2d registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 8 minutes ago Up 8 minutes k8s_POD_demo1_liruilong-pod-create_b721b109-a656-4379-9d3c-26710dadbf70_0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect 0d644ad550f5 | grep -i ipaddress "
192.168.26.83 | CHANGED | rc=0 >>
"SecondaryIPAddresses": null,
"IPAddress": "",
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$# 容器共享pod的网络空间的。
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
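想进一步确认“业务容器的网络挂在 pause 容器上”,可以看业务容器的 NetworkMode(补充示例,在 vms83 上执行,容器 ID 沿用上文 docker ps 的输出):

docker inspect -f '{{.HostConfig.NetworkMode}}' 0d644ad550f5
# 输出形如 container:0bcffe0f8e2d...,即共用 pause 容器的网络命名空间,所以业务容器自己的 IPAddress 是空的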

一个pod内创建多个容器

yaml 文件编写

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: comm-pod
  name: comm-pod
spec:
  containers:
  - args:
    - sh
    - -c
    - echo liruilong;sleep 10000
    image: nginx
    imagePullPolicy: IfNotPresent
    name: comm-pod0
    resources: {}
  - name: comm-pod1
    image: nginx
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

创建 pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete -f comm-pod.yaml
pod "comm-pod" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim comm-pod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f comm-pod.yaml
pod/comm-pod created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
comm-pod 2/2 Running 0 20s
mysql-577h7 1/1 Running 0 89m
myweb-4xlc5 1/1 Running 0 87m
myweb-ltqdt 1/1 Running 0 87m

查看标签,指定标签过滤

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
comm-pod 2/2 Running 0 4m43s run=comm-pod
mysql-577h7 1/1 Running 0 93m app=mysql
myweb-4xlc5 1/1 Running 0 92m app=myweb
myweb-ltqdt 1/1 Running 0 91m app=myweb
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -l run=comm-pod
NAME READY STATUS RESTARTS AGE
comm-pod 2/2 Running 0 5m12s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

镜像的下载策略

--image-pull-policy

  • Always 每次都下载最新镜像
  • Never 只使用本地镜像,从不下载
  • IfNotPresent 本地没有才下载

pod的重启策略–单个容器正常退出

restartPolicy

  • Always 总是重启
  • OnFailure 非正常退出才重启
  • Never 从不重启
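这两个字段在 pod 资源文件里的写法大致如下(一个最小示意,名字和镜像是随便取的):

apiVersion: v1
kind: Pod
metadata:
  name: policy-demo
spec:
  restartPolicy: OnFailure         # 容器非正常退出才重启
  containers:
  - name: policy-demo
    image: nginx
    imagePullPolicy: IfNotPresent  # 本地没有镜像才去下载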

labels 标签

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$# 每个对象都有标签
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 8d v1.21.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
podcommon 1/1 Running 0 87s name=liruilong

每个 Pod 里都会额外运行一个 pause 容器(基础设施容器),pod 里的业务容器共享它的网络等命名空间

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep podcomm"
192.168.26.83 | CHANGED | rc=0 >>
c04e155aa25d nginx "/docker-entrypoint.…" 21 minutes ago Up 21 minutes k8s_podcommon_podcommon_liruilong-pod-create_dbfc4fcd-d62b-4339-9f15-0a48802f60ad_0
309925812d42 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 21 minutes ago Up 21 minutes k8s_POD_podcommon_liruilong-pod-create_dbfc4fcd-d62b-4339-9f15-0a48802f60ad_0

pod的状态

pod 常见的几种状态:
  • Pending:pod 已创建但还没有真正跑起来,因为调度、镜像等原因卡在准备阶段
  • Running:pod 已经被调度到节点上,且容器工作正常
  • Completed:pod 里所有容器正常退出
  • Error / CrashLoopBackOff:容器启动或运行时出错,属于 pod 内部原因
  • ImagePullBackOff:创建 pod 的时候镜像下载失败
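排查 pod 状态时,可以只取 phase 字段,或者直接看事件(补充示例,pod 名以实际为准):

kubectl get pod demo1 -o jsonpath='{.status.phase}'
kubectl describe pod demo1 | grep -A 10 Events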

Pod的基本操作

在pod里执行命令,查看pod详细信息。查看pod日志

kubectl exec 命令
kubectl exec -it pod名 -- sh           # 如果 pod 里有多个容器,命令默认在第一个容器里执行
kubectl exec -it demo -c demo1 -- sh   # 用 -c 指定容器
kubectl describe pod pod名             # 查看 pod 详细信息
kubectl logs pod名 -c 容器名            # 查看日志,多个容器时用 -c 指定
kubectl edit pod pod名                 # 在线修改,部分字段可以修改,有些不能修改

查看pod详细信息

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe pod demo1
Name: demo1
Namespace: liruilong-pod-create
Priority: 0
Node: vms83.liruilongs.github.io/192.168.26.83
Start Time: Wed, 20 Oct 2021 22:27:15 +0800
Labels: run=demo1
Annotations: cni.projectcalico.org/podIP: 10.244.70.32/32
cni.projectcalico.org/podIPs: 10.244.70.32/32
Status: Running
IP: 10.244.70.32
IPs:
IP: 10.244.70.32
Containers:
demo1:
Container ID: docker://0d644ad550f59029036fd73d420d4d2c651801dd12814bb26ad8e979dc0b59c1
Image: nginx
Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 20 Oct 2021 22:27:20 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scc89 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-scc89:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned liruilong-pod-create/demo1 to vms83.liruilongs.github.io
Normal Pulled 13m kubelet Container image "nginx" already present on machine
Normal Created 13m kubelet Created container demo1
Normal Started 13m kubelet Started container demo1

在pod里执行命令

多个容器需要用-c指定

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it demo1 -- ls /tmp
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it demo1 -- sh
# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
# exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it demo1 -- bash
root@demo1:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@demo1:/# exit
exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec comm-pod -c comm-pod1 -- echo liruilong
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it comm-pod -c comm-pod1 -- sh
# ls
bin boot dev docker-entrypoint.d docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$#

查看日志

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl logs demo1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/10/20 14:27:21 [notice] 1#1: using the "epoll" event method
2021/10/20 14:27:21 [notice] 1#1: nginx/1.21.3
2021/10/20 14:27:21 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/10/20 14:27:21 [notice] 1#1: OS: Linux 3.10.0-693.el7.x86_64
2021/10/20 14:27:21 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/10/20 14:27:21 [notice] 1#1: start worker processes
2021/10/20 14:27:21 [notice] 1#1: start worker process 32
2021/10/20 14:27:21 [notice] 1#1: start worker process 33
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

拷贝文件

和 docker cp 一样,可以在宿主机和 pod 的容器之间相互拷贝

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl cp /etc/hosts comm-pod:/usr/share/nginx/html -c comm-pod1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec comm-pod -c comm-pod1 -- ls /usr/share/nginx/html
50x.html
hosts
index.html
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

pod里运行命令command的执行方式

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo OK! && sleep 60']

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command:
    - sh
    - -c
    - echo OK! && sleep 60

pod生命周期,优雅的关闭pod

pod的延期删除

k8s 对于 pod 的删除有一个延期删除期,即宽限期,这个时间默认为 30s;如果删除时加了 --force 选项,就会强制删除。

在删除宽限期内,pod 的状态会被标记为 Terminating,宽限期结束后才真正删掉 pod。宽限期通过参数 terminationGracePeriodSeconds 设定。
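宽限期也可以在删除命令里临时指定(补充示例):

# 按默认宽限期(30s)删除
kubectl delete pod demo
# 指定 60 秒宽限期
kubectl delete pod demo --grace-period=60
# 不等宽限期,立即强制删除
kubectl delete pod demo --grace-period=0 --force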

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl explain pod.spec
....
terminationGracePeriodSeconds <integer>
pod需要优雅终止的可选持续时间(以秒为单位)。可在删除请求中增加。值必须是非负整数。
值0表示通过kill信号立即停止(没有机会关机)。如果该值为null,则使用默认的宽限期。
宽限期是在pod中运行的进程收到终止信号后的持续时间(以秒为单位),以及进程被kill信号强制停止的时间。
设置此值比流程的预期清理时间长。默认为30秒。

如果 pod 里跑的是 Nginx 这类进程,宽限期往往体现不出来:Nginx 处理终止信号的方式和 k8s 预期不同,收到信号后主进程会很快退出,pod 随之被删除,并不会等满 k8s 的宽限期。

当某个 pod 正在被使用时突然要被关闭,而我们还想在关闭前后处理一些事情,这时可以用 pod hook。

pod hook(钩子)

hook是一个很常见的功能,有时候也称回调,即在到达某一预期事件时触发的操作,比如 前端框架 Vue 的生命周期回调函数,java 虚拟机 JVM 在进程结束时的钩子线程。

在pod的整个生命周期内,有两个回调可以使用

两个回调可以使用:
postStart:在创建 pod 的时候调用,会和 pod 里的主进程并行执行,没有先后顺序保证
preStop:在删除 pod 的时候调用,先运行 preStop 里的程序,之后再关闭 pod;preStop 必须在 pod 的宽限期内完成,没完成 pod 也会被强制删除

下面我们创建一个带钩子的pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat demo.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo
  name: demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

通过帮助文档查看宽限期的命令

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl explain pod.spec | grep termin*
terminationGracePeriodSeconds <integer>
Optional duration in seconds the pod needs to terminate gracefully. May be
the pod are sent a termination signal and the time when the processes are
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

修改yaml文件

demo.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo
  name: demo
spec:
  terminationGracePeriodSeconds: 600
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: demo
    resources: {}
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo liruilong`date` >> /liruilong"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/usr/sbin/nginx -s quit"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim demo.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f demo.yaml
pod/demo created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 0 21s
mysql-cp7qd 1/1 Running 0 2d13h
myweb-bh9g7 1/1 Running 0 2d4h
myweb-zdc4q 1/1 Running 0 2d13h
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it demo -- bin/bash
root@demo:/# ls
bin dev docker-entrypoint.sh home lib64 media opt root sbin sys usr
boot docker-entrypoint.d etc lib liruilong mnt proc run srv tmp var
root@demo:/# cat liruilong
liruilongSun Nov 14 05:10:51 UTC 2021
root@demo:/#

这里删除 pod 的话,主进程并不会等到宽限期结束:Nginx 收到关闭信号后会直接退出,pod 随即被删除。

初始化Pod

所谓初始化pod,类比java中的构造概念,如果pod的创建命令类比java的构造函数的话,那么初始化容器即为构造块,java中构造块是在构造函数之前执行的一些语句块。初始化容器即为主容器构造前执行的一些语句

初始化规则:
  • 它们总是运行到完成。
  • 每个都必须在下一个启动之前成功完成。
  • 如果 Pod 的 Init 容器失败,Kubernetes 会不断地重启该 Pod,直到 Init 容器成功为止;但如果 Pod 对应的 restartPolicy 为 Never,它不会重新启动。
  • Init 容器支持应用容器的全部字段和特性,但不支持 Readiness Probe,因为它们必须在 Pod 就绪之前运行完成。
  • 如果为一个 Pod 指定了多个 Init 容器,那些容器会按顺序一次运行一个。每个 Init 容器必须运行成功,下一个才能够运行。
  • 因为 Init 容器可能会被重启、重试或者重新执行,所以 Init 容器的代码应该是幂等的。特别地,向 EmptyDir 中写文件的代码,应该对输出文件可能已经存在做好准备。
  • 在 Pod 上使用 activeDeadlineSeconds、在容器上使用 livenessProbe,能够避免 Init 容器一直失败,相当于为 Init 容器的存活设置了一个期限。
  • Pod 中每个应用容器和 Init 容器的名称必须唯一;与任何其它容器共享同一个名称,会在验证时抛出错误。
  • 对 Init 容器 spec 的修改,被限制在容器 image 字段中。更改 Init 容器的 image 字段,等价于重启该 Pod。

初始化容器在pod资源文件里 的initContainers里定义,和containers是同一级

通过初始化容器修改内核参数

创建初始化容器,这里我们通过初始化容器把内核参数 vm.swappiness 修改为 0,即尽量不使用交换分区

Alpine 操作系统是一个面向安全的轻型 Linux 发行版。它不同于通常 Linux 发行版,Alpine 采用了 musl libc 和 busybox 以减小系统的体积和运行时资源消耗,但功能上比 busybox 又完善的多,因此得到开源社区越来越多的青睐。在保持瘦身的同时,Alpine 还提供了自己的包管理工具 apk,可以通过 https://pkgs.alpinelinux.org/packages 网站上查询包信息,也可以直接通过 apk 命令直接查询和安装各种软件

YAML文件编写

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-init
  name: pod-init
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1-init
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  initContainers:
  - image: alpine
    name: init
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sbin/sysctl -w vm.swappiness=0"]
    securityContext:
      privileged: true
status: {}

查看系统默认值,运行pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat /proc/sys/vm/swappiness
30
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod_init.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod_init.yaml
pod/pod-init created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-hhjnk 1/1 Running 0 3d9h
myweb-bn5h4 1/1 Running 0 3d9h
myweb-h8jkc 1/1 Running 0 3d9h
pod-init 0/1 PodInitializing 0 7s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-hhjnk 1/1 Running 0 3d9h
myweb-bn5h4 1/1 Running 0 3d9h
myweb-h8jkc 1/1 Running 0 3d9h
pod-init 1/1 Running 0 14s

pod创建成功验证一下

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-hhjnk 1/1 Running 0 3d9h 10.244.171.162 vms82.liruilongs.github.io <none> <none>
myweb-bn5h4 1/1 Running 0 3d9h 10.244.171.163 vms82.liruilongs.github.io <none> <none>
myweb-h8jkc 1/1 Running 0 3d9h 10.244.171.160 vms82.liruilongs.github.io <none> <none>
pod-init 1/1 Running 0 11m 10.244.70.54 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cd ..
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m ping
192.168.26.83 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cat /proc/sys/vm/swappiness"
192.168.26.83 | CHANGED | rc=0 >>
0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

初始化容器和普通容器数据共享

配置文件编写
这里我们配置一个共享卷,然后在初始化容器里写入数据,同步给普通容器。
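原文这里没有贴出 pod_init1.yaml 的内容,下面是按后面的执行结果(普通容器里多了 /2021/liruilong.txt)反推的一个示意写法,卷名 workdir 和写入的内容都是假设:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-init1
  name: pod-init1
spec:
  volumes:
  - name: workdir            # 初始化容器和普通容器共享的 emptyDir 卷
    emptyDir: {}
  initContainers:
  - name: init
    image: alpine
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo liruilong > /2021/liruilong.txt"]
    volumeMounts:
    - name: workdir
      mountPath: /2021
  containers:
  - name: pod1-init
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: workdir
      mountPath: /2021
  restartPolicy: Always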

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cp pod_init.yaml pod_init1.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod_init1.yaml
31L, 604C 已写入
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod_init1.yaml
pod/pod-init1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-hhjnk 1/1 Running 0 3d9h
myweb-bn5h4 1/1 Running 0 3d9h
myweb-h8jkc 1/1 Running 0 3d9h
pod-init 1/1 Running 0 31m
pod-init1 1/1 Running 0 10s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods pod-init1
NAME READY STATUS RESTARTS AGE
pod-init1 1/1 Running 0 30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl exec -it pod-init1 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "pod1-init" out of: pod1-init, init (init)
# ls
2021 boot docker-entrypoint.d etc lib media opt root sbin sys usr
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
# cd 2021;ls
liruilong.txt
#

静态pod

正常情况下,pod 是由 master 统一创建和调度管理的;所谓静态 pod,就是不经过 master 创建调度、属于 node 自身特有的 pod:node 上只要启动了 kubelet,就会自动创建这些 pod。可以类比 java 里的静态属性、静态方法来理解,即 node 节点初始化时就需要创建的一些 pod。

比如用 kubeadm 安装的 k8s,所有控制面服务都是以容器方式运行的,相比二进制方式方便很多。那么问题来了:master 节点的相关组件在还没有 k8s 环境时是如何运行、如何构建出 master 节点的?这里就用到了静态 pod。

工作节点创建 静态pod

工作节点查看kubelet 启动参数配置文件

/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
--pod-manifest-path=/etc/kubernetes/kubelet.d
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/kubelet.d"
mkdir -p /etc/kubernetes/kubelet.d

首先需要在 kubelet 的配置文件中添加加载静态 pod 的 yaml 文件位置。
这里先在本地改好配置文件,再使用 ansible 发送到 node 节点上。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/kubelet.d"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir -p /etc/kubernetes/kubelet.d

修改配置后需要加载配置文件重启kubelet

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a "src=/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf dest=/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf force
=yes"
192.168.26.82 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "13994d828e831f4aa8760c2de36e100e7e255526",
"dest": "/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf",
"gid": 0,
"group": "root",
"md5sum": "0cfe0f899ea24596f95aa2e175f0dd08",
"mode": "0644",
"owner": "root",
"size": 946,
"src": "/root/.ansible/tmp/ansible-tmp-1637403640.92-32296-63660481173900/source",
"state": "file",
"uid": 0
}
192.168.26.83 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "13994d828e831f4aa8760c2de36e100e7e255526",
"dest": "/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf",
"gid": 0,
"group": "root",
"md5sum": "0cfe0f899ea24596f95aa2e175f0dd08",
"mode": "0644",
"owner": "root",
"size": 946,
"src": "/root/.ansible/tmp/ansible-tmp-1637403640.89-32297-164984088437265/source",
"state": "file",
"uid": 0
}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "mkdir -p /etc/kubernetes/kubelet.d"
192.168.26.83 | CHANGED | rc=0 >>

192.168.26.82 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl daemon-reload"
192.168.26.82 | CHANGED | rc=0 >>

192.168.26.83 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl restart kubelet"
192.168.26.83 | CHANGED | rc=0 >>

192.168.26.82 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

现在我们需要到 Node 的 /etc/kubernetes/kubelet.d 里创建一个 yaml 文件,kubelet 会根据这个 yaml 文件创建 pod。这样创建出来的 pod 不是由 master 调度管理的。我们同样使用 ansible 的方式来处理。

default名称空间里创建两个静态pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m copy -a "src=./static-pod.yaml dest=/etc/kubernetes/kubelet.d/static-pod.yaml"
192.168.26.83 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "9b059b0acb4cd99272809d1785926092816f8771",
"dest": "/etc/kubernetes/kubelet.d/static-pod.yaml",
"gid": 0,
"group": "root",
"md5sum": "41515d4c5c116404cff9289690cdcc20",
"mode": "0644",
"owner": "root",
"size": 302,
"src": "/root/.ansible/tmp/ansible-tmp-1637474358.05-72240-139405051351544/source",
"state": "file",
"uid": 0
}
192.168.26.82 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "9b059b0acb4cd99272809d1785926092816f8771",
"dest": "/etc/kubernetes/kubelet.d/static-pod.yaml",
"gid": 0,
"group": "root",
"md5sum": "41515d4c5c116404cff9289690cdcc20",
"mode": "0644",
"owner": "root",
"size": 302,
"src": "/root/.ansible/tmp/ansible-tmp-1637474357.94-72238-185516913523170/source",
"state": "file",
"uid": 0
}

node检查一下,配置文件

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a " cat /etc/kubernetes/kubelet.d/static-pod.yaml"
192.168.26.83 | CHANGED | rc=0 >>
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
192.168.26.82 | CHANGED | rc=0 >>
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-static
  name: pod-static
  namespace: default
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod-demo
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

查看静态pod

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
pod-static-vms82.liruilongs.github.io 1/1 Running 0 8m17s
pod-static-vms83.liruilongs.github.io 1/1 Running 0 9m3s
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "rm -rf /etc/kubernetes/kubelet.d/static-pod.yaml"

master 节点创建pod

这里我们换一种方式创建一个pod,通过 KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml中定义的静态pod位置的方式创建pod

这里需要注意的是,如果 master 节点也改用 --pod-manifest-path=/etc/kubernetes/kubelet.d 的方式,k8s 控制面就会起不来:因为 --pod-manifest-path 会覆盖 staticPodPath: /etc/kubernetes/manifests,kubelet 将不再加载该目录下 apiserver、etcd 等组件的静态 pod 清单。

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf "
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$grep static /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

/etc/kubernetes/manifests/ 里面放着k8s环境需要的一些静态pod组件

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ls -l /etc/kubernetes/manifests/
总用量 16
-rw------- 1 root root 2284 10月 19 00:09 etcd.yaml
-rw------- 1 root root 3372 10月 19 00:10 kube-apiserver.yaml
-rw------- 1 root root 2893 10月 19 00:10 kube-controller-manager.yaml
-rw------- 1 root root 1479 10月 19 00:10 kube-scheduler.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

直接copy之前的配置文件在master节点创建静态pod,并检查

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cp static-pod.yaml /etc/kubernetes/manifests/static-pod.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
pod-static-vms81.liruilongs.github.io 1/1 Running 0 13s
pod-static-vms82.liruilongs.github.io 1/1 Running 0 34m
pod-static-vms83.liruilongs.github.io 1/1 Running 0 35m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$rm -rf /etc/kubernetes/manifests/static-pod.yaml

调度的三个对象

待调度Pod列表

有多少个pod需要调度,即创建的pod列表

可用node列表

有哪些节点可以参与调度,需要排除有污点、端口冲突等不满足条件的 node

调度算法

主机过滤

  • NoDiskConflict
  • PodFitsResources
  • PodFitsPorts
  • MatchNodeSelector
  • HostName
  • NoVolumeZoneConflict
  • PodToleratesNodeTaints
  • CheckNodeMemoryPressure
  • CheckNodeDiskPressure
  • MaxEBSVolumeCount
  • MaxGCEPDVolumeCount
  • MaxAzureDiskVolumeCount
  • MatchInterPodAffinity
  • GeneralPredicates
  • NoVolumeNodeConflict

主机打分

分数项与计算公式:
  • LeastRequestedPriority:score = ( cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity) ) / 2
  • BalancedResourceAllocation:score = 10 - abs(cpuFraction - memoryFraction) * 10
  • CalculateSpreadPriority:score = 10 * ((maxCount - counts) / maxCount)
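以 LeastRequestedPriority 为例代入一组假设的数字:某节点 CPU 容量 4000m、已请求 2000m,内存容量 8Gi、已请求 2Gi,则 cpu 项 = (4000-2000)×10/4000 = 5,memory 项 = (8-2)×10/8 = 7.5,该节点得分 = (5+7.5)/2 = 6.25。可以看出剩余资源越多,得分越高。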

手动指定pod的运行位置:pod调度

标签设置

标签相关的常用命令:
查看:kubectl get nodes --show-labels
设置:kubectl label node node2 disktype=ssd
取消:kubectl label node node2 disktype-
所有节点设置:kubectl label nodes --all key=value

可以给 node 设置指定的标签,然后在创建 pod 时通过 nodeSelector 指定 node 标签。
查看节点标签:kubectl get node --show-labels

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux

给节点设置标签

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms82.liruilongs.github.io disktype=node1
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms83.liruilongs.github.io disktype=node2
node/vms83.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node1,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node2,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

特殊的内置标签 node-role.kubernetes.io/control-plane=、node-role.kubernetes.io/master=,用于设置节点的角色列(ROLES)

1
2
3
4
5
6
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready <none> 45d v1.22.2
vms83.liruilongs.github.io Ready <none> 45d v1.22.2

我们也可以在worker节点上设置角色标签

1
2
3
4
5
6
7
8
9
10
11
12
13
14
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms82.liruilongs.github.io node-role.kubernetes.io/worker1=
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms83.liruilongs.github.io node-role.kubernetes.io/worker2=
node/vms83.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready worker1 45d v1.22.2
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

选择器(nodeSelector)方式

在特定节点上运行pod

1
2
3
4
5
6
7
8
9
10
11
12
13
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes -l disktype=node2
NAME STATUS ROLES AGE VERSION
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node2.yaml
pod/podnode2 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>

pod-node2.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podnode2
name: podnode2
spec:
nodeSelector:
disktype: node2
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: podnode2
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

指定节点名称(nodeName)的方式

1
2
3
4
5
6
7
8
9
10
11
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node1.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node1.yaml
pod/podnode1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode1 1/1 Running 0 36s 10.244.171.165 vms82.liruilongs.github.io <none> <none>
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>

pod-node1.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podnode1
name: podnode1
spec:
nodeName: vms82.liruilongs.github.io
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: podnode1
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

当pod资源文件里指定的节点标签或者节点名不存在时,这个pod无法被正常调度,会一直处于 Pending 状态
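下面是一个示意的资源文件片段(disktype: node9 是一个假设不存在的标签),apply 之后这个 pod 会一直处于 Pending 状态:

apiVersion: v1
kind: Pod
metadata:
  name: podnode9              # 示意用的名字
spec:
  nodeSelector:
    disktype: node9           # 假设集群中没有任何节点带这个标签
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnode9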

主机亲和性

所谓主机亲和性,即在满足指定条件的节点上运行。分为硬策略(必须满足),软策略(最好满足)

硬策略(requiredDuringSchedulingIgnoredDuringExecution)

pod-node-a.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podnodea
name: podnodea
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: podnodea
resources: {}
affinity:
nodeAffinity: #主机亲和性
requiredDuringSchedulingIgnoredDuringExecution: #硬策略
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vms85.liruilongs.github.io
- vms84.liruilongs.github.io
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

条件不满足,所以 Pending

1
2
3
4
5
6
7
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
podnodea 0/1 Pending 0 8s

我们修改一下,把其中一个节点名换成集群里存在的 vms83 节点

1
2
3
4
5
6
7
8
9
10
11
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed -i 's/vms84.liruilongs.github.io/vms83.liruilongs.github.io/' pod-node-a.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 13s 10.244.70.61 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

软策略(preferredDuringSchedulingIgnoredDuringExecution)

pod-node-a-r.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podnodea
name: podnodea
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: podnodea
resources: {}
affinity:
nodeAffinity: #主机亲和性
preferredDuringSchedulingIgnoredDuringExecution: # 软策略
- weight: 2
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vms85.liruilongs.github.io
- vms84.liruilongs.github.io
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

检查一下

1
2
3
4
5
6
7
8
9
10
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node-a-r.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a-r.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 28s 10.244.70.62 vms83.liruilongs.github.io <none> <none>

| 运算符 | 描述 |
| --- | --- |
| In | label 的值在指定的列表中,比如上面的硬亲和就匹配 kubernetes.io/hostname 取 vms85、vms84 这两个值的节点 |
| NotIn | 和 In 相反,label 的值在指定列表中的节点都不会被匹配到 |
| Exists | 和 In 比较类似,只要节点存在某个 key 的标签就会被选择出来;使用 Exists 时,values 里面不能写东西 |
| Gt | greater than 的意思,表示某个 label 的值大于设定值的节点会被选择出来 |
| Lt | less than 的意思,表示某个 label 的值小于设定值的节点会被选择出来 |
| DoesNotExist | 不存在该标签的节点才会被匹配 |
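下面是一个示意片段,演示 Exists 和 Gt 的写法(disktype、gpu-count 这两个标签名只是假设):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype           # 只要求节点存在 disktype 标签,values 不能写
          operator: Exists
        - key: gpu-count          # 要求节点 gpu-count 标签的值大于 2
          operator: Gt
          values:
          - "2"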

Annotations 的设置

Annotations 即注释,设置查看方式很简单

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl annotate nodes vms82.liruilongs.github.io "dest=这是一个工作节点"
node/vms82.liruilongs.github.io annotated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms82.liruilongs.github.io
Name: vms82.liruilongs.github.io
Roles: worker1
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
disktype=node1
kubernetes.io/arch=amd64
kubernetes.io/hostname=vms82.liruilongs.github.io
kubernetes.io/os=linux
node-role.kubernetes.io/worker1=
Annotations: dest: 这是一个工作节点
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.26.82/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.171.128
volumes.kubernetes.io/controller-managed-attach-detach: true

节点的cordon与drain

如果想把某个节点设置为不可用的话,可以对节点实施cordon或者drain

如果一个node被标记为cordon,新创建的pod不会被调度到此node上,已经调度上去的不会被移走

cordon用于节点的维护,当不希望再往某个节点上分配pod时,可以使用cordon把节点标记为不可调度。

这里我们为了方便,创建一个Deployment控制器用于演示。关于Deployment,可以简单理解为它能保证你的pod保持在指定的副本数量,当pod挂掉时,会自动重新创建新的pod补足副本数。

1
2
3
4
5
6
7
8
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl create deployment nginx --image=nginx --dry-run=client -o yaml >nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cp nginx-dep.yaml ./k8s-pod-create/nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cd k8s-pod-create/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim nginx-dep.yaml

nginx-dep.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx
name: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
imagePullPolicy: IfNotPresent
resources: {}
status: {}

创建 deploy资源

1
2
3
4
5
6
7
8
9
10
11
12
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f nginx-dep.yaml
deployment.apps/nginx created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 2m16s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 2m16s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 2m16s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

节点的cordon

1
2
kubectl cordon vms83.liruilongs.github.io  #标记不可用
kubectl uncordon vms83.liruilongs.github.io #取消标记

通过cordon把vms83.liruilongs.github.io标记为不可调度

1
2
3
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl cordon vms83.liruilongs.github.io #通过cordon把83标记为不可调度
node/vms83.liruilongs.github.io cordoned

查看节点状态,vms83.liruilongs.github.io变成SchedulingDisabled

1
2
3
4
5
6
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready,SchedulingDisabled worker2 48d v1.22.2

修改deployment副本数量 --replicas=6

1
2
3
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled

新增的pod都调度到了vms82.liruilongs.github.io 节点

1
2
3
4
5
6
7
8
9
10
11
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 64s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 63s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 7m30s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 63s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 7m30s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 7m30s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

把vms83.liruilongs.github.io节点上的Nginx pod都删掉,会发现新建的pod都调度到了vms82.liruilongs.github.io

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-wshxp
pod "nginx-7cf7d6dbc8-wshxp" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 2m42s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 10s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 2m41s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m8s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 2m41s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 9m8s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-x78x4
pod "nginx-7cf7d6dbc8-x78x4" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 3m31s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 59s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 3m30s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m57s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 3m30s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-m8ltr 1/1 Running 0 30s 10.244.171.172 vms82.liruilongs.github.io <none> <none>

通过 uncordon恢复节点vms83.liruilongs.github.io状态

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms83.liruilongs.github.io #恢复节点状态
node/vms83.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

删除所有的pod

1
2
3
4
5
6
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
No resources found in liruilong-pod-create namespace.

节点的drain

如果一个节点被设置为drain,则此节点不再被调度pod,且此节点上已经运行的pod会被驱逐(evicted)到其他节点

drain包含两个动作:cordon(把节点标记为不可调度)和evict(驱逐当前节点上所有的pod)

1
2
kubectl drain vms83.liruilongs.github.io   --ignore-daemonsets
kubectl uncordon vms83.liruilongs.github.io

通过deployment添加4个nginx副本--replicas=4

1
2
3
4
5
6
7
8
9
10
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2clnb 1/1 Running 0 22s 10.244.171.174 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 22s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-ptqxm 1/1 Running 0 22s 10.244.171.173 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 22s 10.244.70.4 vms83.liruilongs.github.io <none> <none>

将节点vms82.liruilongs.github.io设置为drain(先cordon标记为不可调度,再驱逐其上已有的pod)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl drain vms82.liruilongs.github.io --ignore-daemonsets --delete-emptydir-data
node/vms82.liruilongs.github.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-ptqxm
evicting pod kube-system/metrics-server-bcfb98c76-wxv5l
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-2clnb
pod/nginx-7cf7d6dbc8-2clnb evicted
pod/nginx-7cf7d6dbc8-ptqxm evicted
pod/metrics-server-bcfb98c76-wxv5l evicted
node/vms82.liruilongs.github.io evicted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

查看节点调度,所有pod调度到了vms83.liruilongs.github.io这台机器

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 4m20s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hkflr 1/1 Running 0 25s 10.244.70.5 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-qt48k 1/1 Running 0 26s 10.244.70.7 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 4m20s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

恢复节点调度:kubectl uncordon vms82.liruilongs.github.io

1
2
3
4
5
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

报错的情况

将节点vms82.liruilongs.github.io设置为drain

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl drain vms82.liruilongs.github.io
node/vms82.liruilongs.github.io cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "vms82.liruilongs.github.io", aborting command...

There are pending nodes to be drained:
vms82.liruilongs.github.io
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-wxv5l
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

uncordon掉刚才的节点

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2

节点taint(污点)及pod的tolerations(容忍污点)

给节点设置及删除taint,设置operator的值为Equal,以及设置operator的值为Exists

1
2
3
4
5
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible master -m shell -a "kubectl describe nodes vms81.liruilongs.github.io | grep -E '(Roles|Taints)'"
192.168.26.81 | CHANGED | rc=0 >>
Roles: control-plane,master
Taints: node-role.kubernetes.io/master:NoSchedule

master节点从来没有调度到pod,因为master节点设置了污点,如果想要在某个被设置了污点的机器调度pod,那么pod需要设置tolerations(容忍污点)才能够被运行。
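例如,如果希望某个 pod 能够容忍上面 master 节点的这个污点,可以在 pod 的 spec 里加一段类似下面的 tolerations(仅为示意,后文会详细演示 tolerations 的写法):

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"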

taint(污点)的设置和查看

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
# 查看节点角色,和是否设置污点
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms82.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker1
Taints: <none>
# 给 vms83.liruilongs.github.io节点设置污点,指定key为key83
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl taint node vms83.liruilongs.github.io key83=:NoSchedule
node/vms83.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)' # 从新查看污点信息
Roles: worker2
Taints: key83:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

重新通过deployment 创建pod,会发现pod都调度到82上面,因为83设置了污点

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-dhst5 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-j6g25 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wpnhr 0/1 ContainerCreating 0 12s <none> vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zkww8 0/1 ContainerCreating 0 11s <none> vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete deployment nginx
deployment.apps "nginx" deleted

取消污点设置

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint node vms83.liruilongs.github.io key83-
node/vms83.liruilongs.github.io untainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

设置operator的值为Equal

如果需要在有污点的节点上运行pod,那么需要在定义pod的时候指定toleration属性

在设置节点taint的时候,如果value的值不为空,那么pod里tolerations字段的operator只能写Equal,不能写Exists。

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint nodes vms82.liruilongs.github.io key82=val82:NoSchedule
node/vms82.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl describe nodes vms82.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker1
Taints: key82=val82:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

修改yaml文件 pod-taint3.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat pod-taint2.yaml > pod-taint3.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-taint3.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cat pod-taint3.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
spec:
nodeSelector:
disktype: node2
tolerations:
- key: "key82"
operator: "Equal"
value: "val82"
effect: "NoSchedule"
containers:
- image: nginx
name: pod1
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint3.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 1/1 Running 0 11s 10.244.171.180 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

设置operator的值为Exists

如果使用Exists的话,那么pod中不能写value

设置vms83.liruilongs.github.io 节点污点标记

1
2
3
4
5
6
7
8
9
10
11
12
13
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl taint node vms83.liruilongs.github.io key83=:NoSchedule
node/vms83.liruilongs.github.io tainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl describe nodes vms83.liruilongs.github.io | grep -E '(Roles|Taints)'
Roles: worker2
Taints: key83:NoSchedule
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready worker1 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node1,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=
vms83.liruilongs.github.io Ready worker2 48d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node2,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=

pod-taint.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
spec:
nodeSelector:
disktype: node2
tolerations:
- key: "key83"
operator: "Exists"
effect: "NoSchedule"
containers:
- image: nginx
name: pod1
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

会发现pod调度到了设置过污点的vms83.liruilongs.github.io节点

1
2
3
4
5
6
7
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 1/1 Running 0 3m4s 10.244.70.8 vms83.liruilongs.github.io <none> <none>

当然,taint的value为空时,也可以使用Equal,把value写成空字符串即可

1
2
3
4
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$cp pod-taint.yaml pod-taint2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-taint2.yaml

pod-taint2.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
spec:
nodeSelector:
disktype: node2
tolerations:
- key: "key83"
operator: "Equal"
value: ""
effect: "NoSchedule"
containers:
- image: nginx
name: pod1
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

会发现pod还是调度到了设置过污点的vms83.liruilongs.github.io节点

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete -f pod-taint.yaml
pod "pod1" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-taint2.yaml
pod/pod1 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 0/1 ContainerCreating 0 8s <none> vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl taint nodes vms83.liruilongs.github.io key83-
node/vms83.liruilongs.github.io untainted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$

存在两种特殊情况:
如果一个容忍度的 key 为空且 operator 为 Exists, 表示这个容忍度与任意的 key 、value 和 effect 都匹配,即这个容忍度能容忍任意 taint。
如果 effect 为空,则可以与指定 key(例如 key1)的所有效果相匹配。
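下面是一个示意的 tolerations 片段,两条分别对应上述两种特殊情况(key1 只是示例的键名):

tolerations:
- operator: "Exists"            # 情况一:key 为空且 operator 为 Exists,容忍任意 taint
- key: "key1"
  operator: "Exists"            # 情况二:不写 effect,容忍 key1 的所有效果(NoSchedule、NoExecute 等)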

数据卷(Volume)管理

Volume是Pod中能够被多个容器访问的共享目录。Kubernetes的Volume概念、用途和目的与Docker的Volume比较类似,但两者不能等价。

Volume (存储卷)
Kubernetes中的Volume定义在Pod上,然后被一个Pod里的多个容器挂载到具体的文件目录下;
Kubernetes中的Volume与Pod的生命周期相同,但与容器的生命周期不相关,当容器终止或者重启时, Volume中的数据也不会丢失。
Kubernetes支持多种类型的Volume,例如GlusterFS, Ceph等先进的分布式文件系统

Volume的使用也比较简单,在大多数情况下,我们先在Pod上声明一个Volume,然后在容器里引用该Volume并Mount到容器里的某个目录上。举例来说,我们要给之前的Tomcat Pod增加一个名字为datavol的Volume,并且Mount到容器的/mydata-data目录上,则只要对Pod的定义文件做如下修正即可:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
template:
metadata:
labels:
app: app-demo
tier: frontend
spec:
volumes:
- name: datavol
emptyDir: {}
containers:
- name: tomcat-demo
image: tomcat
volumeMounts:
- mountPath: /mydata-data
name: datavol
imagePullPolicy: IfNotPresent

除了可以让一个Pod里的多个容器共享文件、让容器的数据写到宿主机的磁盘上或者写文件到网络存储中, Kubernetes的Volume还扩展出了一种非常有实用价值的功能,即 :**容器配置文件集中化定义与管理**,这是通过ConfigMap这个新的资源对象来实现的.

Kubernetes提供了非常丰富的Volume类型

学习环境准备

1
2
3
4
5
6
7
8
9
10
11
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir k8s-volume-create;cd k8s-volume-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get ns
NAME STATUS AGE
default Active 49d
kube-node-lease Active 49d
kube-public Active 49d
kube-system Active 49d
liruilong Active 49d
liruilong-pod-create Active 41d
1
2
3
4
5
6
7
8
9
10
11
12
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl create ns liruilong-volume-create
namespace/liruilong-volume-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-volume-create
Context "context1" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
cluster1 default
* context1 cluster1 kubernetes-admin1 liruilong-volume-create
context2 kube-system

emptyDir

一个emptyDir Volume是在Pod分配到Node时创建的。从它的名称就可以看出,它的初始内容为空,并且无须指定宿主机上对应的目录文件,因为这是Kubernetes自动分配的一个目录(默认使用宿主机上的存储,也可以通过medium: Memory让它落在内存中),当Pod从Node上移除时,emptyDir中的数据也会被永久删除

emptyDir的一些用途如下:

emptyDir的一些用途
临时空间,例如用于某些应用程序运行时所需的临时目录,且无须永久保留。
长时间任务的中间过程CheckPoint的临时保存目录。
一个容器需要从另一个容器中获取数据的目录(多容器共享目录)
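如果希望 emptyDir 使用内存(tmpfs)而不是宿主机磁盘,可以显式指定 medium,下面是一个示意的 volumes 片段(cache 这个卷名是假设的):

volumes:
- name: cache                   # 示意用的卷名
  emptyDir:
    medium: Memory              # 使用内存(tmpfs)作为后端
    sizeLimit: 64Mi             # 限制该卷最多使用 64Mi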

创建一个Pod,声明volume卷

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolume
name: podvolume
spec:
volumes:
- name: volume1
emptyDir: {}
- name: volume2
emptyDir: {}
containers:
- image: busybox
imagePullPolicy: IfNotPresent
command: ['sh','-c','sleep 5000']
resources: {}
name: podvolume1
volumeMounts:
- mountPath: /liruilong
name: volume1
- image: busybox
imagePullPolicy: IfNotPresent
name: podvolume2
volumeMounts:
- mountPath: /liruilong
name: volume2
command: ['sh','-c','sleep 5000']
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

创建pod,查看运行状态

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volume.yaml
pod/podvolume configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podvolume 0/2 CrashLoopBackOff 164 (117s ago) 37h 10.244.70.14 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

查看pod的数据卷类型

1
2
3
4
5
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pod podvolume | grep -A2 Volumes
Volumes:
volume1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)

通过docker命令来查看对应的宿主机容器

1
2
3
4
5
6
7
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep podvolume"
192.168.26.83 | CHANGED | rc=0 >>
bbb287afc518 cabb9f684f8b "sh -c 'sleep 5000'" 12 minutes ago Up 12 minutes k8s_podvolume2_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
dcbf5c63263f cabb9f684f8b "sh -c 'sleep 5000'" 12 minutes ago Up 12 minutes k8s_podvolume1_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
5bb9ee2ed134 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 12 minutes ago Up 12 minutes k8s_POD_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]

通过inspect查看映射的宿主机信息

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect dcbf5c63263f | grep -A5 Mounts"
192.168.26.83 | CHANGED | rc=0 >>
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/76b518f6-9575-4412-b161-f590ab3c3135/volumes/kubernetes.io~empty-dir/volume1",
"Destination": "/liruilong",
"Mode": "",
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect bbb287afc518 | grep -A5 Mounts"
192.168.26.83 | CHANGED | rc=0 >>
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/76b518f6-9575-4412-b161-f590ab3c3135/volumes/kubernetes.io~empty-dir/volume2",
"Destination": "/liruilong",
"Mode": "",
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

pod内多容器数据卷共享

1
2
3
4
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$sed 's/podvolume/podvolumes/' pod_volume.yaml >pod_volumes.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes.yaml

编写pod_volumes.yaml文件

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolumes
name: podvolumes
spec:
volumes:
- name: volume1
emptyDir: {}
containers:
- image: busybox
imagePullPolicy: IfNotPresent
command: ['sh','-c','sleep 5000']
resources: {}
name: podvolumes1
volumeMounts:
- mountPath: /liruilong
name: volume1
- image: busybox
imagePullPolicy: IfNotPresent
name: podvolumes2
volumeMounts:
- mountPath: /liruilong
name: volume1
command: ['sh','-c','sleep 5000']
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

新建的文件夹中两个pod中同时存在

1
2
3
4
5
6
7
8
9
10
11
12
13
14
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes.yaml
pod/podvolumes created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumes -c podvolumes1 -- sh
/ # mkdir -p /liruilong/$(date +"%Y%m%d%H%M%S");cd /liruilong/;ls
20211127080726
/liruilong #
/liruilong # exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumes -c podvolumes2 -- sh
/ # cd /liruilong/;ls
20211127080726
/liruilong #

设置数据卷的读写权限

pod_volume_r.yaml:把容器podvolume1挂载的数据卷设置为只读

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolume
name: podvolume
spec:
volumes:
- name: volume1
emptyDir: {}
- name: volume2
emptyDir: {}
containers:
- image: busybox
imagePullPolicy: IfNotPresent
command: ['sh','-c','sleep 5000']
resources: {}
name: podvolume1
volumeMounts:
- mountPath: /liruilong
name: volume1
readOnly: true # 设置数据卷pod1只读
- image: busybox
imagePullPolicy: IfNotPresent
name: podvolume2
volumeMounts:
- mountPath: /liruilong
name: volume2
command: ['sh','-c','sleep 5000']
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
1
2
3
4
5
6
7
8
9
10
11
12
13
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolume -c podvolume1 -- sh
/ # cd liruilong/;touch lrl.txt
touch: lrl.txt: Read-only file system
/liruilong #
/liruilong # exit
command terminated with exit code 1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolume -c podvolume2 -- sh
/ # cd liruilong/;touch lrl.txt
/liruilong # ls
lrl.txt
/liruilong #

hostPath

hostPath为在Pod上挂载宿主机上的文件或目录,它通常可以用于以下几方面。

hostPath的应用
容器应用程序生成的日志文件需要永久保存时,可以使用宿主机的高速文件系统进行存储。
需要访问宿主机上Docker引擎内部数据结构的容器应用时,可以通过定义hostPath为宿主机/var/lib/docker目录,使容器内部应用可以直接访问Docker的文件系统。

在使用这种类型的Volume时,需要注意以下几点。

在不同的Node上具有相同配置的Pod可能会因为宿主机上的目录和文件不同而导致对Volume上目录和文件的访问结果不一致。

如果使用了资源配额管理,则Kubernetes无法将hostPath在宿主机上使用的资源纳入cgroup管理。在下面的例子中使用宿主机的/data目录定义了一个hostPath类型的Volume:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolumehostpath
name: podvolumehostpath
spec:
volumes:
- name: volumes1
hostPath:
path: /data
containers:
- image: busybox
name: podvolumehostpath
command: ['sh','-c','sleep 5000']
resources: {}
volumeMounts:
- mountPath: /liruilong
name: volumes1
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
1
2
3
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f PodVolumeHostPath.yaml
pod/podvolumehostpath created

宿主机创建一个文件

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podvolumehostpath 1/1 Running 0 5m44s 10.244.70.9 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$cd ..
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cd /data;touch liruilong"
192.168.26.83 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cd /data;ls"
192.168.26.83 | CHANGED | rc=0 >>
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

pod容器内同样存在

1
2
3
4
5
6
7
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it podvolumehostpath -- sh
/ # ls
bin dev etc home liruilong proc root sys tmp usr var
/ # cd liruilong/;ls
liruilong
/liruilong #

NFS

不管是emptyDir还是hostPath,数据都是存放在宿主机上的。但是如果某个pod出现了问题,通过控制器重建时,会通过调度生成一个新的Pod,如果调度到的节点不是原来的节点,那么数据就会丢失。这种场景下,使用网络存储就很方便。

部署一个NFSServer

使用NFS网络文件系统提供的共享目录存储数据时,我们需要在系统中部署一个NFSServer

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum -y install nfs-utils.x86_64
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl enable nfs-server.service --now
┌──[root@vms81.liruilongs.github.io]-[~]
└─$mkdir -p /liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;echo `date` > liruilong.txt
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;cat liruilong.txt
2021年 11月 27日 星期六 21:57:10 CST
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cat /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$echo "/liruilong *(rw,sync,no_root_squash)" > /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$exportfs -arv
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$showmount -e
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$

然后我们需要在所有的工作节点安装nfs-utils,然后挂载

1
2
3
4
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "yum -y install nfs-utils"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl enable nfs-server.service --now"

nfs共享文件测试

1
2
3
4
5
6
7
8
9
10
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "showmount -e vms81.liruilongs.github.io"
192.168.26.83 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
192.168.26.82 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

挂载测试

1
2
3
4
5
6
7
8
9
10
11
12
13
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "mount vms81.liruilongs.github.io:/liruilong /mnt"

192.168.26.82 | CHANGED | rc=0 >>

192.168.26.83 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "cd /mnt/;ls"
192.168.26.83 | CHANGED | rc=0 >>
liruilong.txt
192.168.26.82 | CHANGED | rc=0 >>
liruilong.txt
1
2
3
4
5
6
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "df -h | grep liruilong"
192.168.26.82 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong 150G 8.3G 142G 6% /mnt
192.168.26.83 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong 150G 8.3G 142G 6% /mnt

取消挂载

1
2
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "umount /mnt"

使用nfs数据卷pod资源yaml文件

podvolumenfs.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolumehostpath
name: podvolumehostpath
spec:
volumes:
- name: volumes1
nfs:
server: vms81.liruilongs.github.io
path: /liruilong
containers:
- image: busybox
name: podvolumehostpath
command: ['sh','-c','sleep 5000']
resources: {}
volumeMounts:
- mountPath: /liruilong
name: volumes1
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}

创建nfs数据卷 pod

1
2
3
4
5
6
7
8
9
10
11
12
13
14
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f podvolumenfs.yaml
pod/podvolumehostpath created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podvolumehostpath 1/1 Running 0 24s 10.244.171.182 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumehostpath -- sh
/ # cd liruilong/;ls
liruilong.txt
/liruilong # exit
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

持久性存储(Persistent Volume)

Volume是定义在Pod上的,属于“计算资源”的一部分,而实际上, “网络存储”是相对独立于“计算资源”而存在的一种实体资源。比如在使用虚拟机的情况下,我们通常会先定义一个网络存储,然后从中划出一个“网盘”并挂接到虚拟机

Persistent Volume(简称PV)和与之相关联的Persistent Volume Claim (简称PVC)也起到了类似的作用。PV可以理解成 Kubernetes集群中的某个网络存储中对应的一块存储,它与Volume很类似,但有以下区别。

这里也可以结合物理盘区和逻辑卷来理解,PV可以理解为物理卷,PVC可以理解为划分的逻辑卷。

Persistent Volume与Volume的区别
PV只能是网络存储,不属于任何Node,但可以在每个Node上访问。
PV并不是定义在Pod上的,而是独立于Pod之外定义。
PV目前支持的类型包括: gcePersistentDisk、AWSElasticBlockStore、AzureFile、AzureDisk、FC (Fibre Channel)、Flocker、NFS、iSCSI、RBD (Rados Block Device)、CephFS、Cinder、GlusterFS、VsphereVolume、Quobyte Volumes、VMware Photon、PortworxVolumes、ScaleIO Volumes和HostPath(仅供单机测试)。

pv的创建

PV的accessModes属性, 目前有以下类型:

  • ReadWriteOnce:读写权限、并且只能被单个Node挂载。
  • ReadOnlyMany:只读权限、允许被多个Node挂载。
  • ReadWriteMany:读写权限、允许被多个Node挂载。
1
2
3
4
5
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
#storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: vms81.liruilongs.github.io
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$echo "/tmp *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$exportfs -avr
exporting *:/tmp
exporting *:/liruilong
1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
pv0003 5Gi RWO Recycle Available 16s Filesystem

PV是有状态的对象,它有以下几种状态。
Available:空闲状态。
Bound:已经绑定到某个Pvc上。
Released:对应的PVC已经删除,但资源还没有被集群收回。
Failed: PV自动回收失败。

PVC的创建

如果某个Pod想申请某种类型的PV,则首先需要定义一个PersistentVolumeClaim (PVC)对象:

PVC是基于命名空间相互隔离的,不同命名空间的PVC互不可见。PVC通过accessModes和storage的约束关系来匹配PV,不需要显式指定PV的名字:accessModes必须相同,PVC请求的storage必须小于等于PV的容量。

1
2
3
4
5
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
No resources found in liruilong-volume-create namespace.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes-pvc.yaml
1
2
3
4
5
6
7
8
9
10
11
12
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mypvc01
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 4Gi
#storageClassName: slow
1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
mypvc01 Bound pv0003 5Gi RWO 10s Filesystem
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

storageClassName

storageClassName 用于控制哪个PVC能和哪个PV绑定,只有在storageClassName相同的情况下才会进一步去匹配storage和accessModes

1
2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml

pod_volunms-pv.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: vms81.liruilongs.github.io
1
2
3
4
5
6
7
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Available slow 8s

pod_volumes-pvc.yaml

1
2
3
4
5
6
7
8
9
10
11
12
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mypvc01
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 4Gi
storageClassName: slow
1
2
3
4
5
6
7
8
9
10
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
liruilong-volume-create mypvc01 Bound pv0003 5Gi RWO slow 5s

使用持久性存储

在pod里面使用PVC

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolumepvc
name: podvolumepvc
spec:
volumes:
- name: volumes1
persistentVolumeClaim:
claimName: mypvc01
containers:
- image: nginx
name: podvolumehostpath
resources: {}
volumeMounts:
- mountPath: /liruilong
name: volumes1
imagePullPolicy: IfNotPresent
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumespvc.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podvolumepvc 1/1 Running 0 15s 10.244.171.184 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumepvc -- sh
# ls
bin dev docker-entrypoint.sh home lib64 media opt root sbin sys usr
boot docker-entrypoint.d etc lib liruilong mnt proc run srv tmp var
# cd liruilong
# ls
runc-process838092734
systemd-private-66344110bb03430193d445f816f4f4c4-chronyd.service-SzL7id
systemd-private-6cf1f72056ed4482a65bf89ec2a130a9-chronyd.service-5m7c2i
systemd-private-b1dc4ffda1d74bb3bec5ab11e5832635-chronyd.service-cPC3Bv
systemd-private-bb19f3d6802e46ab8dcb5b88a38b41b8-chronyd.service-cjnt04
#

pv回收策略

persistentVolumeReclaimPolicy: Recycle

| 策略 | 描述 |
| --- | --- |
| Recycle(会删除数据) | 会生成一个pod回收数据,删除pvc之后,pv可复用,pv状态由Released变为Available |
| Retain(不回收数据) | 删除pvc之后,pv依然不可再次绑定使用,pv状态长期保持为 Released |

会生成一个pod回收数据

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Bound liruilong-volume-create/mypvc01 slow 131m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pv pv0003
..................
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal RecyclerPod 53s persistentvolume-controller Recycler pod: Successfully assigned default/recycler-for-pv0003 to vms82.liruilongs.github.io
Normal RecyclerPod 51s persistentvolume-controller Recycler pod: Pulling image "busybox:1.27"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Available slow 136m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
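如果不希望数据被自动回收,可以把回收策略设为 Retain。下面是一个示意的 PV 定义(pv-retain 这个名字是为演示假设的,NFS 配置沿用前文):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain               # 示意用的名字
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # 删除 PVC 后 PV 进入 Released,数据不会被自动清理
  storageClassName: slow
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io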

动态卷供应storageClass

通过storageClass来动态处理PV的创建,管理员只需要创建好storageClass就可以了,用户创建PVC时会自动创建出对应的PV并完成绑定。当创建 pvc 的时候,系统会通知 storageClass,storageClass 会从它所关联的分配器获取后端存储类型,然后动态地创建一个 pv 出来和此 pvc 进行关联。

storageClass 的工作流程

定义 storageClass 时必须要包含一个分配器(provisioner),不同的分配器指定了动态创建 pv时使用什么后端存储。

分配器使用 aws 的 ebs 作为 pv 的后端存储

1
2
3
4
5
6
7
8
9
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1
iopsPerGB: "10"
fsType: ext4

分配器使用 lvm 作为 pv 的后端存储

1
2
3
4
5
6
7
8
9
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-lvm
provisioner: lvmplugin.csi.alibabacloud.com
parameters:
vgName: volumegroup1
fsType: ext4
reclaimPolicy: Delete

使用 hostPath 作为 pv 的后端存储

1
2
3
4
5
6
7
8
9
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
#volumeBindingMode: Immediate
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

上面 3 个例子里所使用的分配器中,有一些是 kubernetes 内置的分配器,比如kubernetes.io/aws-ebs,其他两个分配器不是 kubernetes 自带的。kubernetes 自带的分配器:

  • kubernetes.io/aws-ebs
  • kubernetes.io/gce-pd
  • kubernetes.io/glusterfs
  • kubernetes.io/cinder
  • kubernetes.io/vsphere-volume
  • kubernetes.io/rbd
  • kubernetes.io/quobyte
  • kubernetes.io/azure-disk
  • kubernetes.io/azure-file
  • kubernetes.io/portworx-volume
  • kubernetes.io/scaleio
  • kubernetes.io/storageos
  • kubernetes.io/no-provisioner

在动态创建 pv 的时候,根据使用不同的后端存储,应该选择一个合适的分配器。但是像lvmplugin.csi.alibabacloud.com 和 hostpath.csi.k8s.io 这样的分配器不是 kubernetes 自带的,称之为外部分配器,这些外部分配器由第三方提供,是通过自定义 **CSIDriver(容器存储接口驱动)** 来实现的分配器。

所以整个流程就是,管理员创建storageClass时会通过provisioner 字段指定分配器。创建好storageClass之后,用户在定义pvc时需要通过.spec.storageClassName 指定使用哪个storageClass

利用 nfs 创建动态卷供应

创建一个目录/vdisk,并共享这个目录。

1
2
3
4
5
6
7
8
9
10
11
12
13
14
┌──[root@vms81.liruilongs.github.io]-[~]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo "/vdisk *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~]
└─$exportfs -avr
exporting *:/vdisk
exportfs: Failed to stat /vdisk: No such file or directory
exporting *:/tmp
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/]
└─$mkdir vdisks

因为 kubernetes 里,nfs 没有内置分配器,所以需要下载相关插件来创建 nfs 外部分配器。

插件包下载地址: https://github.com/kubernetes-incubator/external-storage.git

rbac.yaml 用于部署 rbac 权限,注意把其中的命名空间更换为当前使用的 liruilong-volume-create

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io

因为 nfs 分配器不是自带的,所以这里需要先把 nfs 分配器创建出来。

kube-apiserver 的配置文件需要增加如下参数(1.20 之后的版本都需要): - --feature-gates=RemoveSelfLink=false

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$pwd
/etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$head -n 20 kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.26.81:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.26.81
- --feature-gates=RemoveSelfLink=false
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$

deployment.yaml

  1. 因为当前是在命名空间 liruilong-volume-create 里的,所以要把 namespace 的值改为 liruilong-volume-create
  2. image 后面的镜像需要提前在所有节点上 pull 下来,并修改镜像下载策略
  3. env 字段里,PROVISIONER_NAME 用于指定分配器的名字,这里是 fuseim.pri/ifs;NFS_SERVER 和 NFS_PATH 分别指定这个分配器所使用的存储信息。
  4. volumes 里的 server 和 path 字段指定共享服务器和目录
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: liruilong-volume-create
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: fuseim.pri/ifs
- name: NFS_SERVER
value: 192.168.26.81
- name: NFS_PATH
value: /vdisk
volumes:
- name: nfs-client-root
nfs:
server: 192.168.26.81
path: /vdisk

部署 nfs 分配器,查看 pod 的运行情况

1
2
3
4
5
6
7
8
9
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-cz6hh 1/1 Running 0 73s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

创建了 nfs 分配器之后,下面开始创建一个使用这个分配器的 storageClass。

1
2
3
4
5
6
7
8
9
10
11
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 3s

class.yaml

1
2
3
4
5
6
7
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "false"

这里 provisioner 的值 fuseim.pri/ifs 是由 deployment.yaml 文件里指定的分配器的名字。这个 yaml 文件的意思是创建一个名字为 managed-nfs-storage 的 storageClass,使用名字为 fuseim.pri/ifs 的分配器。

下面开始创建 pvc

pvc_nfs.yaml

1
2
3
4
5
6
7
8
9
10
11
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-nfs
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Mi
storageClassName: "managed-nfs-storage"
1
2
3
4
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f ./pvc_nfs.yaml
persistentvolumeclaim/pvc-nfs created

查看创建信息

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-7k6gm 1/1 Running 0 35s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pvc-b12e988a-8b55-4d48-87cf-998500df16f8 20Mi RWX managed-nfs-storage 28s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b12e988a-8b55-4d48-87cf-998500df16f8 20Mi RWX Delete Bound liruilong-volume-create/pvc-nfs managed-nfs-storage 126m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

使用声明的PVC

pod_storageclass.yaml

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: podvolumepvc
name: podvolumepvc
spec:
volumes:
- name: volumes1
persistentVolumeClaim:
claimName: pvc-nfs
containers:
- image: nginx
name: podvolumehostpath
resources: {}
volumeMounts:
- mountPath: /liruilong
name: volumes1
imagePullPolicy: IfNotPresent
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_storageclass.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-7k6gm 1/1 Running 0 140m
podvolumepvc 1/1 Running 0 7s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pods podvolumepvc | grep -A 4 Volumes:
Volumes:
volumes1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-nfs
ReadOnly: false

其他的数据卷类型

gcePersistentDisk

使用这种类型的Volume表示使用谷歌公有云提供的永久磁盘(PersistentDisk, PD)存放Volume的数据。它与emptyDir不同,PD上的内容会被永久保存,当Pod被删除时,PD只是被卸载(Unmount),但不会被删除。需要注意的是,你需要先创建一个永久磁盘(PD),才能使用gcePersistentDisk。

awsElasticBlockStore

与GCE类似,该类型的Volume使用亚马逊公有云提供的EBS Volume存储数据,需要先创建一个EBS Volume才能使用awsElasticBlockStore.

发布于

2021-09-21

更新于

2023-06-21
