In matters of the heart, whoever falls first loses, and the more one loses, the harder it is to forget. In the end, you can no longer tell whether it is the other person you love or yourself, yet the answer lies with the other person. That is why it is said: from love springs sorrow. (Jian Lai, 《剑来》)
## Preface

I am studying K8s and just finished this part, so I am writing up my notes. There is little theory here; the content leans hands-on, which makes it good for review.
This post covers:

- The common nfs, hostPath, and emptyDir volume types
- Creating PVs and PVCs
- Persistent storage and dynamic volume provisioning
## Volume Management

A Volume is a shared directory in a Pod that can be accessed by multiple containers. The concept, purpose, and usage of Volumes in Kubernetes are similar to Docker's volumes, but the two are not equivalent.

### Volume

- A Kubernetes Volume is defined on the Pod and then mounted by the containers in that Pod at specific paths.
- A Volume has the same lifecycle as its Pod but is independent of any single container's lifecycle: when a container terminates or restarts, the data in the Volume is not lost.
- Kubernetes supports many Volume types, including advanced distributed file systems such as GlusterFS and Ceph.
Using a Volume is fairly simple: in most cases we declare a Volume on the Pod, then reference it from a container and mount it at some directory inside that container. For example, to add a Volume named datavol to the earlier Tomcat Pod and mount it at /mydata-data inside the container, we only need to amend the Pod definition as follows (note the volumes and volumeMounts parts):
```yaml
template:
  metadata:
    labels:
      app: app-demo
      tier: frontend
  spec:
    volumes:
    - name: datavol
      emptyDir: {}
    containers:
    - name: tomcat-demo
      image: tomcat
      volumeMounts:
      - mountPath: /mydata-data
        name: datavol
      imagePullPolicy: IfNotPresent
```
Besides letting multiple containers in a Pod share files, letting containers write data to the host's disks, and writing files to network storage, Kubernetes Volumes also provide a very practical extension: **centralized definition and management of container configuration files**, implemented through the ConfigMap resource.
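To make that idea concrete, here is a minimal sketch of mounting a ConfigMap as a volume. The ConfigMap name app-config and the key app.properties are hypothetical, assumed to have been created beforehand (e.g. with kubectl create configmap):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: podconfigmap
  name: podconfigmap            # hypothetical Pod name
spec:
  volumes:
  - name: config-volume
    configMap:
      name: app-config          # assumed to exist, with a key named app.properties
  containers:
  - image: busybox
    name: podconfigmap1
    command: ['sh', '-c', 'cat /etc/config/app.properties; sleep 5000']
    volumeMounts:
    - mountPath: /etc/config    # each ConfigMap key appears here as a file
      name: config-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```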
Kubernetes provides a very rich set of Volume types.
### Preparing the Lab Environment

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$mkdir k8s-volume-create;cd k8s-volume-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get ns
NAME                   STATUS   AGE
default                Active   49d
kube-node-lease        Active   49d
kube-public            Active   49d
kube-system            Active   49d
liruilong              Active   49d
liruilong-pod-create   Active   41d
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl create ns liruilong-volume-create
namespace/liruilong-volume-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl config set-context $(kubectl config current-context) --namespace=liruilong-volume-create
Context "context1" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
          cluster1   default
*         context1   cluster1   kubernetes-admin1   liruilong-volume-create
          context2                                  kube-system
```
## emptyDir

An emptyDir Volume is created when the Pod is assigned to a Node. As the name suggests, its initial content is empty, and there is no need to specify a corresponding directory or file on the host, because Kubernetes allocates the directory automatically on the node (by default on the node's local storage, under the kubelet's working directory; it can instead be backed by the node's memory, as sketched after the list below). When the Pod is removed from the Node, the data in the emptyDir is deleted permanently.

Some uses of emptyDir:
- Scratch space, e.g., temporary directories that an application needs at runtime and that do not need to be kept permanently.
- A temporary directory for checkpoints saved partway through a long-running task.
- A directory from which one container fetches data produced by another container (a directory shared by multiple containers).
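For the memory-backed variant mentioned above, here is a minimal sketch (names are hypothetical). Note that data in a memory-backed (tmpfs) emptyDir counts against the container's memory usage:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: podemptydirmem
  name: podemptydirmem           # hypothetical Pod name
spec:
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs on the node instead of disk
      sizeLimit: 64Mi            # optional cap on the volume size
  containers:
  - image: busybox
    name: podemptydirmem1
    command: ['sh', '-c', 'sleep 5000']
    volumeMounts:
    - mountPath: /cache
      name: cache
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```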
### Creating a Pod that Declares Volumes

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolume
  name: podvolume
spec:
  volumes:
  - name: volume1
    emptyDir: {}
  - name: volume2
    emptyDir: {}
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'sleep 5000']
    resources: {}
    name: podvolume1
    volumeMounts:
    - mountPath: /liruilong
      name: volume1
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: podvolume2
    volumeMounts:
    - mountPath: /liruilong
      name: volume2
    command: ['sh', '-c', 'sleep 5000']
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Create the Pod and check its status:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volume.yaml
pod/podvolume configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -o wide
NAME        READY   STATUS             RESTARTS         AGE   IP             NODE                         NOMINATED NODE   READINESS GATES
podvolume   0/2     CrashLoopBackOff   164 (117s ago)   37h   10.244.70.14   vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
```
Inspect the Pod's volume types:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pod podvolume | grep -A2 Volumes
Volumes:
  volume1:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
```
Find the corresponding containers on the host node with docker:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker ps | grep podvolume"
192.168.26.83 | CHANGED | rc=0 >>
bbb287afc518   cabb9f684f8b   "sh -c 'sleep 5000'"   12 minutes ago   Up 12 minutes   k8s_podvolume2_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
dcbf5c63263f   cabb9f684f8b   "sh -c 'sleep 5000'"   12 minutes ago   Up 12 minutes   k8s_podvolume1_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
5bb9ee2ed134   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"   12 minutes ago   Up 12 minutes   k8s_POD_podvolume_liruilong-volume-create_76b518f6-9575-4412-b161-f590ab3c3135_0
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
Use docker inspect to see the host paths backing the mounts:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect dcbf5c63263f | grep -A5 Mounts"
192.168.26.83 | CHANGED | rc=0 >>
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/76b518f6-9575-4412-b161-f590ab3c3135/volumes/kubernetes.io~empty-dir/volume1",
                "Destination": "/liruilong",
                "Mode": "",
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "docker inspect bbb287afc518 | grep -A5 Mounts"
192.168.26.83 | CHANGED | rc=0 >>
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/76b518f6-9575-4412-b161-f590ab3c3135/volumes/kubernetes.io~empty-dir/volume2",
                "Destination": "/liruilong",
                "Mode": "",
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
### Sharing a Volume Between Containers in a Pod

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$sed 's/podvolume/podvolumes/' pod_volume.yaml >pod_volumes.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes.yaml
```
Write the pod_volumes.yaml file, mounting the same volume1 in both containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumes
  name: podvolumes
spec:
  volumes:
  - name: volume1
    emptyDir: {}
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'sleep 5000']
    resources: {}
    name: podvolumes1
    volumeMounts:
    - mountPath: /liruilong
      name: volume1
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: podvolumes2
    volumeMounts:
    - mountPath: /liruilong
      name: volume1
    command: ['sh', '-c', 'sleep 5000']
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
A directory created in the shared volume is visible in both containers:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes.yaml
pod/podvolumes created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumes -c podvolumes1 -- sh
/ # ls /liruilong
20211127080726
/liruilong #
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumes -c podvolumes2 -- sh
/ # ls /liruilong
20211127080726
/liruilong #
```
### Setting Volume Read/Write Permissions

pod_volume_r.yaml mounts volume1 read-only in the first container:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolume
  name: podvolume
spec:
  volumes:
  - name: volume1
    emptyDir: {}
  - name: volume2
    emptyDir: {}
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'sleep 5000']
    resources: {}
    name: podvolume1
    volumeMounts:
    - mountPath: /liruilong
      name: volume1
      readOnly: true
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: podvolume2
    volumeMounts:
    - mountPath: /liruilong
      name: volume2
    command: ['sh', '-c', 'sleep 5000']
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolume -c podvolume1 -- sh
/ # cd /liruilong
/liruilong # touch lrl.txt
touch: lrl.txt: Read-only file system
/liruilong # exit
command terminated with exit code 1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolume -c podvolume2 -- sh
/ # cd /liruilong
/liruilong # touch lrl.txt; ls
lrl.txt
/liruilong #
```
## hostPath

A hostPath Volume mounts a file or directory from the host machine into the Pod. It is typically useful in the following cases:

- When log files produced by a container application need to be kept permanently, they can be stored on the host's high-speed filesystem.
- When a container application needs access to the internal data structures of the Docker engine on the host, define a hostPath for the host's /var/lib/docker directory so the container can read Docker's file system directly.

When using this type of Volume, keep the following in mind:

- Pods with identical configuration on different Nodes may see inconsistent contents in the Volume, because the directories and files on each host differ.
- If resource quota management is enabled, Kubernetes cannot bring the resources a hostPath consumes on the host under cgroup management.

The example below defines a hostPath Volume backed by the host's /data directory:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumehostpath
  name: podvolumehostpath
spec:
  volumes:
  - name: volumes1
    hostPath:
      path: /data
  containers:
  - image: busybox
    name: podvolumehostpath
    command: ['sh', '-c', 'sleep 5000']
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f PodVolumeHostPath.yaml
pod/podvolumehostpath created
```
Create a file on the host:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE                         NOMINATED NODE   READINESS GATES
podvolumehostpath   1/1     Running   0          5m44s   10.244.70.9   vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$cd ..
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cd /data;touch liruilong"
192.168.26.83 | CHANGED | rc=0 >>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "cd /data;ls"
192.168.26.83 | CHANGED | rc=0 >>
liruilong
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
The file also shows up inside the Pod's container:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl exec -it podvolumehostpath -- sh
/ # ls
bin   dev   etc   home   liruilong   proc   root   sys   tmp   usr   var
/ # ls /liruilong
liruilong
/liruilong #
```
## NFS

With both emptyDir and hostPath, the data lives on the local host. If a Pod fails and its controller recreates it, the scheduler may place the new Pod on a different node, and the data is lost. In such cases, network storage is much more convenient.
### Deploying an NFS Server

To store data on a directory shared over the NFS network file system, we first need to deploy an NFS server:

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum -y install nfs-utils.x86_64
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl enable nfs-server.service --now
┌──[root@vms81.liruilongs.github.io]-[~]
└─$mkdir -p /liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;echo `date` > liruilong.txt
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;cat liruilong.txt
2021年 11月 27日 星期六 21:57:10 CST
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cat /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$echo "/liruilong *(rw,sync,no_root_squash)" > /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$exportfs -arv
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$showmount -e
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$
```
Then install nfs-utils on all worker nodes:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "yum -y install nfs-utils"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl enable nfs-server.service --now"
```
Test that the NFS export is visible from the nodes:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "showmount -e vms81.liruilongs.github.io"
192.168.26.83 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
192.168.26.82 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
Mount test:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "mount vms81.liruilongs.github.io:/liruilong /mnt"
192.168.26.82 | CHANGED | rc=0 >>
192.168.26.83 | CHANGED | rc=0 >>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "cd /mnt/;ls"
192.168.26.83 | CHANGED | rc=0 >>
liruilong.txt
192.168.26.82 | CHANGED | rc=0 >>
liruilong.txt
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "df -h | grep liruilong"
192.168.26.82 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong  150G  8.3G  142G   6% /mnt
192.168.26.83 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong  150G  8.3G  142G   6% /mnt
```
Unmount:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "umount /mnt"
```
### A Pod Using an NFS Volume
podvolumenfs.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumehostpath
  name: podvolumehostpath
spec:
  volumes:
  - name: volumes1
    nfs:
      server: vms81.liruilongs.github.io
      path: /liruilong
  containers:
  - image: busybox
    name: podvolumehostpath
    command: ['sh', '-c', 'sleep 5000']
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Create the Pod with the NFS volume:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f podvolumenfs.yaml
pod/podvolumehostpath created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
podvolumehostpath   1/1     Running   0          24s   10.244.171.182   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumehostpath -- sh
/ # ls /liruilong
liruilong.txt
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
```
## Persistent Volumes

A Volume is defined on a Pod and is therefore part of the "compute resources", whereas "network storage" is actually an entity that exists independently of compute resources. With virtual machines, for example, we usually define network storage first, carve a "virtual disk" out of it, and attach that disk to the VM.

PersistentVolume (PV) and its companion PersistentVolumeClaim (PVC) play a similar role. A PV can be understood as a piece of storage carved out of some network storage backing the Kubernetes cluster. It is quite similar to a Volume, with the differences listed below.

You can also understand this by analogy with physical and logical volumes: a PV is like a physical volume, and a PVC is like a logical volume carved out of it.

Differences between a PersistentVolume and a Volume:

- A PV can only be network storage; it does not belong to any Node, but it can be accessed from every Node.
- A PV is not defined on a Pod; it is defined independently of Pods.
- PV types currently include gcePersistentDisk, AWSElasticBlockStore, AzureFile, AzureDisk, FC (Fibre Channel), Flocker, NFS, iSCSI, RBD (Rados Block Device), CephFS, Cinder, GlusterFS, VsphereVolume, Quobyte Volumes, VMware Photon, PortworxVolumes, ScaleIO Volumes, and HostPath (for single-node testing only).
### Creating a PV

The accessModes attribute of a PV currently supports the following types:

- ReadWriteOnce: read-write, mountable by a single Node only.
- ReadOnlyMany: read-only, mountable by multiple Nodes.
- ReadWriteMany: read-write, mountable by multiple Nodes.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$echo "/tmp *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$exportfs -avr
exporting *:/tmp
exporting *:/liruilong
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -o wide
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
pv0003   5Gi        RWO            Recycle          Available                                   16s   Filesystem
```
A PV is a stateful object, with the following states:

- Available: idle, not yet bound.
- Bound: already bound to a PVC.
- Released: the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster.
- Failed: automatic reclamation of the PV failed.
### Creating a PVC

If a Pod wants to request a certain kind of PV, it first needs to define a PersistentVolumeClaim (PVC) object.

PVCs are namespaced: PVCs in different namespaces are isolated from each other. A PVC matches a PV through the constraints on accessModes and storage, without naming the PV explicitly: the accessModes must be the same, and the requested storage must be less than or equal to the PV's capacity.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
No resources found in liruilong-volume-create namespace.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes-pvc.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
mypvc01   Bound    pv0003   5Gi        RWO                           10s   Filesystem
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
```
### storageClassName

storageClassName controls which PVC can bind to which PV: storage and accessModes are only matched between a PVC and a PV whose storageClassName values are the same.
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml
```
pod_volunms-pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -A
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Available           slow                    8s
```
pod_volumes-pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: slow
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
NAMESPACE                 NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
liruilong-volume-create   mypvc01   Bound    pv0003   5Gi        RWO            slow           5s
```
### Using Persistent Storage

Use the PVC inside a Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: mypvc01
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumespvc.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -owide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
podvolumepvc   1/1     Running   0          15s   10.244.171.184   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumepvc -- sh
# ls
bin   boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  liruilong  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# ls /liruilong
runc-process838092734
systemd-private-66344110bb03430193d445f816f4f4c4-chronyd.service-SzL7id
systemd-private-6cf1f72056ed4482a65bf89ec2a130a9-chronyd.service-5m7c2i
systemd-private-b1dc4ffda1d74bb3bec5ab11e5832635-chronyd.service-cPC3Bv
systemd-private-bb19f3d6802e46ab8dcb5b88a38b41b8-chronyd.service-cjnt04
```
### PV Reclaim Policy

The reclaim policy is set with persistentVolumeReclaimPolicy, e.g. `persistentVolumeReclaimPolicy: Recycle`:

| Policy | Behavior |
| --- | --- |
| Recycle (data is deleted) | A recycler pod is spawned to scrub the data; after the PVC is deleted, the PV can be reused, and its status goes from Released back to Available. |
| Retain (data is not reclaimed) | After the PVC is deleted, the PV remains unusable and its status stays Released until it is reclaimed manually. |

Watching a Recycle-policy PV get reclaimed after its PVC is deleted:
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Bound    liruilong-volume-create/mypvc01   slow                    131m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pv pv0003
..................
Events:
  Type    Reason       Age   From                         Message
  ----    ------       ----  ----                         -------
  Normal  RecyclerPod  53s   persistentvolume-controller  Recycler pod: Successfully assigned default/recycler-for-pv0003 to vms82.liruilongs.github.io
  Normal  RecyclerPod  51s   persistentvolume-controller  Recycler pod: Pulling image "busybox:1.27"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Available           slow                    136m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
```
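For comparison, here is a minimal sketch of what the same NFS-backed PV would look like with the Retain policy (the name pv-retain is hypothetical; the export is the one configured above). After its PVC is deleted, such a PV stays Released until an administrator reclaims it manually:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain                 # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # data is kept; no recycler pod is spawned
  storageClassName: slow
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io
```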
## Dynamic Volume Provisioning: storageClass

With a storageClass, PV creation is handled dynamically: the administrator only needs to create the storageClass, and when a user creates a PVC, the PV is created and bound automatically. When a PVC is created, the storageClass is notified; it asks its associated provisioner for the backend storage type, then dynamically creates a PV and binds it to the PVC.

### How a storageClass Works

A storageClass definition must include a provisioner, which determines what backend storage is used when PVs are created dynamically.
A provisioner that uses AWS EBS as the PV backend:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
```
A provisioner that uses LVM as the PV backend:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-lvm
provisioner: lvmplugin.csi.alibabacloud.com
parameters:
  vgName: volumegroup1
  fsType: ext4
reclaimPolicy: Delete
```
Using hostPath as the PV backend:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
Of the provisioners used in the three examples above, some are built into Kubernetes, such as kubernetes.io/aws-ebs; the other two are not. The built-in provisioners are:
kubernetes.io/aws-ebs
kubernetes.io/gce-pd
kubernetes.io/glusterfs
kubernetes.io/cinder
kubernetes.io/vsphere-volume
kubernetes.io/rbd
kubernetes.io/quobyte
kubernetes.io/azure-disk
kubernetes.io/azure-file
kubernetes.io/portworx-volume
kubernetes.io/scaleio
kubernetes.io/storageos
kubernetes.io/no-provisioner
When creating PVs dynamically, choose a provisioner appropriate to the backend storage in use. Provisioners like lvmplugin.csi.alibabacloud.com and hostpath.csi.k8s.io are not shipped with Kubernetes; they are called external provisioners, are supplied by third parties, and are **provisioners implemented through a custom CSI driver (Container Storage Interface)**.
So the overall flow is: the administrator names the provisioner via the provisioner field when creating the storageClass; once the storageClass exists, users specify which storageClass to use in their PVC via .spec.storageClassName.
### Dynamic Provisioning Backed by NFS

Create a directory /vdisk and export it:

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo "/vdisk *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~]
└─$exportfs -avr
exporting *:/vdisk
exportfs: Failed to stat /vdisk: No such file or directory
exporting *:/tmp
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/]
└─$mkdir vdisks
```
Because Kubernetes has no built-in NFS provisioner, we need to download a plugin to create an NFS external provisioner.

Plugin download: https://github.com/kubernetes-incubator/external-storage.git
rbac.yaml deploys the RBAC permissions; change the namespace to match ours:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: liruilong-volume-create
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: liruilong-volume-create
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: liruilong-volume-create
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: liruilong-volume-create
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: liruilong-volume-create
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
Because the NFS provisioner is not built in, it has to be created first.

One apiserver setting is required: on Kubernetes 1.20 and later, add - --feature-gates=RemoveSelfLink=false to the kube-apiserver configuration:
```bash
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$pwd
/etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$head -n 20 kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.26.81:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.26.81
    - --feature-gates=RemoveSelfLink=false
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$
```
deployment.yaml
- Since we are working in the liruilong-volume-create namespace, change the namespace value to liruilong-volume-create.
- The image named after image needs to be pulled on every node in advance, and the image pull policy adjusted accordingly.
- In the env section, PROVISIONER_NAME specifies the provisioner's name, here fuseim.pri/ifs; NFS_SERVER and NFS_PATH specify the storage this provisioner uses.
- Under volumes, server and path point at the shared server and exported directory.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: liruilong-volume-create
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.26.81
            - name: NFS_PATH
              value: /vdisk
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.26.81
            path: /vdisk
```
Deploy the NFS provisioner and check that its Pod is running:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-cz6hh   1/1     Running   0          73s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$
```
With the NFS provisioner in place, create a storageClass that uses it:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  3s
```
class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```
Here the provisioner value fuseim.pri/ifs is the provisioner name specified in deployment.yaml. This YAML file creates a storageClass named managed-nfs-storage that uses the provisioner called fuseim.pri/ifs.
Next, create the PVC.
pvc_nfs.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  storageClassName: "managed-nfs-storage"
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f ./pvc_nfs.yaml
persistentvolumeclaim/pvc-nfs created
```
Check what was created:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-7k6gm   1/1     Running   0          35s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pvc-nfs   Bound    pvc-b12e988a-8b55-4d48-87cf-998500df16f8   20Mi       RWX            managed-nfs-storage   28s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS          REASON   AGE
pvc-b12e988a-8b55-4d48-87cf-998500df16f8   20Mi       RWX            Delete           Bound    liruilong-volume-create/pvc-nfs   managed-nfs-storage            126m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$
```
### Using the Claimed PVC
pod_storageclass.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: pvc-nfs
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_storageclass.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-7k6gm   1/1     Running   0          140m
podvolumepvc                              1/1     Running   0          7s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pods podvolumepvc | grep -A 4 Volumes:
Volumes:
  volumes1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs
    ReadOnly:   false
```
## Other Volume Types

### gcePersistentDisk

A Volume of this type stores its data on a Google Cloud persistent disk (PersistentDisk, PD). Unlike emptyDir, the contents of a PD are preserved permanently: when the Pod is deleted, the PD is only unmounted (Unmount), not deleted. Note that you must create the persistent disk (PD) before you can use gcePersistentDisk.
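A minimal sketch, assuming a PD named my-data-disk has already been created in the same GCE project and zone as the node (the Pod and disk names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: podvolumegce
  name: podvolumegce              # hypothetical Pod name
spec:
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk        # the PD must already exist in GCE
      fsType: ext4
  containers:
  - image: busybox
    name: podvolumegce1
    command: ['sh', '-c', 'sleep 5000']
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```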
### awsElasticBlockStore

Similar to the GCE case, a Volume of this type stores its data on an Amazon EBS Volume; the EBS Volume must be created before awsElasticBlockStore can be used.
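A minimal sketch, assuming an EBS volume already exists in the same region and availability zone as the node; the volumeID below is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: podvolumeebs
  name: podvolumeebs              # hypothetical Pod name
spec:
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: "vol-0123456789abcdef0"   # placeholder; the EBS volume must already exist
      fsType: ext4
  containers:
  - image: busybox
    name: podvolumeebs1
    command: ['sh', '-c', 'sleep 5000']
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```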