[7] K8S Storage Volumes

Introduction to Storage Volumes

In Kubernetes (K8S), a storage volume is a persistence mechanism for keeping data used by a Pod; it provides an abstraction layer that decouples the underlying storage technology (local disks, network-attached storage, and so on) from the application. The main volume types, their uses, and related concepts are:

  1. EmptyDir Volume: EmptyDir is a simple ephemeral storage solution that uses local disk space on the node hosting the Pod. An EmptyDir volume exists for the lifetime of the Pod and is deleted when the Pod is removed.
  2. Persistent Volume: a persistent volume is a storage solution that retains data and can be shared and reused across Pods. It consists of the physical storage plus the corresponding Kubernetes resource objects, namely the Persistent Volume (PV) and the Persistent Volume Claim (PVC).
    • A Persistent Volume Claim (PVC) is a declarative resource object that describes a request for persistent storage. It specifies attributes such as capacity, storage class, and access modes, and asks the system to bind a matching PV.
    • A Persistent Volume (PV) is the actual storage resource object in Kubernetes that provides persistent capacity. It is backed by an underlying physical storage system (such as NFS, Ceph, or AWS EBS) and uses mechanisms such as "binding" and "releasing" to manage the relationship between PVs and PVCs.
  3. ConfigMap Volume: a ConfigMap is a Kubernetes resource object that holds configuration for applications and the cluster; a ConfigMap Volume mounts that information into a Pod, letting applications obtain and use configuration in a more flexible and reliable way.
  4. Secret Volume: a Secret is a Kubernetes resource object for storing confidential data (for example, passwords and certificates). A Secret Volume is very similar to a ConfigMap Volume, but it holds sensitive data that should only be accessible to the applications inside the Pod.
  5. Storage Class: a StorageClass is an abstraction layer that defines the available storage media and the corresponding storage policies. It lets administrators offer different storage tiers to applications and teams, and automatically selects a suitable storage medium to satisfy each request.

By combining these volume types, Kubernetes can persist, share, and protect data, improving the reliability, portability, and scalability of applications. Depending on business needs and resource budget, operators can choose among the volume types and storage classes, and create, manage, and delete them through the Kubernetes API and CLI tools.
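As a minimal sketch of the simplest type above, the following Pod manifest (hypothetical names, for illustration only) mounts an emptyDir volume at /cache:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache    # where the scratch space appears in the container
  volumes:
  - name: cache
    emptyDir: {}           # node-local scratch space, deleted with the Pod
```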

NFS Persistent Storage

Install NFS on all nodes

yum install -y nfs-utils

 

Then on the master node:

# NFS server (master node)

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data

systemctl enable rpcbind --now

systemctl enable nfs-server --now

# make the export configuration take effect

exportfs -r

 

 

[root@master ~]# echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
[root@master ~]# mkdir -p /nfs/data
[root@master ~]# systemctl enable rpcbind --now
[root@master ~]# systemctl enable nfs-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@master ~]# exportfs -r 
[root@master ~]# 
[root@master ~]# exportfs 
/nfs/data         <world>
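For reference, the export options used in the line above mean the following (annotated copy of /etc/exports):

```
/nfs/data/ *(insecure,rw,sync,no_root_squash)
# *              - allow any client host to mount this export
# insecure       - accept requests from source ports above 1024
# rw             - grant read-write access
# sync           - commit writes to disk before replying to the client
# no_root_squash - do not map the client's root user to an anonymous user
```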

On the worker nodes

# check which exports the remote server offers

showmount -e masterIP

# first create a local /nfs/data

# then mount the remote server's (master's) /nfs/data onto the local /nfs/data

mkdir -p /nfs/data

mount -t nfs masterIP:/nfs/data /nfs/data

# write a test file

echo "hello nfs server" > /nfs/data/test.txt

 

 

[root@node02 ~]# showmount -e 172.31.0.4
Export list for 172.31.0.4:
/nfs/data *
[root@node02 ~]# mkdir -p /nfs/data
[root@node02 ~]# mount -t nfs 172.31.0.4:/nfs/data /nfs/data
[root@node02 ~]# ls /nfs/data/

On the master node:

[root@master data]# echo AAAAAAAA > 1111111
[root@master data]# ls
1111111

On the worker node:

[root@node02 ~]# ls /nfs/data/
1111111
[root@node02 ~]# echo 222222 > /nfs/data/222222
[root@node02 ~]# ls /nfs/data/
1111111  222222

On the master node:

[root@master data]# ls
1111111  222222

Mounting Data

Write a YAML file:

apiVersion: apps/v1
kind: Deployment
metadata: 
  labels: 
    app: nginx-pv
  name: nginx-pv
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv
  template: 
    metadata: 
      labels: 
        app: nginx-pv
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts: 
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs: 
            server: 172.31.0.4
            path: /nfs/data/nginx-pv

Apply the YAML:

[root@master ~]# vi nfs.ymal
[root@master ~]# 
[root@master ~]# 
[root@master ~]# kubectl apply -f nfs.ymal   
deployment.apps/nginx-pv created

# /nfs/data/nginx-pv was missing; create it and re-apply
[root@master ~]# mkdir /nfs/data/nginx-pv
[root@master ~]# kubectl delete -f nfs.ymal 
deployment.apps "nginx-pv" deleted
[root@master ~]# kubectl apply -f nfs.ymal 
deployment.apps/nginx-pv created

Test it

On the master:

[root@master ~]# cd /nfs/data/nginx-pv/
[root@master nginx-pv]# ls

[root@master nginx-pv]# echo AAAAAAAAAAA > index.html
[root@master nginx-pv]# ls
index.html
[root@master nginx-pv]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE

nginx-pv-5f884c45b8-rk85j       1/1     Running   0          3m59s
nginx-pv-5f884c45b8-s5gxz       1/1     Running   0          3m59s




[root@master nginx-pv]# 
[root@master nginx-pv]# kubectl exec -it nginx-pv-5f884c45b8-rk85j -- /bin/bash
root@nginx-pv-5f884c45b8-rk85j:/# 
root@nginx-pv-5f884c45b8-rk85j:/# cd /usr/share/nginx

root@nginx-pv-5f884c45b8-rk85j:/usr/share/nginx# cd html/
root@nginx-pv-5f884c45b8-rk85j:/usr/share/nginx/html# cat index.html 
AAAAAAAAAAA

PV&PVC

PV: Persistent Volume — stores the data an application needs to persist at a specified location.

PVC: Persistent Volume Claim — declares the specification of the persistent volume an application needs to use.

Create the PV pool

# on the NFS master node

[root@master nginx-pv]# mkdir -p /nfs/data/01
[root@master nginx-pv]# mkdir -p /nfs/data/02
[root@master nginx-pv]# mkdir -p /nfs/data/03

Create PVs

Write a YAML file:

apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes: 
    - ReadWriteMany
  storageClassName: nfs
  nfs: 
    path: /nfs/data/01
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes: 
    - ReadWriteMany
  storageClassName: nfs
  nfs: 
    path: /nfs/data/02
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes: 
    - ReadWriteMany
  storageClassName: nfs
  nfs: 
    path: /nfs/data/03
    server: 172.31.0.4
[root@master nginx-pv]# vi pv.yaml

[root@master nginx-pv]# kubectl apply -f pv.yaml 
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created
[root@master nginx-pv]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     36s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     36s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     36s

Create a PVC

Using YAML:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes: 
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
[root@master ~]# vi pvc.yaml

[root@master ~]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/nginx-pvc created
[root@master ~]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            42s
[root@master ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     13m
pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     13m
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     13m
[root@master ~]# 
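Note how binding worked above: the 200Mi claim skipped pv01-10m (too small) and bound pv02-1gi, the smallest Available PV whose storageClassName, access mode, and capacity satisfy the request. As a sketch, a second claim asking for 2Gi (hypothetical name, for illustration) could now only bind pv03-3gi:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: big-pvc            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi         # larger than pv02-1gi, so only pv03-3gi qualifies
  storageClassName: nfs
```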

Create a Pod bound to the PVC

apiVersion: apps/v1
kind: Deployment
metadata: 
  labels: 
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec: 
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template: 
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts: 
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc
[root@master ~]# vi pod_pvc.yaml
[root@master ~]# 
[root@master ~]# 
[root@master ~]# kubectl apply -f pod_pvc.yaml 
deployment.apps/nginx-deploy-pvc created
[root@master ~]# kubectl get pvc,pv
NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            11m

NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv01-10m   10M        RWX            Retain           Available                       nfs                     23m
persistentvolume/pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     23m
persistentvolume/pv03-3gi   3Gi        RWX            Retain           Available                       nfs    

ConfigMap

Create the configuration; the redis.conf content is saved as a ConfigMap in Kubernetes's etcd:

[root@master ~]# echo appendonly yes > redis.conf
[root@master ~]# kubectl create cm redis.conf --from-file=redis.conf
configmap/redis.conf created
[root@master ~]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      5d17h
redis.conf         1      11s


[root@master ~]# kubectl get cm redis.conf -oyaml
apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-11T08:52:39Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:redis.conf: {}
    manager: kubectl-create
    operation: Update
    time: "2023-01-11T08:52:39Z"
  name: redis.conf
  namespace: default
  resourceVersion: "78979"
  uid: 9ce6bc39-16e4-4c88-9380-3440ac787099
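Instead of `kubectl create cm`, the same ConfigMap can be declared in YAML and applied with `kubectl apply -f` — a sketch equivalent to the object shown above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis.conf
data:
  redis.conf: |
    appendonly yes
```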

Create the Pod

apiVersion: v1
kind: Pod
metadata: 
  name: redis
spec: 
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"
    ports:
    - containerPort: 6379
    volumeMounts: 
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap: 
        name: redis.conf
        items:
        - key: redis.conf
          path: redis.conf
[root@master ~]# vi pod1.yaml

[root@master ~]# kubectl apply -f pod1.yaml 
pod/redis created
[root@master ~]# 

Secret

kubectl create secret docker-registry regcred \
    --docker-server=<registry server> \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email address>
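Equivalently, the same Secret can be written declaratively. The `.dockerconfigjson` value is the base64-encoded content of a Docker config file; a placeholder is shown below, not a real credential:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>  # placeholder
```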

After an administrator runs the command above, applying the following YAML pulls the image from the private registry and starts the Pod:

apiVersion: v1
kind: Pod
metadata: 
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: qrxqrx/nginx:v1.0
  imagePullSecrets:
  - name: regcred
kubectl apply -f mypod.yaml