I have a Kubernetes cluster and a simple Deployment of MongoDB backed by an NFS persistent volume. It works fine, but since resources such as databases are stateful, I thought of using a StatefulSet for MongoDB. The problem is that, going through the documentation, a StatefulSet has volumeClaimTemplates instead of the volumes field used in a Deployment.
So here is the question.
In a Deployment the chain is: PersistentVolume -> PersistentVolumeClaim -> Deployment. How do we do the same with a StatefulSet?
Is it something like: volumeClaimTemplates -> StatefulSet? How do I set a PersistentVolume for the volumeClaimTemplates? And if we do not give the StatefulSet a PersistentVolume, how does it create the volumes, and where are they created: on the host machine (i.e. the Kubernetes worker node)?
Since I already have a separate MongoDB setup on NFS for the Deployment (replicas = 1), how can I reuse the same setup for the StatefulSet?
Here is my mongo-deployment.yaml, which I converted to a StatefulSet as shown in the second snippet (mongo-stateful.yaml).

  • mongo-deployment.yaml
  • <omitted>
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
      labels:
        name: mynfs # name can be anything
    spec:
      storageClassName: manual # same storage class as pvc
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: <nfs-server-ip>
        path: "/srv/nfs/mydata"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteMany #  must be the same as PersistentVolume
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb-deployment
      labels:
        name: mongodb
    spec:
      selector:
        matchLabels:
          app: mongodb
      replicas: 1
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
          - name: mongodb
            image: mongo
            ports:
            -  containerPort: 27017
            ... # omitted some parts for easy reading
            volumeMounts:
            - name: data
              mountPath: /data/db
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: task-pv-claim
    
  • mongo-stateful.yaml
  • ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
      labels:
        name: mynfs # name can be anything
    spec:
      storageClassName: manual # same storage class as pvc
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: <nfs-server-ip>
        path: "/srv/nfs/mydata"
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mongodb-statefulset
    spec:
      selector:
        matchLabels:
          name: mongodb-statefulset
      serviceName: mongodb-statefulset
      replicas: 2
      template:
        metadata:
          labels:
            name: mongodb-statefulset
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: mongodb
            image: mongo:3.6.4
            ports:
            - containerPort: 27017
            volumeMounts:
            - name: db-data
              mountPath: /data/db
      volumeClaimTemplates:
      - metadata:
          name: db-data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "manual"
          resources:
            requests:
              storage: 2Gi
    
    
    But this (mongo-stateful.yaml) does not work: the Pod stays in the Pending state, as I described above.

    PS: the Deployment works fine without any errors; the problem is only with the StatefulSet.
    Can someone help me with how to write a StatefulSet with volumes?

    Best Answer

    If your storage class does not support dynamic volume provisioning, you have to create the PVs and the associated PVCs manually from yaml files; the volumeClaimTemplates will then link the existing PVCs to the StatefulSet's Pods.
    Here is a working example: https://github.com/k8s-school/k8s-school/blob/master/examples/MONGODB-install.sh
    You should:

  • run it locally on kind (https://kind.sigs.k8s.io/), which supports dynamic volume provisioning, so the PVCs and PVs will be created automatically there
  • export the PV and PVC yaml files
  • use these yaml files as templates to create the PVs and PVCs for your NFS backend (see the sketch at the end of this answer).

  • Here is what you will get on kind:
    $ ./MONGODB-install.sh
    + kubectl apply -f 13-12-mongo-configmap.yaml
    configmap/mongo-init created
    + kubectl apply -f 13-11-mongo-service.yaml
    service/mongo created
    + kubectl apply -f 13-14-mongo-pvc.yaml
    statefulset.apps/mongo created
    $ kubectl get pods
    NAME      READY   STATUS    RESTARTS   AGE
    mongo-0   2/2     Running   0          8m38s
    mongo-1   2/2     Running   0          5m58s
    mongo-2   2/2     Running   0          5m45s
    $ kubectl get pvc
    NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    database-mongo-0   Bound    pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            standard       8m42s
    database-mongo-1   Bound    pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            standard       6m2s
    database-mongo-2   Bound    pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            standard       5m49s
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
    pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            Delete           Bound    default/database-mongo-0   standard                8m40s
    pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            Delete           Bound    default/database-mongo-2   standard                5m47s
    pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            Delete           Bound    default/database-mongo-1   standard                6m1s
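
    As the output above shows, the claims created from a volumeClaimTemplates entry are named <template-name>-<statefulset-name>-<ordinal> (database-mongo-0 here), so the manifests in the question would produce db-data-mongodb-statefulset-0 and db-data-mongodb-statefulset-1. A minimal sketch of how to check whether those claims found a matching PV, assuming the names from mongo-stateful.yaml:

    # list the claims generated by the StatefulSet controller
    $ kubectl get pvc
    # a claim stays Pending when no PV matches its storageClassName,
    # access mode and requested capacity; the claim's events explain why
    $ kubectl describe pvc db-data-mongodb-statefulset-0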
    
    And a dump of one of the PVCs (generated here by the volumeClaimTemplates, because the storage class supports dynamic volume provisioning):
    $ kubectl get pvc database-mongo-0 -o yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: "yes"
        pv.kubernetes.io/bound-by-controller: "yes"
        volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
        volume.kubernetes.io/selected-node: kind-worker2
      creationTimestamp: "2020-10-16T15:05:20Z"
      finalizers:
      - kubernetes.io/pvc-protection
      labels:
        app: mongo
      managedFields:
        ...
      name: database-mongo-0
      namespace: default
      resourceVersion: "2259"
      selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-0
      uid: 05247511-096e-4af5-8944-17e0d8222512
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: standard
      volumeMode: Filesystem
      volumeName: pvc-05247511-096e-4af5-8944-17e0d8222512
    status:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 1Gi
      phase: Bound
    
    And the related PV:
    kubectl get pv pvc-05247511-096e-4af5-8944-17e0d8222512 -o yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        pv.kubernetes.io/provisioned-by: rancher.io/local-path
      creationTimestamp: "2020-10-16T15:05:23Z"
      finalizers:
      - kubernetes.io/pv-protection
      managedFields:
        ...
      name: pvc-05247511-096e-4af5-8944-17e0d8222512
      resourceVersion: "2256"
      selfLink: /api/v1/persistentvolumes/pvc-05247511-096e-4af5-8944-17e0d8222512
      uid: 3d1e894e-0924-411a-8378-338e48ba4a28
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 1Gi
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        name: database-mongo-0
        namespace: default
        resourceVersion: "2238"
        uid: 05247511-096e-4af5-8944-17e0d8222512
      hostPath:
        path: /var/local-path-provisioner/pvc-05247511-096e-4af5-8944-17e0d8222512_default_database-mongo-0
        type: DirectoryOrCreate
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - kind-worker2
      persistentVolumeReclaimPolicy: Delete
      storageClassName: standard
      volumeMode: Filesystem
    status:
      phase: Bound
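
    To reproduce this with the NFS setup from the question, the dumped PV can serve as a template: keep the same shape, but replace the dynamically provisioned hostPath source with an nfs source and use the manual storage class. Below is a minimal sketch, assuming the names from mongo-stateful.yaml (StatefulSet mongodb-statefulset, volumeClaimTemplates name db-data, storageClassName manual, 2 replicas); the per-replica sub-directories under /srv/nfs/mydata are an assumption and must already exist on the NFS server:

    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongo-nfs-pv-0                # one pre-created PV per replica
    spec:
      storageClassName: manual            # must match the volumeClaimTemplates
      capacity:
        storage: 2Gi                      # at least the 2Gi requested in the template
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: <nfs-server-ip>
        path: "/srv/nfs/mydata/mongo-0"   # assumed per-replica directory
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongo-nfs-pv-1
    spec:
      storageClassName: manual
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: <nfs-server-ip>
        path: "/srv/nfs/mydata/mongo-1"

    Once such PVs exist, the claims created by the volumeClaimTemplates (db-data-mongodb-statefulset-0 and db-data-mongodb-statefulset-1) can each bind to one of them and the Pods should leave the Pending state.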
    

    Regarding "mongodb - Kubernetes StatefulSet with NFS persistent volumes", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64386094/
