I have a deployment that includes a configMap, a persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pods in the deployment:

configmap change doesn't reflect automatically on respective pods

Updated configMap.yaml but it's not being applied to Kubernetes pods

I know I could run kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml, but that destroys the persistent volume, which holds data I want to survive the restart. How do I restart the pods in a way that keeps the existing volume?

wiki.yaml looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
      - name: wiki-config
        image: dobbs/farm:restrict-new-wiki
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          allowPrivilegeEscalation: false
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
        command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
      - name: farm
        image: dobbs/farm:restrict-new-wiki
        command: [
          "wiki", "--config", "/etc/config/config.json",
          "--admin", "bad password but memorable",
          "--cookieSecret", "any-random-string-will-do-the-trick"]
        ports:
        - containerPort: 3000
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
          - name: config-templates
            mountPath: /etc/config
      volumes:
      - name: dot-wiki
        persistentVolumeClaim:
          claimName: dot-wiki
      - name: config-templates
        configMap:
          name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 80
  selector:
    app: wiki

Best Answer

Aside from kubectl rollout restart deployment wiki-deployment, there are a couple of alternative ways to do this:

1. Restart the Pods

kubectl delete pods -l app=wiki

This causes the Deployment's Pods to be recreated, and the new Pods will read the updated ConfigMap.
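In practice, the delete-and-wait cycle might look like this (wiki-deployment's app=wiki label comes from the manifest above; this assumes kubectl is pointed at a reachable cluster):

```shell
# Delete the Pods matching the Deployment's selector; the ReplicaSet
# immediately creates replacements, which mount the updated ConfigMap.
kubectl delete pods -l app=wiki

# Block until the replacement Pod is Ready again.
kubectl wait --for=condition=Ready pod -l app=wiki --timeout=120s
```

Note that with replicas: 1, as in wiki.yaml, the service is briefly unavailable between the deletion and the replacement Pod becoming Ready.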

2. Version your ConfigMaps

Instead of naming your ConfigMap wiki-config, name it wiki-config-v1. Then, when you update your configuration, just create a new ConfigMap named wiki-config-v2.
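The new ConfigMap lives alongside the old one; only the name and whatever data you changed differ. For example (the flipped secure_cookie value here is just a hypothetical change for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config-v2
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": true,
      "allowed": "*"
    }
```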

Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:
apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
      - name: config-templates
        configMap:
          name: wiki-config-v2

Then, re-apply the Deployment:

kubectl apply -f wiki.yaml

Since the Pod template in the Deployment manifest has changed, re-applying the Deployment will recreate all the Pods. The new Pods will use the new version of the ConfigMap.

An additional advantage of this approach is that if you keep the old ConfigMap (wiki-config-v1) around instead of deleting it, you can revert to the previous configuration at any time by editing the Deployment manifest again.
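Choosing the -v1/-v2 suffix by hand works, but a common refinement (automated by Kustomize's configMapGenerator, for example) is to derive the suffix from a hash of the config content, so the name changes exactly when the data does. A minimal sketch of the idea, using the config.json content from wiki.yaml:

```shell
# Write the config content to a file (here, the same data as in wiki-config).
cat > config.json <<'EOF'
{
  "farm": true,
  "security_type": "friends",
  "secure_cookie": false,
  "allowed": "*"
}
EOF

# Use the first 8 hex characters of the content hash as the version suffix;
# editing config.json changes the suffix, and with it the ConfigMap name.
suffix=$(sha256sum config.json | cut -c1-8)
echo "wiki-config-${suffix}"
```

You would then substitute the generated name into both the ConfigMap metadata and the Deployment's volume reference before applying.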

This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).

Based on the Stack Overflow question "kubernetes - Restart a Kubernetes deployment after changing configMap": https://stackoverflow.com/questions/59113591/
