This article describes how to fix the Kubernetes error "Failed to get GCE GCECloudProvider with error <nil>"; it may be a useful reference if you run into the same problem.

Problem Description

I have set up a custom Kubernetes cluster on GCE using kubeadm. I am trying to use StatefulSets with persistent storage.

I have the following configuration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west3-b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myname
  labels:
    app: myapp
spec:
  serviceName: myservice
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: mycontainer
          image: ubuntu:16.04
          env:
          volumeMounts:
          - name: myapp-data
            mountPath: /srv/data
      imagePullSecrets:
      - name: sitesearch-secret
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gce-slow
      resources:
        requests:
          storage: 1Gi
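
For reference, the configuration above can be applied and the claim inspected as follows; the file name statefulset.yaml is only an assumed placeholder:

kubectl apply -f statefulset.yaml    # assumed file containing the StorageClass and StatefulSet above
kubectl get pvc                      # the claim stays Pending until provisioning succeeds
kubectl describe pvc myapp-data-myname-0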

Then I get the following error:

Nopx@vm0:~$ kubectl describe pvc
 Name:          myapp-data-myname-0
 Namespace:     default
 StorageClass:  gce-slow
 Status:        Pending
 Volume:
 Labels:        app=myapp
 Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
 Finalizers:    [kubernetes.io/pvc-protection]
 Capacity:
 Access Modes:
 Events:
   Type     Reason              Age   From                         Message
   ----     ------              ----  ----                         -------
   Warning  ProvisioningFailed  5s    persistentvolume-controller  Failed to provision volume
 with StorageClass "gce-slow": Failed to get GCE GCECloudProvider with error <nil>

I am treading in the dark and do not know what is missing. It seems logical that it doesn't work, since the provisioner never authenticates to GCE. Any hints and pointers are very much appreciated.

Edit

I tried the solution here, editing the kubeadm config file with kubeadm config upload from-file; however, the error persists. The kubeadm config now looks like this:

api:
  advertiseAddress: 10.156.0.2
  bindPort: 6443
  controlPlaneEndpoint: ""
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: gce
criSocket: /var/run/dockershim.sock
etcd:
  caFile: ""
  certFile: ""
  dataDir: /var/lib/etcd
  endpoints: null
  image: ""
  keyFile: ""
imageRepository: k8s.gcr.io
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
nodeName: mynode
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
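
For context, re-uploading the edited configuration looks roughly like this on kubeadm of that era; the file name master-config.yaml is an assumed placeholder:

kubeadm config upload from-file --config=master-config.yaml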

Edit

The issue was resolved in the comments thanks to Anton Kostenko. The last edit, coupled with kubeadm upgrade, solved the problem.
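
For completeness, a sketch of that upgrade step, reusing the version already pinned in the config above (kubeadm re-renders the control-plane manifests during the upgrade, which is presumably how the cloudProvider setting took effect):

kubeadm upgrade apply v1.10.2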

Recommended Answer

Create dynamic persistent volumes on Kubernetes nodes running on Google Cloud VMs.

GCP roles (a gcloud sketch follows this list):

  1. In the Google Cloud console, go to IAM & Admin.
  2. Add a new service account, e.g. gce-user.
  3. Add the role "Compute Instance Admin".
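
A rough gcloud equivalent of the console steps above; PROJECT_ID is a placeholder, and roles/compute.instanceAdmin.v1 is my assumption for the "Compute Instance Admin" role:

gcloud iam service-accounts create gce-user --display-name "gce-user"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:gce-user@PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/compute.instanceAdmin.v1"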

Add the role to the GCP VM (a gcloud sketch follows this list):

  1. Stop the instance, then click "Edit".
  2. Click "Service account" and select the new account, e.g. gce-user.
  3. Start the VM.
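
The same steps via gcloud might look like this; the instance name, zone, and project are placeholders:

gcloud compute instances stop myinstance --zone europe-west3-b
gcloud compute instances set-service-account myinstance --zone europe-west3-b \
  --service-account gce-user@PROJECT_ID.iam.gserviceaccount.com
gcloud compute instances start myinstance --zone europe-west3-b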

Add the GCE parameter to the kubelet on all nodes:

  • Add "--cloud-provider=gce":
  • sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the value there; a sketch of the resulting drop-in follows.
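
A minimal sketch of the drop-in, assuming the kubeadm-era convention of passing extra kubelet flags through an Environment line (the exact variable name differs across versions):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=gce"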

  • Create a new file /etc/kubernetes/cloud-config on all nodes and add this parameter:

[Global]
project-id = "xxxxxxxxxxxx"

  • Restart the kubelet.
  • Add gce to the controller-manager as well:
  • vi /etc/kubernetes/manifests/kube-controller-manager.yaml and add this parameter under command: (a sketch follows this list)
  • --cloud-provider=gce
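
A minimal excerpt of what the static pod manifest might look like after the edit; surrounding fields are omitted, and the --cloud-config line is my assumption for wiring in the file created above:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --cloud-provider=gce
    - --cloud-config=/etc/kubernetes/cloud-config  # assumption: point the provider at the config file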

Then restart the control plane.

Run ps -ef | grep controller; you should then see "gce" in the controller-manager output.

Note: the method above is not recommended for production systems; use kubeadm config to update the controller-manager settings instead.

This concludes the article on the Kubernetes error "Failed to get GCE GCECloudProvider with error <nil>"; hopefully the answer above helps.
