I am setting up a single-node Kubernetes lab and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from this link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Trying the first section (the NFS server part), I executed three commands:

```
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
```

I then hit a problem, where I see the following event:

```
PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
```

Research done:

https://github.com/kubernetes/kubernetes/issues/43120
https://github.com/kubernetes/examples/pull/30

Neither of the links above helped me resolve the issue I experience. I have made sure it is using image 0.8:

```
Image: gcr.io/google_containers/volume-nfs:0.8
```

Does anyone know what this message means? Clues and guidance on how to troubleshoot this issue are very much appreciated. Thank you.

```
$ docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:49 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
lab-kube-06   Ready     master    2m        v1.8.3

$ kubectl describe nodes lab-kube-06
Name:               lab-kube-06
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=lab-kube-06
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  OutOfDisk       False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready           True    Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.6
  Hostname:    lab-kube-06
Capacity:
 cpu:     2
 memory:  8159076Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  8056676Ki
 pods:    110
System Info:
 Machine ID:                 e198b57826ab4704a6526baea5fa1d06
 System UUID:                05EF54CC-E8C8-874B-A708-BBC7BC140FF2
 Boot ID:                    3d64ad16-5603-42e9-bd34-84f6069ded5f
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   Red Hat Enterprise Linux Server 7.4 (Maipo)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://Unknown
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
ExternalID:                  lab-kube-06
Non-terminated Pods:         (7 in total)
  Namespace    Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                 ------------  ----------  ---------------  -------------
  kube-system  etcd-lab-kube-06                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-apiserver-lab-kube-06           250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-controller-manager-lab-kube-06  200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-dns-545bc4bfd4-gmdvn            260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
```
```
  kube-system  kube-proxy-68w8k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-scheduler-lab-kube-06           100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  weave-net-7zlbg                      20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  830m (41%)    0 (0%)      110Mi (1%)       170Mi (2%)
Events:
  Type    Reason                   Age                From                     Message
  ----    ------                   ----               ----                     -------
  Normal  Starting                 39m                kubelet, lab-kube-06     Starting kubelet.
  Normal  NodeAllocatableEnforced  39m                kubelet, lab-kube-06     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m (x7 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasNoDiskPressure
  Normal  Starting                 38m                kube-proxy, lab-kube-06  Starting kube-proxy.

$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-provisioning-demo   Pending                                                      14s

$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT   NAME                                        KIND                    SUBOBJECT   TYPE      REASON                    SOURCE                        MESSAGE
18m         18m          1       lab-kube-06.14f79f093119829a                Node                                Normal    Starting                  kubelet, lab-kube-06          Starting kubelet.
18m         18m          8       lab-kube-06.14f79f0931d0eb6e                Node                                Normal    NodeHasSufficientDisk     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientDisk
18m         18m          8       lab-kube-06.14f79f0931d1253e                Node                                Normal    NodeHasSufficientMemory   kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientMemory
18m         18m          7       lab-kube-06.14f79f0931d131be                Node                                Normal    NodeHasNoDiskPressure     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m         18m          1       lab-kube-06.14f79f0932f3f1b0                Node                                Normal    NodeAllocatableEnforced   kubelet, lab-kube-06          Updated Node Allocatable limit across pods
18m         18m          1       lab-kube-06.14f79f122a32282d                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m         17m          1       lab-kube-06.14f79f1cdfc4c3b1                Node                                Normal    Starting                  kube-proxy, lab-kube-06       Starting kube-proxy.
17m         17m          1       lab-kube-06.14f79f1d94ef1c17                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m         14m          1       lab-kube-06.14f79f4b91cf73b3                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s         11m          42      nfs-pv-provisioning-demo.14f79f766cf887f2   PersistentVolumeClaim               Normal    FailedBinding             persistentvolume-controller   no persistent volumes available for this claim and no storage class is set
14s         4m           20      nfs-server-kq44h.14f79fd21b9db5f9           Pod                                 Warning   FailedScheduling          default-scheduler             PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m          4m           1       nfs-server.14f79fd21b946027                 ReplicationController               Normal    SuccessfulCreate          replication-controller        Created pod: nfs-server-kq44h

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          16s

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          26s

$ kubectl get rc
NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         0         40s

$ kubectl describe pods nfs-server-kq44h
Name:           nfs-server-kq44h
Namespace:      default
Node:           <none>
Labels:         role=nfs-server
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status:         Pending
IP:
Created By:     ReplicationController/nfs-server
Controlled By:  ReplicationController/nfs-server
Containers:
  nfs-server:
    Image:        gcr.io/google_containers/volume-nfs:0.8
    Ports:        2049/TCP, 20048/TCP, 111/TCP
    Environment:  <none>
    Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  mypvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pv-provisioning-demo
    ReadOnly:   false
  default-token-plgv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-plgv5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  39s (x22 over 5m)  default-scheduler  PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
```

Solution

Each Persistent Volume Claim (PVC) needs a Persistent Volume (PV) that it can bind to. In your example, you have only created a PVC, but not the volume itself.

A PV can either be created manually, or automatically by using a volume class with a provisioner. Have a look at the docs on static and dynamic provisioning for more information.

In your example, you are creating a storage class provisioner (defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that appears to be tailored for use within the Google cloud, so it will probably not be able to actually create PVs in your lab setup.

You can create a persistent volume manually on your own. After creating the PV, the PVC should automatically bind itself to the volume and your pods should start. Below is an example of a persistent volume that uses the node's local file system as a volume (which is probably OK for a one-node test setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: someVolume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host
```

For a production setup, you'll probably want to choose a volume type other than hostPath, although the volume types available to you will greatly differ depending on the environment that you're in (cloud or self-hosted/bare-metal).
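For the claim to bind, its requested access modes must be offered by the PV and its requested capacity must not exceed what the PV provides. As a rough sketch (the actual claim manifest shipped with the example repo may differ), a claim that the persistentvolume-controller could bind to a hostPath PV like the one above might look like:

```yaml
# Hypothetical claim matching the PV sketched above: it requests
# ReadWriteOnce access and no more storage than the PV offers (200Gi),
# so the persistentvolume-controller can pair the two.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```

Once both objects exist, `kubectl get pv,pvc` should show the claim's STATUS move from Pending to Bound, after which the scheduler can place the nfs-server pod.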
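For completeness, the dynamic-provisioning route mentioned in the answer relies on a StorageClass whose provisioner can actually create volumes in your environment. As an illustration only (the `kubernetes.io/gce-pd` provisioner works on Google Cloud, not on a bare-metal RHEL node like this lab), such a class might look like:

```yaml
# Illustrative StorageClass: the gce-pd provisioner can only create
# disks when the cluster runs on Google Cloud, which is why the GCE
# example manifest cannot provision anything on a bare-metal lab node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

A PVC referencing this class (via `storageClassName: standard`) would trigger automatic PV creation on GCE; on other platforms you need a provisioner that suits your infrastructure, or a manually created PV as shown in the answer.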