I am setting up a single-node Kubernetes lab and learning how to set up NFS on Kubernetes.
I am following the Kubernetes NFS example step by step from this link:
https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Working on the first part, the NFS server section, I ran these three commands:

$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml

I ran into a problem and see the following event:
PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Research done so far:

https://github.com/kubernetes/kubernetes/issues/43120

https://github.com/kubernetes/examples/pull/30

None of the links above helped me solve the problem I am running into.
I made sure it is using image 0.8:
Image:        gcr.io/google_containers/volume-nfs:0.8

Does anyone know what this message means?
Any clues or guidance on how to resolve this issue would be much appreciated.
Thanks.
$ docker version

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:49 2017
 OS/Arch:      linux/amd64
 Experimental: false


$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
lab-kube-06   Ready     master    2m        v1.8.3


$ kubectl describe nodes lab-kube-06
Name:               lab-kube-06
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=lab-kube-06
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.6
  Hostname:    lab-kube-06
Capacity:
 cpu:     2
 memory:  8159076Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  8056676Ki
 pods:    110
System Info:
 Machine ID:                 e198b57826ab4704a6526baea5fa1d06
 System UUID:                05EF54CC-E8C8-874B-A708-BBC7BC140FF2
 Boot ID:                    3d64ad16-5603-42e9-bd34-84f6069ded5f
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   Red Hat Enterprise Linux Server 7.4 (Maipo)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://Unknown
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
ExternalID:                  lab-kube-06
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                   ------------  ----------  ---------------  -------------
  kube-system                etcd-lab-kube-06                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-lab-kube-06             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-lab-kube-06    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-gmdvn              260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
  kube-system                kube-proxy-68w8k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-lab-kube-06             100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-7zlbg                        20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  830m (41%)    0 (0%)      110Mi (1%)       170Mi (2%)
Events:
  Type    Reason                   Age                From                     Message
  ----    ------                   ----               ----                     -------
  Normal  Starting                 39m                kubelet, lab-kube-06     Starting kubelet.
  Normal  NodeAllocatableEnforced  39m                kubelet, lab-kube-06     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m (x7 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasNoDiskPressure
  Normal  Starting                 38m                kube-proxy, lab-kube-06  Starting kube-proxy.



$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-provisioning-demo   Pending                                                      14s


$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT     NAME                                        KIND                    SUBOBJECT   TYPE      REASON                    SOURCE                        MESSAGE
18m         18m          1         lab-kube-06.14f79f093119829a                Node                                Normal    Starting                  kubelet, lab-kube-06          Starting kubelet.
18m         18m          8         lab-kube-06.14f79f0931d0eb6e                Node                                Normal    NodeHasSufficientDisk     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientDisk
18m         18m          8         lab-kube-06.14f79f0931d1253e                Node                                Normal    NodeHasSufficientMemory   kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientMemory
18m         18m          7         lab-kube-06.14f79f0931d131be                Node                                Normal    NodeHasNoDiskPressure     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m         18m          1         lab-kube-06.14f79f0932f3f1b0                Node                                Normal    NodeAllocatableEnforced   kubelet, lab-kube-06          Updated Node Allocatable limit across pods
18m         18m          1         lab-kube-06.14f79f122a32282d                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m         17m          1         lab-kube-06.14f79f1cdfc4c3b1                Node                                Normal    Starting                  kube-proxy, lab-kube-06       Starting kube-proxy.
17m         17m          1         lab-kube-06.14f79f1d94ef1c17                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m         14m          1         lab-kube-06.14f79f4b91cf73b3                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s         11m          42        nfs-pv-provisioning-demo.14f79f766cf887f2   PersistentVolumeClaim               Normal    FailedBinding             persistentvolume-controller   no persistent volumes available for this claim and no storage class is set
14s         4m           20        nfs-server-kq44h.14f79fd21b9db5f9           Pod                                 Warning   FailedScheduling          default-scheduler             PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m          4m           1         nfs-server.14f79fd21b946027                 ReplicationController               Normal    SuccessfulCreate          replication-controller        Created pod: nfs-server-kq44h

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          16s


$ kubectl get pods

NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          26s


$ kubectl get rc

NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         0         40s


$ kubectl describe pods nfs-server-kq44h

Name:           nfs-server-kq44h
Namespace:      default
Node:           <none>
Labels:         role=nfs-server
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status:         Pending
IP:
Created By:     ReplicationController/nfs-server
Controlled By:  ReplicationController/nfs-server
Containers:
  nfs-server:
    Image:        gcr.io/google_containers/volume-nfs:0.8
    Ports:        2049/TCP, 20048/TCP, 111/TCP
    Environment:  <none>
    Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mypvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pv-provisioning-demo
    ReadOnly:   false
  default-token-plgv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-plgv5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  39s (x22 over 5m)  default-scheduler  PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Best Answer

Every PersistentVolumeClaim (PVC) needs a PersistentVolume (PV) that it can bind to. In your case, you have only created the claim, but not the volume itself.
A PV can either be created manually, or automatically by using a StorageClass together with a provisioner. Take a look at the docs of static and dynamic provisioning for more information.

In your example, you are relying on a provisioner (via the claim defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that appears to be tailored for use within Google Cloud, so it will most likely not be able to actually create a PV in your lab setup.
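For context, that provisioner file defines only a claim, roughly like the following (paraphrased from the example repository; the exact fields may differ slightly). It requests 200Gi of storage but brings no volume of its own, so without a working dynamic provisioner nothing can ever satisfy it:

```yaml
# Paraphrased from nfs-server-gce-pv.yaml in the examples repo:
# a PersistentVolumeClaim only -- it requests storage but supplies no volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```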
You can create a PersistentVolume manually yourself. Once the PV exists, the PVC should automatically bind itself to it and your pod should start. Below is an example of a PersistentVolume that uses the node's local filesystem as the volume, which is probably fine for a single-node test setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host
For a production setup you will probably want to choose a different volume type than hostPath, although the volume types available to you will vary greatly depending on the environment you are in (cloud or self-hosted/bare metal).
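As one illustration of a different volume type: the later steps of this same NFS example define an NFS-backed PersistentVolume along these lines (paraphrased from the example repository; the server field below is a placeholder that has to be replaced with the cluster IP of the nfs-server service):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # placeholder: replace with the cluster IP of the nfs-server service
    server: 10.254.0.100
    path: "/"
```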

Regarding kubernetes - PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47335939/
