Question
I have a Kubernetes cluster running in our network and have set up an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running sudo mount -t nfs 10.17.10.190:/export/test /mnt, but whenever my test pod tries to use an NFS persistent volume that points at that server, it fails with this message:
Events:
  FirstSeen  LastSeen  Count  From                    SubObjectPath  Type     Reason       Message
  ---------  --------  -----  ----                    -------------  ----     ------       -------
  19s        19s       1      default-scheduler                      Normal   Scheduled    Successfully assigned nfs-web-58z83 to wal-vm-newt02
  19s        3s        6      kubelet, wal-vm-newt02                 Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs []
Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test
Does anyone know how I can fix this and make it so that I can mount from the external NFS server?
The nodes of the cluster are running on 10.17.10.185 - 10.17.10.189, and all of the pods run with IPs that start with 10.0.x.x. All of the nodes in the cluster and the NFS server are running Ubuntu. The NFS server is running on 10.17.10.190 with this /etc/exports:
/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check)
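As a sanity check, the export list the server is actually handing out can be confirmed from one of the cluster nodes with showmount, and changes to /etc/exports can be applied on the server with exportfs (both are part of the usual Ubuntu NFS packages):

# From a cluster node: ask the server what it exports, and to whom.
showmount -e 10.17.10.190

# On the NFS server: re-export /etc/exports after editing it,
# then list the active exports with their effective options.
sudo exportfs -ra
sudo exportfs -v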
I set up a persistent volume and persistent volume claim, and they both create successfully, showing this output from running kubectl get pv,pvc:
NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pv/test-nfs   1Mi        RWX           Retain          Bound    staging/test-nfs                           15m

NAME           STATUS   VOLUME     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/test-nfs   Bound    test-nfs   1Mi        RWX                          15m
They are created like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.17.10.190
    path: "/exports/test"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
My test pod is using this configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: test-nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: test-nfs
          persistentVolumeClaim:
            claimName: test-nfs
Accepted Answer
It's probably because the uid used in your pod/container doesn't have sufficient rights on the NFS server.

You can use runAsUser as mentioned by @Giorgio, or try editing the uid-range annotations of your namespace and fixing a value (e.g. 666). That way, every pod in your namespace will run with uid 666.
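A minimal sketch of the runAsUser route, shown here as a standalone pod (the pod name nfs-web-test is made up for illustration, 666 is just the example uid from above, and the securityContext block is what you would add to the ReplicationController's pod template):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-web-test     # hypothetical name, for illustration only
spec:
  securityContext:
    runAsUser: 666       # example uid; it must have rights on the export
  containers:
    - name: web
      image: nginx       # note: the image must tolerate running as this uid
                         # (the stock nginx image normally starts as root)
      volumeMounts:
        - name: test-nfs
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: test-nfs
      persistentVolumeClaim:
        claimName: test-nfs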
Don't forget to chown your NFS directory to that uid (666) as well.
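On the server that would look something like the following; /export/test is the path from the mount command in the question, so adjust it to whatever directory you actually export:

# On the NFS server: give the example uid/gid ownership of the exported tree.
sudo chown -R 666:666 /export/test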