Problem Description
I created a persistent volume using the following YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dq-tools-volume
  labels:
    name: dq-tools-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: volume-class
  nfs:
    server: 192.168.215.83
    path: "/var/nfsshare"
After creating this, I created two PersistentVolumeClaims using the following YAMLs:
PVC1:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-1
  labels:
    name: jenkins-volume-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
PVC2:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-2
  labels:
    name: jenkins-volume-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
But I noticed that both of these PersistentVolumeClaims are writing to the same backend volume.
How can I isolate the data of one PersistentVolumeClaim from another? I am using this for multiple installations of Jenkins, and I want each Jenkins workspace to be isolated.
Recommended Answer
As @D.T. explained, a PersistentVolumeClaim is exclusively bound to a PersistentVolume. You cannot bind two PVCs to the same PV.
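As a quick way to see this for yourself, you can print the PV's claimRef, which is where Kubernetes records the single claim a PV is bound to (a sketch, using the PV name from the question):

$ kubectl get pv dq-tools-volume -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'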
Here you can find another case where this was discussed.
There is a better solution for your scenario, and it involves using nfs-client-provisioner. To achieve that, you first have to install Helm in your cluster and then follow the steps below, which I created for a previous answer on ServerFault.
I've tested it, and with this solution you can isolate one PVC from the other.
1 - Install and configure an NFS server on my master node (Debian Linux; this may vary depending on your Linux distribution):
Before installing the NFS Kernel Server, we need to update our system's repository index:
$ sudo apt-get update
Now, run the following command to install the NFS Kernel Server on your system:
$ sudo apt install nfs-kernel-server
Create the export directory:
$ sudo mkdir -p /mnt/nfs_server_files
As we want all clients to access the directory, we will remove the restrictive permissions of the export folder with the following commands (this may vary in your setup according to your security policy):
$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
Assign server access to the client(s) through the NFS exports file:
$ sudo nano /etc/exports
Inside this file, add a new line to allow access from other servers to your share:
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
You may want to use different options in your share; 10.128.0.0/24 is my k8s internal network.
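For reference, here is a commented sketch of the same entry (these are standard NFS export options):

# /etc/exports
#   rw                allow clients to read and write
#   sync              flush changes to disk before replying to requests
#   no_subtree_check  disable subtree checking to avoid issues with renamed files
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)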
Export the shared directory and restart the service to make sure all configuration files are correct:
$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
Check all active shares:
$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
2 - Install the NFS client on all my worker nodes:
$ sudo apt-get update
$ sudo apt-get install nfs-common
At this point you can run a test to check whether you can access the share from your worker nodes:
$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
Notice that at this point you can use the name of your master node; K8s is taking care of DNS here. Check that the volume mounted as expected, and create some folders and files to make sure everything is working fine.
$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
Go back to your master node and check whether these files are in the /mnt/nfs_server_files folder.
3 - Install the NFS Client Provisioner.
Install the provisioner using Helm:
$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
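Note that --name is Helm 2 syntax. If you are on Helm 3, where that flag was removed, the rough equivalent would be the following (assuming the stable chart repository is configured and the nfs namespace exists):

$ helm install ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner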
Notice that I've specified a namespace for it. Check if it's running:
$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
At this point we have a StorageClass called nfs-client:
$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
We need to create a PersistentVolumeClaim:
$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml
Check the status (Bound is expected):
$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
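With the dynamic provisioner working, the two Jenkins claims from the question can simply target the nfs-client StorageClass and drop the label selector; each claim then gets its own directory on the NFS share. A sketch, reusing the claim names from the question and the nfs namespace from this example:

# Create one isolated claim per Jenkins installation.
for name in jenkins-volume-1 jenkins-volume-2; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${name}
  namespace: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: nfs-client
EOF
done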
4 - Create a simple pod to test whether we can read/write the NFS share:
Create a pod using this YAML:
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f pod.yaml
Let's list all mounted volumes on our pod:
$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem Size Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1 99G 11G 84G 11% /mnt
As we can see, we have an NFS volume mounted on /mnt. (It is important to notice the path: kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1.)
Let's check it:
root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
It's empty. Let's create some files:
$ for i in 1 2; do touch file$i; done;
$ ls -l
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
Now let's see where these files are on our NFS server (the master node):
$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
And here are the files we just created inside our pod!
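Because the provisioner creates one subdirectory per claim (named <namespace>-<claim>-<pv-name>, as seen above), you can confirm on the master node that every PVC, including each Jenkins one, writes to its own isolated folder:

$ ls -ld /mnt/nfs_server_files/nfs-*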