Problem Description
I am trying to deploy Elastic on Kubernetes (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html) on a local minikube cluster. I have already installed the operator.
When I apply the Elasticsearch cluster below, I get the following pod error: pod has unbound immediate PersistentVolumeClaims
volume/claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
elastic.yml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms:
        native:
          native1:
            order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
kubectl get pvc
The above error means there is no persistentVolume that can be bound to the PersistentVolumeClaim. By default, local-storage does not dynamically create a persistentVolume.
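To confirm this on the cluster, a few standard kubectl commands (shown here only as an illustrative sketch) list the existing volumes, the pending claims, and the storage class the claims reference:

# List PersistentVolumes; if none matches a claim, that claim stays Pending.
kubectl get pv
# List PersistentVolumeClaims; the claims created by the operator will show STATUS "Pending".
kubectl get pvc
# The events at the bottom of the describe output explain why a claim is unbound.
kubectl describe pvc
# Show the storage class referenced by the claims and which provisioner (if any) backs it.
kubectl get storageclass standard -o yaml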
To use the dynamic provisioning mechanism of the local-storage storage class, you need to configure the local-storage class so that it can provision the persistentVolume. Check this discussion: Kubernetes: What is the best practice for create dynamic local volume to auto assign PVs for PVCs?
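For reference, a minimal sketch of such classes. The first is the static local StorageClass from the Kubernetes documentation (it only delays binding and still expects manually created local PersistentVolumes); the second assumes an external dynamic provisioner such as Rancher's local-path-provisioner has been installed, which is one way to get dynamically provisioned local volumes:

# Static local storage class: no dynamic provisioning, binding is delayed
# until a pod using the claim is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Dynamic local provisioning (assumes the Rancher local-path-provisioner is installed).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete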
Alternatively, without using the dynamic provisioning mechanism of a storage class, you can create a persistentVolume using hostPath which can be bound to the PersistentVolumeClaim. But this is not a recommended solution for production usage. Check this guide here.
The PersistentVolumeClaims will be created automatically based on the volumeClaimTemplates in the elastic YAML. Hence you should not create a PersistentVolumeClaim yourself.
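If the PersistentVolumeClaim from the volume/claim manifest above has already been applied, it can be removed so it does not linger next to the claims the operator creates (a sketch, using the names from the manifests earlier in this post):

# Delete the manually created claim and volume from the question;
# the operator creates its own claims from volumeClaimTemplates.
kubectl delete pvc elasticsearch-data
kubectl delete pv elasticsearch-data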
Since the nodeSets count is 2, two PersistentVolumeClaims are created. So you need to create two persistentVolumes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data2
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # Each volume should point at its own host directory so the two
    # Elasticsearch data nodes do not share the same data path.
    path: "/mnt/data2"
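After applying these two volumes together with the elastic.yml above, the claims created by the operator should bind to them. A quick way to verify (assuming the two manifests are saved as persistent-volumes.yml, a name chosen here only for illustration) is:

# Apply the two PersistentVolumes (file name is just an example).
kubectl apply -f persistent-volumes.yml
# Both volumes and both operator-created claims should now show STATUS "Bound".
kubectl get pv
kubectl get pvc
# The Elasticsearch pods should leave Pending once their claims are bound.
kubectl get pods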