I'm trying to install VerneMQ on a Kubernetes cluster on Oracle OCI using a Helm chart.
The Kubernetes infrastructure appears to be up and running, and I can deploy custom microservices without any problems.
I'm following the instructions at https://github.com/vernemq/docker-vernemq
The steps are, from the helm/vernemq directory:
helm install --name="broker" ./
The output is:
NAME:   broker
LAST DEPLOYED: Fri Mar  1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s

==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s

==> v1/Role
NAME            AGE
broker-vernemq  1s

NOTES:
1. Check your VerneMQ cluster status:
  kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show

2. Get VerneMQ MQTT port
  echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
  kubectl port-forward svc/broker-vernemq 1883:1883
But when I run the check
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get
Node '[email protected]' not responding to pings.
command terminated with exit code 1
I think something is wrong with the subdomain (there is nothing between two of the dots).
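For reference, here is a sketch of what the node name should look like for a StatefulSet pod registered behind the headless service from the Helm output. The exact scheme the image uses to build the Erlang node name is an assumption here, but StatefulSet pod DNS always follows the `<pod>.<service>.<namespace>.svc.cluster.local` pattern:

```shell
# Build the FQDN a StatefulSet pod gets via its headless service.
# Pod, service, and namespace names are taken from the Helm output above.
pod="broker-vernemq-0"
svc="broker-vernemq-headless"
ns="default"
fqdn="${pod}.${svc}.${ns}.svc.cluster.local"
echo "VerneMQ@${fqdn}"
# -> VerneMQ@broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local
# If discovery fails, parts of this name come out empty, which matches
# the "nothing between the dots" symptom in the error above.
```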
For the command
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
the last log line is
I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
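That message is the useful signal: until the headless service has ready endpoints, no DNS records exist for the pods, so node discovery cannot work. A small grep-based check for this symptom (the hard-coded log line below is the one quoted above, for illustration only):

```shell
# Detect the "no endpoints yet" symptom in kube-dns log output.
logline='I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.'
if printf '%s\n' "$logline" | grep -q 'Could not find endpoints'; then
  echo "headless service has no ready endpoints yet"
fi
```

In practice you would pipe the real kubedns container logs through the same filter instead of a hard-coded string.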
I also tried with this custom yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1883
              name: mqtt
            - containerPort: 8883
              name: mqtts
            - containerPort: 4369
              name: epmd
          env:
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "off"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
              value: "/etc/vernemq-passwd/vmq.passwd"
          volumeMounts:
            - name: vernemq-passwd
              mountPath: /etc/vernemq-passwd
              readOnly: true
      volumes:
        - name: vernemq-passwd
          secret:
            secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 8883
      name: mqtts
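As an aside, the `DOCKER_VERNEMQ_*` variables in the manifest are translated by the image's start script into `vernemq.conf` keys: the prefix is stripped, the rest is lowercased, and double underscores become dots (a few variables, such as the `KUBERNETES`/`DISCOVERY` ones, are handled specially rather than mapped to config keys). A rough sketch of that mapping:

```shell
# Approximate the env-var -> vernemq.conf key translation done by the image.
to_conf_key() {
  printf '%s\n' "$1" | sed -e 's/^DOCKER_VERNEMQ_//' -e 's/__/./g' | tr '[:upper:]' '[:lower:]'
}

to_conf_key "DOCKER_VERNEMQ_ALLOW_ANONYMOUS"            # -> allow_anonymous
to_conf_key "DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE"  # -> vmq_passwd.password_file
```

This is why `DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE` above ends up setting `vmq_passwd.password_file` in the broker's configuration.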
Any suggestions?
Many thanks
Jack
Best answer
It seems to be a bug in the Docker image. The suggestion on GitHub is to build your own image, or to use a later VerneMQ image in which this is fixed (after 1.6.x).
The suggestion is mentioned here: https://github.com/vernemq/docker-vernemq/pull/92
Pull request with a possible fix: https://github.com/vernemq/docker-vernemq/pull/97
EDIT:
I only got this working without Helm, using
kubectl create -f ./cluster.yaml
with the following cluster.yaml:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          ports:
            - containerPort: 1883
              name: mqtt
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100
            - containerPort: 9101
            - containerPort: 9102
            - containerPort: 9103
            - containerPort: 9104
            - containerPort: 9105
            - containerPort: 9106
            - containerPort: 9107
            - containerPort: 9108
            - containerPort: 9109
          env:
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
            # only allow anonymous access for development / testing purposes!
            # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
            #   value: "on"
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
    - port: 44053
      name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqttlb
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
  - kind: ServiceAccount
    name: vernemq
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
It takes a few seconds for the pods to become ready.
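One detail worth noting in the manifest above: the Erlang distribution port range (`...PORT_RANGE__MINIMUM`/`...PORT_RANGE__MAXIMUM`, set to 9100/9109) has to line up with the containerPorts exposed on the pod, so the nodes can reach each other for clustering. A quick sanity check of the count:

```shell
# The StatefulSet exposes containerPorts 9100..9109; the env vars pin the
# Erlang distribution to the same range. The counts must match: 10 ports.
min=9100
max=9109
count=$(( max - min + 1 ))
echo "distribution ports: $count"   # -> distribution ports: 10
```

If you widen the range in the env vars, remember to add the corresponding containerPort entries as well.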