Problem Description
I have created a Kubernetes v1.3.3 cluster on CoreOS based on the contrib repo. My cluster appears healthy, and I would like to use the Dashboard but I am unable to access the UI, even when all authentication is disabled. Below are details of the kubernetes-dashboard
components, as well as some API server configs/output. What am I missing here?
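For context, on v1.3 the usual entry points to the Dashboard are the API server's /ui/ redirect (which proxies to the kubernetes-dashboard service) or a local tunnel via kubectl proxy. A minimal sketch of the tunnel approach, assuming kubectl on the laptop is configured against this cluster:

$ kubectl proxy --port=8001 &      # tunnel through the API server's credentials
$ curl http://localhost:8001/ui/   # should redirect to the dashboard proxy path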
Dashboard Components
core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP
core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 172.20.164.194
Port: <unset> 80/TCP
Endpoints: 172.16.100.9:9090
Session Affinity: None
No events.
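The Service and Endpoints objects agree (ClusterIP 172.20.164.194:80 forwards to the pod at 172.16.100.9:9090), so one sanity check, assuming kube-proxy is healthy on the node, is to curl the ClusterIP from a minion:

core@ip-10-178-153-57 ~ $ curl -s http://172.20.164.194/ | head -1   # should return the dashboard HTML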
core@ip-10-178-153-240 ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z
API Server Config
/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws
API server is accessible from a remote host (laptop)
$ curl http://10.178.153.240:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}
UI is not accessible remotely
$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'
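The /ui path is served by the API server's proxy, so this timeout is the master itself failing to dial the pod over the overlay network. A quick way to separate the proxy from the network is to curl the pod IP directly from the master (the 5-second cap is arbitrary):

core@ip-10-178-153-240 ~ $ curl --max-time 5 http://172.16.100.9:9090/   # times out if the overlay is broken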
UI is accessible from the minion node
core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
<!doctype html> <html ng-app="kubernetesDashboard">...
API server route table
core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1
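The master does have a 172.16.0.0/12 route via flannel.1, so it is worth confirming that the subnet leases flannel wrote to etcd match what each host is actually using. A sketch assuming flannel's default etcd prefix (/coreos.com/network) and the etcd2-era etcdctl syntax:

$ etcdctl --peers http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 get /coreos.com/network/config
$ etcdctl --peers http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 ls /coreos.com/network/subnets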
Minion (where the pod lives) route table
core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1
Flannel Logs
It seems that this one route is misbehaving with Flannel. I'm getting these errors in the logs, but restarting the daemon does not seem to resolve it.
...Watch subnets: client: etcd cluster is unavailable or misconfigured
... L3 miss: 172.16.100.9
... calling NeighSet: 172.16.100.9
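The "L3 miss" entries show the kernel failing to resolve 172.16.100.9 on the VXLAN device, which usually means the encapsulated packets are not arriving. Flannel's vxlan backend defaults to UDP port 8472, so one check, assuming tcpdump is available on the hosts, is to watch for that traffic on both ends while repeating the curl:

core@ip-10-178-153-240 ~ $ sudo tcpdump -ni eth0 udp port 8472   # VXLAN traffic should appear on both hosts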
Accepted Answer
For anyone who finds their way to this question, I wanted to post the final resolution, as it was not a Flannel, Kubernetes, or SkyDNS issue; it was an inadvertently restrictive firewall. As soon as I opened up the firewall on the API server, my Flannel routes were fully functional and I could access the Dashboard (assuming basic auth was enabled on the API server).
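On AWS this comes down to the security groups: the master and minion groups need to allow flannel's VXLAN traffic (UDP 8472 by default) between each other. A hypothetical fix with the AWS CLI, where sg-master and sg-minion are placeholder group IDs for this cluster:

$ aws ec2 authorize-security-group-ingress --group-id sg-master --protocol udp --port 8472 --source-group sg-minion
$ aws ec2 authorize-security-group-ingress --group-id sg-minion --protocol udp --port 8472 --source-group sg-master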
In the end, user error :)