This article describes how to deal with a failing nslookup kubernetes.default in a Kubernetes cluster, and should be a useful reference for anyone hitting the same problem.
Problem description
My environment:
OS: CentOS 8.2
Kubernetes version:
Client Version: v1.18.8
Server Version: v1.18.8
I have successfully configured a Kubernetes cluster (one master and one worker node), but DNS resolution currently fails when checked with the dnsutils pod below.
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default dnsutils 1/1 Running 0 4m38s 10.244.1.20 K8s-Worker-1 <none> <none>
kube-system coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h 10.244.0.5 K8s-Master <none> <none>
kube-system coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h 10.244.0.4 K8s-Master <none> <none>
kube-system etcd-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-apiserver-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-controller-manager-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-flannel-ds-amd64-d6h9c 1/1 Running 61 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-flannel-ds-amd64-tc4qf 1/1 Running 202 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-proxy-cl9n4 1/1 Running 0 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-proxy-s7jlc 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-scheduler-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
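One detail in this listing worth noting (not raised in the original question) is the high restart count on the two kube-flannel pods (61 and 202); an unstable CNI can break pod-to-pod traffic, including DNS. Their logs can be inspected, for example for the worker node's pod:
# kubectl logs --namespace=kube-system kube-flannel-ds-amd64-d6h9c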
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dnsutils 1/1 Running 0 22m
The following command, executed on the Kubernetes cluster master, shows that nslookup kubernetes.default is failing.
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
# kubectl exec -ti dnsutils -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local company.domain.com
options ndots:5
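A quick way to narrow this down, following the standard Kubernetes DNS debugging guide, is to query the CoreDNS pod IPs directly (10.244.0.4 and 10.244.0.5 from the listing above) instead of the kube-dns ClusterIP. If these queries succeed while 10.96.0.10 times out, the problem is in the service path (kube-proxy or the CNI) rather than in CoreDNS itself:
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.244.0.4
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.244.0.5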
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h
coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h
# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
# kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d14h
# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.0.4:53,10.244.0.5:53,10.244.0.4:9153 + 3 more... 4d14h
# kubectl describe svc -n kube-system kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.4:9153,10.244.0.5:9153
Session Affinity: None
Events: <none>
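Since the kube-dns Service, its endpoints, and the CoreDNS logs all look healthy, the failure is more likely in the path from the pod to the 10.96.0.10 ClusterIP (kube-proxy, iptables, or the CNI) than in CoreDNS itself. A reasonable next check, using the kube-proxy pod on the worker node from the listing above, is:
# kubectl logs --namespace=kube-system kube-proxy-cl9n4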
# kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 65.66.67.5:6443
Session Affinity: None
Events: <none>
Can anyone please help me debug this issue? Thanks.
Recommended answer
I uninstalled Kubernetes and re-installed version v1.19.0, and now everything is working fine. Thanks.
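The answer does not show the exact commands used for the reinstall. A minimal sketch, assuming the cluster was built with kubeadm and that the official Kubernetes yum repository is configured for the v1.19.0 packages, would look roughly like this, run on the master (and, apart from kubeadm init, on the worker as well):
# kubeadm reset -f
# yum install -y kubeadm-1.19.0-0 kubelet-1.19.0-0 kubectl-1.19.0-0 --disableexcludes=kubernetes
# systemctl restart kubelet
# kubeadm init --pod-network-cidr=10.244.0.0/16
After kubeadm init completes, the flannel manifest has to be re-applied and the worker re-joined with the kubeadm join command printed by kubeadm init.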
This concludes this article on the failing nslookup kubernetes.default in Kubernetes; hopefully the recommended answer above is helpful.