Question
I have set up Kubernetes on Ubuntu 16.04. I am using kube version 1.13.1 with Weave for networking. I initialized the cluster using:
sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.88.142
and applied Weave with:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
All the pods seem to be running fine, but coredns always stays in CrashLoopBackOff status. I have read nearly all the solutions available for this problem.
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-h5plc 0/1 CrashLoopBackOff 7 18m
coredns-86c58d9df4-l77rw 0/1 CrashLoopBackOff 7 18m
etcd-tx-g1-209 1/1 Running 0 17m
kube-apiserver-tx-g1-209 1/1 Running 0 17m
kube-controller-manager-tx-g1-209 1/1 Running 0 17m
kube-proxy-2jdpp 1/1 Running 0 18m
kube-scheduler-tx-g1-209 1/1 Running 0 17m
weave-net-npgnc 2/2 Running 0 13m
I initially started by editing the coredns ConfigMap and deleting the loop plugin. That stopped the crashes, but I later realized that I couldn't ping www.google.com from within a container, although I could ping google.com's IP address. So deleting the loop plugin is not a proper solution.
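For reference, the loop line in question lives in the Corefile inside the coredns ConfigMap (`kubectl -n kube-system edit configmap coredns`). A sketch of what that section typically looks like on a kubeadm 1.13 cluster; the exact contents of this cluster's Corefile are an assumption based on kubeadm defaults:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    loop        # <- deleting this line hides the crash but not the forwarding loop
    reload
    loadbalance
}
```

Note that `proxy . /etc/resolv.conf` is the part that actually creates the loop when the host's resolv.conf points back at a local resolver.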
Next I looked at the host's /etc/resolv.conf and found the contents below:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
search APSDC.local
The workaround provided on the Kubernetes page says that any local IP address like 127.0.0.1 should be avoided as a nameserver. I don't understand how to apply this, since the file is generated automatically. How can I change the file so that coredns works correctly? Below are the coredns logs:
$ kubectl logs coredns-86c58d9df4-h5plc -n kube-system
.:53
2019-01-31T17:26:43.665Z [INFO] CoreDNS-1.2.6
2019-01-31T17:26:43.666Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
[FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1423429973721138313.4523734933111484351.".
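This fatal error is CoreDNS's loop plugin doing its job: the kubelet copies the node's /etc/resolv.conf into each pod, so CoreDNS forwards queries to 127.0.1.1, and the local stub resolver on the node sends them straight back to CoreDNS. A quick way to spot the problematic configuration is to check the resolv.conf for loopback nameservers (a minimal sketch; the resolv.conf contents here are copied from the question):

```shell
# Check a resolv.conf body for loopback nameservers, which trigger
# CoreDNS's loop detection when inherited by pods.
resolv_conf='nameserver 127.0.1.1
search APSDC.local'

if printf '%s\n' "$resolv_conf" | grep -Eq '^nameserver 127\.'; then
  echo "loopback nameserver found: CoreDNS would forward to itself"
fi
```

On a real node you would run the grep against /etc/resolv.conf directly.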
Can anyone please point me in the right direction to resolve this issue? Please help. Thanks.
Answer
I have resolved this issue. In my case, /etc/resolv.conf had the following contents:
nameserver 127.0.1.1
I first used the command below to get the correct DNS server IP, since the machine was on the client's network:
nmcli device show <interfacename> | grep IP4.DNS
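That command prints the DNS address in the second column; a sketch of extracting just the IP follows. The sample nmcli output line is an assumption modeled on nmcli's usual `IP4.DNS[1]` format, not captured from this machine:

```shell
# Simulated `nmcli device show <interfacename>` output line
# (real interface names and addresses will differ).
nmcli_out='IP4.DNS[1]:                             192.168.66.21'

# Keep only the address field.
dns_ip=$(printf '%s\n' "$nmcli_out" | awk '/IP4\.DNS/ {print $2}')
echo "$dns_ip"
```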
After this I updated the file /etc/resolvconf/resolv.conf.d/head with the following contents:
nameserver 192.168.66.21
and then ran the command below to regenerate resolv.conf:
sudo resolvconf -u
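Conceptually, `resolvconf -u` regenerates /etc/resolv.conf with the head file prepended to the dynamically gathered nameservers, which is why the new nameserver ends up listed first. A simulation of that merge in a temp directory (the paths are illustrative; the real tool reads /etc/resolvconf/resolv.conf.d/ and writes /etc/resolv.conf):

```shell
# Simulate resolvconf's merge: the head file's contents are prepended
# to the dynamically generated nameserver list.
tmp=$(mktemp -d)
echo 'nameserver 192.168.66.21' > "$tmp/head"
echo 'nameserver 127.0.1.1'     > "$tmp/dynamic"
cat "$tmp/head" "$tmp/dynamic"  > "$tmp/resolv.conf"
result=$(cat "$tmp/resolv.conf")
echo "$result"
rm -rf "$tmp"
```

The simulated result matches the real /etc/resolv.conf shown just below.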
After this, /etc/resolv.conf had the following contents:
nameserver 192.168.66.21
nameserver 127.0.1.1
I then deleted the coredns pods and everything worked fine. Thanks.