The cni0 interface is completely missing.
Any direction on how to get it back without breaking the cluster would be greatly appreciated.
Basically, the internal container network is not working. While trying to recover from it, I found that
the coredns pods got their IPs from the docker0 interface instead of cni0, so if I can get cni0 back, everything should start working again.
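A quick way to confirm this on an affected node (assuming the standard flannel interface names):

ip link show cni0        # reports: Device "cni0" does not exist.
ip link show flannel.1   # the flannel VXLAN device is still present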

Below are the relevant outputs; let me know if you need any other command output.


ip ro

Master node:
default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.103 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.77.0/24 dev docker0 proto kernel scope link src 172.17.77.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

Worker node:
default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.105 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1


ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:27ff:fe72:a287  prefixlen 64  scopeid 0x20<link>
        ether 02:42:27:72:a2:87  txqueuelen 0  (Ethernet)
        RX packets 3218  bytes 272206 (265.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 286  bytes 199673 (194.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   coredns-99b9bb8bd-j77zx                      1/1       Running   1          20m       172.17.0.2      abc-sjkubenode02
kube-system   coredns-99b9bb8bd-sjnhs                      1/1       Running   1          20m       172.17.0.3      abc-xxxxxxxxxxxx02
kube-system   elasticsearch-logging-0                      1/1       Running   6          2d        172.17.0.2      abc-xxxxxxxxxxxx02
kube-system   etcd-abc-xxxxxxxxxxxx01                      1/1       Running   3          26d       10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   fluentd-es-v2.0.3-6flxh                      1/1       Running   5          2d        172.17.0.4      abc-xxxxxxxxxxxx02
kube-system   fluentd-es-v2.0.3-7qdxl                      1/1       Running   19         131d      172.17.0.2      abc-sjkubenode01
kube-system   fluentd-es-v2.0.3-l5thl                      1/1       Running   6          2d        172.17.0.3      abc-sjkubenode02
kube-system   heapster-66bf5bd78f-twwd2                    1/1       Running   4          2d        172.17.0.4      abc-sjkubenode01
kube-system   kibana-logging-8b9699f9c-nrcpb               1/1       Running   3          2d        172.17.0.3      abc-sjkubenode01
kube-system   kube-apiserver-abc-xxxxxxxxxxxx01            1/1       Running   2          2h        10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   kube-controller-manager-abc-xxxxxxxxxxxx01   1/1       Running   3          2h        10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   kube-flannel-ds-5lmmd                        1/1       Running   3          3h        10.123.24.106   abc-sjkubenode02
kube-system   kube-flannel-ds-92gd9                        1/1       Running   2          3h        10.123.24.104   abc-xxxxxxxxxxxx02
kube-system   kube-flannel-ds-nnxv6                        1/1       Running   3          3h        10.123.24.105   abc-sjkubenode01
kube-system   kube-flannel-ds-ns9ls                        1/1       Running   2          3h        10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   kube-proxy-7h54h                             1/1       Running   3          3h        10.123.24.105   abc-sjkubenode01
kube-system   kube-proxy-7hrln                             1/1       Running   2          3h        10.123.24.104   abc-xxxxxxxxxxxx02
kube-system   kube-proxy-s4rt7                             1/1       Running   3          3h        10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   kube-proxy-swmrc                             1/1       Running   2          3h        10.123.24.106   abc-sjkubenode02
kube-system   kube-scheduler-abc-xxxxxxxxxxxx01            1/1       Running   2          2h        10.123.24.103   abc-xxxxxxxxxxxx01
kube-system   kubernetes-dashboard-58c479587f-bkqgf        1/1       Running   30         116d      10.244.0.56     abc-xxxxxxxxxxxx01
kube-system   monitoring-influxdb-54bd58b4c9-4phxl         1/1       Running   3          2d        172.17.0.5      abc-sjkubenode01
kube-system   nginx-ingress-5565bdd5fc-nc962               1/1       Running   2          2d        10.123.24.103   abc-xxxxxxxxxxxx01


kubectl get nodes -o wide
NAME                 STATUS    ROLES     AGE       VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
abc-sjkubemaster01   Ready     master    131d      v1.11.2   10.123.24.103   <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://17.12.1-ce
abc-sjkubemaster02   Ready     <none>    131d      v1.11.2   10.123.24.104   <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://17.12.1-ce
abc-sjkubenode01     Ready     <none>    131d      v1.11.2   10.123.24.105   <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://17.12.1-ce
abc-sjkubenode02     Ready     <none>    131d      v1.11.2   10.123.24.106   <none>        CentOS Linux 7 (Core)   3.10.0-862.2.3.el7.x86_64   docker://17.12.1-ce

Edit:
One more thing I'd like to add: how do I delete the coredns pods and recreate them? I don't have a yaml file for them, since they were created when the Kubernetes cluster was installed with kubeadm.
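For what it's worth, kubeadm manages coredns through a Deployment in kube-system (the pod names above carry a ReplicaSet hash), so no yaml file is needed: deleting the pods makes the Deployment recreate them. A minimal sketch, assuming the default kubeadm label k8s-app=kube-dns:

kubectl -n kube-system get deployment coredns          # the Deployment that owns the pods
kubectl -n kube-system delete pod -l k8s-app=kube-dns  # pods are recreated automatically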
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:3fff:fe60:fea9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3f:60:fe:a9  txqueuelen 0  (Ethernet)
        RX packets 123051  bytes 8715267 (8.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 88559  bytes 33067497 (31.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.123.24.106  netmask 255.255.224.0  broadcast 10.123.31.255
        inet6 fd0f:f1c3:ba53:6c01:5de2:b5af:362e:a9b2  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::ee61:b84b:bf18:93f2  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:91:75:d2  txqueuelen 1000  (Ethernet)
        RX packets 1580516  bytes 534188729 (509.4 MiB)
        RX errors 0  dropped 114794  overruns 0  frame 0
        TX packets 303093  bytes 28327667 (27.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::4c0e:7dff:fe4b:12f2  prefixlen 64  scopeid 0x20<link>
        ether 4e:0e:7d:4b:12:f2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 40 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 75  bytes 5864 (5.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 75  bytes 5864 (5.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:fc:5b:de  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:fc:5b:de  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Best answer

First, check that the CNI configuration exists under /etc/cni/net.d on all nodes.
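For example, on each node (10-flannel.conflist is the usual flannel file name and an assumption; yours may differ):

ls /etc/cni/net.d/
cat /etc/cni/net.d/10-flannel.conflist   # should contain the flannel CNI plugin config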

Then I would try deleting your flannel DaemonSet, killing all of its pods, and reinstalling flannel from scratch.
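A minimal sketch of that, assuming the DaemonSet is named kube-flannel-ds (matching the pod names above) and that you originally installed from the upstream flannel manifest; substitute whatever manifest you actually used:

kubectl -n kube-system delete ds kube-flannel-ds
kubectl -n kube-system delete pod -l app=flannel      # kill any leftover flannel pods
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml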

You will probably need to restart all the other pods, except kube-apiserver and kube-controller-manager. If you want to restart those two as well, you can, but it isn't necessary.
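For example, deleting the pods that picked up 172.17.x.x (docker0) addresses lets their controllers recreate them on the restored cni0 network; the names below are taken from the listing above:

kubectl -n kube-system delete pod coredns-99b9bb8bd-j77zx coredns-99b9bb8bd-sjnhs
kubectl -n kube-system delete pod heapster-66bf5bd78f-twwd2 elasticsearch-logging-0
# ...and so on for the remaining pods with docker0 addresses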

Hope this helps!
