Are these pods inside the overlay network?

Problem description


How can I confirm whether or not some of the pods in this Kubernetes cluster are running inside the Calico overlay network?


Pod names:


Specifically, when I run kubectl get pods --all-namespaces, only two of the pods in the resulting list have the word calico in their names. The other pods, such as etcd and kube-controller-manager, do NOT have the word calico in their names. From what I read online, the other pods should have the word calico in their names.

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                                               READY   STATUS              RESTARTS   AGE
kube-system   calico-node-l6jd2                                                  1/2     Running             0          51m
kube-system   calico-node-wvtzf                                                  1/2     Running             0          51m
kube-system   coredns-86c58d9df4-44mpn                                           0/1     ContainerCreating   0          40m
kube-system   coredns-86c58d9df4-j5h7k                                           0/1     ContainerCreating   0          40m
kube-system   etcd-ip-10-0-0-128.us-west-2.compute.internal                      1/1     Running             0          50m
kube-system   kube-apiserver-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m
kube-system   kube-controller-manager-ip-10-0-0-128.us-west-2.compute.internal   1/1     Running             0          51m
kube-system   kube-proxy-dqmb5                                                   1/1     Running             0          51m
kube-system   kube-proxy-jk7tl                                                   1/1     Running             0          51m
kube-system   kube-scheduler-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m




stdout from applying calico


The stdout that resulted from applying calico is as follows:

$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
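
To check that the calico-node DaemonSet created above actually rolled out on all nodes, something like the following should work (a minimal sketch; calico-node is the DaemonSet name from the manifest applied above):

$ kubectl -n kube-system get daemonset calico-node            # DESIRED/READY should equal the node count
$ kubectl -n kube-system rollout status daemonset/calico-node # waits until all calico-node pods are ready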


How the cluster was created:

The commands used to install the cluster were:

$ sudo -i
# kubeadm init --kubernetes-version 1.13.1 --pod-network-cidr 192.168.0.0/16 | tee kubeadm-init.out
# exit
$ sudo mkdir -p $HOME/.kube
$ sudo chown -R lnxcfg:lnxcfg /etc/kubernetes
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
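
As a quick sanity check after these steps, the pod-network CIDR passed to kubeadm should be visible on the nodes themselves (a hedged sketch; output naturally differs per cluster):

$ kubectl get nodes -o wide
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'   # each node typically shows a /24 carved out of 192.168.0.0/16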


This is running on AWS in Amazon Linux 2 host machines.

Recommended answer


This is normal and expected behavior: only a few pods have names starting with calico. They are created when you initialize Calico or add new nodes to your cluster.


etcd-*, kube-apiserver-*, kube-controller-manager-*, coredns-*, kube-proxy-*, and kube-scheduler-* are mandatory system components; these pods have no dependency on Calico, so their names are not Calico-based.
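
One way to see this directly is to confirm that these control-plane pods run in the host network namespace rather than in the overlay (a minimal sketch, reusing the etcd pod name from the listing above; kubeadm static pods are expected to report true):

$ kubectl -n kube-system get pod etcd-ip-10-0-0-128.us-west-2.compute.internal -o jsonpath='{.spec.hostNetwork}'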


Also, as @Jonathan_M already wrote, Calico doesn't apply to the K8s control plane, only to newly created pods.
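
To reproduce a check like the one below, a throwaway deployment can be created and then inspected with -o wide (a sketch; my-nginx matches the deployment used in the example that follows, while the image and replica count are arbitrary):

$ kubectl create deployment my-nginx --image=nginx
$ kubectl scale deployment my-nginx --replicas=3
$ kubectl get pods -o wide   # new pods should receive addresses from the pod network CIDR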

You can verify whether or not pods are inside the overlay network by using kubectl get pods --all-namespaces -o wide.

My example:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
default       my-nginx-76bf4969df-4fwgt               1/1     Running   0          14s   192.168.1.3   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-h9w9p               1/1     Running   0          14s   192.168.1.5   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-mh46v               1/1     Running   0          14s   192.168.1.4   kube-calico-2   <none>           <none>
kube-system   calico-node-2b8rx                       2/2     Running   0          70m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   calico-node-q5n2s                       2/2     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   coredns-86c58d9df4-q22lx                1/1     Running   0          74m   192.168.0.2   kube-calico-1   <none>           <none>
kube-system   coredns-86c58d9df4-q8nmt                1/1     Running   0          74m   192.168.1.2   kube-calico-2   <none>           <none>
kube-system   etcd-kube-calico-1                      1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-apiserver-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-controller-manager-kube-calico-1   1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-6zsxc                        1/1     Running   0          74m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-97xsf                        1/1     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   kube-scheduler-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>


kubectl get nodes --all-namespaces -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
kube-calico-1   Ready    master   84m   v1.13.4   10.132.0.12   <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-gcp   docker://18.9.2
kube-calico-2   Ready    <none>   70m   v1.13.4   10.132.0.13   <none>        Ubuntu 16.04.6 LTS   4.15.0-1023-gcp   docker://18.9.2


You can see that the K8s control plane pods use the nodes' own IPs, while the nginx deployment pods already use addresses from the Calico 192.168.0.0/16 range.
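
If needed, the pool Calico assigns those addresses from can also be read back from the CRD created by the manifest (a sketch; with the Kubernetes datastore the pool is stored as an ippools.crd.projectcalico.org resource, and its spec.cidr should show 192.168.0.0/16):

$ kubectl get ippools.crd.projectcalico.org -o yaml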
