I am following https://v1-12.docs.kubernetes.io/docs/setup/independent/high-availability/ to set up a high-availability cluster.

Three masters: 10.240.0.4 (kb8-master1), 10.240.0.33 (kb8-master2), 10.240.0.75 (kb8-master3)
LB: 10.240.0.16 (haproxy)

I have set up kb8-master1 and, following the instructions, copied the files below to the remaining masters (kb8-master2 and kb8-master3).

On kb8-master2:

mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
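For reference, the guide has these files copied over from kb8-master1 beforehand; a minimal sketch of that copy step (run as root on kb8-master1, assuming passwordless SSH and the same ${USER} login as above):

USER=ubuntu   # assumption: whatever login you use on the other masters
for host in 10.240.0.33 10.240.0.75; do
    scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
        /etc/kubernetes/pki/sa.pub /etc/kubernetes/pki/sa.key \
        /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
        "${USER}"@${host}:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@${host}:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@${host}:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@${host}:
done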

After that I ran the following commands on kb8-master2:

> `sudo kubeadm alpha phase certs all --config kubeadm-config.yaml`

Output:-

[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kb8-master2 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kb8-master2 localhost] and IPs [10.240.0.33 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kb8-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.33]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
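Since the apiserver serving cert above is only signed for the IPs 10.96.0.1 and 10.240.0.33, it can be worth checking which SANs actually ended up in the generated certificate; one way, using kubeadm's default certificate path:

sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'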

>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`

Output:-

[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

>`sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`

Output:-
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

>`sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`

Output:-
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

>`sudo systemctl start kubelet`
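At this point it can help to confirm the kubelet actually came up before continuing (plain systemd commands, nothing kubeadm-specific):

sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 20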



>`export KUBECONFIG=/etc/kubernetes/admin.conf`


>`sudo kubectl exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 member add kb8-master2 https://10.240.0.33:2380`

Output:-
The connection to the server localhost:8080 was refused - did you specify the right host or port?
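Note that `sudo` usually does not carry over an exported KUBECONFIG (unless run with `sudo -E`), so a sudo'd kubectl falls back to the default localhost:8080. A minimal sketch of the same member add with the kubeconfig passed explicitly (same paths and addresses as above):

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf exec -n kube-system etcd-kb8-master1 -- etcdctl \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://10.240.0.4:2379 \
  member add kb8-master2 https://10.240.0.33:2380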

Note: I can now run kubectl get po -n kube-system on kb8-master2 and see the pods.
sudo kubeadm alpha phase etcd local --config kubeadm-config.yaml

No output
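Since that phase prints nothing, one way to confirm it did its job (it writes the etcd static pod manifest into kubeadm's default manifests directory) is:

ls -l /etc/kubernetes/manifests/etcd.yaml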
sudo kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml

Output:-

kubeconfig file "/etc/kubernetes/admin.conf" already exists but has got the wrong API Server URL

I am really stuck here.

Below is the kubeadm-config.yaml file used on kb8-master2:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
kubernetesVersion: v1.12.2
apiServerCertSANs:
- "10.240.0.16"
controlPlaneEndpoint: "10.240.0.16:6443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
      advertise-client-urls: "https://10.240.0.33:2379"
      listen-peer-urls: "https://10.240.0.33:2380"
      initial-advertise-peer-urls: "https://10.240.0.33:2380"
      initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - kb8-master2
      - 10.240.0.33
    peerCertSANs:
      - kb8-master2
      - 10.240.0.33
networking:
    podSubnet: "10.244.0.0/16"

Has anyone run into the same problem? I am completely stuck here.

Best answer

Is there a reason you are performing all of the init and join tasks individually instead of just using init and join directly? Kubeadm is meant to be very simple to use.

Create your InitConfiguration and ClusterConfiguration manifests and put them in the same file on your master. Then create a NodeConfiguration manifest and put it in a file on your nodes. Then run kubeadm init --config=/location/master.yml on your master, and then kubeadm join --token 1.2.3.4:6443 on your nodes.
Rather than working through the docs on how init and join handle each of their subtasks, it is much easier to build the cluster with their automation, as described in this document.
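As a rough illustration of that workflow with the v1alpha3 API used above (addresses taken from the question; the file location, token, and exact field layout are assumptions to adapt, not a verified recipe):

# On the first master: InitConfiguration and ClusterConfiguration in one file, then init
cat > /location/master.yml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 10.240.0.4   # this master's own address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
controlPlaneEndpoint: "10.240.0.16:6443"   # the haproxy load balancer
apiServerCertSANs:
- "10.240.0.16"
networking:
  podSubnet: "10.244.0.0/16"
EOF
sudo kubeadm init --config=/location/master.yml

# On each node: join through the load balancer with the token and hash printed by init
sudo kubeadm join 10.240.0.16:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>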

Regarding kubernetes - kubeconfig file "/etc/kubernetes/admin.conf" already exists but has got the wrong API Server URL, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/53757607/
