I. Environment Preparation

IP            | Hostname
192.168.83.11 | master01
192.168.83.12 | slave01
192.168.83.13 | slave02
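So the nodes can reach each other by hostname, the three entries above are typically appended to /etc/hosts on every machine. A minimal sketch (written to a temporary file here so it can run anywhere; on the real nodes the target is /etc/hosts):

```shell
# Append the node list to a hosts file; a temp file stands in for
# /etc/hosts in this demo.
hosts_file=$(mktemp)
cat <<'EOF' >> "$hosts_file"
192.168.83.11 master01
192.168.83.12 slave01
192.168.83.13 slave02
EOF
grep -c '^192\.168\.83\.' "$hosts_file"   # → 3
```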
II. Deployment
1. Remove old Docker versions on all nodes
# yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
2. Configure the yum repository on all nodes
# yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
# yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
3. Install and start Docker on all nodes
# yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
# systemctl enable docker
# systemctl start docker
4. Install nfs-utils on all nodes
# yum install -y nfs-utils
5. Configure the Kubernetes yum repository on all nodes
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
6. Disable SELinux, firewalld, and swap
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# swapoff -a
# yes | cp /etc/fstab /etc/fstab_bak
# grep -v swap /etc/fstab_bak > /etc/fstab
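A gentler alternative to rewriting /etc/fstab is to comment the swap entry out, so it can be restored later if needed. A sketch on a temporary copy with an assumed layout (point the sed at /etc/fstab on a real node):

```shell
# Comment out any swap entries instead of deleting them
# (demo runs on a temp file with a fabricated fstab layout).
fstab=$(mktemp)
printf '%s\n' 'UUID=abcd / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab"
sed -i '/\sswap\s/s/^/#/' "$fstab"
grep swap "$fstab"   # → #/dev/mapper/centos-swap swap swap defaults 0 0
```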
7. Tune kernel parameters
# vim /etc/sysctl.conf
Add the following parameters:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
# sysctl -p
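These parameters can equally live in a drop-in file, which leaves /etc/sysctl.conf untouched by this setup. A sketch, assuming the conventional path /etc/sysctl.d/k8s.conf:

```
# /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

Apply it with `# sysctl --system`. Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded (# modprobe br_netfilter), which is worth checking if sysctl reports an unknown key.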
8. Install kubelet, kubeadm, and kubectl on all nodes
# yum install -y kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1
9. Change the Docker cgroup driver to systemd on all nodes
# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Change it to:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
The result should look like the figure below.
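As an alternative to editing the systemd unit file, the cgroup driver can be set in /etc/docker/daemon.json instead; pick one of the two approaches, not both, or dockerd will refuse to start over the duplicated option:

```
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Restart Docker afterwards (# systemctl daemon-reload && systemctl restart docker). If step 10's mirror script has already written /etc/docker/daemon.json, merge this key into the existing file rather than overwriting it.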
10. Configure a Docker registry mirror
Run the following on all nodes to use a China-local registry mirror, which speeds up and stabilizes image downloads:
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
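If the script URL is unreachable, the same effect can be had by hand: the script essentially writes a registry mirror into /etc/docker/daemon.json, roughly like this (merge with any existing keys in that file):

```
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
```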
Restart Docker, and start kubelet:
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable kubelet && systemctl start kubelet
11. Initialize the master node
11.1 Configure local hosts
# Run on the master node only
# echo "x.x.x.x apiserver.demo" >> /etc/hosts
PS: x.x.x.x is the master node's own IP address.
11.2 Create ./kubeadm-config.yaml on the master node
# cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.16.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "apiserver.demo:6443"
networking:
  podSubnet: "10.100.0.1/20"
EOF
PS: the podSubnet range must not overlap with the subnet the nodes sit on.
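That overlap constraint can be sanity-checked mechanically. Below is a small illustrative helper (hypothetical, not part of kubeadm) that tests whether an IPv4 address falls inside a CIDR block, using 10.100.0.0/20 as the network form of the podSubnet above:

```shell
# in_cidr: succeed if IP $1 lies inside CIDR $2 (IPv4 only, illustrative)
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24)+($2<<16)+($3<<8)+$4 )); }
in_cidr() {
  local ip net bits mask
  ip=$(ip2int "$1"); net=${2%/*}; bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32-bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}
# The node subnet must stay outside the pod subnet:
in_cidr 192.168.83.11 10.100.0.0/20 && echo overlap || echo ok   # → ok
```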
11.3 Initialize the apiserver on the master node
# kubeadm init --config=kubeadm-config.yaml --upload-certs
11.4 Initialize the root user's kubectl configuration on the master node
# rm -rf /root/.kube/
# mkdir /root/.kube/
# cp -i /etc/kubernetes/admin.conf /root/.kube/config
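For a regular (non-root) user the same copy works with ownership adjusted, per kubeadm's own post-init advice: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config. The sketch below exercises the copy with temporary stand-in paths so it can run anywhere:

```shell
# Demo of the kubectl-config copy using stand-in paths; on a real master
# the source is /etc/kubernetes/admin.conf and the target $HOME/.kube.
admin_conf=$(mktemp)                 # stand-in for admin.conf
echo 'apiVersion: v1' > "$admin_conf"
kube_dir=$(mktemp -d)/.kube
mkdir -p "$kube_dir"
cp "$admin_conf" "$kube_dir/config"
ls "$kube_dir"   # → config
```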
11.5 Install Calico on the master node
# kubectl apply -f https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Watch the status of all pods:
# watch kubectl get pod -n kube-system
Wait until every pod is in the Running state; the result looks like the figure below:
12. Initialize the slave nodes
12.1 Get the cluster join command on the master node
# kubeadm token create --print-join-command
kubeadm join apiserver.demo:6443 --token fjpir5.6331xvd1g7qv0f67 --discovery-token-ca-cert-hash sha256:d7b4ec836e0aac365612e77eba02fe4136f86e02d6e664efa35e284a4d2c5605
PS: a token created this way is valid for 24 hours by default; rerun the command to get a fresh one.
12.2 Run the join command on each slave node
# echo "x.x.x.x apiserver.demo" >> /etc/hosts
# kubeadm join apiserver.demo:6443 --token fjpir5.6331xvd1g7qv0f67 --discovery-token-ca-cert-hash sha256:d7b4ec836e0aac365612e77eba02fe4136f86e02d6e664efa35e284a4d2c5605
PS: x.x.x.x is the master node's IP address.
Check the cluster state on the master node:
# kubectl get nodes
12.3 Remove a slave node from the cluster
If a slave node misbehaves, remove it as follows.
On the slave node:
# kubeadm reset
On the master node, optionally drain the node first to evict its workloads, then delete it:
# kubectl drain slave01 --ignore-daemonsets
# kubectl delete node slave01
III. Problems Encountered During Installation
Problem 1: node status is NotReady
# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   47h   v1.16.1
slave01    NotReady   <none>   43h   v1.16.1
slave02    Ready      <none>   43h   v1.16.1
Solution:
NotReady here usually means the kubelet cannot find a ready CNI network plugin. As a workaround, in the following two configuration files
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/var/lib/kubelet/kubeadm-flags.env
delete the flag:
network-plugin=cni
Restart the services:
# systemctl daemon-reload
# systemctl restart kubelet