Single-master (non-clustered) installation
Run the following on the master host:
kubeadm init --kubernetes-version=v1.14.0 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.152.11 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
Flag descriptions:
--kubernetes-version: the version to install; keep it consistent with the installed kubectl and kubeadm versions (see the version check below)
--image-repository: the registry to pull the component images from
--pod-network-cidr: the address range for the Pod network
--apiserver-advertise-address: the address the apiserver advertises; change it to match your environment
--service-cidr: the IP address range used for service VIPs
--ignore-preflight-errors: ignore the preflight error about swap
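Before running the command, you can confirm that the value passed to --kubernetes-version matches the installed tooling:
# both should report v1.14.0 for this guide
kubeadm version -o short
kubectl version --client --short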
Other flags can be listed with the following command:
kubeadm init --help
After the command above finishes initializing, follow the instructions printed at the end of its output; the output also ends with the generated command for joining node machines to the cluster.
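For reference, the tail of the init output looks roughly like this (the <token> and <hash> placeholders below are illustrative; yours will differ):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# the generated join command, to be run on each node as root
kubeadm join 10.10.152.11:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>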
Cluster coredns setup
Remove the node.kubernetes.io/not-ready taint from all nodes so the coredns pods can be scheduled:
kubectl taint nodes --all node.kubernetes.io/not-ready-
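Afterwards you can verify that the taint is gone; the Taints field should no longer list node.kubernetes.io/not-ready:
kubectl describe node master | grep Taints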
Cluster flannel setup
Deploy it with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This step may be slow because the flannel image has to be downloaded on every node.
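If it stalls on the image pull, one option is to pre-pull the flannel image on each node beforehand; the image name below is assumed from the manifests of that era, so check the image line in the kube-flannel.yml you actually apply:
# pre-pull the flannel image (version assumed; verify against the manifest)
docker pull quay.io/coreos/flannel:v0.11.0-amd64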
Final check
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 2d23h v1.14.0
node1 Ready <none> 2d23h v1.14.0
node2 Ready <none> 2d23h v1.14.0
[root@master ~]# kubectl get po --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-d5947d4b-rbb8n 1/1 Running 2 2d22h 10.244.2.3 node2 <none> <none>
kube-system coredns-d5947d4b-rjxjr 1/1 Running 2 2d22h 10.244.1.3 node1 <none> <none>
kube-system etcd-master 1/1 Running 1 2d23h 10.10.152.11 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 1 2d23h 10.10.152.11 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 1 2d23h 10.10.152.11 master <none> <none>
kube-system kube-flannel-ds-amd64-6rjcg 1/1 Running 1 2d22h 10.10.152.12 node1 <none> <none>
kube-system kube-flannel-ds-amd64-wh4zl 1/1 Running 1 2d22h 10.10.152.11 master <none> <none>
kube-system kube-flannel-ds-amd64-xf9kb 1/1 Running 1 2d22h 10.10.152.13 node2 <none> <none>
kube-system kube-proxy-tltzd 1/1 Running 1 2d23h 10.10.152.11 master <none> <none>
kube-system kube-proxy-twrhm 1/1 Running 1 2d23h 10.10.152.13 node2 <none> <none>
kube-system kube-proxy-whfvh 1/1 Running 1 2d23h 10.10.152.12 node1 <none> <none>
kube-system kube-scheduler-master 1/1 Running 1 2d23h 10.10.152.11 master <none> <none>
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://10.10.152.11:6443
KubeDNS is running at https://10.10.152.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
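As an extra sanity check, the control-plane component health can be queried as well; all entries should report Healthy:
kubectl get componentstatuses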
For detailed usage of these commands, see: xxxxxxxxxxxxx
Multi-master cluster installation
1. Modify the initialization configuration
Use kubeadm config print init-defaults > kubeadm-init.yaml to print the default configuration, then adjust it for your own environment.
[root@k8s-master-1 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
The contents of kubeadm-init.yaml:
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Below is my configuration, kubeadm-new-init.yaml:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "10.10.152.235:8443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - "k8s-master-1"
  - "k8s-master-2"
  - "k8s-master-3"
  - "10.10.152.166"
  - "10.10.152.167"
  - "10.10.152.167"
  - "10.10.152.235"
  - "127.0.0.1"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
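Note that controlPlaneEndpoint points at 10.10.152.235:8443; this guide assumes that address is a load-balanced VIP (e.g. keepalived plus HAProxy) sitting in front of the three apiservers, which is why the port differs from the default 6443. Before initializing, the required images can be pre-pulled using the same config file:
# pre-pull all control-plane images for this configuration
kubeadm config images pull --config kubeadm-new-init.yaml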
2. Run the cluster initialization command
# the --experimental-upload-certs flag must be added
[root@k8s-master-1 kubeadinit]# kubeadm init --config kubeadm-new-init.yaml --experimental-upload-certs
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.1.166 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.1.166 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master-1 k8s-master-2 k8s-master-3] and IPs [10.96.0.1 192.168.1.166 10.10.152.235 10.10.152.166 10.10.152.167 10.10.152.167 10.10.152.235 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502402 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
606d3cc6c65411a988febbfa8a073494b96e8c61b994e42178d6e0532cf25a7d
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0hsrmo.xly9fsdvzvw19sny
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
# run the following commands after initialization succeeds
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
# the command for joining additional master nodes
kubeadm join 10.10.152.235:8443 --token 0hsrmo.xly9fsdvzvw19sny \
--discovery-token-ca-cert-hash sha256:85294a514161846e12dec59ee8c628689eb4472a279bda698df7ff62695e615e \
--experimental-control-plane --certificate-key 606d3cc6c65411a988febbfa8a073494b96e8c61b994e42178d6e0532cf25a7d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
# the command for joining worker nodes
kubeadm join 10.10.152.235:8443 --token 0hsrmo.xly9fsdvzvw19sny \
--discovery-token-ca-cert-hash sha256:85294a514161846e12dec59ee8c628689eb4472a279bda698df7ff62695e615e
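The bootstrap token in these join commands is only valid for 24 hours by default; to join a machine later, a fresh token and join command can be generated:
# print a new worker join command with a fresh token
kubeadm token create --print-join-command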
If something went wrong during execution, you can reset the initialization, fix the configuration, and run it again:
[root@k8s-master-1 kubeadinit]# kubeadm reset
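kubeadm reset does not clean up iptables or IPVS rules; its own output suggests doing that manually, roughly:
# flush rules left behind by kube-proxy/flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# clear IPVS tables, since kube-proxy runs in ipvs mode here
ipvsadm --clear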
3. Check the status and fix a few minor issues
[root@k8s-master-1 kubeadinit]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-1 NotReady master 6m45s v1.14.0
k8s-master-2 NotReady master 6m14s v1.14.0
k8s-master-3 NotReady master 4m30s v1.14.0
[root@k8s-master-1 kubeadinit]# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-d5947d4b-5mhv5 0/1 Pending 0 6m57s
kube-system coredns-d5947d4b-cgcpp 0/1 Pending 0 6m57s
kube-system etcd-k8s-master-1 1/1 Running 0 6m7s
kube-system etcd-k8s-master-2 1/1 Running 0 6m41s
kube-system etcd-k8s-master-3 1/1 Running 0 4m57s
kube-system kube-apiserver-k8s-master-1 1/1 Running 0 5m56s
kube-system kube-apiserver-k8s-master-2 1/1 Running 0 6m41s
kube-system kube-apiserver-k8s-master-3 1/1 Running 0 4m57s
kube-system kube-controller-manager-k8s-master-1 1/1 Running 1 6m29s
kube-system kube-controller-manager-k8s-master-2 1/1 Running 0 6m41s
kube-system kube-controller-manager-k8s-master-3 1/1 Running 0 4m58s
kube-system kube-proxy-krggg 1/1 Running 0 6m56s
kube-system kube-proxy-krsmd 1/1 Running 0 6m42s
kube-system kube-proxy-qll57 1/1 Running 0 4m58s
kube-system kube-scheduler-k8s-master-1 1/1 Running 1 6m30s
kube-system kube-scheduler-k8s-master-2 1/1 Running 0 6m42s
kube-system kube-scheduler-k8s-master-3 1/1 Running 0 4m58s
Fixing coredns
Here the coredns Deployment is deleted and reinstalled; below is the YAML configuration file, coredns.yaml. Compared with the default it runs 3 replicas with preferred pod anti-affinity on kubernetes.io/hostname and tolerates the master taint, so one replica can be spread onto each master.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: coredns/coredns:1.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
Run the following commands:
# delete the existing coredns Deployment
[root@k8s-master-1 kubeadinit]# kubectl delete deployment coredns -n kube-system
deployment.extensions "coredns" deleted
# create the new one from the YAML file above
[root@k8s-master-1 kubeadinit]# kubectl apply -f coredns.yaml
deployment.apps/coredns created
# check the current status
[root@k8s-master-1 kubeadinit]# kubectl get po -n kube-system | grep coredns
coredns-64b4d88cdd-8lhfr 0/1 Pending 0 47s
coredns-64b4d88cdd-jxg2w 0/1 Pending 0 47s
coredns-64b4d88cdd-ws4fs 0/1 Pending 0 47s
[root@k8s-master-1 kubeadinit]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
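While waiting, the flannel pods can be watched as they come up (assuming the manifest labels them app=flannel, as the upstream kube-flannel.yml of this era did):
kubectl get po -n kube-system -l app=flannel -w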
Wait a moment, then check the status again:
[root@k8s-master-1 kubeadinit]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-64b4d88cdd-8lhfr 1/1 Running 0 7m2s
coredns-64b4d88cdd-jxg2w 1/1 Running 0 7m2s
coredns-64b4d88cdd-ws4fs 1/1 Running 0 7m2s
etcd-k8s-master-1 1/1 Running 0 32m
etcd-k8s-master-2 1/1 Running 0 32m
etcd-k8s-master-3 1/1 Running 0 31m
kube-apiserver-k8s-master-1 1/1 Running 0 32m
kube-apiserver-k8s-master-2 1/1 Running 0 32m
kube-apiserver-k8s-master-3 1/1 Running 0 31m
kube-controller-manager-k8s-master-1 1/1 Running 1 32m
kube-controller-manager-k8s-master-2 1/1 Running 0 32m
kube-controller-manager-k8s-master-3 1/1 Running 0 31m
kube-flannel-ds-amd64-88sps 1/1 Running 0 5m9s
kube-flannel-ds-amd64-c4z8p 1/1 Running 0 5m9s
kube-flannel-ds-amd64-qrjkl 1/1 Running 0 5m9s
kube-proxy-krggg 1/1 Running 0 33m
kube-proxy-krsmd 1/1 Running 0 32m
kube-proxy-qll57 1/1 Running 0 31m
kube-scheduler-k8s-master-1 1/1 Running 1 32m
kube-scheduler-k8s-master-2 1/1 Running 0 32m
kube-scheduler-k8s-master-3 1/1 Running 0 31m
[root@k8s-master-1 kubeadinit]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-1 Ready master 33m v1.14.0
k8s-master-2 Ready master 33m v1.14.0
k8s-master-3 Ready master 31m v1.14.0
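As a final smoke test, DNS resolution through coredns can be checked from a throwaway pod (busybox:1.28 is used because nslookup is broken in some newer busybox images):
# resolve the kubernetes service via the cluster DNS
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default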