This article references: http://www.kubeasy.com/
The previous article introduced the basic concepts and usage of master, node, and Pod. This article focuses on Pod resource scheduling and the workload controllers in Kubernetes.
I. RC and ReplicaSet
Creating and deleting a Replication Controller or ReplicaSet is not much different from creating and deleting a Pod. Replication Controller is almost never used in production anymore, and ReplicaSet is rarely used on its own; Pods are normally managed through the higher-level resources Deployment, DaemonSet, and StatefulSet.
1. Replication Controller and ReplicaSet
Replication Controller (RC) and ReplicaSet (RS) are two simple ways to deploy Pods. Because production environments mainly use higher-level resources such as Deployment to manage and deploy Pods, this section only gives a brief introduction to the Replication Controller and ReplicaSet deployment styles.
1.1 Replication Controller
A Replication Controller (RC for short) ensures that the number of Pod replicas reaches the desired value, i.e. the number defined in the RC. In other words, a Replication Controller ensures that a Pod, or a homogeneous group of Pods, is always available.
If there are more Pods than the configured value, the Replication Controller terminates the extra ones; if there are too few, it starts more Pods to reach the desired count. Unlike manually created Pods, Pods maintained by a Replication Controller are automatically replaced when they fail, are deleted, or are terminated. Therefore, even if an application only needs a single Pod, it should still be managed by a Replication Controller or a similar mechanism. A Replication Controller is comparable to a process supervisor, except that instead of watching individual processes on a single node, it watches multiple Pods across multiple nodes.
An example Replication Controller definition follows.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
1.2 ReplicaSet
ReplicaSet is the next-generation Replication Controller and supports set-based label selectors. It is mainly used by Deployment to orchestrate Pod creation, deletion, and updates; the only difference from Replication Controller is this support for set-based selectors. In practice, although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, unless the Pods need no updates or other orchestration.
An example ReplicaSet definition follows:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
II. Stateless Applications: Deployment
2.1 Deployment Concepts
Deployment is used to deploy stateless services and is the most commonly used controller. It typically manages and maintains stateless internal microservices such as configserver, zuul, or Spring Boot applications. It manages multiple Pod replicas and provides seamless migration, automatic scale-up and scale-down, automatic disaster recovery, one-click rollback, and more.
2.2 Manually Creating a Deployment
# the YAML manifest
# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-19T02:41:11Z"
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2 # replica count
  revisionHistoryLimit: 10 # number of old revisions to keep
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
# create the Deployment
[root@master yaml]# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx created
[root@master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 63s
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 63s
# wide output shows where each Pod was scheduled
[root@master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 108s 172.171.205.134 master <none> <none>
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 108s 172.165.11.3 node2 <none> <none>
Column meanings for kubectl get deploy -o wide:
- NAME: the Deployment's name
- READY: Pod readiness, i.e. how many Pods are Ready
- UP-TO-DATE: number of replicas that have been updated to reach the desired state
- AVAILABLE: number of replicas currently available
- AGE: how long the application has been running
- CONTAINERS: container name
- IMAGES: container image
- SELECTOR: labels of the Pods being managed
2.3 Updating a Deployment
2.3.1 Change the Deployment's image and record the change
# change the image from the command line
[root@master yaml]# kubectl set image deploy nginx nginx=nginx:1.15.4 --record
deployment.apps/nginx image updated
2.3.2 Watch the rollout progress
[root@master yaml]# kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 2 new replicas have been updated...
2.3.3 View the rollout history
[root@master log]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
3         kubectl set image deploy nginx nginx=nginx:1.15.4 --record=true
4         kubectl set image deploy nginx nginx=nginx:1.15.2 --record=true
2.4 Rolling Back
2.4.1 Roll back to the previous revision
[root@master log]# kubectl rollout undo deploy nginx
deployment.apps/nginx rolled back
2.4.2 Roll back to a specific revision
1. First, check the rollout history
[root@master log]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
3         kubectl set image deploy nginx nginx=nginx:1.15.4 --record=true
4         kubectl set image deploy nginx nginx=nginx:1.15.2 --record=true
2. Roll back to revision 2
[root@master log]# kubectl rollout undo deploy nginx --to-revision=2
deployment.apps/nginx rolled back
2.4.3 View the details of a specific revision
[root@master log]# kubectl rollout history deploy nginx --revision=1
deployment.apps/nginx with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=66bbc9fdc5
  Containers:
   nginx:
    Image:        nginx:1.15.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
2.4.4 Pausing and resuming a Deployment
# pause the rollout
[root@master log]# kubectl rollout pause deploy nginx
deployment.apps/nginx paused
[root@master log]# kubectl set image deploy nginx nginx=nginx:1.19.1 --record
deployment.apps/nginx image updated
# make a second change: add CPU and memory settings
[root@master log]# kubectl set resources deploy nginx -c nginx --limits=cpu=200m,memory=128Mi --requests=cpu=10m,memory=16Mi
deployment.apps/nginx resource requirements updated
[root@master log]# kubectl get deploy nginx -oyaml
# the CPU and memory fields now show the values we set above
# check whether the Pods were updated (they were not, because the rollout is paused)
[root@master log]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 33m
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 33m
# resume the rollout
[root@master log]# kubectl rollout resume deploy nginx
deployment.apps/nginx resumed
[root@master log]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5b9975cbd8 0 0 0 21m
nginx-66945c45ff 0 0 0 17m
nginx-66bbc9fdc5 2 2 2 34m
nginx-7d596c7796 1 1 0 19s
nginx-c58645c45 0 0 0 19m
2.5 Deployment Notes
[root@master log]# kubectl get deploy nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "7"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2020-09-19T02:41:11Z","generation":1,"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"progressDeadlineSeconds":600,"replicas":2,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.15.2","imagePullPolicy":"IfNotPresent","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}}}
    kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.19.1 --record=true
  creationTimestamp: "2021-08-28T13:52:35Z"
  generation: 10
  labels:
    app: nginx
  # managedFields: (server-side field-ownership metadata, trimmed for readability)
  name: nginx
  namespace: default
  resourceVersion: "13598"
  uid: aea1a028-b412-41fe-b8ea-a955a4c6245d
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.19.1
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 16Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2021-08-28T13:52:37Z"
    lastUpdateTime: "2021-08-28T13:52:37Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-08-28T14:26:34Z"
    lastUpdateTime: "2021-08-28T14:26:34Z"
    message: ReplicaSet "nginx-7d596c7796" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 10
  readyReplicas: 2
  replicas: 3
  unavailableReplicas: 1
  updatedReplicas: 1
- .spec.revisionHistoryLimit: how many old ReplicaSet revisions to keep; set it to 0 to keep no history
- .spec.minReadySeconds: optional; the minimum number of seconds a newly created Pod must be Ready without any of its containers crashing before it counts as available; defaults to 0, i.e. a Pod is considered available as soon as it is Ready
- Rolling update strategy:
- .spec.strategy.type: how the Deployment replaces Pods; defaults to RollingUpdate
- RollingUpdate: rolling update; maxSurge and maxUnavailable can be specified
- maxUnavailable: the maximum number of Pods that may be unavailable during an update or rollback; optional, defaults to 25%; may be an absolute number or a percentage; if it is 0, maxSurge must not also be 0
- maxSurge: the maximum number of Pods that may exist above the desired replica count; optional, defaults to 25%; may be an absolute number or a percentage; if it is 0, maxUnavailable must not be 0
- Recreate: recreate, i.e. delete the old Pods first, then create the new ones
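To make the percentage rules concrete, the following Python sketch mirrors the documented rounding behaviour (maxSurge rounds up, maxUnavailable rounds down); the function name is illustrative, not part of any Kubernetes API:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Compute the Pod-count bounds for a Deployment rolling update.

    Percentages are resolved against the desired replica count:
    maxSurge rounds up, maxUnavailable rounds down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    # During the update the controller keeps the total Pod count within
    # [replicas - unavailable, replicas + surge].
    return replicas - unavailable, replicas + surge

# With the defaults from the manifest above (2 replicas, 25%/25%):
# floor(0.5) = 0 unavailable, ceil(0.5) = 1 surge -> between 2 and 3 Pods.
print(rolling_update_bounds(2))  # (2, 3)
```

This also shows why maxSurge and maxUnavailable must not both resolve to 0: the update could then neither add nor remove any Pod.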
III. Stateful Applications: StatefulSet
3.1 StatefulSet Concepts
StatefulSet (stateful set, abbreviated sts) is commonly used to deploy stateful applications that need ordered startup. For example, when containerizing a Spring Cloud project, Eureka is a good fit for StatefulSet: each Eureka instance gets a unique, stable identifier, no extra per-instance Service is needed, and the other Spring Boot applications can register directly through Eureka's headless Service.
- A StatefulSet resource named eureka produces Pods eureka-0, eureka-1, eureka-2
- Service: a headless Service with no ClusterIP, e.g. eureka-svc
- eureka-0.eureka-svc.NAMESPACE_NAME, eureka-1.eureka-svc, ...
StatefulSet is the workload API object for managing stateful applications. In production it can be used to deploy, for example, Elasticsearch clusters, MongoDB clusters, or RabbitMQ, Redis, Kafka, and ZooKeeper clusters that need persistence.
Like a Deployment, a StatefulSet manages Pods based on an identical container spec. The difference is that a StatefulSet maintains a sticky identity for each Pod: the Pods are created from the same spec but are not interchangeable, and each keeps a persistent identifier across rescheduling, generally of the form StatefulSetName-Number. For example, a StatefulSet named redis-sentinel with three replicas creates Pods named redis-sentinel-0, redis-sentinel-1, and redis-sentinel-2. The headless DNS name generally has the format statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local.
Where:
- serviceName is the name of the headless Service; it must be specified when creating the StatefulSet;
- 0..N-1 is the Pod's ordinal, running from 0 to N-1;
- statefulSetName is the name of the StatefulSet;
- namespace is the namespace the Service lives in;
- .cluster.local is the cluster domain.
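A tiny helper makes this naming rule concrete (the helper itself is illustrative, not a Kubernetes API):

```python
def stateful_pod_dns(statefulset, ordinal, service,
                     namespace="default",
                     cluster_domain="cluster.local"):
    """Build the stable DNS name of a StatefulSet Pod behind a headless Service."""
    return f"{statefulset}-{ordinal}.{service}.{namespace}.svc.{cluster_domain}"

# The Redis master used later in this section:
print(stateful_pod_dns("redis-ms", 0, "redis-ms", "public-service"))
# redis-ms-0.redis-ms.public-service.svc.cluster.local
```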
Suppose a project needs a master-slave Redis deployment in Kubernetes. StatefulSet is a very good fit here: Pods are started one at a time, with each new Pod scheduled only after the previous one is fully up, and every Pod's identifier is fixed, so the identifier can be used to determine each Pod's role.
For example, with a StatefulSet named redis-ms deploying a master-slave Redis, the first Pod gets the identifier redis-ms-0 and its hostname inside the Pod is also redis-ms-0. The role can then be decided from the hostname: the Pod whose hostname is redis-ms-0 acts as the Redis master, the rest act as slaves, and the slaves can point their master address at the never-changing headless Service entry of the master. The Redis slave configuration file then looks like this:
port 6379
slaveof redis-ms-0.redis-ms.public-service.svc.cluster.local 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
....
Here redis-ms-0.redis-ms.public-service.svc.cluster.local is the headless Service address of the Redis master. Within the same namespace, redis-ms-0.redis-ms is enough; the trailing public-service.svc.cluster.local can be omitted.
3.2 StatefulSet Notes
StatefulSet is generally meant for applications with one or more of the following requirements:
- a stable, unique network identifier;
- persistent data;
- ordered, graceful deployment and scaling;
- ordered, automated rolling updates.
If an application needs none of these (no stable identifier, no ordered deployment, deletion, or scaling), it should be deployed with a stateless controller such as Deployment or ReplicaSet.
StatefulSet was a beta resource before Kubernetes 1.9 and does not exist in any Kubernetes release before 1.5.
The storage a Pod uses must either be provisioned by a PersistentVolume provisioner from the requested StorageClass or be pre-provisioned by an administrator; running without storage is also possible.
To keep data safe, deleting or scaling down a StatefulSet does not delete the volumes associated with it; the PVCs and PVs can be deleted manually and selectively.
StatefulSet currently relies on a headless Service for Pod network identity and communication, and that Service must be created in advance.
Deleting a StatefulSet gives no guarantee about how its Pods terminate; to get ordered, graceful termination, scale the StatefulSet down to 0 replicas before deleting it.
3.3 Manually Creating a StatefulSet
The manifest is as follows:
[root@master yaml]# cat state-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
# create the resources
[root@master yaml]# kubectl apply -f state-nginx.yaml
service/nginx created
statefulset.apps/web created
# check that the Pods started
[root@master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 2m56s
web-1 1/1 Running 0 2m31s
# inspect the headless Service we defined: the nginx Service has no CLUSTER-IP
[root@master yaml]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
nginx ClusterIP None <none> 80/TCP 8m34s
Key points:
- kind: Service defines a headless Service named nginx. The Pods behind it get DNS records of the form web-0.nginx.default.svc.cluster.local, and so on; since no namespace was specified, everything is deployed in default.
- kind: StatefulSet defines a StatefulSet named web; replicas sets the number of Pod replicas, 2 in this example.
A StatefulSet must set a Pod selector (.spec.selector) that matches its Pod template labels (.spec.template.metadata.labels). Before version 1.8, .spec.selector was defaulted when omitted; since 1.8, omitting it makes StatefulSet creation fail.
When the StatefulSet controller creates a Pod, it adds a label statefulset.kubernetes.io/pod-name whose value is the Pod's name; this label can be used to match a Service to a single Pod.
3.4 Verifying DNS Resolution
# create a busybox Pod for verification
[root@master yaml]# cat<<EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
>   name: busybox
>   namespace: default
> spec:
>   containers:
>   - name: busybox
>     image: busybox:1.28
>     command:
>     - sleep
>     - "3600"
>     imagePullPolicy: IfNotPresent
>   restartPolicy: Always
> EOF
# exec into the busybox container; note it has no bash, only sh
[root@master yaml]# kubectl exec -it busybox -- sh
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.244.32.152 web-0.nginx.default.svc.cluster.local
# the Pod created by the StatefulSet controller resolves correctly, so DNS is working
3.5 StatefulSet Update Strategy
[root@k8s-master01 ~]# kubectl get sts web -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2020-09-19T07:46:49Z"
  generation: 5
  name: web
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
- partition: when the Pod template changes, only Pods whose ordinal is greater than or equal to partition are updated; Pods with a lower ordinal keep the old revision. Raising partition limits how many Pods an update touches at once, which is useful for canary-style rollouts and limits the impact on the business.
- type: as with Deployment, the default is a rolling update (RollingUpdate); OnDelete is also supported, where Pods are replaced only after being deleted manually.
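For instance, a canary-style update of the web StatefulSet above (2 replicas) could set partition to 1 so that only web-1 is recreated when the template changes; a sketch of the relevant fragment:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # only Pods with ordinal >= 1 (here: web-1) pick up template changes
      partition: 1
```

Once web-1 looks healthy, lowering partition back to 0 rolls the change out to the remaining Pods.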
IV. Daemon Process: DaemonSet
DaemonSet (daemon set, abbreviated ds) runs one Pod on every node, or on every node matching a selector.
Typical DaemonSet use cases:
- cluster storage daemons, such as ceph or glusterd
- the node CNI network plugin, such as calico
- node-level log collection: fluentd or filebeat
- node monitoring: node exporter
- service exposure: deploying an ingress nginx controller
4.1 Creating a DaemonSet
# the DaemonSet manifest
[root@master yaml]# cat nginx-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
# create it
[root@master yaml]# kubectl create -f nginx-ds.yaml
daemonset.apps/nginx created
# every node now runs one nginx container
[root@master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 52m 172.171.205.151 master <none> <none>
nginx-2bvmk 1/1 Running 0 71s 172.171.205.153 master <none> <none>
nginx-4rffn 1/1 Running 0 71s 172.165.149.25 node1 <none> <none>
nginx-f8qhk 1/1 Running 0 71s 172.165.11.13 node2 <none> <none>
# label the nodes to experiment with node selection
[root@master yaml]# kubectl label node node1 node2 ds=true
node/node1 labeled
node/node2 labeled
[root@master yaml]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane,master 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1 Ready <none> 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2 Ready <none> 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
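The ds=true label applied above can be used to pin the DaemonSet to just those nodes; a minimal sketch of the fragment to add to the manifest (the label key and value are taken from the commands above):

```yaml
spec:
  template:
    spec:
      # run the daemon Pods only on nodes labeled ds=true
      nodeSelector:
        ds: "true"
```

After this change the controller would keep Pods only on node1 and node2, since master does not carry the label.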
# after changing the nginx image version, view the rollout history
[root@master yaml]# kubectl rollout history ds nginx
daemonset.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
4.2 DaemonSet Updates and Rollbacks
StatefulSet and DaemonSet updates and rollbacks work the same way as a Deployment's, so they are not repeated here; refer to the sections above.
V. HPA Controller
The replica count of Pods managed by a Deployment, ReplicaSet, Replication Controller, or StatefulSet can be adjusted manually at runtime to better match the actual scale of the business. However, manual adjustment requires a person to closely monitor the resource pressure on the application and work out a sensible target value, so it always lags behind to some degree. For this reason, Kubernetes provides several automatic elastic scaling (Auto Scaling) tools.
HPA, short for Horizontal Pod Autoscaler, elastically scales the number of Pods under a controller object. There are currently two generations of the implementation, HPA and HPA v2: the former only supports CPU as the evaluation metric, while the newer version can obtain metric data from both the resource metrics API and custom metrics APIs.
- HPA v1 is the stable horizontal autoscaler and supports only the CPU metric
- v2 is the beta track, split into v2beta1 (CPU, memory, and custom metrics)
- and v2beta2 (CPU, memory, custom metrics, and external metrics, ExternalMetrics)
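The scaling decision itself follows the documented formula desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the min/max bounds; a rough sketch in Python (function name is illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA scaling formula: ceil(current * currentMetric / targetMetric),
    clamped to the [minReplicas, maxReplicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 2 replicas averaging 100% CPU against a 50% utilization target -> scale to 4.
print(desired_replicas(2, current_metric=100, target_metric=50))  # 4
```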
The overall workflow is shown in the figure below.
5.1 Creating an HPA Controller
[root@master yaml]# cat hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: mynginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mynginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi
# create it
[root@master yaml]# kubectl apply -f hpa.yaml
horizontalpodautoscaler.autoscaling/mynginx created
# check its status
[root@master yaml]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
mynginx Deployment/mynginx <unknown>/50Mi, <unknown>/50% 2 10 0 89s
# alternatively, run a container directly
[root@master yaml]# kubectl run nginx-server-hpa --requests=cpu=10m --image=registry.cn-beijing.aliyuncs.com/dotbalo/nginx --port=80
# expose port 80
[root@master yaml]# kubectl expose deployment nginx-server-hpa --port=80
# autoscale on CPU utilization with a Pod-count ceiling
[root@master yaml]# kubectl autoscale deployment nginx-server-hpa --cpu-percent=10 --min=1 --max=10
Prerequisites and caveats:
- metrics-server (or another custom metrics API implementation) must be installed.
- The target Pods must have resource requests configured.
- Objects that cannot be scaled, such as a DaemonSet, cannot be autoscaled.
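On newer clusters (Kubernetes 1.23+), the same autoscaler is written against the stable autoscaling/v2 API, where each metric target moves under a target block; a sketch of the equivalent manifest:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mynginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mynginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization        # replaces targetAverageUtilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue       # replaces targetAverageValue
        averageValue: 50Mi
```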