一 GlusterFS storage cluster deployment

Note: the following are abbreviated steps; see 《附009.Kubernetes永久存储之GlusterFS独立部署》 for details.

1.1 Architecture overview

1.2 Planning

Host          IP            Disk    Notes
k8smaster01   172.24.8.71   ——      Kubernetes master node, Heketi host
k8smaster02   172.24.8.72   ——      Kubernetes master node, Heketi host
k8smaster03   172.24.8.73   ——      Kubernetes master node, Heketi host
k8snode01     172.24.8.74   sdb     Kubernetes worker node, GlusterFS node 01
k8snode02     172.24.8.75   sdb     Kubernetes worker node, GlusterFS node 02
k8snode03     172.24.8.76   sdb     Kubernetes worker node, GlusterFS node 03

Tip: this plan uses the raw disks directly.

1.3 Install GlusterFS

# yum -y install centos-release-gluster
# yum -y install glusterfs-server
# systemctl start glusterd
# systemctl enable glusterd
Tip: it is recommended to install GlusterFS on all nodes.

1.4 Add a trusted pool

[root@k8snode01 ~]# gluster peer probe k8snode02
[root@k8snode01 ~]# gluster peer probe k8snode03
[root@k8snode01 ~]# gluster peer status #check the trusted pool status
[root@k8snode01 ~]# gluster pool list #list the trusted pool
Tip: this only needs to be executed once, on any one GlusterFS node.

1.5 Install Heketi

[root@k8smaster01 ~]# yum -y install heketi heketi-client

1.6 Configure Heketi

[root@k8smaster01 ~]# vi /etc/heketi/heketi.json
The key settings: Heketi listens on port 8080, JWT authentication is enabled with the admin key admin123, and the glusterfs executor is ssh using the key file /etc/heketi/heketi_key generated in the next step (1.7). A sample configuration is sketched below.
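A minimal heketi.json sketch, assuming the values used elsewhere in this article (port 8080, admin secret admin123, ssh executor with the key from 1.7); the user key and the sshexec/db paths are standard defaults rather than values taken from the original file:
 {
   "port": "8080",
   "use_auth": true,
   "jwt": {
     "admin": {
       "key": "admin123"
     },
     "_user_comment": "the user key below is an illustrative placeholder",
     "user": {
       "key": "userkey"
     }
   },
   "glusterfs": {
     "executor": "ssh",
     "sshexec": {
       "keyfile": "/etc/heketi/heketi_key",
       "user": "root",
       "port": "22",
       "fstab": "/etc/fstab"
     },
     "db": "/var/lib/heketi/heketi.db"
   }
 }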
 

1.7 Configure passwordless SSH

[root@k8smaster01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""
[root@k8smaster01 ~]# chown heketi:heketi /etc/heketi/heketi_key
[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode01
[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode02
[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode03

1.8 Start Heketi

[root@k8smaster01 ~]# systemctl enable heketi.service
[root@k8smaster01 ~]# systemctl start heketi.service
[root@k8smaster01 ~]# systemctl status heketi.service
[root@k8smaster01 ~]# curl http://localhost:8080/hello #test access

1.9 Configure the Heketi topology

[root@k8smaster01 ~]# vi /etc/heketi/topology.json
The topology describes a single cluster made up of the three GlusterFS nodes (k8snode01/k8snode02/k8snode03 with storage IPs 172.24.8.74/75/76), each contributing the raw disk /dev/sdb. A sample file is sketched below.
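A topology.json sketch built from the planning table in 1.2 (three nodes, one raw disk each); the single zone value is an assumption:
 {
   "clusters": [
     {
       "nodes": [
         {
           "node": {
             "hostnames": {
               "manage": ["k8snode01"],
               "storage": ["172.24.8.74"]
             },
             "zone": 1
           },
           "devices": ["/dev/sdb"]
         },
         {
           "node": {
             "hostnames": {
               "manage": ["k8snode02"],
               "storage": ["172.24.8.75"]
             },
             "zone": 1
           },
           "devices": ["/dev/sdb"]
         },
         {
           "node": {
             "hostnames": {
               "manage": ["k8snode03"],
               "storage": ["172.24.8.76"]
             },
             "zone": 1
           },
           "devices": ["/dev/sdb"]
         }
       ]
     }
   ]
 }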
 
[root@k8smaster01 ~]# echo "export HEKETI_CLI_SERVER=http://k8smaster01:8080" >> /etc/profile.d/heketi.sh
[root@k8smaster01 ~]# echo "alias heketi-cli='heketi-cli --user admin --secret admin123'" >> .bashrc
[root@k8smaster01 ~]# source /etc/profile.d/heketi.sh
[root@k8smaster01 ~]# source .bashrc
[root@k8smaster01 ~]# echo $HEKETI_CLI_SERVER
http://k8smaster01:8080
[root@k8smaster01 ~]# heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret admin123 topology load --json=/etc/heketi/topology.json

1.10 Cluster management and testing

[root@heketi ~]# heketi-cli cluster list #list clusters
[root@heketi ~]# heketi-cli node list #list nodes
[root@heketi ~]# heketi-cli volume list #list volumes
[root@k8snode01 ~]# gluster volume info #check from a GlusterFS node
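Optionally (not part of the original steps), provisioning can be exercised end to end by creating and deleting a small test volume through Heketi:
[root@k8smaster01 ~]# heketi-cli volume create --size=1 --replica=3    #create a 1GiB, 3-way replicated test volume
[root@k8smaster01 ~]# heketi-cli volume list
[root@k8smaster01 ~]# heketi-cli volume delete <volume-id>             #remove the test volume again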

1.11 Create a StorageClass

[root@k8smaster01 study]# vi heketi-secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: heketi-secret                       # example name; must match the secretName used by the StorageClass
   namespace: heketi
 data:
   key: YWRtaW4xMjM=                         # base64 of the Heketi admin secret "admin123"
 type: kubernetes.io/glusterfs
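The key field holds the base64 encoding of the Heketi admin secret used above; it can be generated with:
[root@k8smaster01 study]# echo -n "admin123" | base64
YWRtaW4xMjM=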
 
[root@k8smaster01 study]# kubectl create ns heketi
[root@k8smaster01 study]# kubectl create -f heketi-secret.yaml #create the Heketi Secret
[root@k8smaster01 study]# kubectl get secrets -n heketi
[root@k8smaster01 study]# vim gluster-heketi-storageclass.yaml #create the StorageClass
The StorageClass uses the kubernetes.io/glusterfs provisioner with reclaimPolicy Delete and points at the Heketi REST endpoint, authenticating as the admin user with the Secret created above. A sample manifest is sketched below.
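A sketch of such a manifest, assuming the Heketi endpoint and Secret created above; the resturl, clusterid (taken from heketi-cli cluster list), secretName and volumetype values must be adapted to the actual environment:
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: gluster-heketi-storageclass
 provisioner: kubernetes.io/glusterfs
 reclaimPolicy: Delete
 parameters:
   resturl: "http://172.24.8.71:8080"            # Heketi REST endpoint (assumed)
   clusterid: "<cluster-id>"                     # from: heketi-cli cluster list
   restauthenabled: "true"
   restuser: "admin"
   secretName: "heketi-secret"                   # must match the Secret created above
   secretNamespace: "heketi"
   volumetype: "replicate:3"                     # assumed: 3-way replica across the GlusterFS nodes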
 
[root@k8smaster01 study]# kubectl create -f gluster-heketi-storageclass.yaml
Note: a StorageClass cannot be changed after creation; to modify it, delete and recreate it.
[root@k8smaster01 heketi]# kubectl get storageclasses #confirm
NAME PROVISIONER AGE
gluster-heketi-storageclass kubernetes.io/glusterfs 85s
[root@k8smaster01 heketi]# kubectl describe storageclasses gluster-heketi-storageclass
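As an additional check (not in the original steps), a throwaway PVC can be bound against the new StorageClass to confirm that dynamic provisioning works; the PVC name below is illustrative:
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: gluster-test-pvc
   namespace: default
 spec:
   storageClassName: gluster-heketi-storageclass
   accessModes:
   - ReadWriteMany
   resources:
     requests:
       storage: 1Gi
kubectl get pvc gluster-test-pvc should report Bound shortly afterwards; the test PVC can then be deleted.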
 

二 Cluster monitoring with Metrics Server

Note: the following are abbreviated steps; see 《049.集群管理-集群监控Metrics》 for details.

2.1 Enable the aggregation layer

Enable the aggregation layer. A cluster deployed with kubeadm already has this enabled by default, which can be verified as follows.
[root@k8smaster01 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
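In the output, aggregation-layer flags similar to the following should already be present (typical kubeadm defaults; the exact certificate paths can differ):
     - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
     - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
     - --requestheader-allowed-names=front-proxy-client
     - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
     - --requestheader-extra-headers-prefix=X-Remote-Extra-
     - --requestheader-group-headers=X-Remote-Group
     - --requestheader-username-headers=X-Remote-User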

2.2 Get the deployment files

[root@k8smaster01 ~]# git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@k8smaster01 ~]# cd metrics-server/deploy/1.8+/
[root@k8smaster01 1.8+]# vi metrics-server-deployment.yaml
 ……
         image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
         command:
         - /metrics-server
         - --metric-resolution=30s
         - --kubelet-insecure-tls
         - --kubelet-preferred-address-types=InternalIP        # flag truncated in the source; the address-type value here is an assumption
 ……
 

2.3 Deploy

[root@k8smaster01 1.8+]# kubectl apply -f .
[root@k8smaster01 1.8+]# kubectl -n kube-system get pods -l k8s-app=metrics-server
[root@k8smaster01 1.8+]# kubectl -n kube-system logs -l k8s-app=metrics-server -f #optionally follow the deployment logs

2.4 Verify

[root@k8smaster01 ~]# kubectl top nodes
[root@k8smaster01 ~]# kubectl top pods --all-namespaces
 

三 Deploy Prometheus

Note: the following are abbreviated steps; see 《050.集群管理-Prometheus+Grafana监控方案》 for details.

3.1 Get the deployment files

[root@k8smaster01 ~]# git clone https://github.com/prometheus/prometheus

3.2 Create the namespace

[root@k8smaster01 ~]# cd prometheus/documentation/examples/
[root@k8smaster01 examples]# vi monitor-namespace.yaml
 apiVersion: v1
 kind: Namespace
 metadata:
   name: monitoring
[root@k8smaster01 examples]# kubectl create -f monitor-namespace.yaml

3.3 Create RBAC

[root@k8smaster01 examples]# vi rbac-setup.yml
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRole
 metadata:
   name: prometheus
 rules:
 - apiGroups: [""]
   resources:
   - nodes
   - nodes/proxy
   - services
   - endpoints
   - pods
   verbs: ["get", "list", "watch"]
 - apiGroups:
   - extensions
   resources:
   - ingresses
   verbs: ["get", "list", "watch"]
 - nonResourceURLs: ["/metrics"]
   verbs: ["get"]
 ---
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: prometheus
   namespace: monitoring
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: prometheus
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: prometheus
 subjects:
 - kind: ServiceAccount
   name: prometheus
   namespace: monitoring          #only the namespace needs to be modified
[root@k8smaster01 examples]# kubectl create -f rbac-setup.yml

3.4 Create the Prometheus ConfigMap

[root@k8smaster01 examples]# cat prometheus-kubernetes.yml | grep -v ^$ | grep -v "#" >> prometheus-config.yaml
[root@k8smaster01 examples]# vi prometheus-config.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: prometheus-config                 # example name; referenced by the Prometheus Deployment's configMap volume
   labels:
     name: prometheus-config
   namespace: monitoring
 data:
   prometheus.yml: |-
     global:
       scrape_interval: 10s
       evaluation_interval: 10s

     scrape_configs:
     - job_name: 'kubernetes-apiservers'
       kubernetes_sd_configs:
       - role: endpoints
       scheme: https
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
       relabel_configs:
       - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
         action: keep
         regex: default;kubernetes;https

     - job_name: 'kubernetes-nodes'
       scheme: https
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
       kubernetes_sd_configs:
       - role: node
       relabel_configs:
       - action: labelmap
         regex: __meta_kubernetes_node_label_(.+)
       - target_label: __address__
         replacement: kubernetes.default.svc:443
       - source_labels: [__meta_kubernetes_node_name]
         regex: (.+)
         target_label: __metrics_path__
         replacement: /api/v1/nodes/${1}/proxy/metrics

     - job_name: 'kubernetes-cadvisor'
       scheme: https
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
       kubernetes_sd_configs:
       - role: node
       relabel_configs:
       - action: labelmap
         regex: __meta_kubernetes_node_label_(.+)
       - target_label: __address__
         replacement: kubernetes.default.svc:443
       - source_labels: [__meta_kubernetes_node_name]
         regex: (.+)
         target_label: __metrics_path__
         replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

     - job_name: 'kubernetes-service-endpoints'
       kubernetes_sd_configs:
       - role: endpoints
       relabel_configs:
       - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
         action: keep
         regex: true
       - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
         action: replace
         target_label: __scheme__
         regex: (https?)
       - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
         action: replace
         target_label: __metrics_path__
         regex: (.+)
       - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
         action: replace
         target_label: __address__
         regex: ([^:]+)(?::\d+)?;(\d+)
         replacement: $1:$2
       - action: labelmap
         regex: __meta_kubernetes_service_label_(.+)
       - source_labels: [__meta_kubernetes_namespace]
         action: replace
         target_label: kubernetes_namespace
       - source_labels: [__meta_kubernetes_service_name]
         action: replace
         target_label: kubernetes_name

     - job_name: 'kubernetes-services'
       metrics_path: /probe
       params:
         module: [http_2xx]
       kubernetes_sd_configs:
       - role: service
       relabel_configs:
       - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
         action: keep
         regex: true
       - source_labels: [__address__]
         target_label: __param_target
       - target_label: __address__
         replacement: blackbox-exporter.example.com:9115
       - source_labels: [__param_target]
         target_label: instance
       - action: labelmap
         regex: __meta_kubernetes_service_label_(.+)
       - source_labels: [__meta_kubernetes_namespace]
         target_label: kubernetes_namespace
       - source_labels: [__meta_kubernetes_service_name]
         target_label: kubernetes_name

     - job_name: 'kubernetes-ingresses'
       kubernetes_sd_configs:
       - role: ingress
       relabel_configs:
       - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
         action: keep
         regex: true
       - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
         regex: (.+);(.+);(.+)
         replacement: ${1}://${2}${3}
         target_label: __param_target
       - target_label: __address__
         replacement: blackbox-exporter.example.com:9115
       - source_labels: [__param_target]
         target_label: instance
       - action: labelmap
         regex: __meta_kubernetes_ingress_label_(.+)
       - source_labels: [__meta_kubernetes_namespace]
         target_label: kubernetes_namespace
       - source_labels: [__meta_kubernetes_ingress_name]
         target_label: kubernetes_name

     - job_name: 'kubernetes-pods'
       kubernetes_sd_configs:
       - role: pod
       relabel_configs:
       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
         action: keep
         regex: true
       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
         action: replace
         target_label: __metrics_path__
         regex: (.+)
       - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
         action: replace
         regex: ([^:]+)(?::\d+)?;(\d+)
         replacement: $1:$2
         target_label: __address__
       - action: labelmap
         regex: __meta_kubernetes_pod_label_(.+)
       - source_labels: [__meta_kubernetes_namespace]
         action: replace
         target_label: kubernetes_namespace
       - source_labels: [__meta_kubernetes_pod_name]
         action: replace
         target_label: kubernetes_pod_name
[root@k8smaster01 examples]# kubectl create -f prometheus-config.yaml
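With this configuration the kubernetes-service-endpoints and kubernetes-pods jobs only scrape objects that opt in through annotations; a Service exposing metrics would be annotated roughly as follows (fragment; name, port and path are illustrative):
 apiVersion: v1
 kind: Service
 metadata:
   name: example-exporter                  # illustrative name
   annotations:
     prometheus.io/scrape: "true"          # makes the scrape job keep this target
     prometheus.io/port: "9100"            # overrides the scrape port
     prometheus.io/path: "/metrics"        # overrides the metrics path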

3.5 Create a persistent PVC

[root@k8smaster01 examples]# vi prometheus-pvc.yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: prometheus-pvc
   namespace: monitoring
   annotations:
     volume.beta.kubernetes.io/storage-class: gluster-heketi-storageclass      # must be the StorageClass created in 1.11
 spec:
   accessModes:
   - ReadWriteMany
   resources:
     requests:
       storage: 5Gi
[root@k8smaster01 examples]# kubectl create -f prometheus-pvc.yaml

3.6 Deploy Prometheus

[root@k8smaster01 examples]# vi prometheus-deployment.yml
The Deployment runs a single replica of prom/prometheus:v2.14.0 under the prometheus ServiceAccount, exposes container port 9090, and mounts the Prometheus ConfigMap at /etc/prometheus/ plus the prometheus-pvc volume at /prometheus/. A sample manifest is sketched below.
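A minimal sketch consistent with the fragments above; the object and volume names, the command arguments and the retention setting are assumptions, and the original file additionally references an imagePullSecrets entry:
 apiVersion: apps/v1beta2
 kind: Deployment
 metadata:
   name: prometheus-server              # name assumed
   labels:
     app: prometheus-server
   namespace: monitoring
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: prometheus-server
   template:
     metadata:
       labels:
         app: prometheus-server
     spec:
       serviceAccountName: prometheus
       containers:
       - name: prometheus-server
         image: prom/prometheus:v2.14.0
         command:
         - "/bin/prometheus"
         args:
         - "--config.file=/etc/prometheus/prometheus.yml"
         - "--storage.tsdb.path=/prometheus/"
         - "--storage.tsdb.retention.time=7d"        # retention value assumed
         ports:
         - containerPort: 9090
           protocol: TCP
         volumeMounts:
         - name: prometheus-config-volume
           mountPath: /etc/prometheus/
         - name: prometheus-storage-volume
           mountPath: /prometheus/
       volumes:
       - name: prometheus-config-volume
         configMap:
           name: prometheus-config                   # the ConfigMap created in 3.4
           defaultMode: 420
       - name: prometheus-storage-volume
         persistentVolumeClaim:
           claimName: prometheus-pvc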
[root@k8smaster01 examples]# kubectl create -f prometheus-deployment.yml

3.7 Create the Prometheus Service

[root@k8smaster01 examples]# vi prometheus-service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: prometheus-service              # example name
   labels:
     app: prometheus-service
   namespace: monitoring
 spec:
   type: NodePort
   selector:
     app: prometheus-server
   ports:
   - port: 9090
     targetPort: 9090
     nodePort: 30001
[root@k8smaster01 examples]# kubectl create -f prometheus-service.yaml
[root@k8smaster01 examples]# kubectl get all -n monitoring

3.8 Verify Prometheus

Access directly in a browser: http://172.24.8.100:30001/
 

四 Deploy Grafana

Note: the following are abbreviated steps; see 《050.集群管理-Prometheus+Grafana监控方案》 for details.

4.1 Get the deployment files

[root@k8smaster01 ~]# git clone https://github.com/liukuan73/kubernetes-addons
[root@k8smaster01 ~]# cd /root/kubernetes-addons/monitor/prometheus+grafana

4.2 Create a persistent PVC

[root@k8smaster01 prometheus+grafana]# vi grafana-data-pvc.yaml
The PVC requests 5Gi of ReadWriteOnce storage from the GlusterFS StorageClass for Grafana's data directory. A sample manifest is sketched below.
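A sketch of the PVC, assuming the StorageClass from 1.11; the PVC name must match the claimName used later in grafana.yaml:
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: grafana-data-pvc                  # name assumed; must match the claimName in grafana.yaml
   namespace: monitoring
   annotations:
     volume.beta.kubernetes.io/storage-class: gluster-heketi-storageclass
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
       storage: 5Gi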
[root@k8smaster01 prometheus+grafana]# kubectl create -f grafana-data-pvc.yaml

4.3 Deploy Grafana

[root@k8smaster01 prometheus+grafana]# vi grafana.yaml
grafana.yaml defines a single-replica Grafana Deployment (image grafana/grafana:6.5.0, container port 3000) that mounts the data PVC at /var/lib/grafana and is pinned to the master nodes through a node-role.kubernetes.io/master nodeSelector with a matching toleration, plus a NodePort Service exposing port 80 (targetPort 3000) on nodePort 30002. A sample manifest is sketched below.
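A condensed sketch based on the fragments above; the object names, the Grafana environment variables, the readiness path and the toleration are assumptions, and the original file contains additional settings:
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: grafana                            # name assumed
   namespace: monitoring
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         task: monitoring
         k8s-app: grafana
     spec:
       containers:
       - name: grafana
         image: grafana/grafana:6.5.0
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 3000
           protocol: TCP
         volumeMounts:
         - name: grafana-storage
           mountPath: /var/lib/grafana
         env:                               # environment variables assumed
         - name: GF_AUTH_BASIC_ENABLED
           value: "true"
         - name: GF_AUTH_ANONYMOUS_ENABLED
           value: "false"
         readinessProbe:
           httpGet:
             path: /login
             port: 3000
       volumes:
       - name: grafana-storage
         persistentVolumeClaim:
           claimName: grafana-data-pvc       # the PVC created in 4.2
       nodeSelector:
         node-role.kubernetes.io/master: "true"
       tolerations:
       - key: "node-role.kubernetes.io/master"
         effect: "NoSchedule"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: grafana                            # name assumed
   namespace: monitoring
   labels:
     kubernetes.io/cluster-service: 'true'
     kubernetes.io/name: grafana
   annotations:
     prometheus.io/scrape: 'true'
     prometheus.io/tcp-probe: 'true'
     prometheus.io/tcp-probe-port: '80'
 spec:
   type: NodePort
   ports:
   - port: 80
     targetPort: 3000
     nodePort: 30002
   selector:
     k8s-app: grafana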
[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster01 node-role.kubernetes.io/master=true
[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster02 node-role.kubernetes.io/master=true
[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster03 node-role.kubernetes.io/master=true
[root@k8smaster01 prometheus+grafana]# kubectl create -f grafana.yaml
[root@k8smaster01 examples]# kubectl get all -n monitoring

4.4 Verify Grafana

Access directly in a browser: http://172.24.8.100:30002/

4.5 Configure Grafana

  • Add a data source: omitted (see the sketch below)
  • Create users: omitted
Tip: all Grafana configuration options are documented at https://grafana.com/docs/grafana/latest/installation/configuration/.
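Adding the data source by hand in the UI is enough; alternatively Grafana can provision it from a file under /etc/grafana/provisioning/datasources/. The sketch below assumes the Prometheus Service created in 3.7 is reachable in-cluster as prometheus-service.monitoring.svc on port 9090 (adjust the URL to the actual Service name):
 apiVersion: 1
 datasources:
 - name: Prometheus
   type: prometheus
   access: proxy
   url: http://prometheus-service.monitoring.svc:9090
   isDefault: true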

4.6 View the dashboards

Open the browser again: http://172.24.8.100:30002/
 

五 Log management

Note: the following are abbreviated steps; see 《051.集群管理-日志管理》 for details.

5.1 Get the deployment files

[root@k8smaster01 ~]# git clone https://github.com/kubernetes/kubernetes.git
[root@k8smaster01 ~]# cd kubernetes/cluster/addons/fluentd-elasticsearch/

5.2 Change the image sources

[root@k8smaster01 fluentd-elasticsearch]# sed -i "s/quay.io/quay-mirror.qiniu.com/g" `grep quay.io -rl ./*.yaml`
[root@k8smaster01 fluentd-elasticsearch]# vi es-statefulset.yaml
 ……
         - image: quay-mirror.qiniu.com/fluentd_elasticsearch/elasticsearch:v7.3.2
           imagePullPolicy: IfNotPresent
 ……
 
[root@k8smaster01 fluentd-elasticsearch]# cat fluentd-es-ds.yaml
 ……
         image: quay-mirror.qiniu.com/fluentd_elasticsearch/fluentd:v2.7.0
         imagePullPolicy: IfNotPresent
 ……
 
[root@k8smaster01 fluentd-elasticsearch]# cat kibana-deployment.yaml
 ……
         image: docker.elastic.co/kibana/kibana-oss:7.3.2
         imagePullPolicy: IfNotPresent
 ……
 

5.3 Create a persistent PVC

[root@k8smaster01 fluentd-elasticsearch]# vi elasticsearch-pvc.yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: elasticsearch-pvc
   namespace: kube-system
   annotations:
     volume.beta.kubernetes.io/storage-class: gluster-heketi-storageclass      # must be the StorageClass created in 1.11
 spec:
   accessModes:
   - ReadWriteMany
   resources:
     requests:
       storage: 5Gi
[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f elasticsearch-pvc.yaml

5.4 Deploy Elasticsearch

[root@k8smaster01 fluentd-elasticsearch]# vi es-statefulset.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: elasticsearch-logging
   namespace: kube-system
   labels:
     k8s-app: elasticsearch-logging
     addonmanager.kubernetes.io/mode: Reconcile
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: elasticsearch-logging
   labels:
     k8s-app: elasticsearch-logging
     addonmanager.kubernetes.io/mode: Reconcile
 rules:
 - apiGroups:
   - ""
   resources:
   - "services"
   - "namespaces"
   - "endpoints"
   verbs:
   - "get"
 ---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   namespace: kube-system
   name: elasticsearch-logging
   labels:
     k8s-app: elasticsearch-logging
     addonmanager.kubernetes.io/mode: Reconcile
 subjects:
 - kind: ServiceAccount
   name: elasticsearch-logging
   namespace: kube-system
   apiGroup: ""
 roleRef:
   kind: ClusterRole
   name: elasticsearch-logging
   apiGroup: ""
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: elasticsearch-logging
   namespace: kube-system
   labels:
     k8s-app: elasticsearch-logging
     version: v7.3.2
     addonmanager.kubernetes.io/mode: Reconcile
 spec:
   serviceName: elasticsearch-logging
   replicas: 1
   selector:
     matchLabels:
       k8s-app: elasticsearch-logging
       version: v7.3.2
   template:
     metadata:
       labels:
         k8s-app: elasticsearch-logging
         version: v7.3.2
     spec:
       serviceAccountName: elasticsearch-logging
       containers:
       - image: quay-mirror.qiniu.com/fluentd_elasticsearch/elasticsearch:v7.3.2
         name: elasticsearch-logging
         imagePullPolicy: IfNotPresent
         resources:
           limits:
             cpu: 1000m
             memory: 3Gi
           requests:
             cpu: 100m
             memory: 3Gi
         ports:
         - containerPort: 9200
           name: db
           protocol: TCP
         - containerPort: 9300
           name: transport
           protocol: TCP
         volumeMounts:
         - name: elasticsearch-logging
           mountPath: /data
         env:
         - name: "NAMESPACE"
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
       volumes:
        - name: elasticsearch-logging          #mount the persistent PVC created above
         persistentVolumeClaim:
           claimName: elasticsearch-pvc
       initContainers:
       - image: alpine:3.6
         command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
         name: elasticsearch-logging-init
         securityContext:
           privileged: true
[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f es-statefulset.yaml

5.5 Deploy the Elasticsearch Service

[root@k8smaster01 fluentd-elasticsearch]# vi es-service.yaml #the upstream defaults are fine
 apiVersion: v1
 kind: Service
 metadata:
   name: elasticsearch-logging
   namespace: kube-system
   labels:
     k8s-app: elasticsearch-logging
     kubernetes.io/cluster-service: "true"
     addonmanager.kubernetes.io/mode: Reconcile
     kubernetes.io/name: "Elasticsearch"
 spec:
   ports:
   - port: 9200
     protocol: TCP
     targetPort: db
   selector:
     k8s-app: elasticsearch-logging
[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f es-service.yaml

5.6 Deploy Fluentd

[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f fluentd-es-configmap.yaml #create the Fluentd ConfigMap

[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f fluentd-es-ds.yaml                     #deploy Fluentd

5.7 Deploy Kibana

[root@k8smaster01 fluentd-elasticsearch]# vi kibana-deployment.yaml #modify as follows
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kibana-logging
   namespace: kube-system
   labels:
     k8s-app: kibana-logging
     addonmanager.kubernetes.io/mode: Reconcile
 spec:
   replicas: 1
   selector:
     matchLabels:
       k8s-app: kibana-logging
   template:
     metadata:
       labels:
         k8s-app: kibana-logging
       annotations:
         seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: kibana-logging
         image: docker.elastic.co/kibana/kibana-oss:7.3.2
         imagePullPolicy: IfNotPresent
         resources:
           limits:
             cpu: 1000m
           requests:
             cpu: 100m
         env:
           - name: ELASTICSEARCH_HOSTS
             value: http://elasticsearch-logging:9200
         ports:
         - containerPort: 5601
           name: ui
           protocol: TCP
[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f kibana-deployment.yaml

5.8 Deploy the Kibana Service

[root@k8smaster01 fluentd-elasticsearch]# vi kibana-service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kibana-logging
   namespace: kube-system
   labels:
     k8s-app: kibana-logging
     kubernetes.io/cluster-service: "true"
     addonmanager.kubernetes.io/mode: Reconcile
     kubernetes.io/name: "Kibana"
 spec:
   type: NodePort
   ports:
   - port: 5601
     protocol: TCP
     nodePort: 30003
     targetPort: ui
   selector:
     k8s-app: kibana-logging
[root@k8smaster01 fluentd-elasticsearch]# kubectl create -f kibana-service.yaml
[root@k8smaster01 fluentd-elasticsearch]# kubectl get pods -n kube-system -o wide | grep -E 'NAME|elasticsearch|fluentd|kibana' #check the related resources

5.9 Verify

Access directly in a browser: http://172.24.8.100:30003/