Problem Description
I run new modules of my system in Google Container Engine. I would like to bring stdout and stderr from them (running in pods) to my centralised logstash. Is there an easy way to forward logs from pods to an external logging service, e.g. logstash or elasticsearch?

Solution
I decided to log directly to elasticsearch, an external virtual machine that can be accessed at elasticsearch.c.my-project.internal (I am on Google Cloud Platform). It is quite easy:
Set up a Service of type ExternalName, named elasticsearch-logging, that points to the elasticsearch instance:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
    kubernetes.io/name: "elasticsearch"
spec:
  type: ExternalName
  externalName: elasticsearch.c.my-project.internal
  ports:
  - port: 9200
    targetPort: 9200
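If the cluster version does not support ExternalName (it was introduced around Kubernetes 1.4), the same effect can be achieved with a selector-less Service plus a manually maintained Endpoints object. This is a sketch; the IP address below is a hypothetical placeholder for the elasticsearch VM's address:

```yaml
# Selector-less Service: kube-proxy routes traffic to whatever
# Endpoints object carries the same name.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
spec:
  ports:
  - port: 9200
    targetPort: 9200
---
# Manually maintained Endpoints pointing at the external VM.
# 10.240.0.5 is a placeholder; substitute the VM's actual IP.
apiVersion: v1
kind: Endpoints
metadata:
  name: elasticsearch-logging
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.240.0.5
  ports:
  - port: 9200
```

The downside compared to ExternalName is that the Endpoints object must be updated by hand if the VM's IP changes.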
Deploy fluentd-elasticsearch as a DaemonSet. fluentd-elasticsearch will automatically connect to the service named elasticsearch-logging (based on a fluentd-elasticsearch deployment definition):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    tier: monitoring
    app: fluentd-logging
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.19
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
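The two hostPath mounts exist because the fluentd image tails the kubelet's per-container log files under /var/log/containers, which are symlinks into /var/lib/docker/containers, so both directories must be visible inside the pod. Conceptually, the source section of the td-agent.conf baked into the image looks roughly like this (a sketch for illustration, not something you need to write yourself):

```
<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  format json
  read_from_head true
  tag kubernetes.*
</source>
```

Because the pos_file lives under the mounted /var/log, fluentd can resume from where it left off if the pod is restarted.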
Use kubectl logs fluentd-elasticsearch-...
to check whether you were able to connect to the elasticsearch instance.
Now, you can access kibana and see the logs.
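The whole workflow can be sketched as a few kubectl commands (the manifest file names are illustrative assumptions, not from the original answer):

```shell
# Create the ExternalName service and the fluentd DaemonSet
# (assumes the two manifests above were saved under these names):
kubectl apply -f es-service.yaml
kubectl apply -f fluentd-ds.yaml

# One fluentd pod should appear per node:
kubectl get pods -n kube-system -l name=fluentd-elasticsearch

# Tail the logs of the first fluentd pod to confirm it reached elasticsearch:
kubectl logs -n kube-system $(kubectl get pods -n kube-system \
  -l name=fluentd-elasticsearch -o name | head -n 1)
```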
This concludes this article on how to ship logs from pods running on Kubernetes on GCP to elasticsearch/logstash. We hope the answer above helps, and we appreciate your support!