Introduction

Version: filebeat-7.12.0

This post is about log collection on Kubernetes. Filebeat is deployed as a DaemonSet, the collected logs are classified by the cluster's namespaces, and a separate Kafka topic named after each namespace is created.


Kubernetes log file layout

Normally, when a container writes its logs to standard output (stdout), Docker stores them as *-json.log files under /var/lib/docker/containers. If you have changed Docker's data directory, they live under the new data directory instead, for example:

# tree /data/docker/containers
/data/docker/containers
├── 009227c00e48b051b6f5cb65128fd58412b845e0c6d2bec5904f977ef0ec604d
│   ├── 009227c00e48b051b6f5cb65128fd58412b845e0c6d2bec5904f977ef0ec604d-json.log
│   ├── checkpoints
│   ├── config.v2.json
│   ├── hostconfig.json
│   └── mounts

So each container's log lives at /data/docker/containers/<container id>/*-json.log. By default, Kubernetes creates symlinks to these log files under /var/log/containers and /var/log/pods, as shown below:

cattle-node-agent-tvhlq_cattle-system_agent-8accba2d42cbc907a412be9ea3a628a90624fb8ef0b9aa2bc6ff10eab21cf702.log
etcd-k8s-master01_kube-system_etcd-248e250c64d89ee6b03e4ca28ba364385a443cc220af2863014b923e7f982800.log

This directory therefore holds the logs of every container on the host, with file names of the form:

[podName]_[nameSpace]_[containerName]-[containerId].log

The example above happens to come from a Deployment; pod names generated by other controllers such as DaemonSet or StatefulSet look slightly different, but every file shares one common pattern:

*_[nameSpace]_*.log

With this naming convention in mind, we can move on to deploying and configuring Filebeat.

Filebeat deployment

Filebeat is deployed as a DaemonSet. There is nothing special to say here; just follow the official documentation and deploy it directly:

---
apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_bim5d-basic_*log
      fields:
        log_topic: bim5d-basic
        env: dev
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_bim5d-cost_*log
      fields:
        log_topic: bim5d-cost
        env: dev
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_giot-integration-test_*log
      fields:
        log_topic: giot-integration-test
        env: dev
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_infra-integration-test_*log
      fields:
        log_topic: infra-integration-test
        env: dev
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_gss-integration-test_*log
      fields:
        log_topic: gss-integration-test
        env: dev
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    output.kafka:
      hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
      topic: '%{[fields.log_topic]}'
      partition.round_robin:
        reachable_only: true
kind: ConfigMap
metadata:
  name: filebeat-daemonset-config-test
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.12.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-daemonset-config-test
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

To start everything, simply run kubectl apply -f against the manifest above. Deployment is not the focus of this post, so I won't go into more detail here.

Official deployment reference: https://raw.githubusercontent.com/elastic/beats/7.12/deploy/kubernetes/filebeat-kubernetes.yaml

A quick look at the Filebeat configuration

First, a brief overview of how a Filebeat configuration file is structured:

filebeat.inputs:

filebeat.config.modules:

processors:

output.xxxxx:

That is roughly the shape of the configuration; the overall data flow is simply inputs → processors → output.

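Putting those four sections together, a minimal sketch of a configuration looks like this (the path glob, broker address, and topic name below are placeholders):

filebeat.inputs:              # where events are read from
- type: container
  paths:
  - /var/log/containers/*.log
filebeat.config.modules:      # optional: load external module configs
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
processors:                   # per-event transformations, applied in order
  - drop_fields:
      fields: ["ecs"]
      ignore_missing: true
output.kafka:                 # exactly one output section is active at a time
  hosts: ["kafka:9092"]
  topic: "logs"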

As mentioned earlier, I classify logs by namespace, with one topic per namespace. Collecting from multiple clusters works the same way, except that the topic name also needs the cluster name added so the clusters can be told apart. Since logs are selected per namespace, the inputs need a path glob that matches the log files of the target namespace, for example:

filebeat.inputs:
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_bim5d-basic_*log
  fields:
    log_topic: bim5d-basic
    env: dev

Here my namespace is bim5d-basic, and the glob *_bim5d-basic_*log matches every log file whose name contains that namespace. I also add custom fields, which will be used later when creating the topic.
This handles one namespace; if there are several, just list one input per namespace, as shown below:

filebeat.inputs:
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_bim5d-basic_*log
  fields:
    log_topic: bim5d-basic
    env: dev
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_bim5d-cost_*log
  fields:
    log_topic: bim5d-cost
    env: dev
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_giot-integration-test_*log
  fields:
    log_topic: giot-integration-test
    env: dev

The downside of this approach is that with many namespaces the configuration grows large, and maintaining it can become a problem, so I recommend managing the Filebeat configuration file under version control.

I said above that topics are created per namespace, and the custom log_topic field I added holds exactly the topic name. But with many namespaces, how does the output pick the topic dynamically?

output.kafka:
  hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true

Note the syntax here: %{[fields.log_topic]}. Filebeat expands it per event, so every namespace's logs end up in their own topic. (Filebeat itself does not create topics; the Kafka brokers auto-create them on first write, assuming auto.create.topics.enable is on, which is the broker default.)
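
As an aside, if you prefer explicit mappings over the format string, the Kafka output also supports a topics list with per-rule conditions. A minimal sketch, where the rule mirrors the bim5d-basic input above and the fallback topic name is a placeholder:

output.kafka:
  hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
  topic: "default-logs"        # fallback when no rule below matches
  topics:
    - topic: "bim5d-basic"
      when.equals:
        fields.log_topic: "bim5d-basic"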

The complete configuration then looks like this:

filebeat.inputs:
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_bim5d-basic_*log
  fields:
    log_topic: bim5d-basic
    env: dev
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_bim5d-cost_*log
  fields:
    log_topic: bim5d-cost
    env: dev
- type: container
  enabled: true
  paths:
  - /var/log/containers/*_giot-integration-test_*log
  fields:
    log_topic: giot-integration-test
    env: dev
output.kafka:
  hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true

If you don't need any processing of the logs, you could stop here. But doesn't it feel like something is missing when reading these logs? Exactly: at this point you only know the log content and which namespace it came from, but not which service or which pod produced it, let alone details like the service's image address. None of that is present with the configuration above, so some further work is needed.

This is where a configuration section called processors comes in. The official docs describe processors as a way to filter and enhance events before they are sent to the output; in short, they process the logs.

The rest of this post focuses on processors, which are extremely useful and important.

Using Filebeat processors

Adding basic Kubernetes metadata

With the configuration shown so far, the collected k8s logs carry no pod information, such as:

  • Pod Name
  • Pod UID
  • Namespace
  • Labels
  • and so on

To add this information, we need a processor called add_kubernetes_metadata, which, as the name suggests, adds Kubernetes metadata to each event. Let's first look at an example of how it is used:

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"

host: the node Filebeat runs on, specified explicitly in case it cannot be detected accurately, for example when Filebeat runs in host network mode.
matchers: matchers are used to build the lookup keys that are matched against the identifiers created by the indexers.
logs_path: the base path of container logs; if not specified, the default log path of the platform Filebeat runs on is used.
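
The ${NODE_NAME} variable referenced by host comes from the Downward API environment entry that is already part of the DaemonSet manifest above:

env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName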

With the Kubernetes metadata added, the k8s information shows up in every log event. Here is what an event looks like afterwards:

{
  "@timestamp": "2021-04-19T07:07:36.065Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.11.2"
  },
  "log": {
    "offset": 377708,
    "file": {
      "path": "/var/log/containers/geip-gateway-test-85545c868b-6nsvc_geip-function-test_geip-gateway-test-server-885412c0a8af6bfa7b3d7a341c3a9cb79a85986965e363e87529b31cb650aec4.log"
    }
  },
  "fields": {
    "env": "dev",
    "log_topic": "geip-function-test"
  },
  "host": {
    "name": "filebeat-fv484"
  },
  "agent": {
    "id": "7afbca43-3ec1-4cee-b5cb-1de1e955b717",
    "name": "filebeat-fv484",
    "type": "filebeat",
    "version": "7.11.2",
    "hostname": "filebeat-fv484",
    "ephemeral_id": "8fd29dee-da50-4c88-88d5-ebb6bbf20772"
  },
  "ecs": {
    "version": "1.6.0"
  },
  "stream": "stdout",
  "message": "2021-04-19 15:07:36.065  INFO 23 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration",
  "input": {
    "type": "container"
  },
  "container": {
    "image": {
      "name": "packages.gxxxxxn.com/docxxxxxp/gexxxxxxxst:3.3.1-ent-release-SNAPSHOT.20210402191241_87c9b1f841c"
    },
    "id": "885412c0a8af6bfa7b3d7a341c3a9cb79a85986965e363e87529b31cb650aec4",
    "runtime": "docker"
  },
  "kubernetes": {
    "labels": {
      "pod-template-hash": "85545c868b",
      "app": "geip-gateway-test"
    },
    "container": {
      "name": "geip-gateway-test-server",
      "image": "packages.xxxxxxx.com/dxxxxxp/gxxxxxxxxt:3.3.1-ent-release-SNAPSHOT.20210402191241_87c9b1f841c"
    },
    "node": {
      "uid": "511d9dc1-a84e-4948-b6c8-26d3f1ba2e61",
      "labels": {
        "kubernetes_io/hostname": "k8s-node-09",
        "kubernetes_io/os": "linux",
        "beta_kubernetes_io/arch": "amd64",
        "beta_kubernetes_io/os": "linux",
        "cloudt-global": "true",
        "kubernetes_io/arch": "amd64"
      },
      "hostname": "k8s-node-09",
      "name": "k8s-node-09"
    },
    "namespace_uid": "4fbea846-44b8-4d4a-b03b-56e43cff2754",
    "namespace_labels": {
      "field_cattle_io/projectId": "p-lgxhz",
      "cattle_io/creator": "norman"
    },
    "pod": {
      "name": "gxxxxxxxxst-85545c868b-6nsvc",
      "uid": "1e678b63-fb3c-40b5-8aad-892596c5bd4d"
    },
    "namespace": "geip-function-test",
    "replicaset": {
      "name": "geip-gateway-test-85545c868b"
    }
  }
}

You can see that the value of the kubernetes key includes information about the pod, the node, the namespace, and more; essentially all the key k8s details are included, and there are a lot of them.

But now there is a new problem: each log event is too big, and more than half of it is information we don't want, so we need to remove the fields that are of no use to us.

Dropping unnecessary fields

processors:
  - drop_fields:
      # redundant fields to drop
      fields:
        - host
        - ecs
        - log
        - agent
        - input
        - stream
        - container
      ignore_missing: true

Adding the log timestamp

Looking at the log event above, there is no dedicated field for the log's own time. There is @timestamp, but it is not Beijing time, and what we want is the timestamp from the log itself. The time is right there at the start of message, but how do we extract it into its own field? This is where the script processor comes in: a short JavaScript snippet does the job.

processors:
  - script:
      lang: javascript
      id: format_time
      tag: enable
      source: >
        function process(event) {
            // the message starts with "YYYY-MM-DD HH:MM:SS.mmm ..."
            var str = event.Get("message");
            // keep the first two space-separated tokens: the date and the time
            var time = str.split(" ").slice(0, 2).join(" ");
            event.Put("time", time);
        }
  - timestamp:
      field: time
      timezone: Asia/Shanghai
      # layouts are Go reference-time patterns
      layouts:
        - '2006-01-02 15:04:05'
        - '2006-01-02 15:04:05.999'
      test:
        - '2019-06-22 16:33:51'

Once this is in place, events carry an extra time field that can be used downstream; the timestamp processor also parses that field into @timestamp, interpreting it in the Asia/Shanghai timezone.

Restructuring the Kubernetes metadata

Strictly speaking, all of our requirements are already met at this point, but adding the k8s metadata brought in a lot of useless fields. We could strip the unwanted ones with drop_fields as well, like this:

processors:
  - drop_fields:
      # redundant fields to drop
      fields:
        - kubernetes.pod.uid
        - kubernetes.namespace_uid
        - kubernetes.namespace_labels
        - kubernetes.node.uid
        - kubernetes.node.labels
        - kubernetes.replicaset
        - kubernetes.labels
        - kubernetes.node.name
      ignore_missing: true

This removes the useless fields too, but the hierarchy is unchanged and still deeply nested; the result would look like this:

{
  "@timestamp": "2021-04-19T07:07:36.065Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.11.2"
  },
  "fields": {
    "env": "dev",
    "log_topic": "geip-function-test"
  },
  "message": "2021-04-19 15:07:36.065  INFO 23 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration",
  "kubernetes": {
    "container": {
      "name": "geip-gateway-test-server",
      "image": "packages.xxxxxxx.com/dxxxxxp/gxxxxxxxxt:3.3.1-ent-release-SNAPSHOT.20210402191241_87c9b1f841c"
    },
    "node": {
      "hostname": "k8s-node-09"
    },
    "pod": {
      "name": "gxxxxxxxxst-85545c868b-6nsvc"
    },
    "namespace": "geip-function-test"
  }
}

When we later create an index template in Elasticsearch, all this nesting carries over and makes querying inconvenient. So let's optimize the structure, again using the script processor:

processors:
  - script:
      lang: javascript
      id: format_k8s
      tag: enable
      source: >
        function process(event) {
            // read the nested object written by add_kubernetes_metadata
            var k8s = event.Get("kubernetes");
            // keep only the fields we actually need, in a flat structure
            var newK8s = {
                podName: k8s.pod.name,
                nameSpace: k8s.namespace,
                imageAddr: k8s.container.image,
                hostName: k8s.node.hostname
            };
            event.Put("k8s", newK8s);
        }

This creates a separate k8s field holding the key details: podName, nameSpace, imageAddr, and hostName. The last step is simply to drop the original kubernetes field.
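
As a sketch, the drop is just one more drop_fields entry appended to the processors chain (it must run after format_k8s, or there would be nothing to flatten):

  - drop_fields:
      fields:
        - kubernetes
      ignore_missing: true

With that in place, the final result looks like this: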

{
  "@timestamp": "2021-04-19T07:07:36.065Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.11.2"
  },
  "fields": {
    "env": "dev",
    "log_topic": "geip-function-test"
  },
  "time": "2021-04-19 15:07:36.065",
  "message": "2021-04-19 15:07:36.065  INFO 23 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration",
  "k8s": {
      "podName": "gxxxxxxxxst-85545c868b-6nsvc",
      "nameSpace": "geip-function-test",
      "imageAddr": "packages.xxxxxxx.com/dxxxxxp/gxxxxxxxxt:3.3.1-ent-release-SNAPSHOT.20210402191241_87c9b1f841c",
      "hostName": "k8s-node-09"
  }
}

Now that looks much cleaner.
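
For reference, here is how the processors from this walkthrough chain together. Order matters: the metadata must exist before the scripts read it, and kubernetes can only be dropped after format_k8s has run. A condensed sketch of the pieces shown above:

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"
  - script:
      lang: javascript
      id: format_time
      source: >
        function process(event) {
            // extract "date time" from the start of the message
            var time = event.Get("message").split(" ").slice(0, 2).join(" ");
            event.Put("time", time);
        }
  - timestamp:
      field: time
      timezone: Asia/Shanghai
      layouts:
        - '2006-01-02 15:04:05'
        - '2006-01-02 15:04:05.999'
  - script:
      lang: javascript
      id: format_k8s
      source: >
        function process(event) {
            // flatten the nested kubernetes object into a small k8s object
            var k8s = event.Get("kubernetes");
            event.Put("k8s", {
                podName: k8s.pod.name,
                nameSpace: k8s.namespace,
                imageAddr: k8s.container.image,
                hostName: k8s.node.hostname
            });
        }
  - drop_fields:
      fields: [host, ecs, log, agent, input, stream, container, kubernetes]
      ignore_missing: true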

Finally, here is a complete example for reference. Compared with the walkthrough above, it also enables multiline handling, so that continuation lines such as stack-trace lines (anything that does not start with a date or an IP address) are appended to the preceding event, and it names each topic as <cluster>-<namespace>.
ConfigMap reference:

apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_bim5d-basic_*log
      fields:
        namespace: bim5d-basic
        k8s: cluster01
        env: prod
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}|^[1-9]\d*\.[1-9]\d*\.[1-9]\d*\.[1-9]\d*'
      multiline.negate: true
      multiline.match: after
      multiline.timeout: 10s
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*_bim5d-cost_*log
      fields:
        namespace: bim5d-cost
        k8s: cluster01
        env: prod
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}|^[1-9]\d*\.[1-9]\d*\.[1-9]\d*\.[1-9]\d*'
      multiline.negate: true
      multiline.match: after
      multiline.timeout: 10s
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    processors:
      - add_kubernetes_metadata:
          # add Kubernetes metadata fields
          default_indexers.enabled: true
          default_matchers.enabled: true
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
      - drop_event:
          # drop events from the cluster components we don't care about
          when:
            or:
              - regexp:
                  kubernetes.pod.name: "filebeat.*"
              - regexp:
                  kubernetes.pod.name: "external-dns.*"
              - regexp:
                  kubernetes.pod.name: "coredns.*"
      - drop_fields:
          # redundant fields to drop
          fields:
            - host
            - tags
            - ecs
            - log
            - prospector
            - agent
            - input
            - beat
            - offset
            - stream
            - container
            - kubernetes
          ignore_missing: true
      - timestamp:
          field: start_time
          timezone: Asia/Shanghai
          layouts:
            - '2006-01-02 15:04:05'
            - '2006-01-02 15:04:05.999'
          test:
            - '2019-06-22 16:33:51'
    output.kafka:
      hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
      topic: '%{[fields.k8s]}-%{[fields.namespace]}'
      partition.round_robin:
        reachable_only: true
kind: ConfigMap
metadata:
  name: filebeat-daemonset-config
  namespace: default

Summary

In my view, having Filebeat do some of the processing at the first hop of collection shortens the total processing time, because the bottlenecks are mostly in Elasticsearch and Logstash; so push the expensive operations into Filebeat where possible, and fall back to Logstash only for what Filebeat cannot handle. Another easily overlooked point is slimming down the log content, which significantly reduces log volume: in my tests, the same number of log lines took up 20 GB before trimming and less than 10 GB after, which makes a huge difference to the whole Elasticsearch cluster.

In addition, manage the Filebeat configuration file with version control, so that maintenance changes are recorded and trackable.

