I want to forward Kubernetes logs from Fluent Bit through Fluentd to Elasticsearch, but Fluent Bit cannot parse the Kubernetes logs correctly. I installed Fluent Bit and Fluentd with Helm charts; I tried both the stable and the fluent chart repositories and hit the same problem:
#0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'Could not dynamically add mapping for field [app.kubernetes.io/component]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].'"
I put the following lines into the Fluent Bit values file, as shown here:

remapMetadataKeysFilter:
  enabled: true
  match: kube.*
  ## List of the respective patterns and replacements for metadata keys replacements
  ## Pattern must satisfy the Lua spec (see https://www.lua.org/pil/20.2.html)
  ## Replacement is a plain symbol to replace with
  replaceMap:
    - pattern: "[/.]"
      replacement: "_"
...but nothing changed, and the same error is still logged. Is there a workaround for this error?
My values.yaml is here:
# Default values for fluent-bit.
# kind -- DaemonSet or Deployment
kind: DaemonSet
# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1
image:
  repository: fluent/fluent-bit
  pullPolicy: Always
  # tag:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  create: true
  annotations: {}
  name:
rbac:
  create: true
podSecurityPolicy:
  create: false
podSecurityContext:
  {}
  # fsGroup: 2000
securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
service:
  type: ClusterIP
  port: 2020
  annotations:
    prometheus.io/path: "/api/v1/metrics/prometheus"
    prometheus.io/port: "2020"
    prometheus.io/scrape: "true"
serviceMonitor:
  enabled: true
  namespace: monitoring
  interval: 10s
  scrapeTimeout: 10s
  # selector:
  #   prometheus: my-prometheus
resources:
  {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
priorityClassName: ""
env: []
envFrom: []
extraPorts: []
#   - port: 5170
#     containerPort: 5170
#     protocol: TCP
#     name: tcp
extraVolumes: []
extraVolumeMounts: []
## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit
config:
  ## https://docs.fluentbit.io/manual/service
  service: |
    [SERVICE]
        Flush 1
        Daemon Off
        Log_Level info
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On
  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Kube_URL https://kubernetes.default.svc:443
        Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix kube.var.log.containers.
        Merge_Log On
        Merge_Log_Key log_processed
        K8S-Logging.Parser On
        K8S-Logging.Exclude Off
    [FILTER]
        Name lua
        Match kube.*
        script /fluent-bit/etc/functions.lua
        call dedot
  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name forward
        Match *
        Host fluentd-in-forward.elastic-system.svc.cluster.local
        Port 24224
        tls off
        tls.verify off
  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
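For reference, the lua filter above loads /fluent-bit/etc/functions.lua and calls dedot; the script itself is not included here. A typical dedot function used with this kind of filter looks roughly like the sketch below (illustrative only, not necessarily the exact script mounted into the pod). It rewrites dots and slashes in the kubernetes label and annotation keys to underscores, the same idea as the remapMetadataKeysFilter above:

-- Rough sketch of a dedot script for Fluent Bit's lua filter.
-- Replaces "." and "/" in kubernetes label/annotation keys with "_".
function dedot(tag, timestamp, record)
    if record["kubernetes"] == nil then
        -- no kubernetes metadata: keep the record unmodified
        return 0, timestamp, record
    end
    dedot_keys(record["kubernetes"]["labels"])
    dedot_keys(record["kubernetes"]["annotations"])
    -- 1 = record was modified, keep it
    return 1, timestamp, record
end

function dedot_keys(map)
    if map == nil then
        return
    end
    local renamed = {}
    for k, v in pairs(map) do
        local dedotted = string.gsub(k, "[./]", "_")
        if dedotted ~= k then
            renamed[dedotted] = v
            map[k] = nil
        end
    end
    for k, v in pairs(renamed) do
        map[k] = v
    end
end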
Best answer
I think your problem is not in Kubernetes and not in the fluent-bit / fluentd charts; your problem is in Elasticsearch, specifically in the mapping.
In Elasticsearch 7.x, the same field cannot have different types (string, integer, and so on).
To get around this, I use "ignore_malformed": true in the index template used for the Kubernetes logs.
https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-malformed.html
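A minimal sketch of such a template, using the legacy template API (the template name and index pattern are placeholders here; point the pattern at whatever indices your Fluentd Elasticsearch output writes to):

PUT _template/kubernetes-logs
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.mapping.ignore_malformed": true
  }
}

Per the linked docs, with ignore_malformed enabled a value that does not fit the already-mapped type is dropped from that field while the rest of the document is still indexed, instead of the whole document being rejected.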