Does Fluentd support log rotation for file output?

Problem description

The current setup I am working with is a Docker compose stack with multiple containers. These containers send their logging information to a logging container (inside the compose stack) running the Fluentd daemon. The configuration for Fluentd consists of one in_forward source that collects the logs and writes them to separate files, depending on the container. My Fluentd configuration file looks similar to this:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match container1>
   @type copy
   <store>
     @type file
     path /fluentd/log/container1.*.log
     format single_value
     message_key "log"
   </store>
</match>

...

My docker-compose.yml file looks something like this:

version: '3'

services:

  container1:
    build: ./container1
    container_name: "container1" 
    depends_on:
     - "logger" 
    logging:
      driver: "fluentd"
      options:
        tag: container1  
    networks:
      static-net:
        ipv4_address: 172.28.0.4  


  ...


  logger:
    build: ./logger
    container_name: "logger"
    ports:
     - "24224:24224"
     - "24224:24224/udp"
    volumes:
     - ./logger/logs:/fluentd/log
    networks:
      static-net:
        ipv4_address: 172.28.0.5          

networks:
  static-net:
    ipam:
      driver: default
      config:
       - subnet: 172.28.0.0/16

Everything works as expected, but I would ideally like Fluentd to keep only a certain number of log files. I can change the size of the log files by configuring the chunk_limit_size parameter in a buffer section. However, even with this option, I still do not want Fluentd writing an endless number of files. The buffer_queue_limit and overflow_action parameters in the buffer configuration do not seem to have any effect. This application will run continuously once deployed, so log rotation is a necessity. I have several questions:

  1. Does Fluentd support log rotation when writing logs to files? If so, which parameters should I set in the Fluentd configuration file?
  2. If not, can I configure Docker in a way that takes advantage of the log rotation of Docker's json-file logging driver while still using Fluentd?
  3. If that is not possible, is there a way to add log rotation to Fluentd through a plugin, or in the Fluentd Docker container itself (or a sidecar container)?
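For context, the chunk_limit_size parameter mentioned above would sit in a buffer section of the file output, roughly like this (a sketch of where the parameter goes, not a rotation solution; these settings control how chunks are flushed into files, not how many flushed files are kept, and the values shown are illustrative):

    <match container1>
      @type file
      path /fluentd/log/container1.*.log
      format single_value
      message_key "log"
      <buffer>
        chunk_limit_size 5MB    # flush a chunk to a new file once it reaches this size
        flush_interval 60s      # or after this much time has passed
      </buffer>
    </match>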

Recommended answer

When you use the fluentd logging driver for Docker, there are no container log files at all; there are only the files that Fluentd itself writes, and rotating those has to be handled on the Fluentd side. If you want Docker to keep the logs and rotate them itself, you have to change your stack file from:

    logging:
      driver: "fluentd"
      options:
        tag: container1

to:

    logging:
      driver: "json-file"
      options:
        max-size: "5m"   # maximum size of each log file
        max-file: "2"    # how many files Docker should keep
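In the context of the compose file above, container1's service definition would then look something like this (values illustrative; with these settings Docker caps each container at roughly 10 MB of logs):

      container1:
        build: ./container1
        container_name: "container1"
        logging:
          driver: "json-file"   # let Docker write and rotate the json log files
          options:
            max-size: "5m"      # rotate once the current file reaches 5 MB
            max-file: "2"       # keep at most 2 files per container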

In Fluentd, you then have to use the in_tail plugin instead of forward (Fluentd must have access to the log files in /var/lib/docker/containers/*/*-json.log):

<source>
  @type tail
  read_from_head true
  pos_file fluentd-docker.pos
  path /var/lib/docker/containers/*/*-json.log
  tag docker.*
  format json
</source>
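With `tag docker.*`, the tailed events arrive tagged with the expanded file path (for example `docker.var.lib.docker.containers.<id>.<id>-json.log`), so a catch-all match is needed to write them out. A minimal sketch, reusing the format assumptions from the original config; note that unlike the per-container matches above, this writes all containers into one set of files (splitting them again by container would require something like a tag-rewriting plugin):

    <match docker.**>
      @type file
      path /fluentd/log/docker.*.log
      format single_value
      message_key "log"
    </match>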

