This article covers how to handle the problem "fluentd loses milliseconds and log messages are now stored out of order in elasticsearch". The answer below may be a useful reference for anyone hitting the same issue.

Problem description



I am using fluentd to centralize log messages in elasticsearch and view them with kibana. When I view log messages, messages that occurred in the same second are out of order and the milliseconds in @timestamp are all zeros:

2015-01-13T11:54:01.000-06:00   DEBUG   my message

How do I get fluentd to store milliseconds?

Solution

fluentd does not currently support sub-second resolution: https://github.com/fluent/fluentd/issues/461

I worked around this by using record_reformer to add a new field to all of the log messages that stores nanoseconds since the epoch.

For example, if your fluentd config has some inputs like so:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

Then change them to look like this, and add a record_reformer match that adds a nanosecond field:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag cleanup.syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag cleanup.tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

<match cleanup.**>
    type record_reformer
    time_nano ${t = Time.now; ((t.to_i * 1000000000) + t.nsec).to_s}
    tag ${tag_suffix[1]}
</match>
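The `${...}` placeholder in the match block is a plain Ruby expression that record_reformer evaluates per record. As a standalone sketch, it computes nanoseconds since the epoch like this:

```ruby
# Sketch of what the record_reformer expression evaluates for each record:
# whole seconds scaled to nanoseconds, plus the sub-second nanosecond part.
t = Time.now
nanos = (t.to_i * 1_000_000_000) + t.nsec
time_nano = nanos.to_s  # emitted as a string field on the record
```

The trailing `.to_s` matters: the value is stored as a string, so very large nanosecond counts survive JSON serialization without floating-point rounding.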

Then add the time_nano field to your kibana dashboards and use it for sorting instead of @timestamp, and everything will be in order.
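Since time_nano is a string, it is worth noting why sorting on it still works: every current epoch-in-nanoseconds value has the same number of digits, so lexicographic order matches chronological order. A small sketch using two hypothetical events within the question's example second (2015-01-13T11:54:01-06:00, epoch 1421171641):

```ruby
base = 1_421_171_641 * 1_000_000_000  # the example second, in nanoseconds
a = (base + 100_000_000).to_s         # hypothetical event 0.1s into the second
b = (base + 900_000_000).to_s         # hypothetical event 0.9s into the second
# String sort: equal-length digit strings compare in chronological order.
sorted = [b, a].sort
```

Here `sorted` comes back as `[a, b]`, i.e. the two same-second events are correctly ordered even though their @timestamp values would collide.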
