Resolving the @timestamp Conflict in FileBeat

Version: FileBeat 6.3


Problem

The project is configured to produce logs in JSON format so that they can be imported into ELK easily. A generated record looks like this:

{
    "@timestamp":"2018-06-29T16:24:27.555+08:00",
    "severity":"INFO",
    "service":"osg-sender",
    "trace":"",
    "span":"",
    "parent":"",
    "exportable":"",
    "pid":"4620",
    "thread":"restartedMain",
    "class":"o.s.c.a.AnnotationConfigApplicationContext",
    "rest":"Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a7df24c: startup date [Fri Jun 29 16:24:27 CST 2018]; root of context hierarchy"
}

Each line of the JSON file is then shipped straight into Elasticsearch via FileBeat. However, FileBeat automatically generates its own @timestamp field, representing the import time, and this conflicts with the field of the same name in the log.

Solution

There are two ways to handle it:

  1. Rename the @timestamp field in the log file to something else, such as logDate, to avoid the conflict with FileBeat. In that case you have to add an index field definition for logDate to FileBeat's fields.yml, declaring it with type date; otherwise the field will not show up as a time filter when you create the index pattern in Kibana (sketched right after this list).
  2. Leave the log content unchanged and modify FileBeat's fields.yml instead: rename its original @timestamp field and add a new @timestamp field for the log to use.
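
For reference, option 1 would look roughly like the sketch below. This is only a sketch: on the logback side you would rename the timestamp field through the <fieldName> setting of the timestamp provider (shown commented out in the logback config later in this post), and in fields.yml you would register logDate as a date field. The key name applog is made up purely for illustration.

# fields.yml (option 1 sketch): register the renamed timestamp field
# so Kibana can offer it as a time filter. "applog" is a made-up key name.
- key: applog
  title: Application log
  description: >
    Fields taken from the application's JSON log lines.
  fields:
    - name: logDate
      type: date
      format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
      description: >
        The timestamp when the log record was generated.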

I went with the second approach and modified fields.yml:

- key: beat
  title: Beat
  description: >
    Contains common beat fields available in all event types.
  fields:

    - name: beat.name
      description: >
        The name of the Beat sending the log messages. If the Beat name is
        set in the configuration file, then that value is used. If it is not
        set, the hostname is used. To set the Beat name, use the `name`
        option in the configuration file.
    - name: beat.hostname
      description: >
        The hostname as returned by the operating system on which the Beat is
        running.
    - name: beat.timezone
      description: >
        The timezone as returned by the operating system on which the Beat is
        running.
    - name: beat.version
      description: >
        The version of the beat that generated this event.

    - name: "@timestamp-beat"
      type: date
      required: true
      format: date
      example: August 26th 2016, 12:35:53.332
      description: >
        The timestamp when the event log record was generated.

    - name: "@timestamp"
      type: date
      format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"

The original @timestamp was renamed to @timestamp-beat, and a new @timestamp field was added with its date format specified.

This way, when creating the index pattern, two time filters are available: one for when the log record was generated and one for when FileBeat imported it.
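
As a quick sanity check, you can also ask Elasticsearch directly whether both fields are mapped as dates. The index pattern filebeat-* below is an assumption based on FileBeat's default index naming; adjust it to whatever your indices are called:

curl 'http://192.168.1.17:9200/filebeat-*/_mapping/field/@timestamp,@timestamp-beat?pretty'

Both fields should come back with "type": "date".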



Configuration

Here are the project's logging configuration file and the FileBeat configuration, for reference.
logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

  <springProperty scope="context" name="springAppName" source="spring.application.name"/>
  <!-- Example for logging into the build folder of your project -->
  <property name="LOG_FILE" value="${BUILD_FOLDER:-log}/${springAppName}"/>

  <!-- You can override this to have a custom pattern -->
  <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

  <!-- Appender to log to console -->
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <!-- Minimum logging level to be presented in the console logs-->
      <level>DEBUG</level>
    </filter>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file -->
  <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
      <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf8</charset>
    </encoder>
  </appender>

  <!-- Appender to log to file in a JSON format -->
  <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}</fileNamePattern>
      <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp>
          <!--<fieldName>logDate</fieldName>-->
          <!--<pattern>yyyy-MM-dd HH:mm:ss.SSS</pattern>-->
        </timestamp>
        <pattern>
          <pattern>
            {
            "severity": "%level",
            "service": "${springAppName:-}",
            "trace": "%X{X-B3-TraceId:-}",
            "span": "%X{X-B3-SpanId:-}",
            "parent": "%X{X-B3-ParentSpanId:-}",
            "exportable": "%X{X-Span-Export:-}",
            "pid": "${PID:-}",
            "thread": "%thread",
            "class": "%logger{40}",
            "rest": "%message"
            }
          </pattern>
        </pattern>
        <stackTrace>
          <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <maxLength>2048</maxLength>
            <shortenedClassNameLength>20</shortenedClassNameLength>
            <exclude>^sun\.reflect\..*\.invoke</exclude>
            <exclude>^net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
            <rootCauseFirst>true</rootCauseFirst>
          </throwableConverter>
        </stackTrace>
      </providers>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="console"/>
    <!-- uncomment this to have also JSON logs -->
    <appender-ref ref="logstash"/>
    <appender-ref ref="flatfile"/>
  </root>
</configuration>
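
The LoggingEventCompositeJsonEncoder referenced above is provided by the logstash-logback-encoder library, so it has to be on the project's classpath. For Maven that is roughly the following; the version number is only an example, pick whatever matches your Spring Boot / logback versions:

<!-- pom.xml: provides net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>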

filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/ogter/build/*.json.*
    - /root/ogter/log/*.json
  exclude_files: ['.gz$']
  # Put the decoded JSON keys at the top level of the event instead of under a "json" key
  json.keys_under_root: true
  # Allow decoded JSON fields (e.g. @timestamp) to overwrite fields FileBeat itself adds
  json.overwrite_keys: true
  
output.elasticsearch:
  hosts: ["192.168.1.17:9200"]