Basic environment:
Filebeat version: 6.5.4 (Linux, x86-64)
Elasticsearch version: 6.5.4
 
(1) Requirements
A single server hosts multiple log files that need to be collected into Elasticsearch with Filebeat so they can be viewed easily. There are two main ways to collect them:
  • Collect all of the server's logs into one Elasticsearch index. This has a significant drawback: if several applications run on the server and each produces its own log, everything ends up mixed together in the same index, as in Figure 1.
  • Collect the server's logs into different Elasticsearch indices, with each index holding the logs of one application, as in Figure 2.
 
 
Clearly, the output in Figure 2 is what we want, because it separates the different logs into different indices.
 
(2) Solution
When Filebeat ships logs to Elasticsearch, the indices setting of the Elasticsearch output can route different logs to different indices. The official documentation and its configuration example are as follows:
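(The original post showed the documentation example as a screenshot, which is unavailable here. For reference, the example in the Filebeat 6.x documentation looks roughly like the sketch below; the index names and match conditions are the documentation's illustrations, not this article's setup.)

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "warning-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "WARN"
    - index: "error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "ERR"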
 
 
(3) Testing
(3.1) Input with the fields_under_root: true option
Use Filebeat to collect three logs, testa.log, testb.log, and testc.log, with the following requirements:
  • data from testa.log goes into the testa-log index
  • data from testb.log goes into the testb-log index
  • data from any other log (neither testa.log nor testb.log) goes into the test-other-log index
 
The Filebeat input configuration is as follows:
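(The original screenshot is unavailable; the input section below is reproduced from the full configuration file attached at the end of this subsection.)

- type: log
  enabled: true
  paths:
    - /root/test/testa.log
  fields:
    log_topics: "testa"
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /root/test/testb.log
  fields:
    log_topics: "testb"
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /root/test/testc.log
  fields:
    log_topics: "testc"
  fields_under_root: true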
 
The output configuration is as follows:
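(Again reproduced from the full configuration file below.)

output.elasticsearch:
  hosts: ["192.168.10.100:9200"]
  index: "test-other-log"
  indices:
    - index: "testa-log"
      when.contains:
        log_topics: "testa"
    - index: "testb-log"
      when.contains:
        log_topics: "testb"

Note that because fields_under_root: true promotes log_topics to the top level of each event, the when.contains condition references it directly as log_topics. The index setting acts as the fallback: any event that matches none of the indices rules goes to test-other-log.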
 
 
The test ultimately succeeded.
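A quick way to confirm the routing (a hypothetical check, assuming the Elasticsearch host from the configuration above) is to list the indices and verify that the three expected ones exist and receive documents:

curl 'http://192.168.10.100:9200/_cat/indices/test*?v'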
 
A complete Filebeat configuration file is attached here:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# testa.log
- type: log
  enabled: true
  paths:
    - /root/test/testa.log
  fields:
    log_topics: "testa"
  fields_under_root: true

# testb.log
- type: log
  enabled: true
  paths:
    - /root/test/testb.log
  fields:
    log_topics: "testb"
  fields_under_root: true

# testc.log
- type: log
  enabled: true
  paths:
    - /root/test/testc.log
  fields:
    log_topics: "testc"
  fields_under_root: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.template.name: "prod-file*"
setup.template.pattern: "prod-file*"
setup.ilm.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  hosts: ["192.168.10.30:9200"]
#  index: "testlog-666"

#output.elasticsearch:
#  hosts: ["192.168.10.30:9200"]
#  indices:
#    - index: "testa-log"
#      when.contains:
#        log_topics: "testa"
#    - index: "testb-log"
#      when.contains:
#        log_topics: "testb"

output.elasticsearch:
  hosts: ["192.168.10.100:9200"]
  index: "test-other-log"
  indices:
    - index: "testa-log"
      when.contains:
        log_topics: "testa"
    - index: "testb-log"
      when.contains:
        log_topics: "testb"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
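Before restarting Filebeat, this file can be sanity-checked with Filebeat's built-in test commands (the path below assumes the file is saved at /etc/filebeat/filebeat.yml; adjust it to your installation):

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml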
(3.2) Input without the fields_under_root: true option
The Filebeat input configuration is as follows:
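(The original screenshot is unavailable; the input section below is reproduced from the full configuration file attached at the end of this subsection. It is identical to the inputs in section 3.1 except that the fields_under_root lines are absent.)

- type: log
  enabled: true
  paths:
    - /root/test/testa.log
  fields:
    log_topics: "testa"

- type: log
  enabled: true
  paths:
    - /root/test/testb.log
  fields:
    log_topics: "testb"

- type: log
  enabled: true
  paths:
    - /root/test/testc.log
  fields:
    log_topics: "testc"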
 
The output configuration is as follows:
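(Again reproduced from the full configuration file below.)

output.elasticsearch:
  hosts: ["192.168.10.100:9200"]
  index: "test-other-log"
  indices:
    - index: "testa-log"
      when.contains:
        fields:
          log_topics: "testa"
    - index: "testb-log"
      when.contains:
        fields:
          log_topics: "testb"

The only difference from section 3.1 is that each condition now matches fields.log_topics, because without fields_under_root the custom field is nested under the fields object of each event.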
 
Here is the corresponding complete Filebeat configuration file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# testa.log
- type: log
  enabled: true
  paths:
    - /root/test/testa.log
  fields:
    log_topics: "testa"

# testb.log
- type: log
  enabled: true
  paths:
    - /root/test/testb.log
  fields:
    log_topics: "testb"

# testc.log
- type: log
  enabled: true
  paths:
    - /root/test/testc.log
  fields:
    log_topics: "testc"

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.template.name: "prod-file*"
setup.template.pattern: "prod-file*"
setup.ilm.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  hosts: ["192.168.10.30:9200"]
#  index: "testlog-666"

#output.elasticsearch:
#  hosts: ["192.168.10.30:9200"]
#  indices:
#    - index: "testa-log"
#      when.contains:
#        log_topics: "testa"
#    - index: "testb-log"
#      when.contains:
#        log_topics: "testb"

output.elasticsearch:
  hosts: ["192.168.10.100:9200"]
  index: "test-other-log"
  indices:
    - index: "testa-log"
      when.contains:
        fields:
          log_topics: "testa"
    - index: "testb-log"
      when.contains:
        fields:
          log_topics: "testb"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
 
 
Additional note: why the fields_under_root parameter needs special attention
The fields_under_root parameter is defined as follows:
  • If set to true, the custom fields are stored as top-level fields in the output document. If they conflict with fields added by Filebeat itself, the custom fields overwrite the others.
  • If set to false or left unset, the custom fields are stored under the fields sub-object of the output document.
For example:
(1) fields_under_root: true
In this case the log_topics field defined in the Filebeat input section is a top-level field, so the output condition can match it directly as log_topics.
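(The original screenshot of the indexed document is unavailable. Below is a simplified sketch of what such an event looks like in this case; the timestamp, hostname, and message are illustrative values, not captured output, and several Filebeat metadata fields are omitted.)

{
  "@timestamp": "2019-07-01T08:00:00.000Z",
  "beat": { "hostname": "testhost", "version": "6.5.4" },
  "source": "/root/test/testa.log",
  "message": "a sample line from testa.log",
  "log_topics": "testa"
}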
 
 
 
 
(2) fields_under_root: false or unset
In this case the log_topics field defined in the Filebeat input section is a sub-field of the fields field, so the output condition must match it as fields.log_topics.
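(Same sketch as above, with the same illustrative values; note that log_topics is now nested under fields.)

{
  "@timestamp": "2019-07-01T08:00:00.000Z",
  "beat": { "hostname": "testhost", "version": "6.5.4" },
  "source": "/root/test/testa.log",
  "message": "a sample line from testa.log",
  "fields": {
    "log_topics": "testa"
  }
}

This is exactly why the output condition in section 3.2 matches on fields.log_topics rather than log_topics.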
 
 
 
 
[End]
