Introduction

Filebeat collects log data from local files. Installed as an agent on the server, it monitors log directories or specific log files, tails them, and forwards the events to Elasticsearch or Logstash for indexing.

Both Logstash and Filebeat can collect logs. Filebeat, written in Go, is far lighter: it uses fewer resources and copes well with high concurrency. Logstash, on the other hand, has filter plugins that can parse and enrich log events. A common architecture is therefore: Filebeat collects the logs and ships them to a message queue such as Redis or Kafka; Logstash consumes from the queue, parses the events with its filters, and stores the result in Elasticsearch.

Kafka is a distributed publish-subscribe messaging system originally open-sourced by LinkedIn and now an Apache top-level project. It uses a pull-based consumption model and is built for high throughput; its original purpose was precisely log collection and transport. Replication is supported from version 0.8 onward, transactions are not, and there are no strict guarantees against duplicated, lost, or corrupted messages, which makes it well suited to collecting the large data volumes produced by Internet-facing services.

Environment

IP              Host / Role               Software            Spec                   Network        Notes
192.168.43.176  ES / data storage         elasticsearch-7.2   2 GB RAM / 40 GB disk  NAT, internal
192.168.43.215  Kibana / UI               kibana-7.2          2 GB RAM / 40 GB disk  NAT, internal
192.168.43.164  Filebeat / log collection Filebeat-7.2/nginx  2 GB RAM / 40 GB disk  NAT, internal
192.168.43.30   Logstash / data pipeline  logstash-7.2        2 GB RAM / 40 GB disk  NAT, internal
192.168.43.86   Kibana / UI               kibana-7.2          2 GB RAM / 40 GB disk  NAT, internal
192.168.43.47   Kafka / message queue     Kafka2.12 / zk3.4   2 GB RAM / 40 GB disk  NAT, internal
192.168.43.151  Kafka / message queue     Kafka2.12 / zk3.4   2 GB RAM / 40 GB disk  NAT, internal
192.168.43.43   Kafka / message queue     Kafka2.12 / zk3.4   2 GB RAM / 40 GB disk  NAT, internal
192.168.43.194  Tomcat                    tomcat8.5           2 GB RAM / 40 GB disk  NAT, internal

For how to install and configure ZooKeeper and Kafka, see my other post:

https://www.cnblogs.com/you-men/p/12884779.html

Using Logstash with Kafka

Edit the Logstash configuration file
input {
  stdin {}
}
output {
  kafka {
    topic_id => "kafkatest"
    bootstrap_servers => "192.168.43.47:9092"
    batch_size => 5
  }
  stdout {
    codec => "rubydebug"
  }
}
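Before starting, the pipeline file can optionally be checked for syntax errors; --config.test_and_exit is part of the standard Logstash CLI:
# Validate kafka.conf and exit without starting the pipeline
./bin/logstash -f kafka.conf --config.test_and_exit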
Start Logstash and type some test data
./bin/logstash -f kafka.conf
zhoujian
{
"@timestamp" => 2020-07-24T07:11:26.235Z,
"message" => "zhoujian",
"host" => "logstash-30",
"@version" => "1"
}
youmen
{
"@timestamp" => 2020-07-24T07:11:29.441Z,
"message" => "youmen",
"host" => "logstash-30",
"@version" => "1"
}
Check the data written to Kafka
# List the existing Kafka topics
./bin/kafka-topics.sh --list --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092
kafkatest
test-you-io

# Consume the messages in the kafkatest topic
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic kafkatest --from-beginning
2020-07-24T07:13:59.461Z logstash-30 zhoujian
2020-07-24T07:14:01.518Z logstash-30 youmen
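Optionally, the layout of the automatically created topic (partition count, replication factor) can be inspected as well; kafka-topics.sh has a --describe mode for this:
# Show partitions and replicas of the kafkatest topic
./bin/kafka-topics.sh --describe --bootstrap-server 192.168.43.47:9092 --topic kafkatest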

The data is written successfully, so the Kafka side of the pipeline works.

Configuring Filebeat

Output the logs to Kafka by editing the Filebeat configuration: the default Elasticsearch and Logstash outputs stay commented out and an output.kafka section is added.
/etc/filebeat/filebeat.yml
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
[root@tomcat-194 logs]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/localhost_access_log.2020-07*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
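Before (re)starting the agent, Filebeat's built-in test subcommands can verify the file and the Kafka connectivity; this sketch assumes the package install layout (config under /etc/filebeat) and systemd:
# Validate the YAML and settings
filebeat test config -c /etc/filebeat/filebeat.yml
# Check connectivity to the configured Kafka brokers
filebeat test output -c /etc/filebeat/filebeat.yml
# Restart the agent so it picks up the new output
systemctl restart filebeat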
Check whether the Tomcat access logs have arrived in Kafka
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic tomcat-filebeat --from-beginning

{"@timestamp":"2020-07-24T06:35:24.294Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"message":"{\"client\":\"192.168.43.84\",  \"client user\":\"-\",   \"authenticated\":\"-\",   \"access time\":\"[24/Jul/2020:14:35:10 +0800]\",     \"method\":\"GET /docs/config/ HTTP/1.1\",   \"status\":\"200\",  \"send bytes\":\"6826\",  \"Query?string\":\"\",  \"partner\":\"http://192.168.43.194:8080/\",  \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"},"host":{"name":"tomcat-194","id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","os":{"platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core"}},"agent":{"hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7"},"log":{"offset":19393,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}}}
{"@timestamp":"2020-07-24T06:38:29.339Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","name":"tomcat-194","architecture":"x86_64","os":{"family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)"}},"agent":{"ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat"},"ecs":{"version":"1.0.0"},"message":"{\"client\":\"192.168.43.84\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:14:38:18 +0800]\", \"method\":\"GET /manager/status HTTP/1.1\", \"status\":\"403\", \"send bytes\":\"3446\", \"Query?string\":\"\", \"partner\":\"http://192.168.43.194:8080/\", \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","log":{"offset":19797,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}},"input":{"type":"log"}}
^CProcessed a total of 66 messages

Reading logs from Kafka into Elasticsearch with Logstash

Configure Logstash to consume the logs from Kafka
cat kafka-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092"
    topics => "tomcat-filebeat"
    consumer_threads => 1
    decorate_events => true
    codec => "json"
    auto_offset_reset => "latest"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.176:9200"]
    index => "tomcat-filebeat-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => "rubydebug"
  }
}
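The Tomcat access-log line itself arrives inside the message field as a JSON string. If those fields (client, status, send bytes and so on) should become separate fields in Elasticsearch, a json filter can be added between the input and output blocks; a minimal sketch, assuming the access-log format shown in the Filebeat output above:
filter {
  json {
    # Parse the JSON string in "message" into top-level event fields
    source => "message"
  }
}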
Run Logstash in the foreground first to make sure the events come through correctly
./bin/logstash -f kafka-es.conf
constant ::Fixnum is deprecated
{
"input" => {
"type" => "log"
},
"@version" => "1",
"message" => "{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:47:08 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}",
"agent" => {
"version" => "7.2.0",
"hostname" => "tomcat-194",
"ephemeral_id" => "894657d2-af1a-4660-a3eb-98602bc3d1d7",
"id" => "cfe87df5-c912-49d0-8758-b73e917a6c9c",
"type" => "filebeat"
},
"host" => {
"name" => "tomcat-194",
"os" => {
"version" => "7 (Core)",
"name" => "CentOS Linux",
"codename" => "Core",
"family" => "redhat",
"platform" => "centos",
"kernel" => "3.10.0-514.el7.x86_64"
},
"id" => "b029c3ce28374f7db698c050e342457f",
"containerized" => false,
"hostname" => "tomcat-194",
"architecture" => "x86_64"
},
"@timestamp" => 2020-07-24T07:47:11.857Z,
"log" => {
"offset" => 20203,
"file" => {
"path" => "/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"
}
},
"ecs" => {
"version" => "1.0.0"
}
}

# The same event as seen on a Kafka node:
{"@timestamp":"2020-07-24T07:53:11.944Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","name":"tomcat-194","os":{"codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64"}},"agent":{"type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0"},"log":{"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"},"offset":20462},"message":"{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:53:06 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"}}
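To check that Logstash is keeping up with the topic, the consumer group offsets can be inspected on a Kafka node; a sketch, assuming the kafka input's default group_id of "logstash" has not been overridden:
# Show current offset and lag per partition for the Logstash consumer group
./bin/kafka-consumer-groups.sh --bootstrap-server 192.168.43.47:9092 --describe --group logstash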
Check the index in Elasticsearch
curl -XGET "http://127.0.0.1:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-7-2020.07.24 z0Ff-j7WSlSm4ZBH6IhZaw 1 1 185 60 3.7mb 2mb
green open .monitoring-kibana-7-2020.07.24 PWqXvObhSRazQn4CY8Z2lg 1 1 3 0 216.3kb 73.4kb
green open .kibana_task_manager Ptj7ydZmQqGG7hWxK2NbSg 1 1 2 0 61.2kb 45.5kb
green open .kibana_2 fot9Sk6jRWa2vS5cQGvOeQ 1 1 5 0 68.6kb 34.3kb
green open .kibana_1 jYD4jXLVTeeAMImEz9NEVA 1 1 1 0 18.7kb 9.3kb
green open .tasks NIwDk-PYQT-d-njh3g0t0g 1 1 1 0 12.7kb 6.3kb
green open tomcat-filebeat-2020.07.24 s3aB-c6GSemUHvaurYQ8Zw 1 1 38 0 227.4kb 80.3kb
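As a final check, the documents in the new index can be queried directly on the Elasticsearch node (the index name matches the listing above); after that, an index pattern for tomcat-filebeat-* can be created in Kibana:
# Fetch one document from today's index to confirm the data is searchable
curl -XGET "http://127.0.0.1:9200/tomcat-filebeat-2020.07.24/_search?size=1&pretty"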
