Introduction

Filebeat is used to collect log data from local files. Installed as an agent on your servers, Filebeat monitors log directories or specific log files, tails them, and forwards the data to Elasticsearch or Logstash for indexing.

Both Logstash and Filebeat can collect logs. Filebeat is more lightweight: it is written in Go, uses fewer resources, and handles high concurrency well, while Logstash provides filters that can parse and analyze log events. A common architecture is therefore to let Filebeat collect the logs and ship them to a message queue such as Redis or Kafka; Logstash then consumes from the queue, parses and enriches the events with its filters, and stores them in Elasticsearch.

Kafka is a distributed publish-subscribe messaging system open-sourced by LinkedIn and now an Apache top-level project. It is built around a pull-based consumption model and aims for high throughput; it was originally designed for log collection and transport. Replication has been supported since version 0.8. Kafka does not support transactions and makes no strict guarantees against duplicated, lost, or corrupted messages, which makes it a good fit for data collection in Internet services that generate large volumes of data.

Environment List

IP               Host (role)                Software               Specs                 Network         Notes
192.168.43.176   ES (data storage)          elasticsearch-7.2      2GB RAM / 40GB disk   NAT, internal
192.168.43.215   Kibana (UI)                kibana-7.2             2GB RAM / 40GB disk   NAT, internal
192.168.43.164   Filebeat (log collection)  filebeat-7.2 / nginx   2GB RAM / 40GB disk   NAT, internal
192.168.43.30    Logstash (data pipeline)   logstash-7.2           2GB RAM / 40GB disk   NAT, internal
192.168.43.86    Kibana (UI)                kibana-7.2             2GB RAM / 40GB disk   NAT, internal
192.168.43.47    Kafka (message queue)      kafka 2.12 / zk 3.4    2GB RAM / 40GB disk   NAT, internal
192.168.43.151   Kafka (message queue)      kafka 2.12 / zk 3.4    2GB RAM / 40GB disk   NAT, internal
192.168.43.43    Kafka (message queue)      kafka 2.12 / zk 3.4    2GB RAM / 40GB disk   NAT, internal
192.168.43.194   Tomcat                     tomcat 8.5             2GB RAM / 40GB disk   NAT, internal

For how to set up and use ZooKeeper and Kafka, see my other blog post:

https://www.cnblogs.com/you-men/p/12884779.html
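Before wiring Logstash to Kafka it is worth confirming that the cluster from that post is actually up. A minimal check, assuming the standard Kafka CLI and the default ZooKeeper port 2181 on the first Kafka node:

# List the broker IDs registered in ZooKeeper (run from the Kafka installation directory)
./bin/zookeeper-shell.sh 192.168.43.47:2181 ls /brokers/ids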

Interacting with Kafka from Logstash

Edit the Logstash configuration file
input {
  stdin {}
}
output {
  kafka {
    topic_id => "kafkatest"
    bootstrap_servers => "192.168.43.47:9092"
    batch_size => 5
  }
  stdout {
    codec => "rubydebug"
  }
}
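If topic auto-creation is disabled on the brokers, the kafkatest topic can also be created ahead of time; the partition and replication counts below are only illustrative:

# Create the topic explicitly (requires Kafka >= 2.2 for --bootstrap-server; older versions use --zookeeper)
./bin/kafka-topics.sh --create --bootstrap-server 192.168.43.47:9092 --topic kafkatest --partitions 3 --replication-factor 2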
Start Logstash and type in some test data:
./bin/logstash -f kafka.conf
zhoujian
{
    "@timestamp" => 2020-07-24T07:11:26.235Z,
    "message" => "zhoujian",
    "host" => "logstash-30",
    "@version" => "1"
}
youmen
{
    "@timestamp" => 2020-07-24T07:11:29.441Z,
    "message" => "youmen",
    "host" => "logstash-30",
    "@version" => "1"
}
View the data written to Kafka
# List the existing Kafka topics
./bin/kafka-topics.sh --list --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092
kafkatest
test-you-io

# View the messages in the kafkatest topic
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic kafkatest --from-beginning
2020-07-24T07:13:59.461Z logstash-30 zhoujian
2020-07-24T07:14:01.518Z logstash-30 youmen

The data was written successfully, so the Kafka configuration is complete.
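As an optional extra check, the topic layout (partition count, leaders, replicas) can be inspected with the same CLI:

./bin/kafka-topics.sh --describe --bootstrap-server 192.168.43.47:9092 --topic kafkatest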

Configuring Filebeat

Output the logs to Kafka
/etc/filebeat/filebeat.yml
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
[root@tomcat-194 logs]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/localhost_access_log.2020-07*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
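Before looking at Kafka, the configuration can optionally be validated and the service restarted. This sketch assumes Filebeat was installed from the official package, so the systemd unit exists:

# Check the configuration syntax, then restart and verify the service
filebeat test config -c /etc/filebeat/filebeat.yml
systemctl restart filebeat
systemctl status filebeat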
Check in Kafka that the Tomcat logs have arrived
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic tomcat-filebeat --from-beginning

{"@timestamp":"2020-07-24T06:35:24.294Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"message":"{\"client\":\"192.168.43.84\",  \"client user\":\"-\",   \"authenticated\":\"-\",   \"access time\":\"[24/Jul/2020:14:35:10 +0800]\",     \"method\":\"GET /docs/config/ HTTP/1.1\",   \"status\":\"200\",  \"send bytes\":\"6826\",  \"Query?string\":\"\",  \"partner\":\"http://192.168.43.194:8080/\",  \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"},"host":{"name":"tomcat-194","id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","os":{"platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core"}},"agent":{"hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7"},"log":{"offset":19393,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}}}
{"@timestamp":"2020-07-24T06:38:29.339Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","name":"tomcat-194","architecture":"x86_64","os":{"family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)"}},"agent":{"ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat"},"ecs":{"version":"1.0.0"},"message":"{\"client\":\"192.168.43.84\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:14:38:18 +0800]\", \"method\":\"GET /manager/status HTTP/1.1\", \"status\":\"403\", \"send bytes\":\"3446\", \"Query?string\":\"\", \"partner\":\"http://192.168.43.194:8080/\", \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","log":{"offset":19797,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}},"input":{"type":"log"}}
^CProcessed a total of 66 messages

Reading Logs from Kafka into ES with Logstash

Configure Logstash to read the Kafka logs
cat kafka-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092"
    topics => ["tomcat-filebeat"]
    consumer_threads => 1
    decorate_events => true
    codec => "json"
    auto_offset_reset => "latest"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.176:9200"]
    index => "tomcat-filebeat-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => "rubydebug"
  }
}
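Since each Tomcat access-log line is itself a JSON string carried in the message field, a filter block could be added between input and output to expand it into separate fields. This is only a sketch using the standard logstash-filter-json plugin and is not used in the run below:

filter {
  json {
    # Parse the JSON string in the Filebeat "message" field into top-level event fields
    source => "message"
  }
}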
Run it in the foreground first to confirm the logs are being output correctly:
./bin/logstash -f kafka-es.conf
constant ::Fixnum is deprecated
{
    "input" => {
        "type" => "log"
    },
    "@version" => "1",
    "message" => "{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:47:08 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}",
    "agent" => {
        "version" => "7.2.0",
        "hostname" => "tomcat-194",
        "ephemeral_id" => "894657d2-af1a-4660-a3eb-98602bc3d1d7",
        "id" => "cfe87df5-c912-49d0-8758-b73e917a6c9c",
        "type" => "filebeat"
    },
    "host" => {
        "name" => "tomcat-194",
        "os" => {
            "version" => "7 (Core)",
            "name" => "CentOS Linux",
            "codename" => "Core",
            "family" => "redhat",
            "platform" => "centos",
            "kernel" => "3.10.0-514.el7.x86_64"
        },
        "id" => "b029c3ce28374f7db698c050e342457f",
        "containerized" => false,
        "hostname" => "tomcat-194",
        "architecture" => "x86_64"
    },
    "@timestamp" => 2020-07-24T07:47:11.857Z,
    "log" => {
        "offset" => 20203,
        "file" => {
            "path" => "/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"
        }
    },
    "ecs" => {
        "version" => "1.0.0"
    }
}

# Viewed on the Kafka node:
{"@timestamp":"2020-07-24T07:53:11.944Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","name":"tomcat-194","os":{"codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64"}},"agent":{"type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0"},"log":{"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"},"offset":20462},"message":"{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:53:06 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"}}
Check the indices in ES
curl -XGET "http://127.0.0.1:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-7-2020.07.24 z0Ff-j7WSlSm4ZBH6IhZaw 1 1 185 60 3.7mb 2mb
green open .monitoring-kibana-7-2020.07.24 PWqXvObhSRazQn4CY8Z2lg 1 1 3 0 216.3kb 73.4kb
green open .kibana_task_manager Ptj7ydZmQqGG7hWxK2NbSg 1 1 2 0 61.2kb 45.5kb
green open .kibana_2 fot9Sk6jRWa2vS5cQGvOeQ 1 1 5 0 68.6kb 34.3kb
green open .kibana_1 jYD4jXLVTeeAMImEz9NEVA 1 1 1 0 18.7kb 9.3kb
green open .tasks NIwDk-PYQT-d-njh3g0t0g 1 1 1 0 12.7kb 6.3kb
green open tomcat-filebeat-2020.07.24 s3aB-c6GSemUHvaurYQ8Zw 1 1 38 0 227.4kb 80.3kb
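The tomcat-filebeat-2020.07.24 index exists and is receiving documents. To spot-check the documents themselves, a sample can be pulled back with the search API (run on the ES node, like the _cat query above):

curl -XGET "http://127.0.0.1:9200/tomcat-filebeat-2020.07.24/_search?size=1&pretty"

From here an index pattern such as tomcat-filebeat-* can be created in Kibana to browse the logs.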
