logstash1 - kafka - logstash2 - elasticsearch - kibana
0. Topology
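In short, the data flows as:

log4j files -> logstash1 (file input, multiline, grok)
            -> kafka @ 10.0.0.134:9092 (one topic per log source)
            -> logstash2 (one kafka input and consumer group per topic)
            -> elasticsearch @ 10.0.0.7:9200 (one index per topic)
            -> kibana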

Reference: https://www.cnblogs.com/JetpropelledSnake/p/10057545.html
Official docs (Kafka internals): http://kafka.apache.org/documentation.html#introduction
Kafka in depth: https://www.cnblogs.com/jixp/articles/9435699.html and https://blog.csdn.net/vinfly_li/article/details/79397201
1. logstash1 configuration
[root@VM_0_4_centos config]# cat wxqyh.yml|egrep -v '^$|^#'
input {
  file {
    type => "4personal20001"
    path => "/mnt/data/logs/personal_20001/log4j.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    # join continuation lines (stack traces etc.) onto the preceding timestamped line
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
  file {
    type => "4personal20002"
    path => "/mnt/data/logs/personal_20002/log4j.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    # two variants of the log4j prefix (with and without a space after '[');
    # multiple patterns belong in one array, not duplicate match keys
    match => {
      "message" => [
        "^%{TIMESTAMP_ISO8601}\[%{WORD:level} %{GREEDYDATA:ajpcon}\| %{GREEDYDATA:data}",
        "^%{TIMESTAMP_ISO8601}\[ %{WORD:level} %{GREEDYDATA:ajpcon}\| %{GREEDYDATA:data}"
      ]
    }
    remove_field => "message"
  }
}
output {
  if [type] == "4personal20001" {
    kafka {
      max_request_size => 10485761
      bootstrap_servers => "10.0.0.134:9092"
      topic_id => "topic4personal1"
      compression_type => "snappy"
    }
  }
  if [type] == "4personal20002" {
    kafka {
      max_request_size => 10485761
      bootstrap_servers => "10.0.0.134:9092"
      topic_id => "topic4personal2"
      compression_type => "snappy"
    }
  }
}
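Before pointing this at the broker, the file can be syntax-checked with Logstash's built-in config test (the path assumes a stock install layout):

bin/logstash -f config/wxqyh.yml --config.test_and_exit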
2. Kafka configuration
[root@VM_0_134_centos config]# cat server.properties|egrep -v '^$|^#'
broker.id=
listeners=PLAINTEXT://10.0.0.134:9092
advertised.listeners=PLAINTEXT://10.0.0.134:9092
num.network.threads=
num.io.threads=
socket.send.buffer.bytes=
socket.receive.buffer.bytes=
socket.request.max.bytes=
message.max.bytes=
replica.fetch.max.bytes=
log.dirs=/mnt/data/monitor01/kafka_2.-1.1./data
num.partitions=
num.recovery.threads.per.data.dir=
offsets.topic.replication.factor=
transaction.state.log.replication.factor=
transaction.state.log.min.isr=
log.retention.hours=
log.segment.bytes=
log.retention.check.interval.ms=
zookeeper.connect=localhost:
zookeeper.connection.timeout.ms=
group.initial.rebalance.delay.ms=
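The numeric values above did not survive scraping. As a reconstruction (not the original file), a single broker serving this topology can run with the stock Kafka 1.1.x defaults, plus message-size limits raised to match the producers' max_request_size of 10485761:

broker.id=0
listeners=PLAINTEXT://10.0.0.134:9092
advertised.listeners=PLAINTEXT://10.0.0.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# assumed, not a default: must be >= the logstash producers' max_request_size
message.max.bytes=10485761
replica.fetch.max.bytes=10485761
# adjust to the actual install directory; the version suffix was garbled above
log.dirs=/mnt/data/monitor01/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

With the broker up, the topics can be created ahead of time (auto-creation also works with these defaults):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic4personal1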
3. logstash2 configuration
[root@VM_0_134_centos config]# cat haode.yml|egrep -v '^$|^#'
input {
  # one kafka input per topic, each with its own client_id/group_id,
  # so the six streams are consumed independently
  kafka {
    type => "topic12wxqyh8"
    codec => "plain"
    topics => ["topic12wxqyh8"]
    client_id => "es1"
    group_id => "es1"
    bootstrap_servers => "10.0.0.134:9092"
  }
  kafka {
    type => "topic12wxqyh9"
    codec => "plain"
    topics => ["topic12wxqyh9"]
    client_id => "es2"
    group_id => "es2"
    bootstrap_servers => "10.0.0.134:9092"
  }
  kafka {
    type => "topic24wxqyh6"
    codec => "plain"
    topics => ["topic24wxqyh6"]
    client_id => "es3"
    group_id => "es3"
    bootstrap_servers => "10.0.0.134:9092"
  }
  kafka {
    type => "topic24wxqyh7"
    codec => "plain"
    topics => ["topic24wxqyh7"]
    client_id => "es4"
    group_id => "es4"
    bootstrap_servers => "10.0.0.134:9092"
  }
  kafka {
    type => "topic4personal1"
    codec => "plain"
    topics => ["topic4personal1"]
    client_id => "es5"
    group_id => "es5"
    bootstrap_servers => "10.0.0.134:9092"
  }
  kafka {
    type => "topic4personal2"
    codec => "plain"
    topics => ["topic4personal2"]
    client_id => "es6"
    group_id => "es6"
    bootstrap_servers => "10.0.0.134:9092"
  }
}
output {
  # route each topic to an Elasticsearch index of the same name
  if [type] == "topic12wxqyh8" {
    elasticsearch {
      index => "topic12wxqyh8"
      hosts => ["10.0.0.7:9200"]
    }
  }
  if [type] == "topic12wxqyh9" {
    elasticsearch {
      index => "topic12wxqyh9"
      hosts => ["10.0.0.7:9200"]
    }
  }
  if [type] == "topic24wxqyh6" {
    elasticsearch {
      index => "topic24wxqyh6"
      hosts => ["10.0.0.7:9200"]
    }
  }
  if [type] == "topic24wxqyh7" {
    elasticsearch {
      index => "topic24wxqyh7"
      hosts => ["10.0.0.7:9200"]
    }
  }
  if [type] == "topic4personal1" {
    elasticsearch {
      index => "topic4personal1"
      hosts => ["10.0.0.7:9200"]
    }
  }
  if [type] == "topic4personal2" {
    elasticsearch {
      index => "topic4personal2"
      hosts => ["10.0.0.7:9200"]
    }
  }
}
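Once both Logstash instances are running, end-to-end delivery is easy to confirm by listing the indices on the Elasticsearch node; each topic should show up as an index of the same name:

curl 'http://10.0.0.7:9200/_cat/indices?v'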
Reference: https://www.cnblogs.com/swordfall/p/8860941.html#auto_id_4
4. Kafka notes:
4.1 ZooKeeper -- handles leader election and acts as the cluster's access entry point.
4.2 How Kafka works
Producers send data to brokers, each broker hosts one or more topics, and consumers pull data from the brokers.
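This is easy to observe with the console consumer that ships with Kafka, which prints whatever logstash1 has produced onto a topic:

bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.134:9092 --topic topic4personal1 --from-beginning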

Reference: https://zhuanlan.zhihu.com/p/97030680
5. A Kafka partition is the analogue of an Elasticsearch shard: a topic is split across partitions, and partitions are the unit of parallelism and distribution.
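The parallel shows up in the topic metadata: --describe prints one line per partition with its leader and replica brokers, much like shard allocation in ES (ZooKeeper address as configured above):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topic4personal1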
