Filebeat ships collected logs to a Redis cluster, and Logstash pulls the data back out of the Redis cluster
Prerequisite: a Redis cluster is already set up, with a single shared access password.
The architecture is filebeat --> Redis cluster --> logstash --> elasticsearch, so only filebeat's output and logstash's input need to change.
filebeat host: 192.168.80.108
Redis cluster host: 192.168.80.107 (a pseudo-cluster: all nodes run on this one machine, on different ports)
1 Filebeat configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/host.access.log
  fields:
    log_source: messages
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/error.log
  fields:
    log_source: secure

output.redis:
  # list of Redis cluster node addresses
  hosts: ["192.168.80.107:7001","192.168.80.107:7002","192.168.80.107:7003","192.168.80.107:7004","192.168.80.107:7005","192.168.80.107:7006","192.168.80.107:7007","192.168.80.107:7008"]
  # Redis key (a list) that events are pushed onto
  key: messages_secure
  password: foobar2000
  # in cluster mode only database 0 is usable; any other value raises an error
  db: 0
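Before starting the shipper, the configuration can be sanity-checked with filebeat's built-in test subcommand (the config path below is an assumption about where the file is installed):

filebeat test config -c /etc/filebeat/filebeat.yml    # validate the syntax
filebeat test output -c /etc/filebeat/filebeat.yml    # try connecting to the configured Redis hosts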
2 Inspecting the data on the Redis side
Log in:
# -h is the host, -p the port, -c enables cluster mode, -a the password
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -c -p 7001 -a foobar2000
Query:
redis 192.168.80.107:7001[0]> keys *   # the key is present, so filebeat's data has reached the Redis cluster
1) "messages_secure"
redis 192.168.80.107:7001[0]> llen messages_secure   # check the list length
(integer) 2002
redis 192.168.80.107:7001[0]> lindex messages_secure 0   # inspect a single entry
Alternatively, browse the data with a Redis GUI client such as RedisDesktopManager.
One problem surfaced here: the Redis cluster contains two messages_secure keys holding exactly the same data. This still needs further investigation.
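A first diagnostic step (a sketch reusing the binary path and password from above) is to check which hash slot the key maps to and which node owns that slot; in a healthy cluster a given key should live on exactly one master:

# which of the 16384 hash slots does the key map to?
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p 7001 -a foobar2000 cluster keyslot messages_secure
# which node serves each slot range?
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p 7001 -a foobar2000 cluster slots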
3 Logstash configuration
input {
  redis {
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7002
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7003
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7004
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7005
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7006
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7007
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7008
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  # note: this block repeats the 7001 node and appears to be a leftover
  # experiment with batch_count => 1
  redis {
    batch_count => 1
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
}
# Output to Elasticsearch, creating a separate index per log source
output {
  if [fields][log_source] == "messages" {
    elasticsearch {
      hosts => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index => "messages-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elkstack123456"
    }
  }
  if [fields][log_source] == "secure" {
    elasticsearch {
      hosts => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index => "secure-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elkstack123456"
    }
  }
}
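With nine nearly identical redis inputs, telling them apart in logs and the monitoring APIs is hard. A side note (a sketch; the label is made up): the plugin's id option, documented in section 5 below, names each block:

input {
  redis {
    id => "redis_7001"    # hypothetical label, one per node
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
}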
Notes:
In the redis input, host takes a string, not a list, so every cluster node's address has to be written out as its own redis block.
If only one Redis cluster node's address is configured, the errors below appear and Logstash cannot pull any data from the cluster:
Redis connection problem {:exception=>#<Redis::CommandError: CROSSSLOT Keys in request don't hash to the same slot>}
Redis connection problem {:exception=>#<Redis::CommandError: MOVED 7928 192.168.80.107:7002>}
With every node's address configured, the same two errors still show up, but Logstash does pull data from the cluster. This is most likely because the plugin's plain Redis client does not follow cluster MOVED redirects, so the pipeline only works when at least one input block points directly at a node that actually holds the key.
4 Open issue
A knock-on problem: because the Redis cluster stores two copies of messages_secure, Logstash pulls two identical copies of every record, so the data shipped to Elasticsearch is duplicated as well; in Kibana every record shows up twice.
The root cause is filebeat writing duplicate data into the Redis cluster, so fixing this depends on resolving the problem above.
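Until the root cause is found, one possible mitigation (a sketch, not part of the original setup; it assumes the stock logstash fingerprint filter is available) is to derive a deterministic document ID from each event, so a duplicate overwrites the first copy in Elasticsearch instead of becoming a second document:

filter {
  fingerprint {
    # hash the raw log line into a stable ID
    source => ["message"]
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key => "dedup-key"    # HMAC key; any constant string works
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.80.104:9200"]
    index => "secure-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "elkstack123456"
    # identical events map to the same _id, so the duplicate overwrites
    document_id => "%{[@metadata][fingerprint]}"
  }
}

The trade-off is that genuinely identical log lines also collapse into one document, so hashing a timestamp or file offset together with the message may be preferable.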
5 Relevant official documentation
The host parameter takes a string; lists are not supported.
Redis input plugin
- Plugin version: v3.1.4
- Released on: 2017-08-16
- Changelog
For other versions, see the Versioned plugin docs.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in Github. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
This input will read events from a Redis instance; it supports both Redis channels and lists. The list command (BLPOP) used by Logstash is supported in Redis v1.3.1+, and the channel commands used by Logstash are found in Redis v1.3.8+. While you may be able to make these Redis versions work, the best performance and stability will be found in more recent stable versions. Versions 2.6.0+ are recommended.
For more information about Redis, see http://redis.io/
batch_count note: If you use the batch_count setting, you must use a Redis version 2.6.0 or newer. Anything older does not support the operations used by batching.
Redis Input Configuration Options
This plugin supports the following configuration options plus the Common Options described later.
| Setting | Input type | Required |
|---|---|---|
| batch_count | number | No |
| data_type | string, one of ["list", "channel", "pattern_channel"] | Yes |
| db | number | No |
| host | string | No |
| key | string | Yes |
| password | password | No |
| port | number | No |
| ssl | boolean | No |
| threads | number | No |
| timeout | number | No |
Also see Common Options for a list of options supported by all input plugins.
batch_count
- Value type is number
- Default value is 125
The number of events to return from Redis using EVAL.
data_type
- This is a required setting.
- Value can be any of: list, channel, pattern_channel
- There is no default value for this setting.
Specify either list or channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key.
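For illustration (a hypothetical sketch, not part of the pasted documentation), switching the block used earlier in this article from list mode to channel mode only changes the transport:

input {
  redis {
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    # SUBSCRIBE to a pub/sub channel instead of BLPOP'ing a list
    data_type => "channel"
    key => "logstash-channel"    # hypothetical channel name
  }
}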
db
- Value type is number
- Default value is 0
The Redis database number.
host
- Value type is string
- Default value is "127.0.0.1"
The hostname of your Redis server.
key
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The name of a Redis list or channel.
password
- Value type is password
- There is no default value for this setting.
Password to authenticate with. There is no authentication by default.
port
- Value type is number
- Default value is 6379
The port to connect on.
ssl
- Value type is boolean
- Default value is false
Enable SSL support.
threads
- Value type is number
- Default value is 1
timeout
- Value type is number
- Default value is 5
Initial connection timeout in seconds.
Common Options
The following configuration options are supported by all input plugins:
| Setting | Input type | Required |
|---|---|---|
| add_field | hash | No |
| codec | codec | No |
| enable_metric | boolean | No |
| id | string | No |
| tags | array | No |
| type | string | No |
Details
add_field
- Value type is hash
- Default value is {}
Add a field to an event
codec
- Value type is codec
- Default value is "plain"
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 redis inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
input {
  redis {
    id => "my_plugin_id"
  }
}
tags
- Value type is array
- There is no default value for this setting.
Add any number of arbitrary tags to your event.
This can help with processing later.
type
- Value type is string
- There is no default value for this setting.
Add a type field to all events handled by this input.
Types are used mainly for filter activation.
The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server.