Shipping HAProxy logs with Logstash via Redis or Kafka
Logstash client (shipper): collecting the HAProxy HTTP and TCP logs
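HAProxy itself only emits logs over the syslog protocol, so the two flat files tailed below are assumed to be produced by a local rsyslog rule along these lines (a sketch, not taken from the original post; the local2/local3 facilities are assumptions and must match the log directives in haproxy.cfg):

# /etc/rsyslog.d/haproxy.conf -- a sketch; facilities are assumptions
$ModLoad imudp
$UDPServerRun 514
local2.* /data/haproxy/logs/haproxy_http.log
local3.* /data/haproxy/logs/haproxy_tcp.log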
input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}
filter {
  if [type] == "haproxy_http" {
    grok {
      # FENG is a custom pattern loaded from patterns_dir; see the sketch below
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}
output {
  if [type] == "haproxy_http" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "5"
      data_type => "list"
      key => "haproxy_http.log"
    }
  } else if [type] == "haproxy_tcp" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "4"
      data_type => "list"
      key => "haproxy_tcp.log"
    }
  }
}
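The HTTP grok expression above references %{FENG:...}, a custom pattern that has to be defined in a file under /data/logstash/patterns. The original definition is not shown in the post; a minimal hypothetical sketch that would match HAProxy's captured-cookie field (a token, or "-" when no cookie was captured) is:

# /data/logstash/patterns/haproxy -- hypothetical; the real FENG definition is not shown
FENG %{NOTSPACE}

The shipper config can then be syntax-checked before starting, e.g. with /opt/logstash/bin/logstash -f shipper.conf --configtest (path assumed).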
Logstash server (indexer): reading the HAProxy logs from Redis and writing them to Elasticsearch
[root@logstashserver etc]# cat logstash.conf
input {
  # Conditionals are not allowed inside input blocks, so each Redis list gets
  # its own input. The type field set by the shipper travels inside the
  # JSON-encoded events, so the filter/output conditionals still work.
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "5"
    data_type => "list"
    key => "haproxy_http.log"
  }
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "4"
    data_type => "list"
    key => "haproxy_tcp.log"
  }
}
output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
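With both sides running, the Redis lists should stay short, since the indexer drains what the shipper pushes. A quick sanity check from any host that can reach Redis:

# list length per database: db 5 holds the HTTP log list, db 4 the TCP one
redis-cli -h 192.168.20.166 -p 6379 -n 5 LLEN haproxy_http.log
redis-cli -h 192.168.20.166 -p 6379 -n 4 LLEN haproxy_tcp.log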
#########################################kafka###############################################
Client side (shipper to Kafka)
input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}
filter {
  if [type] == "haproxy_http" {
    grok {
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}
output {
  if [type] == "haproxy_http" {
    kafka {
      # the Kafka brokers this producer connects to
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
      # topic name; created automatically if it does not exist
      topic_id => "haproxy_http.log"
      # compression codec
      compression_type => "snappy"
    }
  } else if [type] == "haproxy_tcp" {
    kafka {
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
      topic_id => "haproxy_tcp.log"
      compression_type => "snappy"
    }
  }
}
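Before starting the indexer it is worth confirming that the topics exist and are receiving data. A quick check with the Kafka command-line tools (paths and flags assume a ZooKeeper-era Kafka distribution, matching the legacy zk_connect-based input plugin used below):

# list the auto-created topics
bin/kafka-topics.sh --zookeeper zookeeper1:2181 --list
# tail one topic from the beginning to eyeball the JSON events
bin/kafka-console-consumer.sh --zookeeper zookeeper1:2181 --topic haproxy_http.log --from-beginning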
Server side (Kafka consumer, indexing to Elasticsearch)
input {
  # As with the Redis indexer, conditionals cannot appear inside input blocks,
  # so each topic gets its own kafka input; the type field is carried in the
  # JSON-encoded events. zk_connect/topic_id/reset_beginning are options of
  # the legacy (pre-5.x) kafka input plugin.
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_http.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_tcp.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}
output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
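Once events flow, one daily index per log type should appear in Elasticsearch. A quick check against the cluster:

# expect logstash-haproxy-http.log-* and logstash-haproxy-tcp.log-* indices
curl -s 'http://es1:9200/_cat/indices/logstash-haproxy-*?v'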