Shipping HAProxy logs with Logstash via Redis and Kafka
Logstash client (shipper): collecting the HAProxy HTTP and TCP logs
input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}
filter {
  if [type] == "haproxy_http" {
    grok {
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}
output {
  if [type] == "haproxy_http" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "5"
      data_type => "list"
      key => "haproxy_http.log"
    }
  } else if [type] == "haproxy_tcp" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "4"
      data_type => "list"
      key => "haproxy_tcp.log"
    }
  }
}
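Note that %{FENG:...} is not one of the stock grok patterns; it has to be defined in the patterns_dir referenced above. A minimal sketch of such a pattern file — the file name and regex here are assumptions, chosen to match HAProxy's captured cookie fields, which are either a value or "-":

# /data/logstash/patterns/haproxy  (assumed content, not from the original post)
# FENG: a captured request/response cookie -- any non-space token, or "-"
FENG (?:-|[^ ]+)

For reference, the HTTP grok is written against HAProxy's default httplog format; the sample line from the HAProxy documentation parses cleanly with it:

Feb  6 12:14:14 localhost haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 "GET /index.html HTTP/1.1"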
Logstash server (indexer): reading the HAProxy logs from Redis and writing them into Elasticsearch
[root@logstashserver etc]# cat logstash.conf
input {
  # Logstash does not allow conditionals inside the input section, so both
  # Redis lists are consumed directly; events keep the "type" field set by
  # the shipper, which the output section below branches on.
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "5"
    data_type => "list"
    key => "haproxy_http.log"
  }
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "4"
    data_type => "list"
    key => "haproxy_tcp.log"
  }
}
output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
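Before starting the indexer, it is worth confirming that the shipper is actually feeding the Redis lists (db 5 for http, db 4 for tcp). A quick check with redis-cli:

redis-cli -h 192.168.20.166 -n 5 llen haproxy_http.log
redis-cli -h 192.168.20.166 -n 4 llen haproxy_tcp.log

A growing list length means events are queuing; once the indexer runs, the lists should drain back toward zero.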
#########################################kafka###############################################
Client (shipper)
input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}
filter {
  if [type] == "haproxy_http" {
    grok {
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}
output {
  if [type] == "haproxy_http" {
    kafka {
      # broker list; this Logstash instance acts as the Kafka producer
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
      topic_id => "haproxy_http.log"    # topic name; auto-created on first write
      compression_type => "snappy"      # compression codec
    }
  } else if [type] == "haproxy_tcp" {
    kafka {
      # broker list; this Logstash instance acts as the Kafka producer
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
      topic_id => "haproxy_tcp.log"     # topic name; auto-created on first write
      compression_type => "snappy"      # compression codec
    }
  }
}
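topic_id and the zk_connect consumer used below are options of the legacy logstash-kafka plugins (Kafka 0.8-era); current plugin versions renamed topic_id to topic and consume via bootstrap brokers instead of ZooKeeper. Assuming such an older Kafka installation, the auto-created topics can be confirmed with the stock topic tool:

kafka-topics.sh --zookeeper zookeeper1:2181 --list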
Server (indexer)
input {
  # As on the Redis indexer, conditionals are not valid inside input, so both
  # topics are consumed directly; events carry the "type" set by the shipper.
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_http.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_tcp.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}
output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
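Once both pipelines are running, the daily indices should show up in the cluster. A quick check against any node:

curl -s 'http://es1:9200/_cat/indices/logstash-haproxy-*?v'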