Logstash client: collecting the HAProxy HTTP and TCP logs

input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}

filter {
  if [type] == "haproxy_http" {
    grok {
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}

output {
  if [type] == "haproxy_http" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "5"
      data_type => "list"
      key => "haproxy_http.log"
    }
  } else if [type] == "haproxy_tcp" {
    redis {
      host => "192.168.20.166"
      port => "6379"
      db => "4"
      data_type => "list"
      key => "haproxy_tcp.log"
    }
  }
}
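The HTTP grok above loads custom patterns from /data/logstash/patterns and uses a home-grown %{FENG} pattern for the two captured-cookie fields. The original pattern file is not shown in the source, so the definition below is only a guess: HAProxy logs those fields as "-" when no cookie was captured and "name=value" otherwise, so a permissive non-space match covers both cases.

# /data/logstash/patterns/haproxy  (hypothetical contents; the real FENG
# definition is not included in the source)
# Matches captured_request_cookie / captured_response_cookie: "-" or "name=value"
FENG [^ ]*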

Logstash server: reading the HAProxy logs from Redis and writing them into Elasticsearch

[root@logstashserver etc]# cat logstash.conf

input {
  # Logstash does not allow conditionals inside an input block, so both Redis
  # lists are consumed unconditionally. The "type" field set by the shipper
  # travels inside the JSON-encoded events, so the output below can still
  # branch on it.
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "5"
    data_type => "list"
    key => "haproxy_http.log"
  }
  redis {
    host => "192.168.20.166"
    port => "6379"
    db => "4"
    data_type => "list"
    key => "haproxy_tcp.log"
  }
}

output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
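With both pipelines running, the daily indices should start appearing in Elasticsearch. A quick sanity check from any node (assuming the default REST port 9200 used in the config above):

# List the daily HAProxy indices
curl -s 'http://es1:9200/_cat/indices/logstash-haproxy-*?v'
# Fetch one parsed document to confirm the grok fields came through
curl -s 'http://es1:9200/logstash-haproxy-http.log-*/_search?size=1&pretty'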

#########################################kafka###############################################

Client side (same inputs and filters as above; the output goes to Kafka instead of Redis)

input {
  file {
    path => "/data/haproxy/logs/haproxy_http.log"
    start_position => "beginning"
    type => "haproxy_http"
  }
  file {
    path => "/data/haproxy/logs/haproxy_tcp.log"
    start_position => "beginning"
    type => "haproxy_tcp"
  }
}

filter {
  if [type] == "haproxy_http" {
    grok {
      patterns_dir => "/data/logstash/patterns"
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{FENG:captured_request_cookie} %{FENG:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} \"%{WORD:verb} %{URIPATHPARAM:request} %{WORD:http_socke}/%{NUMBER:http_version}\"" }
    }
    geoip {
      source => "client_ip"
      fields => ["ip","city_name","country_name","location"]
      add_tag => [ "geoip" ]
    }
  } else if [type] == "haproxy_tcp" {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}" }
    }
  }
}

output {
  if [type] == "haproxy_http" {
    kafka {  # ship events to Kafka; this Logstash instance acts as the producer
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"  # Kafka broker list
      topic_id => "haproxy_http.log"  # topic name; created automatically if it does not exist
      compression_type => "snappy"    # compression codec
    }
  } else if [type] == "haproxy_tcp" {
    kafka {
      bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
      topic_id => "haproxy_tcp.log"
      compression_type => "snappy"
    }
  }
}
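Since the topics are auto-created by the producer, it is worth confirming they exist and are receiving events before starting the indexer. With the ZooKeeper-era Kafka tooling that matches the zk_connect consumer below (script paths may differ per install):

# List the topics created by the shipper
kafka-topics.sh --list --zookeeper zookeeper1:2181
# Tail raw JSON events off one topic
kafka-console-consumer.sh --zookeeper zookeeper1:2181 --topic haproxy_http.log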

Server side (consumes from Kafka and writes to Elasticsearch)

input {
  # As with the Redis indexer, conditionals are not supported inside an input
  # block, so both topics are consumed unconditionally. The "type" field set
  # by the shipper is preserved in the JSON-encoded messages, so the output
  # below can still branch on it.
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_http.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
  kafka {
    zk_connect => "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
    topic_id => "haproxy_tcp.log"
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}

output {
  if [type] == "haproxy_http" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-http.log-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "haproxy_tcp" {
    elasticsearch {
      hosts => ["es1:9200","es2:9200","es3:9200"]
      manage_template => true
      index => "logstash-haproxy-tcp.log-%{+YYYY-MM-dd}"
    }
  }
}
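Note that zk_connect, topic_id and reset_beginning belong to the old (pre-Logstash-5.x) kafka input plugin. Newer plugin versions talk to the brokers directly rather than to ZooKeeper; an untested sketch of the equivalent input on a current Logstash:

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics => ["haproxy_http.log", "haproxy_tcp.log"]
    consumer_threads => 5
    decorate_events => true
    codec => "json"  # the shipper sent JSON-serialized events, so decode them back
  }
}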
