[Original] Big Data Basics: Logstash (3) Application: file parsing (grok/ruby/kv)
Parse URLs out of nginx logs (note that param1 is taken from the first path segment of the request, e.g. /v1/):
/v1/test?param2=v2&param3=v3&time=2019-03-18%2017%3A34%3A14
->
{'param1':'v1','param2':'v2','param3':'v3','time':'2019-03-18 17:34:14'}
Sample nginx log line:
1.119.132.168 - - [18/Mar/2019:09:13:50 +0000] "POST /param1/test?param2=1&param3=2&time=2019-03-18%2017%3A34%3A14 HTTP/1.1" 200 929 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36" "-"
1 Using grok
input {
file {
path => [ "/var/log/nginx/access.log" ]
start_position => "beginning"
}
}
filter {
if [message] =~ /test/ {
grok {
match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (/%{PARAMVALUE:param1}/test\?param2=%{PARAMVALUE:param2}&param3=%{PARAMVALUE:param3}&time=%{PARAMVALUE:send_time_raw})(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:x_forward_for}" }
pattern_definitions => { "PARAMVALUE" => "[^& ]*" }
}
urldecode {
all_fields => true
}
date {
match => [ "access_time_raw","dd/MMM/yyyy:HH:mm:ss Z"]
target => "access_time_tmp"
}
ruby {
code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
event.set('send_time', event.get('access_time'))"
}
if [send_time_raw] {
date {
match => [ "send_time_raw","yyyy-MM-dd HH:mm:ss"]
target => "send_time_tmp"
timezone => "UTC"
}
ruby {
code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
}
}
mutate {
remove_field => ["message", "ident", "auth", "verb", "bytes", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "response", "send_time_raw", "send_time_tmp"]
}
} else {
drop {}
}
}
output {
if [param1] and [param2] and [param3] and "_grokparsefailure" not in [tags] {
stdout {codec => json}
}
}
Notes:
1) The URL's parameter names and positions are hard-coded, which is inflexible;
2) A custom pattern, PARAMVALUE, is used;
3) urldecode is required; otherwise the value of time stays as 2019-03-18%2017%3A34%3A14, and the date plugin (which uses Joda to parse the pattern) throws an error because the value contains the letter A;
4) If time is empty, access_time is used instead;
5) Records that do not match are dropped;
6) Only records meeting the conditions are output;
7) if-else branches are used in both filter and output;
8) Mind the timezone in the date plugin, otherwise timestamps will be shifted by the timezone offset;
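The date-then-ruby two-step above (the date plugin parses access_time_raw into a temporary field, then the ruby filter multiplies the epoch seconds by 1,000,000 and stores the result as a string) can be sketched in plain Ruby; the sample value is taken from the log line above:

```ruby
require 'time'

# Parse the nginx access-time format (dd/MMM/yyyy:HH:mm:ss Z), then
# convert the epoch seconds to a microsecond string, as the ruby filter
# does with event.get('access_time_tmp').to_i * 1000000.
raw = "18/Mar/2019:09:13:50 +0000"
t = Time.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
access_time = (t.to_i * 1_000_000).to_s
puts access_time  # => "1552900430000000"
```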
2 Using grok + ruby
input {
file {
path => [ "/var/log/nginx/access.log" ]
start_position => "beginning"
}
}
filter {
if [message] =~ /test/ {
grok {
match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (%{URIPATHPARAM:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}" }
}
urldecode {
all_fields => true
}
date {
match => [ "access_time_raw","dd/MMM/yyyy:HH:mm:ss Z"]
target => "access_time_tmp"
}
ruby {
code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
event.set('send_time', event.get('access_time'))"
}
if [request] {
ruby {
init => "
def convertName(name)
result = ''
name.each_char{|ch| result += (if ch < 'a' then '_' + ch.downcase else ch end)}
result
end
"
code => "
event.set('param1', event.get('request').split('?')[0].split('/')[1])
pairs = event.get('request').split('?')[1].split('&')
pairs.each{ |item| arr=item.split('='); event.set(arr[0], arr[1])}
"
}
if [time] {
date {
match => [ "time","yyyy-MM-dd HH:mm:ss"]
target => "send_time_tmp"
timezone => "UTC"
}
ruby {
code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
}
}
}
mutate {
remove_field => ["message", "ident", "auth", "verb", "bytes", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "response", "time", "send_time_tmp"]
}
} else {
drop {}
}
}
output {
if [param1] and [param2] and [param3] and "_grokparsefailure" not in [tags] {
stdout {codec => json}
}
}
Notes:
1) Uses the stock nginx-log grok pattern directly;
2) Parses key=value pairs in ruby, which is more flexible;
3) Defines a custom function;
In Logstash ruby code, field getters and setters must go through methods such as event.get('field'); event['field'] is not allowed, because:
[2019-03-19T17:15:32,729][ERROR][logstash.filters.ruby ] Ruby exception occurred: Direct event field references (i.e. event['field'] = 'value') have been disabled in favor of using event get and set methods (e.g. event.set('field', 'value')). Please consult the Logstash 5.0 breaking changes documentation for more details.
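The convertName helper defined in the init block converts a camelCase key to snake_case: any character that sorts below 'a' in ASCII order (an uppercase letter, but note digits would match too) is replaced by '_' plus its lowercase form. A standalone sketch of the same logic:

```ruby
# Standalone version of the convertName helper from the ruby filter's
# init block: characters below 'a' in ASCII order (uppercase letters,
# and as a quirk of the comparison, digits) become '_' + lowercase.
def convertName(name)
  result = ''
  name.each_char { |ch| result += (ch < 'a' ? '_' + ch.downcase : ch) }
  result
end

puts convertName('sendTime')  # => "send_time"
```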
3 Using grok + kv
input {
file {
path => [ "/data/tmp/access.log" ]
start_position => "beginning"
}
}
filter {
if [message] =~ /dataone\/u1/ {
grok {
match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (%{URIPATHPARAM:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}" }
}
kv {
source => "request"
field_split => "&?"
value_split => "="
}
urldecode {
all_fields => true
}
date {
match => [ "access_time_raw","dd/MMM/yyyy:HH:mm:ss Z"]
target => "access_time_tmp"
}
ruby {
code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
event.set('send_time', event.get('access_time'))"
}
if [send_time_raw] {
date {
match => [ "send_time_raw","yyyy-MM-dd HH:mm:ss"]
target => "send_time_tmp"
}
ruby {
code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
}
}
mutate {
remove_field => ["message", "ident", "auth", "verb", "bytes", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "response", "send_time_raw", "send_time_tmp"]
}
} else {
drop {}
}
}
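What the kv filter does here, splitting request on the characters & and ? (field_split) and each pair on = (value_split), followed by urldecode, can be sketched in plain Ruby using the sample URL from the top of the article:

```ruby
require 'cgi'

# Mimic kv { field_split => "&?" value_split => "=" } plus urldecode:
# split the request on '&' or '?', drop the leading path, split each
# remaining pair on '=', and URL-decode the values.
request = "/v1/test?param2=v2&param3=v3&time=2019-03-18%2017%3A34%3A14"
fields = {}
request.split(/[&?]/).drop(1).each do |pair|
  k, v = pair.split('=', 2)
  fields[k] = CGI.unescape(v)
end
puts fields['time']  # => "2019-03-18 17:34:14"
```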
Reference: https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html