[Original] Big Data Basics: Logstash (3) Application: File Parsing (grok/ruby/kv)
Goal: parse URL parameters out of an nginx access log, for example:

/v1/test?param2=v2&param3=v3&time=2019-03-18%2017%3A34%3A14
->
{'param1':'v1','param2':'v2','param3':'v3','time':'2019-03-18 17:34:14'}

Sample nginx log line:

1.119.132.168 - - [18/Mar/2019:09:13:50 +0000] "POST /param1/test?param2=1&param3=2&time=2019-03-18%2017%3A34%3A14 HTTP/1.1" 200 929 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36" "-"
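Before looking at the Logstash configs, the target transformation can be sketched in plain Ruby, outside Logstash; `parse_request` here is a hypothetical helper written for illustration, not part of any plugin:

```ruby
require 'cgi'

# Parse a request path like "/v1/test?param2=v2&..." into the flat hash
# shown above. The first path segment supplies param1; the query string
# supplies the rest, with %-escapes decoded.
def parse_request(request)
  path, query = request.split('?', 2)
  result = { 'param1' => path.split('/')[1] }
  query.split('&').each do |pair|
    k, v = pair.split('=', 2)
    result[k] = CGI.unescape(v)   # %20 -> ' ', %3A -> ':'
  end
  result
end

parse_request('/v1/test?param2=v2&param3=v3&time=2019-03-18%2017%3A34%3A14')
# => {"param1"=>"v1", "param2"=>"v2", "param3"=>"v3", "time"=>"2019-03-18 17:34:14"}
```

The three sections below achieve the same result inside Logstash in three different ways.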
1 Using grok
input {
  file {
    path => [ "/var/log/nginx/access.log" ]
    start_position => "beginning"
  }
}
filter {
  if [message] =~ /test/ {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (/%{PARAMVALUE:param1}/test\?param2=%{PARAMVALUE:param2}&param3=%{PARAMVALUE:param3}&time=%{PARAMVALUE:send_time_raw})(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:x_forward_for}" }
      pattern_definitions => { "PARAMVALUE" => "[^& ]*" }
    }
    # decode %-escapes such as %20 and %3A before date parsing
    urldecode {
      all_fields => true
    }
    date {
      match => [ "access_time_raw", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "access_time_tmp"
    }
    # convert the parsed timestamp to a microsecond-epoch string
    ruby {
      code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
               event.set('send_time', event.get('access_time'))"
    }
    # if the url carried a time parameter, it overrides access_time
    if [send_time_raw] {
      date {
        match => [ "send_time_raw", "yyyy-MM-dd HH:mm:ss" ]
        target => "send_time_tmp"
        timezone => "UTC"
      }
      ruby {
        code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
      }
    }
    mutate {
      remove_field => ["message", "ident", "auth", "verb", "bytes", "response", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "send_time_raw", "send_time_tmp"]
    }
  } else {
    drop {}
  }
}
output {
  if [param1] and [param2] and [param3] and "_grokparsefailure" not in [tags] {
    stdout { codec => json }
  }
}
Notes:
1) The parameter names and positions in the url are hard-coded, which is inflexible.
2) A custom pattern, PARAMVALUE, is defined via pattern_definitions.
3) urldecode is required; otherwise the value of time stays as 2019-03-18%2017%3A34%3A14, and the date plugin (which uses Joda to interpret patterns) fails on it because the raw string contains the letter A.
4) If time is empty, access_time is used as send_time instead.
5) Non-matching records are dropped.
6) Only records that satisfy the condition are output.
7) if-else branches can be used in both filter and output.
8) Watch the timezone option of the date plugin; without it, values are shifted by the local timezone offset.
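Points 3) and 8) can be reproduced in plain Ruby: `CGI.unescape` does what the urldecode filter does, and the microsecond conversion mirrors the ruby filter above. This is a sketch assuming the decoded time is UTC, matching timezone => "UTC" in the config:

```ruby
require 'cgi'
require 'time'

raw = '2019-03-18%2017%3A34%3A14'
decoded = CGI.unescape(raw)            # "2019-03-18 17:34:14"

# Parse the decoded string as UTC (the "+0000" suffix pins the zone), then
# turn the epoch seconds into a microsecond string, as the ruby filter does.
t = Time.strptime(decoded + ' +0000', '%Y-%m-%d %H:%M:%S %z')
send_time = (t.to_i * 1000000).to_s    # "1552930454000000"
```

Feeding the raw, undecoded string to a "yyyy-MM-dd HH:mm:ss" date pattern would fail exactly as note 3 describes.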
2 Using grok + ruby
input {
  file {
    path => [ "/var/log/nginx/access.log" ]
    start_position => "beginning"
  }
}
filter {
  if [message] =~ /test/ {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (%{URIPATHPARAM:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}" }
    }
    urldecode {
      all_fields => true
    }
    date {
      match => [ "access_time_raw", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "access_time_tmp"
    }
    ruby {
      code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
               event.set('send_time', event.get('access_time'))"
    }
    if [request] {
      # split the request into the path segment (param1) and key=value pairs
      ruby {
        init => "
          def convertName(name)
            result = ''
            name.each_char{|ch| result += (if ch < 'a' then '_' + ch.downcase else ch end)}
            result
          end
        "
        code => "
          event.set('param1', event.get('request').split('?')[0].split('/')[1])
          pairs = event.get('request').split('?')[1].split('&')
          pairs.each{ |item| arr = item.split('='); event.set(arr[0], arr[1]) }
        "
      }
      if [time] {
        date {
          match => [ "time", "yyyy-MM-dd HH:mm:ss" ]
          target => "send_time_tmp"
          timezone => "UTC"
        }
        ruby {
          code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
        }
      }
    }
    mutate {
      remove_field => ["message", "ident", "auth", "verb", "bytes", "response", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "time", "send_time_tmp"]
    }
  } else {
    drop {}
  }
}
output {
  if [param1] and [param2] and [param3] and "_grokparsefailure" not in [tags] {
    stdout { codec => json }
  }
}
Notes:
1) This reuses the stock nginx-style grok patterns (URIPATHPARAM) directly.
2) The ruby filter splits the query string into key=value pairs itself, which is more flexible than hard-coding parameter names.
3) A custom function can be defined in the ruby filter's init block.

In Logstash ruby code, fields must be read and written through the event API, e.g. event.get('field') and event.set('field', 'value'); the event['field'] form is not allowed, because:

[2019-03-19T17:15:32,729][ERROR][logstash.filters.ruby ] Ruby exception occurred: Direct event field references (i.e. event['field'] = 'value') have been disabled in favor of using event get and set methods (e.g. event.set('field', 'value')). Please consult the Logstash 5.0 breaking changes documentation for more details.
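The convertName helper defined in the init block above is not actually invoked by the code block, but it shows the pattern of defining reusable functions in init. A standalone sketch of the same logic (renamed convert_name in Ruby style):

```ruby
# Same logic as the convertName helper in the ruby filter's init block:
# turn a camelCase name into snake_case by replacing each character below
# 'a' (i.e. uppercase letters) with '_' plus its lowercase form.
def convert_name(name)
  result = ''
  name.each_char { |ch| result += (ch < 'a' ? '_' + ch.downcase : ch) }
  result
end

convert_name('sendTime')  # => "send_time"
```

Note the `ch < 'a'` test also matches digits and punctuation, so this is only safe for names made of letters.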
3 Using grok + kv
input {
  file {
    path => [ "/data/tmp/access.log" ]
    start_position => "beginning"
  }
}
filter {
  if [message] =~ /dataone\/u1/ {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:access_time_raw}\] \"(?:%{WORD:verb} (%{URIPATHPARAM:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}" }
    }
    # split the request on '&' and '?', then on '=' into key/value fields
    kv {
      source => "request"
      field_split => "&?"
      value_split => "="
    }
    urldecode {
      all_fields => true
    }
    date {
      match => [ "access_time_raw", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "access_time_tmp"
    }
    ruby {
      code => "event.set('access_time', (event.get('access_time_tmp').to_i * 1000000).to_s)
               event.set('send_time', event.get('access_time'))"
    }
    if [send_time_raw] {
      date {
        match => [ "send_time_raw", "yyyy-MM-dd HH:mm:ss" ]
        target => "send_time_tmp"
        timezone => "UTC"
      }
      ruby {
        code => "event.set('send_time', (event.get('send_time_tmp').to_i * 1000000).to_s)"
      }
    }
    mutate {
      remove_field => ["message", "ident", "auth", "verb", "bytes", "response", "x_forward_for", "http_version", "access_time_raw", "access_time_tmp", "path", "send_time_raw", "send_time_tmp"]
    }
  } else {
    drop {}
  }
}
Reference: https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
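What the kv filter does with field_split => "&?" and value_split => "=" can be approximated in plain Ruby. kv_parse below is a hypothetical stand-in for illustration: it drops tokens that contain no '=' (such as the path segment), and it folds in the urldecode step that the config applies afterwards:

```ruby
require 'cgi'

# Split the request on '&' or '?', keep only key=value tokens, decode values.
def kv_parse(request)
  pairs = request.split(/[&?]/).select { |tok| tok.include?('=') }
  pairs.map { |tok| k, v = tok.split('=', 2); [k, CGI.unescape(v)] }.to_h
end

kv_parse('/v1/test?param2=v2&param3=v3&time=2019-03-18%2017%3A34%3A14')
# => {"param2"=>"v2", "param3"=>"v3", "time"=>"2019-03-18 17:34:14"}
```

Unlike the grok+ruby version, this yields no param1: the path segment has no '=', so a separate step would still be needed to extract it.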