The installation of ELK with Redis was covered earlier; this post only covers collecting Nginx logs without changing the log format.

1. The log_format on the Nginx side is configured as follows:

log_format  access  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
access_log  /usr/local/nginx/logs/access.log  access;

2. The logstash agent configuration on the Nginx side:

[root@localhost conf]# cat logstash_agent.conf
input {
  file {
    path => [ "/usr/local/nginx/logs/access.log" ]
    type => "nginx_access"
  }
}
output {
  redis {
    data_type => "list"
    key => "nginx_access_log"
    host => "192.168.100.70"
    port => "6379"
  }
}
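With data_type => "list", the redis output pushes each event onto the Redis list as a JSON document. A minimal Python sketch of what such a payload looks like (the field values here are made up; a real event also carries @timestamp, host, and other fields logstash adds):

```python
import json

# Illustrative shape of one event pushed onto the "nginx_access_log" list.
event = {
    "message": '192.168.100.1 - - [07/Feb/2017:10:11:12 +0800] "GET / HTTP/1.1" 200 612',
    "type": "nginx_access",
    "path": "/usr/local/nginx/logs/access.log",
}
payload = json.dumps(event)      # what ends up RPUSHed onto the list
restored = json.loads(payload)   # what the indexer pops and parses
print(restored["type"])          # nginx_access
```

The indexer on the other side pops these JSON documents off the same list, which is why the key names must match exactly on both ends.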

3. The logstash indexer configuration:

[root@elk-node1 conf]# cat logstash_indexer.conf
input {
  redis {
    data_type => "list"
    key => "nginx_access_log"
    host => "192.168.100.70"
    port => "6379"
  }
}
filter {
  grok {
    patterns_dir => "./patterns"
    match => { "message" => "%{NGINXACCESS}" }
  }
  geoip {
    source => "clientip"
    target => "geoip"
    #database => "/usr/local/logstash/GeoLite2-City.mmdb"
    database => "/usr/local/src/GeoLiteCity.dat"
    # Appending longitude first and latitude second builds the
    # [lon, lat] array that Kibana's map visualizations expect.
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "response", "integer" ]
    convert => [ "bytes", "integer" ]
  }
  mutate { remove_field => ["message"] }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  mutate {
    remove_field => "timestamp"
  }
}
output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => "192.168.100.71"
    #protocol => "http"
    index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
  }
}
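The date filter's Joda-style pattern dd/MMM/yyyy:HH:mm:ss Z corresponds to Python's %d/%b/%Y:%H:%M:%S %z, which makes it easy to sanity-check against a timestamp taken from the log (the sample value here is made up):

```python
from datetime import datetime

# "dd/MMM/yyyy:HH:mm:ss Z" in the logstash date filter roughly maps
# to this strptime format string.
ts = datetime.strptime("07/Feb/2017:10:11:12 +0800", "%d/%b/%Y:%H:%M:%S %z")
print(ts.year, ts.month, ts.day)   # 2017 2 7
```

If the pattern and the actual timestamp disagree, the date filter tags the event with _dateparsefailure instead of setting @timestamp, so it is worth checking before deploying.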

4. Create the file holding the grok patterns logstash uses to parse the Nginx log.

mkdir -pv /usr/local/logstash/patterns

[root@elk-node1 ]# vim /usr/local/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}

This pattern must stay consistent with Nginx's log_format.
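A rough Python equivalent of this grok pattern (simplified: IPORHOST, HTTPDATE, and QS are approximated, and the sample log line is made up) can be used to check that the pattern and the log_format still agree before loading it into logstash:

```python
import re

# Simplified stand-ins for grok's IPORHOST, HTTPDATE, QS, etc.
NGINXACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'(?P<referrer>"[^"]*") (?P<agent>"[^"]*") (?P<http_x_forwarded_for>\S+)')

line = ('192.168.100.1 - - [07/Feb/2017:10:11:12 +0800] "GET /index.html HTTP/1.1" '
        '200 612 "-" "curl/7.29.0" "-"')
m = NGINXACCESS.match(line)
print(m.group('clientip'), m.group('response'), m.group('bytes'))
# 192.168.100.1 200 612
```

If the regex stops matching after a log_format change, the grok filter would likewise tag events with _grokparsefailure.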

What if we also want the Nginx response time in the log? Modify the format to add $request_time:

Modify the log format so new entries carry the extra field:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" $request_time';

Then extend the grok pattern for Nginx with one more field:

[root@elk-node1 patterns]# cat nginx

NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NGUSER:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes:float}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for} %{NUMBER:request_time:float}
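The :float suffix tells grok to cast the captured value instead of storing it as a string. Since $request_time is the last token on the line, a quick check of the cast (sample value made up):

```python
# A truncated sample line in the new format; only the tail matters here:
# the final space-separated token is $request_time.
line = ('192.168.100.1 - - [07/Feb/2017:10:11:12 +0800] "GET / HTTP/1.1" '
        '200 612 "-" "curl/7.29.0" "-" 0.005')

request_time = float(line.rsplit(' ', 1)[1])
print(request_time)   # 0.005
```

Storing request_time as a float (rather than a string) is what makes averages and percentiles over it possible in Kibana.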

For reference, here is a logstash.conf from my production environment at the time (a logstash 5.2.2 conf file):

input {
  redis {
    data_type => "list"
    key => "uc01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "uc02-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "p-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "https-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "rms01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "rms02-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
}
filter {
  if [path] =~ "nginx" {
    grok {
      patterns_dir => "./patterns"
      match => { "message" => "%{NGINXACCESS}" }
    }
    mutate {
      remove_field => ["message"]
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    # Remove timestamp only after the date filter has consumed it.
    mutate {
      remove_field => "timestamp"
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/usr/local/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  else {
    drop {}
  }
}
output {
  if [type] == "uc01-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-uc01-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "uc02-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-uc02-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "p-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-p-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "https-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-api-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "rms01-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-rms01-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "rms02-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
      index => "logstash-rms02-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
}

logstash_indexer.conf
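The output section above fans events out to per-type indices, where %{+YYYY.MM.dd} expands to a daily date suffix. A sketch of the resulting index names (the index_for helper is hypothetical, and it only covers the types whose short name is the key prefix; the https type maps to the "api" index by hand in the real config):

```python
from datetime import date

def index_for(event_type: str, day: date) -> str:
    # Mirrors index => "logstash-<name>-log-%{+YYYY.MM.dd}" by stripping
    # the "-nginx-access" suffix to get the short name.
    name = event_type.replace("-nginx-access", "")
    return "logstash-%s-log-%s" % (name, day.strftime("%Y.%m.%d"))

print(index_for("uc01-nginx-access", date(2017, 2, 7)))
# logstash-uc01-log-2017.02.07
```

One index per day per type keeps retention simple: old days can be dropped by deleting whole indices.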

[root@localhost ~]$ cd /usr/local/logstash-5.2./etc
[root@localhost etc]$ cat logstash_agentd.conf
input {
  file {
    type => "web-nginx-access"
    path => "/usr/local/nginx/logs/access.log"
  }
}
output {
  #file {
  #  path => "/tmp/%{+YYYY-MM-dd}.messages.gz"
  #  gzip => true
  #}
  redis {
    data_type => "list"
    key => "web01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "@#$%QW"
  }
}

logstash_agentd.conf
