Sometimes a log entry is too long to fit on a single line and spills onto the next one. To keep the related lines together as a single event, the multiline plugin from the codec family comes in handy. The source log content:

[2017-09-20T16:04:34,936][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-09-20T16:04:34,949][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.44.134:9200"]}

The `[` character in front of the timestamp is used to decide whether a line belongs to the same event as the previous one.

[root@node3 conf.d]# cat multiline.conf
input {
  file {
    path => ["/var/log/logstash/logstash-plain.log"]
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
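As an illustration (a simplified sketch, not the actual Logstash implementation), the effect of `pattern => "^\["` with `negate => true` and `what => "previous"` can be modeled like this: any line that does not match the pattern is appended to the previous event.

```python
import re

def group_multiline(lines, pattern=r"^\["):
    """Join continuation lines (those NOT matching pattern) onto the previous event."""
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            # Line matches the pattern: it starts a new event.
            events.append(line)
        else:
            # negate => true, what => "previous": a non-matching line is
            # appended to the previous event.
            events[-1] += "\n" + line
    return events

lines = [
    "[2017-09-20T16:04:34,936][INFO ] start of a long entry",
    "    ...continuation of that entry",
    "[2017-09-20T16:04:34,949][INFO ] a single-line entry",
]
print(len(group_multiline(lines)))  # 2
```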

Then run it:

[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f multiline.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"@version" => "1",
"host" => "node3",
"path" => "/var/log/logstash/logstash-plain.log",
"@timestamp" => 2017-09-21T02:54:37.733Z,
"message" => "[2017-09-21T10:51:10,588][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>\"netflow\", :directory=>\"/usr/share/logstash/modules/netflow/configuration\"}"
}
{
"@version" => "1",
"host" => "node3",
"path" => "/var/log/logstash/logstash-plain.log",
"@timestamp" => 2017-09-21T02:54:37.743Z,
"message" => "[2017-09-21T10:51:10,596][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>\"fb_apache\", :directory=>\"/usr/share/logstash/modules/fb_apache/configuration\"}"
}

Next, start collecting the nginx logs.

The nginx documentation gives the directive syntax `log_format name [escape=default|json] string ...;` — nginx log formats support JSON, so we configure nginx to log in JSON and then use Logstash's json codec to collect the nginx logs.
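As a side note, since nginx 1.11.8 the `escape=json` parameter makes nginx itself JSON-escape special characters in variable values, which avoids producing invalid JSON when, say, a user agent contains a double quote. A sketch of the directive (the format string here is abbreviated, not the full format used below):

```nginx
log_format json escape=json '{"client":"$remote_addr","ua":"$http_user_agent"}';
```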

1. First install nginx. Here it is installed with yum; nginx is available in the EPEL repository.

2. Configure nginx to output its log in JSON format (note: the key `@timstamp` below is a typo for `@timestamp`; it is left as-is because the same spelling shows up in the generated logs):

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{"@timstamp":"$time_iso8601","@version":"1","client":"$remote_addr","url":"$uri","status":"$status","domain":"$host","host":"$server_addr","size":"$body_bytes_sent","responsetime":"$request_time","referer":"$http_referer","ua":"$http_user_agent"}';
    #access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/access.log json;
}

3. The values above are all nginx variables; the full list is at http://nginx.org/en/docs/http/ngx_http_core_module.html#var_status

4. Start the nginx service and check whether the generated log really is JSON:

[root@node3 nginx]# cat /var/log/nginx/access.log
{"@timstamp":"2017-09-21T13:47:43+08:00","@version":"1","client":"192.168.44.1","url":"/index.html","status":"200","domain":"192.168.44.136","host":"192.168.44.136","size":"3698","responsetime":"0.000","referer":"-","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"}
{"@timstamp":"2017-09-21T13:47:43+08:00","@version":"1","client":"192.168.44.1","url":"/nginx-logo.png","status":"200","domain":"192.168.44.136","host":"192.168.44.136","size":"368","responsetime":"0.000","referer":"http://192.168.44.136/","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"}
{"@timstamp":"2017-09-21T13:47:43+08:00","@version":"1","client":"192.168.44.1","url":"/poweredby.png","status":"200","domain":"192.168.44.136","host":"192.168.44.136","size":"2811","responsetime":"0.000","referer":"http://192.168.44.136/","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"}
{"@timstamp":"2017-09-21T13:47:43+08:00","@version":"1","client":"192.168.44.1","url":"/404.html","status":"404","domain":"192.168.44.136","host":"192.168.44.136","size":"3652","responsetime":"0.000","referer":"http://192.168.44.136/","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"}
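Each of the lines above is a self-contained JSON document, which is exactly what the json codec expects: one JSON object per line. A quick standalone check in Python (the user agent is shortened for readability; this is just an illustration, not part of the pipeline):

```python
import json

# One access-log line from above, with the user agent abbreviated.
line = ('{"@timstamp":"2017-09-21T13:47:43+08:00","@version":"1",'
        '"client":"192.168.44.1","url":"/index.html","status":"200",'
        '"domain":"192.168.44.136","host":"192.168.44.136","size":"3698",'
        '"responsetime":"0.000","referer":"-","ua":"Mozilla/5.0"}')

event = json.loads(line)
print(event["url"], event["status"])  # /index.html 200
```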

Then use Logstash's json codec to collect the nginx access log:

[root@node3 conf.d]# cat json.conf
input {
  file {
    path => ["/var/log/nginx/access.log"]
    start_position => "beginning"
    codec => json
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

Run the config file:

[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f json.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"referer" => "-",
"ua" => "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36",
"url" => "/index.html",
"path" => "/var/log/nginx/access.log",
"@timestamp" => 2017-09-21T05:58:44.442Z,
"size" => "3698",
"@timstamp" => "2017-09-21T13:47:43+08:00",
"domain" => "192.168.44.136",
"@version" => "1",
"host" => "192.168.44.136",
"client" => "192.168.44.1",
"responsetime" => "0.000",
"status" => "200"
}

Now send both of the logs above (the logstash log and the nginx log) to Elasticsearch. Delete the old indices in ES first, then collect both logs into ES with Logstash:

[root@node3 conf.d]# cat all.conf
input {
  file {
    type => "nginx-log"
    path => ["/var/log/nginx/access.log"]
    start_position => "beginning"
    codec => json
  }
  file {
    type => "logstash-log"
    path => ["/var/log/logstash/logstash-plain.log"]
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["192.168.44.134:9200"]
      index => "logstash-log"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.44.134:9200"]
      index => "nginx-log"
    }
  }
}

Then check in ES whether the data has arrived:

Use Kibana to see the results; start by installing Kibana.

Kibana is installed here from the RPM package:

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.1-x86_64.rpm

/etc/default/kibana
/etc/init.d/kibana
/etc/kibana/kibana.yml
/etc/systemd/system/kibana.service
/usr/share/kibana/LICENSE.txt
/usr/share/kibana/NOTICE.txt
/usr/share/kibana/README.txt
/usr/share/kibana/bin/kibana
/usr/share/kibana/bin/kibana-plugin
/usr/share/kibana/node/CHANGELOG.md
/usr/share/kibana/node/LICENSE
/usr/share/kibana/node/README.md
/usr/share/kibana/node/bin/node
/usr/share/kibana/node/bin/npm

Modify the Kibana configuration:

[root@node3 ~]# egrep -v "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.44.136"
server.name: "node3"
elasticsearch.url: "http://192.168.44.134:9200"
kibana.index: ".kibana"

Then start Kibana:

[root@node3 ~]# /etc/init.d/kibana start
kibana started
[root@node3 ~]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6948/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1301/sshd
tcp 0 0 192.168.44.136:5601 0.0.0.0:* LISTEN 7451/node

Open Kibana in a browser to check that it is talking to ES:

Now add the nginx-log index from ES into Kibana:

With that, the basic integration of Kibana and ES is done.

Next, another Logstash config file: collecting syslog messages.

[root@node3 conf.d]# cat syslog.conf
input {
  syslog {
    type => "syslog"
    host => "192.168.44.136"
    port => "514"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

Check whether the port is up:

[root@node3 conf.d]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6948/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1301/sshd
tcp 0 0 192.168.44.136:5601 0.0.0.0:* LISTEN 7451/node
tcp 0 0 :::80 :::* LISTEN 6948/nginx
tcp 0 0 :::22 :::* LISTEN 1301/sshd
tcp 0 0 ::ffff:127.0.0.1:9600 :::* LISTEN 7669/java
tcp 0 0 ::ffff:192.168.44.136:514 :::* LISTEN 7669/java
udp 0 0 0.0.0.0:68 0.0.0.0:* 1128/dhclient
udp 0 0 ::ffff:192.168.44.136:514 :::* 7669/java

Modify the rsyslog configuration:

vim /etc/rsyslog.conf and append this line at the end:

*.* @@192.168.44.136:514
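In the rsyslog syntax above, `@@` means forward over TCP (a single `@` would be UDP). What rsyslog sends can be mimicked with a small hand-rolled client — a hypothetical test helper, not part of the actual setup — that builds a minimal RFC 3164-style message:

```python
import socket  # used by the (commented-out) send at the bottom
from datetime import datetime

def build_syslog_message(facility, severity, hostname, tag, text):
    """Build a minimal RFC 3164-style syslog line. PRI = facility*8 + severity."""
    pri = facility * 8 + severity
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {hostname} {tag}: {text}"

# facility 0 (kern) + severity 6 (info) gives PRI "<6>", the same
# facility/severity combination the kernel messages above carry.
msg = build_syslog_message(0, 6, "node3", "kernel", "test message")

# To actually deliver it over TCP like rsyslog's @@ forwarding
# (assumes the syslog input above is listening):
# with socket.create_connection(("192.168.44.136", 514)) as conn:
#     conn.sendall((msg + "\n").encode())
```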

Restart the rsyslog service:

[root@node3 conf.d]# /etc/init.d/rsyslog restart
Shutting down system logger: [  OK  ]
Starting system logger: [  OK  ]
[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"severity" => 6,
"program" => "kernel",
"message" => "imklog 5.8.10, log source = /proc/kmsg started.\n",
"type" => "syslog",
"priority" => 6,
"logsource" => "node3",
"@timestamp" => 2017-09-21T07:28:55.000Z,
"@version" => "1",
"host" => "192.168.44.136",
"facility" => 0,
"severity_label" => "Informational",
"timestamp" => "Sep 21 15:28:55",
"facility_label" => "kernel"
}

Next, collect data with the tcp input plugin:

[root@node3 conf.d]# cat tcp.conf
input {
  tcp {
    host => ["192.168.44.136"]
    port => "6666"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@node3 conf.d]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6948/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1301/sshd
tcp 0 0 192.168.44.136:5601 0.0.0.0:* LISTEN 7451/node
tcp 0 0 :::80 :::* LISTEN 6948/nginx
tcp 0 0 :::22 :::* LISTEN 1301/sshd
tcp 0 0 ::ffff:127.0.0.1:9600 :::* LISTEN 7867/java
tcp 0 0 ::ffff:192.168.44.136:6666 :::* LISTEN 7867/java

Port 6666 is open; now test it:

[root@node3 ~]# nc 192.168.44.136 6666 < /etc/resolv.conf
[root@node3 ~]# nc 192.168.44.136 6666 < /etc/resolv.conf
[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f tcp.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"@version" => "1",
"host" => "192.168.44.136",
"@timestamp" => 2017-09-21T07:44:25.087Z,
"message" => "; generated by /sbin/dhclient-script",
"port" => 54117
}
{
"@version" => "1",
"host" => "192.168.44.136",
"@timestamp" => 2017-09-21T07:44:25.090Z,
"message" => "search localdomain",
"port" => 54117
}

The next post will cover shipping logs into Redis with Logstash, then pulling them out of Redis with Logstash into ES, and displaying them with Kibana.
