Let's take a look at the shipper configuration on the agent side:

# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        #codec => rubydebug
        codec => json
    }
}
# The point of this test is to see what the output looks like with the json codec.

First, run a quick syntax check on the shipper config we just created:

# ./../bin/logstash -f logstash_test2.shipper.conf -t
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
Configuration OK
[--08T18::,][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
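
Since the config will be edited and Logstash restarted several times below, note that Logstash can also watch the config file and reload the pipeline automatically when it changes (optional, and not used in this test). A sketch of the invocation, using the config.reload.automatic flag from Logstash 5.x:

# ./../bin/logstash -f logstash_test2.shipper.conf --config.reload.automatic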

No errors are reported, so now start Logstash with the configuration file we just tested:

[root@Appsrv130 conf]# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[--08T18::,][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[--08T18::,][INFO ][logstash.pipeline ] Pipeline main started
[--08T18::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.102Z","@version":"","host":"ofs1","message":"haha------>","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.113Z","@version":"","host":"ofs1","message":"haha------>2","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.118Z","@version":"","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.121Z","@version":"","host":"ofs1","message":"haha------>3","tags":[]}

Now look at the contents of the log file being monitored:

# cat test/test2_log.txt
haha------>
haha------>2
haha------>3
haha------>3

Notice that when this shipper starts, it reads the monitored log file from beginning to end (this behaviour is itself something set in the configuration file).

Take another look at that configuration file:

# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        #codec => rubydebug
        codec => json
    }
}
# The key is the line sincedb_path => "/dev/null". This parameter specifies the sincedb file, but if we point it at /dev/null, the special null device on Linux, then every time the Logstash process restarts and tries to read the sincedb contents it reads nothing, which is treated as if there were no record of a previous run, so it naturally starts reading from the initial position again. Combined with start_position => "beginning", this means the whole file is re-read on every restart.
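
For comparison, pointing sincedb_path at a real, writable file makes the read position survive restarts, so old lines are not re-read. A sketch, where /apps/logstash/data/test2.sincedb is just a hypothetical path chosen for this example:

input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        # only consulted for files that have no sincedb entry yet
        start_position => "beginning"
        # a real sincedb file: the byte offset read so far is persisted here,
        # so a restart resumes where the previous run stopped
        sincedb_path => "/apps/logstash/data/test2.sincedb"
    }
}

Each sincedb entry records the file's inode, device numbers and the current byte offset, which is how Logstash knows where to resume; start_position only applies to files that have no entry in the sincedb yet.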

Now write some new content into the monitored file and see what changes:

# echo "查看json格式是什么输出-------》">>test/test2_log.txt 

Restart Logstash and check the output again. All of the old lines are re-read, with the newly written line at the end:

[root@Appsrv130 conf]# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[--08T18::,][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[--08T18::,][INFO ][logstash.pipeline ] Pipeline main started
[--08T18::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.102Z","@version":"","host":"ofs1","message":"haha------>","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.113Z","@version":"","host":"ofs1","message":"haha------>2","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.118Z","@version":"","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.121Z","@version":"","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T11:17:45.060Z","@version":"","host":"ofs1","message":"查看json格式是什么输出-------》","tags":[]}

Modify the configuration file to use the rubydebug codec:

# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        codec => rubydebug # see what the output looks like in this format
        #codec => json
    }
}
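
rubydebug pretty-prints each event as a Ruby-style hash, which is much easier to scan while debugging than the raw JSON above. If both views are wanted at the same time, the output section can simply declare two stdout outputs side by side; a small sketch:

output {
    # human-readable, pretty-printed view
    stdout { codec => rubydebug }
    # compact, machine-readable view (one JSON document per line)
    stdout { codec => json_lines }
}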

Append another line, then start Logstash again and check the output (once again the whole file is re-read from the beginning):

# echo "查看rubydebug格式是什么输出-------》">>test/test2_log.txt 
# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[--08T19::,][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[--08T19::,][INFO ][logstash.pipeline ] Pipeline main started
[--08T19::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.290Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "haha------>",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.299Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "haha------>2",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.301Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "haha------>3",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.302Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "haha------>3",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.303Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "查看json格式是什么输出-------》",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --08T11::.415Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "查看rubydebug格式是什么输出-------》",
          "tags" => []
}
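
Besides json and rubydebug, stdout also accepts the line codec, which renders each event through a simple format string built from event fields; a sketch (the field layout here is just an example):

output {
    stdout {
        # one line per event, built from selected event fields
        codec => line { format => "%{host} %{path}: %{message}" }
    }
}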

Now remove (comment out) those two parameters and see the effect:

# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        #start_position => "beginning"
        #sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        codec => rubydebug
        #codec => json
    }
}
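
With those two options commented out, the file input falls back to its defaults: as far as I know, start_position defaults to "end" (only lines appended after startup are read) and the sincedb is kept in a real file whose location depends on the Logstash version, so read offsets now survive restarts. In other words, the config above behaves roughly like this sketch with the defaults written out:

input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        # the effective defaults, spelled out:
        start_position => "end"   # start tailing at the end of the file
        # sincedb_path is left unset, so Logstash manages its own sincedb file
    }
}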

Start the shipper again (we will append to the file from another shell). Note that this time none of the existing lines are read at startup:

# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[--09T13::,][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[--09T13::,][INFO ][logstash.pipeline ] Pipeline main started
[--09T13::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}

From the other shell, write some data first:

echo '去掉参数start_position => "beginning" sincedb_path => "/dev/null"' >>test/test2_log.txt 

Now look at the result: only the newly appended line is emitted, the earlier contents are not re-read:

# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[--09T13::,][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[--09T13::,][INFO ][logstash.pipeline ] Pipeline main started
[--09T13::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => --09T05::.155Z,
      "@version" => "",
          "host" => "ofs1",
       "message" => "去掉参数start_position => \"beginning\" sincedb_path => \"/dev/null\"",
          "tags" => []
}
