The previous two sections completed the Elasticsearch and Kibana setup; the only piece left is shipping logs into the stack with Logstash.

Run the following commands in order:

cd /home/elk
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.tar.gz
tar zxvf logstash-6.2.4.tar.gz
mv logstash-6.2.4 logstash
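
To confirm the extraction worked, you can ask the Logstash binary for its version before going any further. This check is optional and assumes the default directory layout of the tarball:

cd /home/elk/logstash
./bin/logstash --version    # should print logstash 6.2.4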

Create the configuration file:

cd logstash
cd data
vi logstash-simple.conf
The pipeline listens on port 8082:

input {
  stdin { }
  tcp {
    port => 8082
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "myfistlog"
  }
  stdout { codec => rubydebug }
}
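
Once this pipeline is running, you can send a quick test event to the TCP input with netcat; the json_lines codec expects one JSON object per line. This is just a hypothetical smoke test, assuming the nc utility is installed on the host:

# send a single JSON event to the tcp input on port 8082
echo '{"message":"hello from nc","level":"INFO"}' | nc 127.0.0.1 8082

If the pipeline is wired correctly, the event should show up on the Logstash console through the rubydebug stdout output and land in the myfistlog index.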

If Logstash is used together with Filebeat, configure it as follows:

input {
  beats {
    port => 8082
  }
}
filter {
  if [fields][logtype] == "claimzuul" {
    json {
      source => "message"
      target => "data"
    }
  }
}
output {
  if [fields][logtype] == "claimzuul" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "claimzuul-%{+YYYY.MM.dd}"
    }
  }
}
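
On the application host, the Filebeat side needs to tag events with the fields.logtype value that the filter and output conditionals above check, and point its output at the beats port. A minimal filebeat.yml sketch for Filebeat 6.x might look like the following; the log path is an assumption and should be replaced with your actual log file:

filebeat.prospectors:
- type: log
  paths:
    - /home/app/logs/claimzuul.log    # assumed log path, adjust to your service
  fields:
    logtype: claimzuul                # must match the [fields][logtype] conditionals in the Logstash config

output.logstash:
  hosts: ["127.0.0.1:8082"]           # the beats input port configured in Logstash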

Start Logstash:

[root@insure bin]# ./logstash -f /home/elk/logstash/data/logstash-simple.conf &
Sending Logstash's logs to /home/elk/logstash/logs which is now configured via log4j2.properties
[--27T15::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/elk/logstash/modules/netflow/configuration"}
[--27T15::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/elk/logstash/modules/fb_apache/configuration"}
[--27T15::,][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/home/elk/logstash/data/queue"}
[--27T15::,][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/home/elk/logstash/data/dead_letter_queue"}
[--27T15::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[--27T15::,][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"d4f319cf-97a7-4f90-83c6-8e62e8205fa5", :path=>"/home/elk/logstash/data/uuid"}
[--27T15::,][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[--27T15::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
[--27T15::,][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[--27T15::,][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>}
[--27T15::,][WARN ][logstash.outputs.elasticsearch] Detected a .x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[--27T15::,][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[--27T15::,][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[--27T15::,][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x18db84e4 run>"}
The stdin plugin is now waiting for input:
[--27T15::,][INFO ][logstash.agent ] Pipelines running {:count=>, :pipelines=>["main"]}
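
After the pipeline reports "Pipelines running", you can check from another shell that events are reaching Elasticsearch by listing its indices. This is a generic Elasticsearch cat API call, not specific to this setup:

# myfistlog (or claimzuul-YYYY.MM.dd when using Filebeat) should appear once events arrive
curl 'http://127.0.0.1:9200/_cat/indices?v'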
