ELK Log Analysis System: Installing and Configuring Logstash 7.x (Latest Version)
2. Introduction to Logstash
2.1 What Logstash is
Logstash is written in JRuby, uses a simple message-based architecture, and runs on the Java Virtual Machine (JVM). Rather than shipping separate agent and server programs, Logstash can be configured as a single agent and combined with other open-source software to provide different functions.
2.2 The four Logstash components
Shipper: sends events to Logstash; usually a remote agent only needs to run this component.
Broker and Indexer: receives and indexes events.
Search and Storage: allows events to be searched and stored.
Web Interface: a web-based display interface.
Because these components can be deployed independently within the Logstash architecture, the cluster scales well.
2.3 Package download: https://www.elastic.co/cn/downloads/logstash
2.4 Copy the downloaded tar archive to the /application/ directory and create the symlink /application/logstash.
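For example, the deployment steps might look like the following. This is only a sketch: the exact tarball name (here assumed to be logstash-7.3.2.tar.gz, matching the 7.3.2 version shown in the logs below) depends on what was downloaded.
mkdir -p /application
tar -xzf logstash-7.3.2.tar.gz -C /application/            # extract into /application/logstash-7.3.2
ln -s /application/logstash-7.3.2 /application/logstash    # symlink used throughout this article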

2.5 Learning Logstash step by step
2.5.1 Start a Logstash instance. -e: run the configuration given on the command line; input is the input section, and stdin (standard input) is an input plugin; output is the output section, and stdout is standard output. By default the output is formatted with the rubydebug codec, which prints the event in detail; a codec is an encoder/decoder.
[root@harlan_ansible ~]# /application/logstash/bin/logstash -e 'input {stdin{}} output {stdout{}}'
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/application/logstash-7.3.2/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /application/logstash/logs which is now configured via log4j2.properties
[--27T21::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[--27T21::,][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.2"}
[--27T21::,][INFO ][org.reflections.Reflections] Reflections took ms to scan urls, producing keys and values
[--27T21::,][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[--27T21::,][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>, :thread=>"#<Thread:0x1b23cd0d run>"}
[--27T21::,][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[--27T21::,][INFO ][logstash.agent ] Pipelines running {:count=>, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[--27T21::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
hello word # Type a string manually; the structured event is then printed to the screen.
/application/logstash-7.3.2/vendor/bundle/jruby/2.5./gems/awesome_print-1.7./lib/awesome_print/formatters/base_formatter.rb:: warning: constant ::Fixnum is deprecated
{
"message" => "hello word",
"@version" => "",
"@timestamp" => --27T13::.241Z,
"host" => "harlan_ansible"
}
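The detailed event shown above is produced by the default rubydebug codec; the same codec can also be spelled out explicitly. A minimal sketch, equivalent to the command above:
/application/logstash/bin/logstash -e 'input { stdin{} } output { stdout { codec => rubydebug } }'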
2.5.2 Send strings typed on the console to the Elasticsearch service
[root@harlan_ansible ~]# /application/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["127.0.0.1:9200"] } }'
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/application/logstash-7.3.2/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /application/logstash/logs which is now configured via log4j2.properties
[--27T21::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[--27T21::,][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.2"}
[--27T21::,][INFO ][org.reflections.Reflections] Reflections took ms to scan urls, producing keys and values
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[--27T21::,][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[--27T21::,][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>}
[--27T21::,][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[--27T21::,][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[--27T21::,][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[--27T21::,][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>, :thread=>"#<Thread:0x228d3610 run>"}
[--27T21::,][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
The stdin plugin is now waiting for input:
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[--27T21::,][INFO ][logstash.agent ] Pipelines running {:count=>, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[--27T21::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Creating rollover alias <logstash-{now/d}->
[--27T21::,][INFO ][logstash.outputs.elasticsearch] Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} to _ilm/policy/logstash-policy
hello # Type a string manually.
Verify in a browser at: http://10.0.0.169:9200/_search?pretty
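The same check can also be done with curl from the shell. A minimal sketch, assuming the events were written to the default logstash-* index pattern on the node 10.0.0.169:
curl -s 'http://10.0.0.169:9200/_cat/indices?v'                              # a logstash-* index should be listed
curl -s 'http://10.0.0.169:9200/logstash-*/_search?pretty&q=message:hello'   # search for the event typed above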

Congratulations: at this point you have successfully used Elasticsearch and Logstash to collect log data.
2.6 A conf file for collecting system logs
Place the conf file in the /application/logstash/bin/ directory, with the following contents:
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/application/es/to/logs/elasticsearch.log"
    type => "es-error"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["10.0.0.169:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["10.0.0.169:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}
Run the following command to start the Logstash service:
/application/logstash/bin/logstash -f logstash.conf
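Optionally, the configuration can be syntax-checked first and the service left running in the background. A sketch, assuming the file is saved as logstash.conf in the current directory (the log path /tmp/logstash-run.log is only an illustration):
/application/logstash/bin/logstash -f logstash.conf --config.test_and_exit             # validate the config and exit
nohup /application/logstash/bin/logstash -f logstash.conf &> /tmp/logstash-run.log &   # start in the background
curl -s 'http://10.0.0.169:9200/_cat/indices?v' | grep -E 'system-|es-error-'          # confirm the new indices appear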
一:django默认数据库的配置 DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path. ...