logstash

Logstash is a data collection engine organized as a pipeline of three stages: input -> filter -> output. Events flow through these stages in order, and the pipeline supports fairly complex actions, such as sending email.

input: configures where data comes in and applies simple transformations
filter: configures data extraction; grok is the usual choice
output: configures where data goes out and applies simple transformations

Run it with: logstash -f /etc/logstash.conf

-f specifies the configuration file
-e takes the configuration as a string on the command line, which is handy for quick console tests, e.g. logstash -e 'input { stdin {} } output { stdout {} }'

For full configuration details, see the official site:
https://www.elastic.co/products/logstash ("Centralize, Transform & Stash Your Data")
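
To make the three stages concrete, here is a minimal sketch of a configuration file; the log path and the grok pattern are hypothetical placeholders, not taken from any real setup:

```
# Minimal pipeline sketch: tail a log file, extract fields with grok,
# and print each parsed event to the console.
input {
  file {
    path => "/var/log/app/app.log"    # hypothetical log file
    start_position => "beginning"     # read existing content on first run
  }
}

filter {
  grok {
    # assumes lines like: "2016-01-01T12:00:00 INFO something happened"
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  stdout { codec => rubydebug }       # pretty-print events for debugging
}
```

Saved as /etc/logstash.conf, this runs with the -f flag shown above.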
input

Elastic supported plugins

These plugins are maintained and supported by Elastic.

| Plugin | Description |
| --- | --- |
| beats | Receives events from the Elastic Beats framework |
| couchdb_changes | Streams events from CouchDB's _changes URI |
| elasticsearch | Reads query results from an Elasticsearch cluster |
| file | Streams events from files |
| gelf | Reads GELF-format messages from Graylog2 as events |
| generator | Generates random log events for test purposes |
| graphite | Reads metrics from the graphite tool |
| heartbeat | Generates heartbeat events for testing |
| http | Receives events over HTTP or HTTPS |
| http_poller | Decodes the output of an HTTP API into events |
| jdbc | Creates events from JDBC data |
| kafka | Reads events from a Kafka topic |
| log4j | Reads events over a TCP socket from a Log4j SocketAppender object |
| lumberjack | Receives events using the Lumberjack protocol |
| rabbitmq | Pulls events from a RabbitMQ exchange |
| redis | Reads events from a Redis instance |
| s3 | Streams events from files in a S3 bucket |
| sqs | Pulls events from an Amazon Web Services Simple Queue Service queue |
| stdin | Reads events from standard input |
| syslog | Reads syslog messages as events |
| tcp | Reads events from a TCP socket |
| twitter | Reads events from the Twitter Streaming API |
| udp | Reads events over UDP |
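
For example, a minimal sketch of the beats input from the table above; the port number is just the conventional Beats default, not taken from any particular deployment:

```
input {
  beats {
    port => 5044    # port that Filebeat and other Beats ship events to
  }
}
```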
Community supported plugins

These plugins are maintained and supported by the community. These plugins have met the Logstash development & testing criteria for integration. Contributors include Community Maintainers, the Logstash core team at Elastic, and the broader community.

| Plugin | Description |
| --- | --- |
| cloudwatch | Pulls events from the Amazon Web Services CloudWatch API |
| drupal_dblog | Retrieves watchdog log events from Drupal installations with DBLog enabled |
| eventlog | Pulls events from the Windows Event Log |
| exec | Captures the output of a shell command as an event |
| ganglia | Reads Ganglia packets over UDP |
| gemfire | Pushes events to a GemFire region |
| github | Reads events from a GitHub webhook |
| heroku | Streams events from the logs of a Heroku app |
| imap | Reads mail from an IMAP server |
| irc | Reads events from an IRC server |
| jmx | Retrieves metrics from remote Java applications over JMX |
| kinesis | Receives events through an AWS Kinesis stream |
| meetup | Captures the output of command line tools as an event |
| pipe | Streams events from a long-running command pipe |
| puppet_facter | Receives facts from a Puppet server |
| rackspace | Receives events from a Rackspace Cloud Queue service |
| relp | Receives RELP events over a TCP socket |
| rss | Captures the output of command line tools as an event |
| salesforce | Creates events based on a Salesforce SOQL query |
| snmptrap | Creates events based on SNMP trap messages |
| sqlite | Creates events based on rows in an SQLite database |
| stomp | Creates events received with the STOMP protocol |
| unix | Reads events over a UNIX socket |
| varnishlog | Reads from the varnish cache shared memory log |
| websocket | Reads events from a websocket |
| wmi | Creates events based on the results of a WMI query |
| xmpp | Receives events over the XMPP/Jabber protocol |
| zenoss | Reads Zenoss events from the fanout exchange |
| zeromq | Reads events from a ZeroMQ SUB socket |
filter

Elastic supported plugins

These plugins are maintained and supported by Elastic.

| Plugin | Description |
| --- | --- |
| aggregate | Aggregates information from several events originating with a single task |
| anonymize | Replaces field values with a consistent hash |
| csv | Parses comma-separated value data into individual fields |
| date | Parses dates from fields to use as the Logstash timestamp for an event |
| de_dot | Computationally expensive filter that removes dots from a field name |
| dissect | Extracts unstructured event data into fields using delimiters |
| dns | Performs a standard or reverse DNS lookup |
| drop | Drops all events |
| fingerprint | Fingerprints fields by replacing values with a consistent hash |
| geoip | Adds geographical information about an IP address |
| grok | Parses unstructured event data into fields |
| json | Parses JSON events |
| kv | Parses key-value pairs |
| multiline | Merges multiple lines into a single event |
| mutate | Performs mutations on fields |
| ruby | Executes arbitrary Ruby code |
| sleep | Sleeps for a specified time span |
| split | Splits multi-line messages into distinct events |
| syslog_pri | Parses the PRI (priority) field of a syslog message |
| throttle | Throttles the number of events |
| translate | Replaces field contents based on a hash or YAML file |
| urldecode | Decodes URL-encoded fields |
| useragent | Parses user agent strings into fields |
| uuid | Adds a UUID to events |
| xml | Parses XML into fields |
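
To illustrate the two filters most often combined, grok and date, here is a hedged sketch; it assumes Apache combined-format access logs, which is what the COMBINEDAPACHELOG pattern expects:

```
filter {
  grok {
    # parse an Apache combined-format access log line into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # promote the request time from the log line to the event's @timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```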
Community supported plugins

These plugins are maintained and supported by the community. These plugins have met the Logstash development & testing criteria for integration. Contributors include Community Maintainers, the Logstash core team at Elastic, and the broader community.

| Plugin | Description |
| --- | --- |
| alter | Performs general alterations to fields that the mutate filter does not handle |
| cidr | Checks IP addresses against a list of network blocks |
| cipher | Applies or removes a cipher to an event |
| clone | Duplicates events |
| collate | Collates events by time or count |
| elapsed | Calculates the elapsed time between a pair of events |
| elasticsearch | Copies fields from previous log events in Elasticsearch to current events |
| environment | Stores environment variables as metadata sub-fields |
| extractnumbers | Extracts numbers from a string |
| i18n | Removes special characters from a field |
| json_encode | Serializes a field to JSON |
| metaevent | Adds arbitrary fields to an event |
| metricize | Takes complex events containing a number of metrics and splits these up into multiple events, each holding a single metric |
| metrics | Aggregates metrics |
| oui | Parses OUI data from MAC addresses |
| prune | Prunes event data based on a list of fields to blacklist or whitelist |
| punct | Strips all non-punctuation content from a field |
| range | Checks that specified fields stay within given size or length limits |
| tld | Replaces the contents of the default message field with whatever you specify in the configuration |
| yaml | Takes an existing field that contains YAML and expands it into an actual data structure within the Logstash event |
| zeromq | Sends an event to ZeroMQ |
output

Elastic supported plugins

These plugins are maintained and supported by Elastic.

| Plugin | Description |
| --- | --- |
| csv | Writes events to disk in a delimited format |
| elasticsearch | Stores logs in Elasticsearch |
| email | Sends email to a specified address when output is received |
| file | Writes events to files on disk |
| graphite | Writes metrics to Graphite |
| http | Sends events to a generic HTTP or HTTPS endpoint |
| kafka | Writes events to a Kafka topic |
| lumberjack | Sends events using the lumberjack protocol |
| rabbitmq | Pushes events to a RabbitMQ exchange |
| redis | Sends events to a Redis queue using the RPUSH command |
| s3 | Sends Logstash events to the Amazon Simple Storage Service |
| stdout | Prints events to the standard output |
| tcp | Writes events over a TCP socket |
| udp | Sends events over UDP |
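
For example, a sketch of the elasticsearch output from the table above; the host and the daily index pattern are assumptions for a local single-node setup:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]           # assumed local Elasticsearch node
    index => "logstash-%{+YYYY.MM.dd}"    # one index per day, the conventional default
  }
}
```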
Community supported plugins

These plugins are maintained and supported by the community. These plugins have met the Logstash development & testing criteria for integration. Contributors include Community Maintainers, the Logstash core team at Elastic, and the broader community.

| Plugin | Description |
| --- | --- |
| boundary | Sends annotations to Boundary based on Logstash events |
| circonus | Sends annotations to Circonus based on Logstash events |
| cloudwatch | Aggregates and sends metric data to AWS CloudWatch |
| datadog | Sends events to DataDogHQ based on Logstash events |
| datadog_metrics | Sends metrics to DataDogHQ based on Logstash events |
| elasticsearch_java | Stores logs in Elasticsearch using the Elasticsearch Java API |
| exec | Runs a command for a matching event |
| ganglia | Writes metrics to Ganglia's gmond |
| gelf | Generates GELF formatted output for Graylog2 |
| google_bigquery | Writes events to Google BigQuery |
| google_cloud_storage | Writes events to Google Cloud Storage |
| graphtastic | Sends metric data on Windows |
| hipchat | Writes events to HipChat |
| influxdb | Writes metrics to InfluxDB |
| irc | Writes events to IRC |
| jira | Writes structured JSON events to JIRA |
| juggernaut | Pushes messages to the Juggernaut websockets server |
| librato | Sends metrics, annotations, and alerts to Librato based on Logstash events |
| loggly | Ships logs to Loggly |
| metriccatcher | Writes metrics to MetricCatcher |
| mongodb | Writes events to MongoDB |
| nagios | Sends passive check results to Nagios |
| nagios_nsca | Sends passive check results to Nagios using the NSCA protocol |
| newrelic | Sends Logstash events to New Relic Insights as custom events |
| opentsdb | Writes metrics to OpenTSDB |
| pagerduty | Sends notifications based on preconfigured services and escalation policies |
| pipe | Pipes events to another program's standard input |
| rackspace | Sends events to a Rackspace Cloud Queue service |
| redmine | Creates tickets using the Redmine API |
| riak | Writes events to the Riak distributed key/value store |
| riemann | Sends metrics to Riemann |
| sns | Sends events to Amazon's Simple Notification Service |
| solr_http | Stores and indexes logs in Solr |
| sqs | Pushes events to an Amazon Web Services Simple Queue Service queue |
| statsd | Sends metrics using the statsd network daemon |
| stomp | Writes events using the STOMP protocol |
| syslog | Sends events to a syslog server |
| webhdfs | Sends Logstash events to HDFS using the webhdfs REST API |
| websocket | Publishes messages to a websocket |
| xmpp | Posts events over XMPP |
| zabbix | Sends events to a Zabbix server |
| zeromq | Writes events to a ZeroMQ PUB socket |