ELKF (Elasticsearch, Logstash, Kibana, Filebeat): setup and nginx log collection
1、elasticsearch
1.1、Create a data directory under the installation root
1.2、Edit elasticsearch.yml and add the following
path.data: /home/wwq/elk/elasticsearch-8.13.4/data
path.logs: /home/wwq/elk/elasticsearch-8.13.4/logs
1.3、Edit jvm.options and add the following heap settings (a custom file under config/jvm.options.d/ also works)
-Xms2g
-Xmx2g
1.4、Start
bin/elasticsearch       # start in the foreground
bin/elasticsearch -d    # start in the background
1.5、Find the initial password in the startup log
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Elasticsearch security features have been automatically configured!
Authentication is enabled and cluster connections are encrypted.

Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
qP3wO0GZ+pdomIp_ShHL

HTTP CA certificate SHA-256 fingerprint:
98aef4ebc491b232a4c1cbf1cbfe7b73e1e4ebb8567caa174097f5c69f2b41fd

Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
eyJ2ZXIiOiI4LjEzLjQiLCJhZHIiOlsiMTk5Ljk5LjAuMTo5MjAwIl0sImZnciI6Ijk4YWVmNGViYzQ5MWIyMzJhNGMxY2JmMWNiZmU3YjczZTFlNGViYjg1NjdjYWExNzQwOTdmNWM2OWYyYjQxZmQiLCJrZXkiOiJmcF9ES3BBQlR6c3lRM0RMSU4teDoxclFRLXZraFREYUdYZmNiN2pQbXBBIn0=

Configure other nodes to join this cluster:
• On this node:
⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
⁃ Restart Elasticsearch.
• On other nodes:
⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
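With the initial password in hand, the node can be checked over HTTPS. A minimal sketch, assuming the auto-generated CA certificate at its default location and the password printed above:
curl --cacert config/certs/http_ca.crt -u elastic:qP3wO0GZ+pdomIp_ShHL https://localhost:9200
A JSON document with the cluster name and version number confirms the node is up.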
1.6、Change the password
# Reset the password interactively (omit -i to have a random password generated and printed)
bin/elasticsearch-reset-password --username elastic -i
1.7、启动脚本
#!/bin/bash
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch
# ES_HOME is the directory where Elasticsearch is installed
export ES_HOME=/home/wwq/elk/elasticsearch-8.13.4
case $1 in
start)
cd $ES_HOME
./bin/elasticsearch -d -p pid
echo "elasticsearch is started"
;;
stop)
pid=`cat $ES_HOME/pid`
kill $pid    # SIGTERM lets Elasticsearch shut down cleanly; avoid kill -9
echo "elasticsearch is stopped"
;;
restart)
pid=`cat $ES_HOME/pid`
kill $pid
echo "elasticsearch is stopped"
sleep 1
cd $ES_HOME
./bin/elasticsearch -d -p pid
echo "elasticsearch is started"
;;
*)
echo "start|stop|restart"
;;
esac
exit 0
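The #chkconfig header hints at a SysV-style install. A sketch, assuming the script above was saved as es.sh (the file name is illustrative) on a distro that still ships chkconfig:
cp es.sh /etc/init.d/elasticsearch
chmod +x /etc/init.d/elasticsearch
chkconfig --add elasticsearch    # registers the runlevels from the #chkconfig header
service elasticsearch start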
2、kibana
2.1、Edit kibana.yml
server.port: 5601         # port Kibana listens on, reachable from a browser
server.host: "0.0.0.0"    # listen address; 0.0.0.0 binds all local interfaces
i18n.locale: "zh-CN"
2.2、Start
# start in the foreground
./kibana
# start in the background
nohup ./kibana &
2.3、Open it in a browser and complete the setup
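Before switching to the browser, Kibana's readiness can be checked from the shell; a sketch assuming the port configured above:
curl http://localhost:5601/api/status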
2.4、Enable start on boot
vim /lib/systemd/system/kibana.service
[Unit]
Description=kibana
After=network.target
[Service]
Type=simple
User=tomcat
ExecStart=/home/tomcat/elk/kibana-8.13.4/bin/kibana
PrivateTmp=true
[Install]
WantedBy=multi-user.target
systemctl daemon-reload       # pick up the new unit file
systemctl enable kibana       # start at boot
systemctl start kibana        # start
systemctl stop kibana         # stop
systemctl restart kibana      # restart
3、logstash
3.1、Retrieve the ES HTTP keystore password
[tomcat@wwq bin]$ ./elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64/jre; using bundled JDK
04nkv5WkSLuB854KnE-Kxg
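If the exact setting name is not known up front, the keystore entries can be listed first with the same tool:
./elasticsearch-keystore list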
3.2、Configure logstash
vim /home/wwq/elk/logstash-8.13.4/config/logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://192.168.1.223:9200"]
    index => "%{[fields][logcategory]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
    ssl_certificate_verification => true
    truststore => "/home/tomcat/elk/elasticsearch-8.13.4/config/certs/http.p12"
    truststore_password => "04nkv5WkSLuB854KnE-Kxg"
  }
}
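Before wiring Logstash into systemd, the pipeline file can be syntax-checked without starting it, using Logstash's built-in config test flag:
bin/logstash -f config/logstash-sample.conf --config.test_and_exit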
3.3、Start on boot
vim /lib/systemd/system/logstash.service
[Unit]
Description=logstash
After=network.target
[Service]
User=tomcat
ExecStart=/home/tomcat/elk/logstash-8.13.4/bin/logstash -f /home/tomcat/elk/logstash-8.13.4/config/logstash-sample.conf
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload        # pick up the new unit file
systemctl enable logstash      # start at boot
systemctl start logstash       # start
systemctl stop logstash        # stop
systemctl restart logstash     # restart
4、filebeat
4.1、nginx JSON log format
log_format log_json '{"@timestamp":"$time_iso8601",'
'"server_addr":"$server_addr",'
'"server_name":"$server_name",'
'"server_port":"$server_port",'
'"server_protocol":"$server_protocol",'
'"client_ip":"$remote_addr",'
'"client_user":"$remote_user",'
'"status":"$status",'
'"request_method": "$request_method",'
'"request_length":"$request_length",'
'"request_time":"$request_time",'
'"request_url":"$request_uri",'
'"request_line":"$request",'
'"send_client_size":"$bytes_sent",'
'"send_client_body_size":"$body_bytes_sent",'
'"proxy_protocol_addr":"$proxy_protocol_addr",'
'"proxy_add_x_forward":"$proxy_add_x_forwarded_for",'
'"proxy_port":"$proxy_port",'
'"proxy_host":"$proxy_host",'
'"upstream_host":"$upstream_addr",'
'"upstream_status":"$upstream_status",'
'"upstream_cache_status":"$upstream_cache_status",'
'"upstream_connect_time":"$upstream_connect_time",'
'"upstream_response_time":"$upstream_response_time",'
'"upstream_header_time":"$upstream_header_time",'
'"upstream_cookie_name":"$upstream_cookie_name",'
'"upstream_response_length":"$upstream_response_length",'
'"upstream_bytes_received":"$upstream_bytes_received",'
'"upstream_bytes_sent":"$upstream_bytes_sent",'
'"http_host":"$host",'
'"http_cookie":"$http_cooke",'
'"http_user_agent":"$http_user_agent",'
'"http_origin":"$http_origin",'
'"http_upgrade":"$http_upgrade",'
'"http_referer":"$http_referer",'
'"http_x_forward":"$http_x_forwarded_for",'
'"http_x_forwarded_proto":"$http_x_forwarded_proto",'
'"https":"$https",'
'"http_scheme":"$scheme",'
'"invalid_referer":"$invalid_referer",'
'"gzip_ratio":"$gzip_ratio",'
'"realpath_root":"$realpath_root",'
'"document_root":"$document_root",'
'"is_args":"$is_args",'
'"args":"$args",'
'"connection_requests":"$connection_requests",'
'"connection_number":"$connection",'
'"ssl_protocol":"$ssl_protocol",'
'"ssl_cipher":"$ssl_cipher"}';
access_log logs/access_json.log log_json;
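After adding the format, validate and reload nginx, then confirm the entries really are one-line JSON. A sketch, assuming python3 is available on the host:
nginx -t            # validate the configuration
nginx -s reload     # apply it without downtime
tail -n 1 logs/access_json.log | python3 -m json.tool    # should pretty-print one record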
4.2、Configure filebeat.yml
vim /home/wwq/elk/filebeat-8.13.4/filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
# NOTE: the `filebeat.inputs` key may appear only once per file, so the stock
# filestream example below is left commented out; the active nginx input is
# defined further down in this file.
#filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
# filestream is an input for collecting log messages from files.
#- type: filestream
  # Unique ID among all inputs, an ID is required.
  #id: my-filestream-id
  # Change to true to enable this input configuration.
  #enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # nginx JSON access log
    - /home/wwq/nginx-1.25.5/logs/access_json.log
  fields:
    # referenced as %{[fields][logcategory]} in the Logstash output's index name
    logcategory: nginx
  json:
    keys_under_root: true
    overwrite_keys: true
    message_key: "message"
    add_error_key: true

output.logstash:
  hosts: ["192.168.1.200:5044"]
#
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
4.3、Start
./filebeat -e -c filebeat.yml
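Filebeat also ships self-test subcommands that catch YAML mistakes (such as a duplicated filebeat.inputs key) and probe the Logstash endpoint before the service goes live:
./filebeat test config -c filebeat.yml    # parse and validate the configuration
./filebeat test output -c filebeat.yml    # check connectivity to 192.168.1.200:5044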
4.4、Start on boot
vim /lib/systemd/system/filebeat.service
[Unit]
Description=filebeat
Wants=network-online.target
After=network-online.target
[Service]
User=tomcat
ExecStart=/home/tomcat/elk/filebeat-8.13.4/filebeat -e -c /home/tomcat/elk/filebeat-8.13.4/filebeat.yml
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload        # pick up the new unit file
systemctl enable filebeat      # start at boot
systemctl start filebeat       # start
systemctl stop filebeat        # stop
systemctl restart filebeat     # restart
5、Open Kibana in a browser and configure it
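In Kibana, create a data view whose pattern matches the indices written by Logstash (e.g. nginx-*) so the logs appear in Discover; this can be done under Stack Management → Data Views, or via the API. A sketch, assuming Kibana on localhost:5601 and the elastic password set earlier:
curl -u elastic:123456 -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "nginx-*", "timeFieldName": "@timestamp"}}'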
