Setting up ELKF (Elasticsearch, Logstash, Kibana, Filebeat) and collecting nginx logs
1. Elasticsearch
1.1 Create a data directory under the Elasticsearch installation root
1.2 Edit elasticsearch.yml and add the following:
path.data: /home/wwq/elk/elasticsearch-8.13.4/data
path.logs: /home/wwq/elk/elasticsearch-8.13.4/logs
1.3 Edit jvm.options and set the JVM heap (keep -Xms and -Xmx equal; Elastic recommends at most half of the machine's RAM):
-Xms2g
-Xmx2g
1.4 Start Elasticsearch
bin/elasticsearch        # foreground
bin/elasticsearch -d     # background (daemon)
1.5 Find the initial password in the startup log
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Elasticsearch security features have been automatically configured!
Authentication is enabled and cluster connections are encrypted.

Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  qP3wO0GZ+pdomIp_ShHL

HTTP CA certificate SHA-256 fingerprint:
  98aef4ebc491b232a4c1cbf1cbfe7b73e1e4ebb8567caa174097f5c69f2b41fd

Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEzLjQiLCJhZHIiOlsiMTk5Ljk5LjAuMTo5MjAwIl0sImZnciI6Ijk4YWVmNGViYzQ5MWIyMzJhNGMxY2JmMWNiZmU3YjczZTFlNGViYjg1NjdjYWExNzQwOTdmNWM2OWYyYjQxZmQiLCJrZXkiOiJmcF9ES3BBQlR6c3lRM0RMSU4teDoxclFRLXZraFREYUdYZmNiN2pQbXBBIn0=

Configure other nodes to join this cluster:
• On this node:
⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
⁃ Restart Elasticsearch.
• On other nodes:
⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
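The CA fingerprint printed above can be re-derived at any time from the generated CA certificate, which is handy when configuring clients later. This assumes the default 8.x layout and is run from the Elasticsearch directory:

openssl x509 -fingerprint -sha256 -noout -in config/certs/http_ca.crt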
1.6 Change the password
# Reset the elastic user's password interactively
bin/elasticsearch-reset-password --username elastic -i
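A quick smoke test, assuming the default port 9200 and running from the Elasticsearch directory — curl prompts for the elastic password and verifies the HTTPS endpoint against the auto-generated CA:

curl --cacert config/certs/http_ca.crt -u elastic https://localhost:9200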
1.7 Startup script (SysV init style)
#!/bin/bash
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch-8.13.4

# ES_HOME points to the directory where Elasticsearch is installed
export ES_HOME=/home/wwq/elk/elasticsearch-8.13.4

case $1 in
start)
    cd $ES_HOME
    ./bin/elasticsearch -d -p pid
    echo "elasticsearch is started"
    ;;
stop)
    pid=`cat $ES_HOME/pid`
    kill $pid    # SIGTERM lets the node shut down cleanly; avoid kill -9
    echo "elasticsearch is stopped"
    ;;
restart)
    pid=`cat $ES_HOME/pid`
    kill $pid
    echo "elasticsearch is stopped"
    sleep 1
    cd $ES_HOME
    ./bin/elasticsearch -d -p pid
    echo "elasticsearch is started"
    ;;
*)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac
exit 0
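A minimal sketch of registering the script on a chkconfig-based system (the file name es-start.sh is a placeholder for the script above; the #chkconfig header line is what makes registration work):

cp es-start.sh /etc/init.d/elasticsearch    # es-start.sh = placeholder name
chmod +x /etc/init.d/elasticsearch
chkconfig --add elasticsearch
chkconfig elasticsearch on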
2. Kibana
2.1 Edit kibana.yml
server.port: 5601        # port Kibana listens on; reachable from a browser
server.host: "0.0.0.0"   # bind address; 0.0.0.0 listens on all local interfaces
i18n.locale: "zh-CN"
2.2 Start Kibana
Foreground:
./kibana
Background:
nohup ./kibana &
2.3 Open Kibana in a browser and complete the guided setup with the enrollment token from step 1.5
2.4 Enable start on boot (systemd)
vim /lib/systemd/system/kibana.service
[Unit]
Description=kibana
After=network.target
[Service]
Type=simple
User=tomcat
ExecStart=/home/tomcat/elk/kibana-8.13.4/bin/kibana
PrivateTmp=true
[Install]
WantedBy=multi-user.target
systemctl daemon-reload      # pick up the new unit file
systemctl enable kibana      # start on boot
systemctl start kibana       # start
systemctl stop kibana        # stop
systemctl restart kibana     # restart
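To confirm Kibana came up under systemd (default port 5601; /api/status is Kibana's built-in status API):

curl -s http://localhost:5601/api/status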
3. Logstash
3.1 Get the Elasticsearch HTTP keystore password (it protects http.p12 and is reused below as the Logstash truststore password)
[tomcat@wwq bin]$ ./elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64/jre; using bundled JDK
04nkv5WkSLuB854KnE-Kxg
3.2 Configure Logstash
vim /home/wwq/elk/logstash-8.13.4/config/logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://192.168.1.223:9200"]
    index => "%{[fields][logcategory]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
    ssl_certificate_verification => true
    truststore => "/home/tomcat/elk/elasticsearch-8.13.4/config/certs/http.p12"
    truststore_password => "04nkv5WkSLuB854KnE-Kxg"
  }
}
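Before wiring Logstash into systemd, the pipeline file can be syntax-checked with the built-in test flag:

bin/logstash -f config/logstash-sample.conf --config.test_and_exit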
3.3 Start on boot (systemd)
vim /lib/systemd/system/logstash.service
[Unit]
Description=logstash
[Service]
User=tomcat
ExecStart=/home/tomcat/elk/logstash-8.13.4/bin/logstash -f /home/tomcat/elk/logstash-8.13.4/config/logstash-sample.conf
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload       # pick up the new unit file
systemctl enable logstash     # start on boot
systemctl start logstash      # start
systemctl stop logstash       # stop
systemctl restart logstash    # restart
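Once running, Logstash should be listening on the Beats port configured above:

ss -lntp | grep 5044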
4. Filebeat
4.1 nginx access log in JSON format (define the log_format, then point access_log at it, in the http block of nginx.conf):
log_format log_json '{"@timestamp":"$time_iso8601",'
'"server_addr":"$server_addr",'
'"server_name":"$server_name",'
'"server_port":"$server_port",'
'"server_protocol":"$server_protocol",'
'"client_ip":"$remote_addr",'
'"client_user":"$remote_user",'
'"status":"$status",'
'"request_method": "$request_method",'
'"request_length":"$request_length",'
'"request_time":"$request_time",'
'"request_url":"$request_uri",'
'"request_line":"$request",'
'"send_client_size":"$bytes_sent",'
'"send_client_body_size":"$body_bytes_sent",'
'"proxy_protocol_addr":"$proxy_protocol_addr",'
'"proxy_add_x_forward":"$proxy_add_x_forwarded_for",'
'"proxy_port":"$proxy_port",'
'"proxy_host":"$proxy_host",'
'"upstream_host":"$upstream_addr",'
'"upstream_status":"$upstream_status",'
'"upstream_cache_status":"$upstream_cache_status",'
'"upstream_connect_time":"$upstream_connect_time",'
'"upstream_response_time":"$upstream_response_time",'
'"upstream_header_time":"$upstream_header_time",'
'"upstream_cookie_name":"$upstream_cookie_name",'
'"upstream_response_length":"$upstream_response_length",'
'"upstream_bytes_received":"$upstream_bytes_received",'
'"upstream_bytes_sent":"$upstream_bytes_sent",'
'"http_host":"$host",'
'"http_cookie":"$http_cooke",'
'"http_user_agent":"$http_user_agent",'
'"http_origin":"$http_origin",'
'"http_upgrade":"$http_upgrade",'
'"http_referer":"$http_referer",'
'"http_x_forward":"$http_x_forwarded_for",'
'"http_x_forwarded_proto":"$http_x_forwarded_proto",'
'"https":"$https",'
'"http_scheme":"$scheme",'
'"invalid_referer":"$invalid_referer",'
'"gzip_ratio":"$gzip_ratio",'
'"realpath_root":"$realpath_root",'
'"document_root":"$document_root",'
'"is_args":"$is_args",'
'"args":"$args",'
'"connection_requests":"$connection_requests",'
'"connection_number":"$connection",'
'"ssl_protocol":"$ssl_protocol",'
'"ssl_cipher":"$ssl_cipher"}'; access_log logs/access_json.log log_json;
4.2 Configure filebeat.yml
vim /home/wwq/elk/filebeat-8.13.4/filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# User-defined input for the nginx JSON access log.
# NOTE: filebeat.inputs is already declared once above; duplicate top-level
# keys are invalid YAML, so in a real config merge this input into that
# section (or delete the disabled filestream example).
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # nginx JSON access log written by the log_format above
    - /home/wwq/nginx-1.25.5/logs/access_json.log
  fields:
    logcategory: nginx
  json:
    keys_under_root: true
    overwrite_keys: true
    message_key: "message"
    add_error_key: true

output.logstash:
  hosts: ["192.168.1.200:5044"]
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
4.3 Start Filebeat
./filebeat -e -c filebeat.yml
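Filebeat ships with built-in checks to validate the configuration and the connection to the Logstash output before a real start:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml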
4.4 Start on boot (systemd)
vim /lib/systemd/system/filebeat.service
[Unit]
Description=filebeat
Wants=network-online.target
After=network-online.target
[Service]
User=tomcat
ExecStart=/home/tomcat/elk/filebeat-8.13.4/filebeat -e -c /home/tomcat/elk/filebeat-8.13.4/filebeat.yml
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload       # pick up the new unit file
systemctl enable filebeat     # start on boot
systemctl start filebeat      # start
systemctl stop filebeat       # stop
systemctl restart filebeat    # restart
5. Open Kibana in the browser and configure it
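Once events are flowing, open Stack Management → Data Views in Kibana and create a data view whose pattern matches the Logstash index name from section 3.2 — nginx-* given fields.logcategory: nginx — then inspect the nginx documents in Discover. Menu names here are from the 8.x UI and may vary slightly between minor versions.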