Notes before getting started

I built this environment on Docker using docker-compose, so before you start, install docker and docker-compose and get a basic feel for how both are used.

Preface

Q: What is ELK?

A: ELK stands for Elasticsearch + Logstash + Kibana.

Q: What is ELK used for?

A: ELK collects and analyzes logs, giving you centralized log management and helping developers and operators analyze logs and find problems quickly.

Of course it has many more useful features, which I'll leave for you to explore on your own.

Here we use Filebeat to collect logs and ship them to the ELK stack.

es:

Elasticsearch is a distributed, RESTful search and analytics engine.

kibana:

Kibana is the window into the Elastic Stack. It lets you visually explore and analyze data in Elasticsearch in real time.

logstash:

Logstash is an open-source, server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to your favorite "stash".

filebeat:

A lightweight log shipper that can send the collected logs to Elasticsearch, Logstash, Kafka, or Redis.

Filebeat overview diagram

ELK log collection sequence diagram

Now let's get hands-on.

Preparation

$ mkdir ELK_pro
$ cd ELK_pro
$ touch docker-compose.yml
$ touch Dockerfile
$ touch filebeat.yml
$ touch kibana.yml
$ touch logstash-pipeline.conf
$ touch logstash.yml
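
After these commands the project directory should look roughly like this (the elastic-certificates.p12 file is added later, once it has been generated in the Elasticsearch section below):

$ ls
Dockerfile  docker-compose.yml  filebeat.yml  kibana.yml  logstash-pipeline.conf  logstash.yml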

1. Elasticsearch setup

I wrote the docker-compose.yml based on the official example, with a few small changes. Here is my modified configuration:

version: "3"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ports:
      - 9200:9200
    networks:
      - falling_wind

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    networks:
      - falling_wind

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    networks:
      - falling_wind

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  falling_wind:
    driver: bridge

Note that this configuration has certificate-based authentication enabled.

Here is how to generate the certificate:

  1. Enter the es01 container:
$ docker ps
$ docker exec -it <container ID or name> /bin/sh
  2. Generate the certificates and copy them out of the container:
$ cd bin
$ elasticsearch-certutil ca
$ elasticsearch-certutil cert --ca elastic-stack-ca.p12
$ exit
$ docker cp <container ID>:/usr/share/elasticsearch/elastic-certificates.p12 . # Note: don't forget the trailing dot.
  3. Set the passwords on es01:
$ docker ps
$ docker exec -it <container ID or name> /bin/sh
$ cd bin
$ elasticsearch-setup-passwords interactive # Follow the prompts to set the passwords.
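
Once the passwords are set, a quick way to confirm the cluster is up and that authentication works is to query it from the Docker host with the built-in elastic user (substitute the password you just chose; only the transport layer uses TLS in this setup, so the HTTP endpoint published on port 9200 by es01 is plain http):

$ curl -u elastic:your_password http://localhost:9200/_cluster/health?pretty
$ curl -u elastic:your_password http://localhost:9200/_cat/nodes?v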

2. Kibana setup

Configure Kibana

docker-compose.yml

  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.1
    container_name: kibana_7_61
    ports:
      - "5601:5601"
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - falling_wind
    depends_on:
      - es01

kibana.yml

server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.18.114.219:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: your username
elasticsearch.password: your password
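
Once the Kibana container is up (it can take a minute or two to connect to Elasticsearch), you can check from the Docker host that it answers on the published port, then open http://<your host>:5601 in a browser and log in with the credentials configured above:

$ curl -I http://localhost:5601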

3. Logstash setup

Configure Logstash

docker-compose.yml

  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.1
    container_name: logstash_7_61
    ports:
      - "5044:5044"
    volumes:
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash-pipeline.conf:/usr/share/logstash/conf.d/logstash-pipeline.conf
    networks:
      - falling_wind

logstash.yml

path.config: /usr/share/logstash/conf.d/*.conf
path.logs: /var/log/logstash

logstash-pipeline.conf

input {
  beats {
    port => 5044
    codec => json
  }
  tcp {
    port => 8000
    codec => json
  }
}

output {
  elasticsearch {
    hosts => ["172.18.114.219:9200"]
    index => "falling-wind"
    user => "your username"
    password => "your password"
  }
  stdout {
    codec => rubydebug
  }
}
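
The compose file above only publishes port 5044, so to try the tcp input from outside the container you would also need to add "8000:8000" to the logstash ports. With that assumption, a quick smoke test of the JSON codec is to pipe a JSON line into the port with nc; the event should then show up in the container's stdout (rubydebug) and be indexed into falling-wind:

$ echo '{"message": "hello from tcp", "level": "INFO"}' | nc 172.18.114.219 8000
$ docker logs -f logstash_7_61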

4. Filebeat setup

Configure Filebeat

docker-compose.yml

  filebeat:
    container_name: filebeat_7_61
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - /var/logs:/usr/share/filebeat/logs
    networks:
      - falling_wind

Dockerfile

FROM docker.elastic.co/beats/filebeat:7.6.1
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chown root:filebeat /usr/share/filebeat/data/meta.json

Note: the Dockerfile in the official docs also ends with USER filebeat. In theory that should be fine, but startup kept failing with a permission error on /usr/share/filebeat/data/meta.json, so for now I simply removed that line.

filebeat.yml

filebeat.inputs:
- type: log
  paths:
    - /usr/share/filebeat/logs/falling-wind/*.log
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after
  tags: ["falling-wind"]
- type: log
  paths:
    - /usr/share/filebeat/logs/celery/*.log
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after
  tags: ["celery"]
- type: log
  paths:
    - /usr/share/filebeat/logs/gunicorn/*.log
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after
  tags: ["gunicorn"]
- type: log
  paths:
    - /usr/share/filebeat/logs/supervisor/*.log
  tags: ["supervisor"]

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true

output.logstash:
  hosts: ["172.18.114.219:5044"]

Note the multiline settings

They merge a multi-line stack trace into a single event:

multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
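
For example, an application log entry like the sketch below (a hypothetical Python traceback whose continuation lines are indented) spans several physical lines. Because every continuation line starts with whitespace, it matches '^[[:space:]]', and with negate: false and match: after Filebeat appends those lines to the preceding non-indented line, so the whole entry is shipped as one event. Whether a real traceback merges cleanly depends on how your application indents its log output:

2020-03-20 10:15:02 ERROR Unhandled exception on /api/orders
    Traceback (most recent call last):
      File "/app/views.py", line 42, in get_orders
        result = query_orders(user_id)
    ConnectionError: database unreachable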

Summary

All the configuration files needed:

  • docker-compose.yml
  • Dockerfile: builds the filebeat image
  • elastic-certificates.p12: the certificate file
  • filebeat.yml
  • kibana.yml
  • logstash-pipeline.conf
  • logstash.yml

Full docker-compose.yml:

version: "3"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ports:
      - 9200:9200
    networks:
      - falling_wind

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    networks:
      - falling_wind

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.authc.accept_default_password=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/certificates/elastic-certificates.p12
    networks:
      - falling_wind

  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.1
    container_name: kibana_7_61
    ports:
      - "5601:5601"
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - falling_wind
    depends_on:
      - es01

  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.1
    container_name: logstash_7_61
    ports:
      - "5044:5044"
    volumes:
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash-pipeline.conf:/usr/share/logstash/conf.d/logstash-pipeline.conf
    networks:
      - falling_wind

  filebeat:
    container_name: filebeat_7_61
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - /var/logs:/usr/share/filebeat/logs
    networks:
      - falling_wind

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  falling_wind:
    driver: bridge
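
With all the files in place, the whole stack can be built and started from the project directory using standard docker-compose commands (the filebeat image is built from the local Dockerfile on the first run):

$ docker-compose up -d --build
$ docker-compose ps
$ docker-compose logs -f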

Enjoy your code!
