Monitoring Elasticsearch Deployed with Docker Compose Using Prometheus
Requirements
Collect ES metrics, then visualize them and alert on them.
Current Situation
- ES is installed via Docker Compose
- The K8S cluster in the same environment already runs Prometheus, AlertManager, and Grafana
Solution
Reuse the existing monitoring stack and monitor ES with Prometheus.
The concrete implementation is as follows:
Collection: elasticsearch_exporter
The metrics it can collect include:
Name | Type | Cardinality | Help |
---|---|---|---|
elasticsearch_breakers_estimated_size_bytes | gauge | 4 | Estimated size in bytes of breaker |
elasticsearch_breakers_limit_size_bytes | gauge | 4 | Limit size in bytes for breaker |
elasticsearch_breakers_tripped | counter | 4 | tripped for breaker |
elasticsearch_cluster_health_active_primary_shards | gauge | 1 | The number of primary shards in your cluster. This is an aggregate total across all indices. |
elasticsearch_cluster_health_active_shards | gauge | 1 | Aggregate total of all shards across all indices, which includes replica shards. |
elasticsearch_cluster_health_delayed_unassigned_shards | gauge | 1 | Shards delayed to reduce reallocation overhead |
elasticsearch_cluster_health_initializing_shards | gauge | 1 | Count of shards that are being freshly created. |
elasticsearch_cluster_health_number_of_data_nodes | gauge | 1 | Number of data nodes in the cluster. |
elasticsearch_cluster_health_number_of_in_flight_fetch | gauge | 1 | The number of ongoing shard info requests. |
elasticsearch_cluster_health_number_of_nodes | gauge | 1 | Number of nodes in the cluster. |
...
The complete metric list can be found directly on GitHub; it can also be pulled from a running exporter, as shown below.
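Once the exporter described in the implementation steps is running, the same list can be inspected straight from its /metrics endpoint. A minimal sketch, assuming the exporter is reachable on localhost:9114:

```bash
# List the HELP lines of all ES metrics exposed by the exporter
# (localhost:9114 is an assumption; use the exporter's actual address).
curl -s http://localhost:9114/metrics | grep '^# HELP elasticsearch_'
```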
Visualization: based on Grafana
Reference: the ElasticSearch dashboard for Grafana listed in the references at the end of this post.
Alerting rules: based on Prometheus and Alertmanager
Reference:
ElasticSearch: https://awesome-prometheus-alerts.grep.to/rules.html#elasticsearch-1
Implementation Steps
The following are the manual implementation steps.
Docker Compose
```bash
docker pull quay.io/prometheuscommunity/elasticsearch-exporter:v1.3.0
```
docker-compose.yml example:
Warning:
The exporter fetches information from the Elasticsearch cluster on every scrape, so a scrape interval that is too short puts load on the ES master nodes, especially when running with `--es.all` and `--es.indices`. It is recommended to measure how long `/_nodes/stats` and `/_all/_stats` take on your ES cluster to determine whether your scrape interval is too short; a measurement sketch follows below.
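A quick way to do that measurement is to time the two endpoints with curl. A sketch, assuming ES is reachable on localhost:9200 without authentication:

```bash
# Time the two requests the exporter issues on every scrape
# (replace localhost:9200 with your ES address; add -u user:pass if needed).
curl -o /dev/null -s -w '_nodes/stats : %{time_total}s\n' http://localhost:9200/_nodes/stats
curl -o /dev/null -s -w '_all/_stats  : %{time_total}s\n' http://localhost:9200/_all/_stats
```

If either request takes a noticeable fraction of the scrape interval, lengthen the interval or drop `--es.all` / `--es.indices`.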
The original docker-compose.yml for ES looks like this:
```yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch-plugins:6.8.18
    ...
    ports:
      - 9200:9200
      - 9300:9300
    restart: always
```
The YAML with elasticsearch_exporter added is as follows:
```yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch-plugins:6.8.18
    ...
    ports:
      - 9200:9200
      - 9300:9300
    restart: always
  elasticsearch_exporter:
    image: quay.io/prometheuscommunity/elasticsearch-exporter:v1.3.0
    command:
      - '--es.uri=http://elasticsearch:9200'
      - '--es.all'
      - '--es.indices'
      - '--es.indices_settings'
      - '--es.indices_mappings'
      - '--es.shards'
      - '--es.snapshots'
      - '--es.timeout=30s'
    restart: always
    ports:
      - "9114:9114"
```
Prometheus Configuration Adjustments
Prometheus configuration
Add a static scrape configuration to Prometheus:
```yaml
scrape_configs:
  - job_name: "es"
    static_configs:
      - targets: ["x.x.x.x:9114"]
```
Note:
x.x.x.x is the IP of the ES exporter. Because the exporter is deployed with Docker Compose on the same host as ES, this is also the ES host's IP.
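After reloading Prometheus, it is worth confirming that the new target is actually being scraped. A sketch using the Prometheus HTTP API; the address prometheus.example.com:9090 is an assumption (inside the K8S cluster this would typically be the in-cluster Prometheus service address):

```bash
# A value of 1 means the "es" target is up and being scraped successfully.
curl -sG 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=up{job="es"}'
```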
Prometheus Rules
Add the ES-related Prometheus rules:
```yaml
groups:
  - name: elasticsearch
    rules:
      - record: elasticsearch_filesystem_data_used_percent
        expr: 100 * (elasticsearch_filesystem_data_size_bytes - elasticsearch_filesystem_data_free_bytes)
          / elasticsearch_filesystem_data_size_bytes
      - record: elasticsearch_filesystem_data_free_percent
        expr: 100 - elasticsearch_filesystem_data_used_percent
      - alert: ElasticsearchTooFewNodesRunning
        expr: elasticsearch_cluster_health_number_of_nodes < 3
        for: 0m
        labels:
          severity: critical
        annotations:
          description: "Missing node in Elasticsearch cluster\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
          summary: ElasticSearch running on less than 3 nodes(instance {{ $labels.instance }}, node {{$labels.node}})
      - alert: ElasticsearchDiskSpaceLow
        expr: elasticsearch_filesystem_data_free_percent < 20
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch disk space low (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "The disk usage is over 80%\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchDiskOutOfSpace
        expr: elasticsearch_filesystem_data_free_percent < 10
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: Elasticsearch disk out of space (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "The disk usage is over 90%\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchHeapUsageWarning
        expr: (elasticsearch_jvm_memory_used_bytes{area="heap"} / elasticsearch_jvm_memory_max_bytes{area="heap"}) * 100 > 80
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch Heap Usage warning (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "The heap usage is over 80%\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchHeapUsageTooHigh
        expr: (elasticsearch_jvm_memory_used_bytes{area="heap"} / elasticsearch_jvm_memory_max_bytes{area="heap"}) * 100 > 90
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: Elasticsearch Heap Usage Too High (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "The heap usage is over 90%\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchClusterRed
        expr: elasticsearch_cluster_health_status{color="red"} == 1
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: Elasticsearch Cluster Red (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elastic Cluster Red status\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchClusterYellow
        expr: elasticsearch_cluster_health_status{color="yellow"} == 1
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch Cluster Yellow (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elastic Cluster Yellow status\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchHealthyDataNodes
        expr: elasticsearch_cluster_health_number_of_data_nodes < 3
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: Elasticsearch Healthy Data Nodes (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Missing data node in Elasticsearch cluster\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchRelocatingShards
        expr: elasticsearch_cluster_health_relocating_shards > 0
        for: 0m
        labels:
          severity: info
        annotations:
          summary: Elasticsearch relocating shards (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch is relocating shards\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchRelocatingShardsTooLong
        expr: elasticsearch_cluster_health_relocating_shards > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch relocating shards too long (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch has been relocating shards for 15min\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchInitializingShards
        expr: elasticsearch_cluster_health_initializing_shards > 0
        for: 0m
        labels:
          severity: info
        annotations:
          summary: Elasticsearch initializing shards (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch is initializing shards\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchInitializingShardsTooLong
        expr: elasticsearch_cluster_health_initializing_shards > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch initializing shards too long (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch has been initializing shards for 15 min\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchUnassignedShards
        expr: elasticsearch_cluster_health_unassigned_shards > 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: Elasticsearch unassigned shards (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch has unassigned shards\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchPendingTasks
        expr: elasticsearch_cluster_health_number_of_pending_tasks > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch pending tasks (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "Elasticsearch has pending tasks. Cluster works slowly.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
      - alert: ElasticsearchNoNewDocuments
        expr: increase(elasticsearch_indices_docs{es_data_node="true"}[10m]) < 1
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch no new documents (instance {{ $labels.instance }}, node {{$labels.node}})
          description: "No new documents for 10 min!\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
```
Restart or reload Prometheus for the rules to take effect.
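The rules file can be syntax-checked with promtool before the reload. A sketch; the file name es-rules.yaml is an assumption (use whatever file or ConfigMap the rules actually live in):

```bash
# promtool ships with Prometheus; it validates rule-file syntax and expressions.
promtool check rules es-rules.yaml
```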
Warning:
The ElasticsearchTooFewNodesRunning alert fires when the ES cluster has fewer than 3 nodes, so it will produce false positives on a single-node ES; enable the rule only where it applies, or silence it as needed (a silencing sketch follows below). The same applies to ElasticsearchHealthyDataNodes.
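For a single-node cluster, one option is to silence these two alerts in Alertmanager rather than edit the rules. A sketch using amtool; the Alertmanager URL and the 24h duration are assumptions:

```bash
# Silence the 3-node alerts on a single-node ES deployment.
amtool silence add alertname=ElasticsearchTooFewNodesRunning \
  --comment='single-node ES, expected' --duration=24h \
  --alertmanager.url=http://alertmanager.example.com:9093
amtool silence add alertname=ElasticsearchHealthyDataNodes \
  --comment='single-node ES, expected' --duration=24h \
  --alertmanager.url=http://alertmanager.example.com:9093
```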
AlertManager Routing and Receiver Configuration
Adjust as needed; an example follows:
```yaml
'global':
  'smtp_smarthost': ''
  'smtp_from': ''
  'smtp_require_tls': false
  'resolve_timeout': '5m'
'receivers':
  - 'name': 'es-email'
    'email_configs':
      - 'to': 'sfw@example.com,sdfwef@example.com'
        'send_resolved': true
'route':
  'group_by':
    - 'job'
  'group_interval': '5m'
  'group_wait': '30s'
  'routes':
    - 'receiver': 'es-email'
      'match':
        'job': 'es'
```
Restart AlertManager for the configuration to take effect.
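The configuration can also be sanity-checked with amtool before the restart. A sketch; the file path is an assumption:

```bash
# Validates routes, receivers and global settings without starting Alertmanager.
amtool check-config /etc/alertmanager/alertmanager.yml
```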
Grafana Configuration
Import the Grafana dashboard in JSON format:
```json
{
  "__inputs": [],
  "__requires": [
    {
      "type": "grafana",
      "id": "grafana",
      "name": "Grafana",
      "version": "5.4.0"
    },
    {
      "type": "panel",
      "id": "graph",
      "name": "Graph",
      "version": "5.0.0"
    },
    {
      "type": "datasource",
      "id": "prometheus",
      "name": "Prometheus",
      "version": "5.0.0"
    },
    {
      "type": "panel",
      "id": "singlestat",
      "name": "Singlestat",
      "version": "5.0.0"
    }
  ],
  ...
```
The complete dashboard JSON can be found directly on Grafana Labs.
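Instead of importing through the UI, the dashboard JSON can also be pushed through Grafana's HTTP API. A minimal sketch, assuming the JSON is saved as es-dashboard.json, Grafana is reachable at grafana.example.com:3000 with basic auth, and any `${DS_...}` datasource placeholders in the export have already been resolved to your Prometheus datasource:

```bash
# Wrap the exported dashboard JSON in the payload expected by the
# /api/dashboards/db endpoint and POST it to Grafana.
jq '{dashboard: ., overwrite: true}' es-dashboard.json \
  | curl -s -X POST \
      -H "Content-Type: application/json" \
      -u admin:admin \
      -d @- \
      http://grafana.example.com:3000/api/dashboards/db
```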
References
- prometheus-community/elasticsearch_exporter: Elasticsearch stats exporter for Prometheus (github.com)
- ElasticSearch dashboard for Grafana | Grafana Labs
- Awesome Prometheus alerts | Collection of alerting rules (grep.to)
When three walk together, one of them can always teach me something; knowledge is meant to be shared for the common good. This article was written by 东风微鸣技术博客 EWhisper.cn.