[Repost] Export Prometheus metrics from SQL queries
https://github.com/albertodonato/query-exporter
query-exporter is a Prometheus exporter that collects metrics from database queries at specified time intervals.
It uses SQLAlchemy to connect to different database engines, including PostgreSQL, MySQL, Oracle and Microsoft SQL Server.
Each query can be run on multiple databases, and update multiple metrics.
The application is run simply as:
query-exporter config.yaml
where the passed configuration file contains the definitions of the databases to connect to and the queries to run to update metrics.
Configuration file format
A sample configuration file for the application looks like this:
databases:
  db1:
    dsn: sqlite://
    connect-sql:
      - PRAGMA application_id = 123
      - PRAGMA auto_vacuum = 1
    labels:
      region: us1
      app: app1
  db2:
    dsn: sqlite://
    keep-connected: false
    labels:
      region: us2
      app: app1

metrics:
  metric1:
    type: gauge
    description: A sample gauge
  metric2:
    type: summary
    description: A sample summary
    labels: [l1, l2]
    expiration: 24h
  metric3:
    type: histogram
    description: A sample histogram
    buckets: [10, 20, 50, 100, 1000]
  metric4:
    type: enum
    description: A sample enum
    states: [foo, bar, baz]

queries:
  query1:
    interval: 5
    databases: [db1]
    metrics: [metric1]
    sql: SELECT random() / 1000000000000000 AS metric1
  query2:
    interval: 20
    timeout: 0.5
    databases: [db1, db2]
    metrics: [metric2, metric3]
    sql: |
      SELECT abs(random() / 1000000000000000) AS metric2,
             abs(random() / 10000000000000000) AS metric3,
             "value1" AS l1,
             "value2" AS l2
  query3:
    schedule: "*/5 * * * *"
    databases: [db2]
    metrics: [metric4]
    sql: |
      SELECT metric4 FROM (
        SELECT "foo" AS metric4 UNION
        SELECT "bar" AS metric4 UNION
        SELECT "baz" AS metric4
      )
      ORDER BY random()
      LIMIT 1
databases section
This section contains definitions for databases to connect to. Key names are arbitrary and only used to reference databases in the queries section.
Each database definition can have the following keys:
dsn: database connection details. It can be provided as a string in the following format:

dialect[+driver]://[username:password][@host:port]/database[?option=value&...]

(see the SQLAlchemy documentation for details on available engines and options), or as key/value pairs:

dialect: <dialect>[+driver]
user: <username>
password: <password>
host: <host>
port: <port>
database: <database>
options:
  <key1>: <value1>
  <key2>: <value2>

All entries are optional, except dialect.
Note that in the string form, username, password and options need to be URL-encoded, whereas this is done automatically for the key/value form.
See the database-specific options page for extra details on database configuration options.
It's also possible to get the connection string indirectly from other sources:
- from an environment variable (e.g. $CONNECTION_STRING), by setting dsn to: env:CONNECTION_STRING
- from a file containing only the DSN value, by setting dsn to: file:/path/to/file
These forms only support specifying the actual DSN in the string form.
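For illustration, a database entry using the key/value form, and another reading its DSN from an environment variable, might look like the following sketch (dialect, host, credentials and variable name are hypothetical):

databases:
  pg-main:
    dsn:
      dialect: postgresql
      user: exporter
      password: s3cret/pass   # URL-encoded automatically in the key/value form
      host: pg.example.com
      port: 5432
      database: appdb
  reporting:
    dsn: env:REPORTING_DSN    # string-form DSN read from $REPORTING_DSN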
connect-sql: an optional list of queries to run right after the database connection is established. This can be used to set up per-connection parameters and configuration.
keep-connected: whether to keep the connection open for the database between queries, or disconnect after each one. If not specified, defaults to true. Setting this option to false might be useful if queries on a database are run at very long intervals, to avoid holding idle connections.

autocommit: whether to set autocommit for the database connection. If not specified, defaults to true. This should only be changed to false if specific queries require it.

labels: an optional mapping of label names and values to tag metrics collected from each database. When labels are used, all databases must define the same set of labels.
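For example, a database definition combining these options might look like this sketch (label values are illustrative):

databases:
  analytics:
    dsn: sqlite://
    keep-connected: false   # reconnect on each run; useful with long query intervals
    autocommit: true        # the default, shown here for completeness
    labels:
      region: eu1
      app: analytics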
metrics section
This section contains Prometheus metrics definitions. Keys are used as metric names, and must therefore be valid metric identifiers.
Each metric definition can have the following keys:
type: the type of the metric; must be specified. The following metric types are supported:
- counter: value is incremented with each result from queries
- enum: value is set with each result from queries
- gauge: value is set with each result from queries
- histogram: each result from queries is added to observations
- summary: each result from queries is added to observations
description: an optional description of the metric.
labels: an optional list of label names to apply to the metric. If specified, queries updating the metric must return rows that include values for each label, in addition to the metric value. Column names must match the metric and label names.
buckets: for histogram metrics, a list of buckets for the metric. If not specified, default buckets are applied.
states: for enum metrics, a list of string values for possible states. Queries updating the enum must return valid states.
expiration: the amount of time after which a series for the metric is cleared if no new value is collected. Last report times are tracked independently for each set of label values for the metric. This can be useful for metric series that only last for a certain amount of time, to avoid an ever-increasing collection of series. The value is interpreted as seconds if no suffix is specified; valid suffixes are s, m, h, d. Only integer values are accepted.

increment: for counter metrics, whether to increment the value by the query result, or set the value to it. By default, counters are incremented by the value returned by the query. If this is set to false, the metric value will instead be set to the result of the query. NOTE: the default will be reversed in the 3.0 release, and increment will be set to false by default.
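As a sketch of the increment behaviour (the metric, query and table names below are hypothetical), a counter that is set to the query result rather than incremented could be defined as:

metrics:
  sessions_total:
    type: counter
    description: Sessions recorded so far
    increment: false   # set the counter to the query result instead of adding it

queries:
  sessions-count:
    interval: 1m
    databases: [db1]
    metrics: [sessions_total]
    # the result column name must match the metric name
    sql: SELECT COUNT(*) AS sessions_total FROM sessions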
queries section
This section contains definitions for queries to perform. Key names are arbitrary and only used to identify queries in logs.
Each query definition can have the following keys:
databases: the list of databases to run the query on. Names must match those defined in the databases section. Metrics are automatically tagged with the database label, so that independent series are generated for each database a query is run on.

interval: the time interval at which the query is run. The value is interpreted as seconds if no suffix is specified; valid suffixes are s, m, h, d. Only integer values are accepted. If a value is specified for interval, a schedule can't be specified. If no value is specified (or it is specified as null), the query is only executed upon HTTP requests.

metrics: the list of metrics that the query updates. Names must match those defined in the metrics section.

parameters: an optional list or dictionary of parameter sets to run the query with.
If specified as a list, the query will be run once for every set of parameters in the list, for every interval. Each parameter set must be a dictionary whose keys match parameter names from the query SQL (e.g. :param). As an example:

query:
  databases: [db]
  metrics: [metric]
  sql: |
    SELECT COUNT(*) AS metric FROM table
    WHERE id > :param1 AND id < :param2
  parameters:
    - param1: 10
      param2: 20
    - param1: 30
      param2: 40

If specified as a dictionary, it's used as a multidimensional matrix of parameter lists to run the query with, and the query is run once for every permutation of parameters, for every interval. Parameters are referenced in the SQL as :{top_level_key}__{inner_key}:

query:
  databases: [db]
  metrics: [apps_count]
  sql: |
    SELECT COUNT(1) AS apps_count FROM apps_list
    WHERE os = :os__name AND arch = :os__arch AND lang = :lang__name
  parameters:
    os:
      - name: MacOS
        arch: arm64
      - name: Linux
        arch: amd64
      - name: Windows
        arch: amd64
    lang:
      - name: Python3
      - name: Java
      - name: TypeScript

This example will generate 9 queries, one for each permutation of the os and lang parameters.

schedule: a schedule for executing queries at specific times, expressed as a Cron-like format string (e.g. */5 * * * * to run every five minutes). If a value is specified for schedule, an interval can't be specified. If no value is specified (or it is specified as null), the query is only executed upon HTTP requests.

sql: the SQL text of the query. The query must return columns with names that match those of the metrics defined in metrics, plus those of labels (if any) for all these metrics. For example:

query:
  databases: [db]
  metrics: [metric1, metric2]
  sql: SELECT 10.0 AS metric1, 20.0 AS metric2

will update metric1 to 10.0 and metric2 to 20.0.
Note: since : is used for parameter markers (see parameters above), a literal : at the beginning of a word must be escaped with a backslash (e.g. SELECT '\:bar' FROM table). There's no need to escape a colon that occurs inside a word (e.g. SELECT 'foo:bar' FROM table).
timeout: a value in seconds after which the query is timed out. If specified, it must be a multiple of 0.1.
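Putting interval and schedule together, the sketch below (query, metric and table names are hypothetical, and both metrics are assumed to be defined in the metrics section) runs one query on a Cron schedule and another only when the metrics endpoint is scraped:

queries:
  nightly-rollup:
    schedule: "0 2 * * *"   # every day at 02:00
    databases: [db1]
    metrics: [rollup_rows]
    sql: SELECT COUNT(*) AS rollup_rows FROM rollups
  on-demand-count:
    interval: null          # only runs upon HTTP requests to the exporter
    databases: [db1]
    metrics: [live_rows]
    sql: SELECT COUNT(*) AS live_rows FROM events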
Metrics endpoint
The exporter listens on port 9560 providing the standard /metrics endpoint.
By default, the port is bound on localhost. Note that if the host name resolves to both IPv4 and IPv6 addresses, the exporter will bind to both.
For the configuration above, the endpoint would return something like this:
# HELP database_errors_total Number of database errors
# TYPE database_errors_total counter
# HELP queries_total Number of database queries
# TYPE queries_total counter
queries_total{app="app1",database="db1",query="query1",region="us1",status="success"} 50.0
queries_total{app="app1",database="db2",query="query2",region="us2",status="success"} 13.0
queries_total{app="app1",database="db1",query="query2",region="us1",status="success"} 13.0
queries_total{app="app1",database="db2",query="query3",region="us2",status="error"} 1.0
# HELP queries_created Number of database queries
# TYPE queries_created gauge
queries_created{app="app1",database="db1",query="query1",region="us1",status="success"} 1.5945442444463024e+09
queries_created{app="app1",database="db2",query="query2",region="us2",status="success"} 1.5945442444471517e+09
queries_created{app="app1",database="db1",query="query2",region="us1",status="success"} 1.5945442444477117e+09
queries_created{app="app1",database="db2",query="query3",region="us2",status="error"} 1.5945444000140696e+09
# HELP query_latency Query execution latency
# TYPE query_latency histogram
query_latency_bucket{app="app1",database="db1",le="0.005",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.01",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.025",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.05",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.075",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.1",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.25",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.75",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="1.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="2.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="5.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="7.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="10.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="+Inf",query="query1",region="us1"} 50.0
query_latency_count{app="app1",database="db1",query="query1",region="us1"} 50.0
query_latency_sum{app="app1",database="db1",query="query1",region="us1"} 0.004666365042794496
query_latency_bucket{app="app1",database="db2",le="0.005",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.01",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.025",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.05",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.075",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.1",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.25",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.75",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="1.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="2.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="5.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="7.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="10.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="+Inf",query="query2",region="us2"} 13.0
query_latency_count{app="app1",database="db2",query="query2",region="us2"} 13.0
query_latency_sum{app="app1",database="db2",query="query2",region="us2"} 0.012369773990940303
query_latency_bucket{app="app1",database="db1",le="0.005",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.01",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.025",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.05",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.075",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.1",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.25",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.75",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="1.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="2.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="5.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="7.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="10.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="+Inf",query="query2",region="us1"} 13.0
query_latency_count{app="app1",database="db1",query="query2",region="us1"} 13.0
query_latency_sum{app="app1",database="db1",query="query2",region="us1"} 0.004745393933262676
# HELP query_latency_created Query execution latency
# TYPE query_latency_created gauge
query_latency_created{app="app1",database="db1",query="query1",region="us1"} 1.594544244446163e+09
query_latency_created{app="app1",database="db2",query="query2",region="us2"} 1.5945442444470239e+09
query_latency_created{app="app1",database="db1",query="query2",region="us1"} 1.594544244447551e+09
# HELP metric1 A sample gauge
# TYPE metric1 gauge
metric1{app="app1",database="db1",region="us1"} -3561.0
# HELP metric2 A sample summary
# TYPE metric2 summary
metric2_count{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 13.0
metric2_sum{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 58504.0
metric2_count{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 13.0
metric2_sum{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 75262.0
# HELP metric2_created A sample summary
# TYPE metric2_created gauge
metric2_created{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 1.594544244446819e+09
metric2_created{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 1.594544244447339e+09
# HELP metric3 A sample histogram
# TYPE metric3 histogram
metric3_bucket{app="app1",database="db2",le="10.0",region="us2"} 1.0
metric3_bucket{app="app1",database="db2",le="20.0",region="us2"} 1.0
metric3_bucket{app="app1",database="db2",le="50.0",region="us2"} 2.0
metric3_bucket{app="app1",database="db2",le="100.0",region="us2"} 3.0
metric3_bucket{app="app1",database="db2",le="1000.0",region="us2"} 13.0
metric3_bucket{app="app1",database="db2",le="+Inf",region="us2"} 13.0
metric3_count{app="app1",database="db2",region="us2"} 13.0
metric3_sum{app="app1",database="db2",region="us2"} 5016.0
metric3_bucket{app="app1",database="db1",le="10.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="20.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="50.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="100.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="1000.0",region="us1"} 13.0
metric3_bucket{app="app1",database="db1",le="+Inf",region="us1"} 13.0
metric3_count{app="app1",database="db1",region="us1"} 13.0
metric3_sum{app="app1",database="db1",region="us1"} 5358.0
# HELP metric3_created A sample histogram
# TYPE metric3_created gauge
metric3_created{app="app1",database="db2",region="us2"} 1.5945442444469101e+09
metric3_created{app="app1",database="db1",region="us1"} 1.5945442444474254e+09
# HELP metric4 A sample enum
# TYPE metric4 gauge
metric4{app="app1",database="db2",metric4="foo",region="us2"} 0.0
metric4{app="app1",database="db2",metric4="bar",region="us2"} 0.0
metric4{app="app1",database="db2",metric4="baz",region="us2"} 1.0
Builtin metrics
The exporter provides a few builtin metrics which can be useful to track query execution:
- database_errors{database="db"}: a counter used to report the number of errors, per database.
- queries{database="db",query="q",status="[success|error|timeout]"}: a counter with the number of executed queries, per database, query and status.
- query_latency{database="db",query="q"}: a histogram with query latencies, per database and query.
In addition, metrics for the exporter process's resource usage can be included by passing --process-stats on the command line.
Debugging / Logs
You can enable extended logging using the -L command-line switch. Possible log levels are CRITICAL, ERROR, WARNING, INFO and DEBUG.
Database engines
SQLAlchemy doesn't depend on specific Python database modules at installation, so additional modules might need to be installed for the engines in use. These can be installed as follows:
pip install SQLAlchemy[postgresql] SQLAlchemy[mysql] ...
based on which database engines are needed.
See supported databases for details.
Install from Snap
query-exporter can be installed from the Snap Store on systems where Snaps are supported, via:
sudo snap install query-exporter
The snap provides both the query-exporter command and a daemon instance of the command, managed via a Systemd service.
To configure the daemon:
- create or edit /var/snap/query-exporter/current/config.yaml with the configuration
- run sudo snap restart query-exporter
The snap has support for connecting the following databases:
- PostgreSQL (postgresql://)
- MySQL (mysql://)
- SQLite (sqlite://)
- Microsoft SQL Server (mssql://)
- IBM DB2 (db2://), on supported architectures (x86_64, ppc64le and s390x)
Run in Docker
query-exporter can be run inside Docker containers, and is available from the Docker Hub:
docker run -p 9560:9560/tcp -v "$CONFIG_FILE:/config.yaml" --rm -it adonato/query-exporter:latest
where $CONFIG_FILE is the absolute path of the configuration file to use. Note that the image expects the file to be available as /config.yaml in the container.
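For long-running deployments, a minimal Docker Compose sketch might look like this (the service name and local path are illustrative):

services:
  query-exporter:
    image: adonato/query-exporter:latest
    ports:
      - "9560:9560"
    volumes:
      - ./config.yaml:/config.yaml:ro   # the image expects the config at /config.yaml
    restart: unless-stopped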
For other ODBC driver versions, build the image passing the ODBC_DRIVER_VERSION build argument:
docker build --build-arg ODBC_DRIVER_VERSION=17 .
The image has support for connecting the following databases:
- PostgreSQL (postgresql://)
- MySQL (mysql://)
- SQLite (sqlite://)
- Microsoft SQL Server (mssql://)
- IBM DB2 (db2://)
- Oracle (oracle://)
A Helm chart to run the container in Kubernetes is also available.