[Repost] Export Prometheus metrics from SQL queries
https://github.com/albertodonato/query-exporter
query-exporter is a Prometheus exporter which allows collecting metrics from database queries, at specified time intervals.
It uses SQLAlchemy to connect to different database engines, including PostgreSQL, MySQL, Oracle and Microsoft SQL Server.
Each query can be run on multiple databases, and update multiple metrics.
The application is simply run as:
query-exporter config.yaml
where the passed configuration file contains the definitions of the databases to connect to and the queries to run to update metrics.
Configuration file format
A sample configuration file for the application looks like this:
databases:
  db1:
    dsn: sqlite://
    connect-sql:
      - PRAGMA application_id = 123
      - PRAGMA auto_vacuum = 1
    labels:
      region: us1
      app: app1
  db2:
    dsn: sqlite://
    keep-connected: false
    labels:
      region: us2
      app: app1

metrics:
  metric1:
    type: gauge
    description: A sample gauge
  metric2:
    type: summary
    description: A sample summary
    labels: [l1, l2]
    expiration: 24h
  metric3:
    type: histogram
    description: A sample histogram
    buckets: [10, 20, 50, 100, 1000]
  metric4:
    type: enum
    description: A sample enum
    states: [foo, bar, baz]

queries:
  query1:
    interval: 5
    databases: [db1]
    metrics: [metric1]
    sql: SELECT random() / 1000000000000000 AS metric1
  query2:
    interval: 20
    timeout: 0.5
    databases: [db1, db2]
    metrics: [metric2, metric3]
    sql: |
      SELECT abs(random() / 1000000000000000) AS metric2,
             abs(random() / 10000000000000000) AS metric3,
             "value1" AS l1,
             "value2" AS l2
  query3:
    schedule: "*/5 * * * *"
    databases: [db2]
    metrics: [metric4]
    sql: |
      SELECT metric4 FROM (
        SELECT "foo" AS metric4 UNION
        SELECT "bar" AS metric4 UNION
        SELECT "baz" AS metric4
      )
      ORDER BY random()
      LIMIT 1
databases section
This section contains definitions for databases to connect to. Key names are arbitrary and only used to reference databases in the queries section.
Each database definition can have the following keys:
dsn:
the database connection details.
It can be provided as a string in the following format:

dialect[+driver]://[username:password][@host:port]/database[?option=value&...]

(see the SQLAlchemy documentation for details on available engines and options), or as key/value pairs:

dialect: <dialect>[+driver]
user: <username>
password: <password>
host: <host>
port: <port>
database: <database>
options:
  <key1>: <value1>
  <key2>: <value2>

All entries are optional, except dialect.
Note that in the string form, username, password and options need to be URL-encoded, whereas this is done automatically for the key/value form.
See the database-specific options page for some extra details on database configuration options.
It's also possible to get the connection string indirectly from other sources:
- from an environment variable (e.g. $CONNECTION_STRING), by setting dsn to env:CONNECTION_STRING
- from a file containing only the DSN value, by setting dsn to file:/path/to/file
These forms only support specifying the actual DSN in the string form.
connect-sql:
an optional list of queries to run right after connecting to the database. This can be used to set up per-connection parameters and configuration.

keep-connected:
whether to keep the connection to the database open between queries, or disconnect after each one. If not specified, defaults to true. Setting this option to false might be useful if queries on a database are run at very long intervals, to avoid holding idle connections.

autocommit:
whether to set autocommit for the database connection. If not specified, defaults to true. This should only be changed to false if specific queries require it.

labels:
an optional mapping of label names and values to tag metrics collected from each database. When labels are used, all databases must define the same set of labels.
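For instance, the indirect DSN forms described above can be sketched as follows (the database names and the file path are illustrative):

```yaml
databases:
  db_from_env:
    # reads the DSN from the $CONNECTION_STRING environment variable
    dsn: env:CONNECTION_STRING
  db_from_file:
    # reads the DSN from a file containing only the DSN value (illustrative path)
    dsn: file:/etc/query-exporter/dsn
```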
metrics section
This section contains Prometheus metrics definitions. Keys are used as metric names, and must therefore be valid metric identifiers.
Each metric definition can have the following keys:
type:
the type of the metric, must be specified. The following metric types are supported:
- counter: the value is incremented with each result from queries
- enum: the value is set with each result from queries
- gauge: the value is set with each result from queries
- histogram: each result from queries is added to observations
- summary: each result from queries is added to observations
description:
an optional description of the metric.

labels:
an optional list of label names to apply to the metric.
If specified, queries updating the metric must return rows that include values for each label, in addition to the metric value. Column names must match metric and label names.
buckets:
for histogram metrics, a list of buckets for the metric. If not specified, default buckets are applied.

states:
for enum metrics, a list of string values for possible states. Queries updating the enum must return valid states.

expiration:
the amount of time after which a series for the metric is cleared if no new value is collected.
Last report times are tracked independently for each set of label values for the metric.
This can be useful for metric series that only last for a certain amount of time, to avoid an ever-increasing collection of series.
The value is interpreted as seconds if no suffix is specified; valid suffixes are s, m, h, d. Only integer values are accepted.

increment:
for counter metrics, whether to increment the value by the query result, or set the value to it.
By default, counters are incremented by the value returned by the query. If this is set to false, the metric value will instead be set to the result of the query.
NOTE: the default will be reversed in the 3.0 release, and increment will be set to false by default.
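As an illustration of the duration-suffix rule described above (used by both expiration and query intervals), a small sketch — not the exporter's actual code — of converting such values to seconds:

```python
# Illustrative sketch of the duration-suffix rule (s, m, h, d;
# plain integers are seconds). Not query-exporter's own code.
SUFFIXES = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def duration_to_seconds(value: str) -> int:
    """Convert a duration like "30", "90s", "5m", "24h" or "7d" to seconds."""
    if value[-1] in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1]]
    # no suffix: the value is already in seconds
    return int(value)

print(duration_to_seconds("24h"))  # 86400
```

So a metric with expiration: 24h has its stale series cleared after 86400 seconds without a new value.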
queries section
This section contains definitions for queries to perform. Key names are arbitrary and only used to identify queries in logs.
Each query definition can have the following keys:
databases:
the list of databases to run the query on.
Names must match those defined in the databases section.
Metrics are automatically tagged with the database label, so that independent series are generated for each database the query is run on.

interval:
the time interval at which the query is run.
The value is interpreted as seconds if no suffix is specified; valid suffixes are s, m, h, d. Only integer values are accepted.
If a value is specified for interval, a schedule can't be specified.
If no value is specified (or it is specified as null), the query is only executed upon HTTP requests.

metrics:
the list of metrics that the query updates.
Names must match those defined in the metrics section.

parameters:
an optional list or dictionary of parameter sets to run the query with.

If specified as a list, the query will be run once for every set of parameters in the list, for every interval.
Each parameter set must be a dictionary whose keys match the parameter names from the query SQL (e.g. :param). As an example:

query:
  databases: [db]
  metrics: [metric]
  sql: |
    SELECT COUNT(*) AS metric FROM table
    WHERE id > :param1 AND id < :param2
  parameters:
    - param1: 10
      param2: 20
    - param1: 30
      param2: 40

If specified as a dictionary, it's used as a multidimensional matrix of parameter lists to run the query with. The query will be run once for each permutation of parameters, for every interval.
Parameters are referenced in the SQL with the format :{top_level_key}__{inner_key}:

query:
  databases: [db]
  metrics: [apps_count]
  sql: |
    SELECT COUNT(1) AS apps_count FROM apps_list
    WHERE os = :os__name AND arch = :os__arch AND lang = :lang__name
  parameters:
    os:
      - name: MacOS
        arch: arm64
      - name: Linux
        arch: amd64
      - name: Windows
        arch: amd64
    lang:
      - name: Python3
      - name: Java
      - name: TypeScript

This example will generate 9 queries, with all permutations of the os and lang parameters.

schedule:
a schedule for executing queries at specific times.
This is expressed as a Cron-like format string (e.g. */5 * * * * to run every five minutes).
If a value is specified for schedule, an interval can't be specified.
If no value is specified (or it is specified as null), the query is only executed upon HTTP requests.

sql:
the SQL text of the query.
The query must return columns with names that match those of the metrics defined in metrics, plus those of labels (if any) for all these metrics. For example:

query:
  databases: [db]
  metrics: [metric1, metric2]
  sql: SELECT 10.0 AS metric1, 20.0 AS metric2

will update metric1 to 10.0 and metric2 to 20.0.
Note: since : is used for parameter markers (see parameters above), a literal single : at the beginning of a word must be escaped with a backslash (e.g. SELECT '\:bar' FROM table). There's no need to escape colons occurring inside a word (e.g. SELECT 'foo:bar' FROM table).
timeout:
a value in seconds after which the query is timed out.
If specified, it must be a multiple of 0.1.
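As a recap of the column-matching rules above, a minimal self-contained configuration tying the three sections together might look like this (the table name and DSN are hypothetical):

```yaml
databases:
  mydb:
    dsn: sqlite://          # hypothetical in-memory database

metrics:
  rows_total:
    type: gauge
    description: Number of rows in mytable

queries:
  count_rows:
    interval: 30            # run every 30 seconds
    databases: [mydb]
    metrics: [rows_total]
    # the column alias must match the metric name
    sql: SELECT COUNT(*) AS rows_total FROM mytable
```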
Metrics endpoint
The exporter listens on port 9560, providing the standard /metrics endpoint.
By default, the port is bound on localhost. Note that if the hostname resolves to both IPv4 and IPv6 addresses, the exporter will bind to both.
For the configuration above, the endpoint would return something like this:
# HELP database_errors_total Number of database errors
# TYPE database_errors_total counter
# HELP queries_total Number of database queries
# TYPE queries_total counter
queries_total{app="app1",database="db1",query="query1",region="us1",status="success"} 50.0
queries_total{app="app1",database="db2",query="query2",region="us2",status="success"} 13.0
queries_total{app="app1",database="db1",query="query2",region="us1",status="success"} 13.0
queries_total{app="app1",database="db2",query="query3",region="us2",status="error"} 1.0
# HELP queries_created Number of database queries
# TYPE queries_created gauge
queries_created{app="app1",database="db1",query="query1",region="us1",status="success"} 1.5945442444463024e+09
queries_created{app="app1",database="db2",query="query2",region="us2",status="success"} 1.5945442444471517e+09
queries_created{app="app1",database="db1",query="query2",region="us1",status="success"} 1.5945442444477117e+09
queries_created{app="app1",database="db2",query="query3",region="us2",status="error"} 1.5945444000140696e+09
# HELP query_latency Query execution latency
# TYPE query_latency histogram
query_latency_bucket{app="app1",database="db1",le="0.005",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.01",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.025",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.05",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.075",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.1",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.25",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="0.75",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="1.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="2.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="5.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="7.5",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="10.0",query="query1",region="us1"} 50.0
query_latency_bucket{app="app1",database="db1",le="+Inf",query="query1",region="us1"} 50.0
query_latency_count{app="app1",database="db1",query="query1",region="us1"} 50.0
query_latency_sum{app="app1",database="db1",query="query1",region="us1"} 0.004666365042794496
query_latency_bucket{app="app1",database="db2",le="0.005",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.01",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.025",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.05",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.075",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.1",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.25",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="0.75",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="1.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="2.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="5.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="7.5",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="10.0",query="query2",region="us2"} 13.0
query_latency_bucket{app="app1",database="db2",le="+Inf",query="query2",region="us2"} 13.0
query_latency_count{app="app1",database="db2",query="query2",region="us2"} 13.0
query_latency_sum{app="app1",database="db2",query="query2",region="us2"} 0.012369773990940303
query_latency_bucket{app="app1",database="db1",le="0.005",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.01",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.025",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.05",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.075",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.1",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.25",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="0.75",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="1.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="2.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="5.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="7.5",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="10.0",query="query2",region="us1"} 13.0
query_latency_bucket{app="app1",database="db1",le="+Inf",query="query2",region="us1"} 13.0
query_latency_count{app="app1",database="db1",query="query2",region="us1"} 13.0
query_latency_sum{app="app1",database="db1",query="query2",region="us1"} 0.004745393933262676
# HELP query_latency_created Query execution latency
# TYPE query_latency_created gauge
query_latency_created{app="app1",database="db1",query="query1",region="us1"} 1.594544244446163e+09
query_latency_created{app="app1",database="db2",query="query2",region="us2"} 1.5945442444470239e+09
query_latency_created{app="app1",database="db1",query="query2",region="us1"} 1.594544244447551e+09
# HELP metric1 A sample gauge
# TYPE metric1 gauge
metric1{app="app1",database="db1",region="us1"} -3561.0
# HELP metric2 A sample summary
# TYPE metric2 summary
metric2_count{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 13.0
metric2_sum{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 58504.0
metric2_count{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 13.0
metric2_sum{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 75262.0
# HELP metric2_created A sample summary
# TYPE metric2_created gauge
metric2_created{app="app1",database="db2",l1="value1",l2="value2",region="us2"} 1.594544244446819e+09
metric2_created{app="app1",database="db1",l1="value1",l2="value2",region="us1"} 1.594544244447339e+09
# HELP metric3 A sample histogram
# TYPE metric3 histogram
metric3_bucket{app="app1",database="db2",le="10.0",region="us2"} 1.0
metric3_bucket{app="app1",database="db2",le="20.0",region="us2"} 1.0
metric3_bucket{app="app1",database="db2",le="50.0",region="us2"} 2.0
metric3_bucket{app="app1",database="db2",le="100.0",region="us2"} 3.0
metric3_bucket{app="app1",database="db2",le="1000.0",region="us2"} 13.0
metric3_bucket{app="app1",database="db2",le="+Inf",region="us2"} 13.0
metric3_count{app="app1",database="db2",region="us2"} 13.0
metric3_sum{app="app1",database="db2",region="us2"} 5016.0
metric3_bucket{app="app1",database="db1",le="10.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="20.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="50.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="100.0",region="us1"} 0.0
metric3_bucket{app="app1",database="db1",le="1000.0",region="us1"} 13.0
metric3_bucket{app="app1",database="db1",le="+Inf",region="us1"} 13.0
metric3_count{app="app1",database="db1",region="us1"} 13.0
metric3_sum{app="app1",database="db1",region="us1"} 5358.0
# HELP metric3_created A sample histogram
# TYPE metric3_created gauge
metric3_created{app="app1",database="db2",region="us2"} 1.5945442444469101e+09
metric3_created{app="app1",database="db1",region="us1"} 1.5945442444474254e+09
# HELP metric4 A sample enum
# TYPE metric4 gauge
metric4{app="app1",database="db2",metric4="foo",region="us2"} 0.0
metric4{app="app1",database="db2",metric4="bar",region="us2"} 0.0
metric4{app="app1",database="db2",metric4="baz",region="us2"} 1.0
Builtin metrics
The exporter provides a few builtin metrics which can be useful to track query execution:

database_errors{database="db"}:
a counter used to report the number of errors, per database.

queries{database="db",query="q",status="[success|error|timeout]"}:
a counter with the number of executed queries, per database, query and status.

query_latency{database="db",query="q"}:
a histogram with query latencies, per database and query.

In addition, resource usage metrics for the exporter process can be included by passing --process-stats on the command line.
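To have Prometheus collect these metrics, a scrape job pointing at the exporter's port can be added to prometheus.yml; the job name and target host below are assumptions:

```yaml
scrape_configs:
  - job_name: query-exporter        # hypothetical job name
    static_configs:
      - targets: ["localhost:9560"] # the exporter's default port
```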
Debugging / Logs
You can enable extended logging using the -L command-line switch. Possible log levels are CRITICAL, ERROR, WARNING, INFO, DEBUG.
Database engines
SQLAlchemy doesn't depend on specific Python database modules at installation. This means additional modules might need to be installed for engines in use. These can be installed as follows:
pip install SQLAlchemy[postgresql] SQLAlchemy[mysql] ...
based on which database engines are needed.
See supported databases for details.
Install from Snap
query-exporter can be installed from Snap Store on systems where Snaps are supported, via:
sudo snap install query-exporter
The snap provides both the query-exporter command and a daemon instance of the command, managed via a Systemd service.
To configure the daemon:
- create or edit /var/snap/query-exporter/current/config.yaml with the configuration
- run sudo snap restart query-exporter

The snap has support for connecting to the following databases:
- PostgreSQL (postgresql://)
- MySQL (mysql://)
- SQLite (sqlite://)
- Microsoft SQL Server (mssql://)
- IBM DB2 (db2://), on supported architectures (x86_64, ppc64le and s390x)
Run in Docker
query-exporter can be run inside Docker containers, and is available from the Docker Hub:
docker run -p 9560:9560/tcp -v "$CONFIG_FILE:/config.yaml" --rm -it adonato/query-exporter:latest
where $CONFIG_FILE is the absolute path of the configuration file to use. Note that the image expects the file to be available as /config.yaml in the container.
For other ODBC driver versions, build the image passing the version as a build argument:

docker build --build-arg ODBC_DRIVER_VERSION=17 .

The image has support for connecting to the following databases:
- PostgreSQL (postgresql://)
- MySQL (mysql://)
- SQLite (sqlite://)
- Microsoft SQL Server (mssql://)
- IBM DB2 (db2://)
- Oracle (oracle://)
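Equivalently, the docker run invocation above can be expressed as a Compose service (the host path of the configuration file is an assumption):

```yaml
services:
  query-exporter:
    image: adonato/query-exporter:latest
    ports:
      - "9560:9560"
    volumes:
      # the image expects the configuration at /config.yaml
      - ./config.yaml:/config.yaml
```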
A Helm chart to run the container in Kubernetes is also available.