Cortex Architecture
The content below is taken from the official GitHub documentation; reference link: https://github.com/cortexproject/cortex/blob/master/docs/architecture.md
Cortex consists of multiple horizontally scalable microservices. Each microservice uses the most appropriate technique for horizontal scaling; most are stateless and can handle requests for any users while some (namely the ingesters) are semi-stateful and depend on consistent hashing. This document provides a basic overview of Cortex's architecture.
The role of Prometheus
Prometheus instances scrape samples from various targets and then push them to Cortex (using Prometheus' remote write API). That remote write API emits batched Snappy-compressed Protocol Buffer messages inside the body of an HTTP POST request.
Cortex requires that each HTTP request bear a header specifying a tenant ID for the request. Request authentication and authorization are handled by an external reverse proxy.
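As an illustration of this write path, the sketch below builds a Snappy-compressed remote-write protobuf and sends it to Cortex with a per-tenant header. The push endpoint path, the X-Scope-OrgID header name, and the prompb/snappy packages are assumptions based on common Prometheus/Cortex conventions, not something stated in this document; verify them against your deployment.

```go
// Minimal sketch of pushing a sample to Cortex over the remote write API.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func pushSample(cortexURL, tenantID string) error {
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "demo_metric"},
				{Name: "job", Value: "example"},
			},
			Samples: []prompb.Sample{
				{Value: 42, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}

	// prompb types are gogo-generated and expose Marshal().
	raw, err := req.Marshal()
	if err != nil {
		return err
	}
	compressed := snappy.Encode(nil, raw)

	// /api/v1/push is the assumed Cortex push endpoint.
	httpReq, err := http.NewRequest(http.MethodPost, cortexURL+"/api/v1/push", bytes.NewReader(compressed))
	if err != nil {
		return err
	}
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	httpReq.Header.Set("X-Scope-OrgID", tenantID) // tenant ID header required by Cortex

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("push failed: %s", resp.Status)
	}
	return nil
}
```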
Incoming samples (writes from Prometheus) are handled by the distributor while incoming reads (PromQL queries) are handled by the query frontend.
Services
Cortex has a service-based architecture, in which the overall system is split up into a variety of components that perform specific tasks and run separately (and potentially in parallel).
Cortex is, for the most part, a shared-nothing system. Each layer of the system can run multiple instances of each component and they don't coordinate or communicate with each other within that layer.
Distributor
The distributor service is responsible for handling samples written by Prometheus. It's essentially the "first stop" in the write path for Prometheus samples. Once the distributor receives samples from Prometheus, it splits them into batches and then sends them to multiple ingesters in parallel.
Distributors communicate with ingesters via gRPC. They are stateless and can be scaled up and down as needed.
If the HA Tracker is enabled, the Distributor will deduplicate incoming samples that contain both a cluster and replica label. It talks to a KVStore to store state about which replica per cluster it's accepting samples from for a given user ID. Samples with one or neither of these labels will be accepted by default.
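A minimal sketch of that accept/reject decision, assuming an in-memory map of elected replicas per cluster (the real distributor keeps this state in the KV store):

```go
// acceptSample sketches the HA-tracker decision for one incoming sample.
// electedReplica maps cluster name -> replica currently accepted for this tenant.
func acceptSample(electedReplica map[string]string, cluster, replica string) bool {
	if cluster == "" || replica == "" {
		// Samples carrying one or neither of the labels are accepted by default.
		return true
	}
	// Only the elected replica's samples are kept; the rest are deduplicated away.
	return electedReplica[cluster] == replica
}
```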
Hashing
Distributors use consistent hashing, in conjunction with the (configurable) replication factor, to determine which instances of the ingester service receive each sample.
The hash itself is based on one of two schemes:
- The metric name and tenant ID
- All the series labels and tenant ID
The trade-off with the latter scheme is that writes are more evenly balanced across ingesters, but every query must then involve every ingester.
Hashing on the metric name and tenant ID was originally chosen to reduce the number of ingesters that have to be consulted on the query path. The trade-off, however, is that the write load on the ingesters is less even.
The hash ring
A consistent hash ring is stored in Consul as a single key-value pair, with the ring data structure encoded as a Protobuf message. The consistent hash ring consists of a list of tokens and ingesters. Hashed values are looked up in the ring, and the replication set is built from the closest unique ingesters by token. One of the benefits of this system is that adding and removing ingesters results in only 1/N of the series being moved (where N is the number of ingesters).
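The sketch below illustrates the token lookup and replication-set construction on such a ring; the token layout and data structures are simplified stand-ins for what Cortex actually stores in Consul.

```go
// Sketch of a consistent hash ring lookup. Tokens are kept sorted; a series
// hash is mapped to the first token >= the hash (wrapping around), and the
// replication set is built from the next distinct ingesters walking clockwise.
package ring

import "sort"

type TokenDesc struct {
	Token    uint32
	Ingester string
}

// Lookup returns up to replicationFactor distinct ingesters for a series hash.
func Lookup(tokens []TokenDesc, hash uint32, replicationFactor int) []string {
	sort.Slice(tokens, func(i, j int) bool { return tokens[i].Token < tokens[j].Token })

	// Find the first token that is >= hash, wrapping to the start if none is.
	start := sort.Search(len(tokens), func(i int) bool { return tokens[i].Token >= hash })
	if start == len(tokens) {
		start = 0
	}

	seen := map[string]bool{}
	var replicas []string
	for i := 0; i < len(tokens) && len(replicas) < replicationFactor; i++ {
		ing := tokens[(start+i)%len(tokens)].Ingester
		if !seen[ing] {
			seen[ing] = true
			replicas = append(replicas, ing)
		}
	}
	return replicas
}
```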
Quorum consistency
All distributors share access to the same hash ring, which means that write requests can be sent to any distributor.
To ensure consistent query results, Cortex uses Dynamo-style quorum consistency on reads and writes. This means the distributor waits for a positive response from at least half plus one of the ingesters it sends a sample to before responding to the user.
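A sketch of that quorum rule, assuming the distributor fans a sample out to replicationFactor ingesters and counts the acknowledgements:

```go
// Sketch of Dynamo-style quorum on the write path: the write succeeds once
// floor(replicationFactor/2)+1 ingesters have acknowledged it.
func writeQuorumMet(replicationFactor, successes int) bool {
	quorum := replicationFactor/2 + 1
	return successes >= quorum
}
```

With the typical replication factor of 3, two acknowledgements are enough for the write to succeed, so a single slow or failed ingester does not block writes.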
Load balancing across distributors
We recommend randomly load balancing write requests across distributor instances, ideally by running the distributors as a Kubernetes Service.
Ingester
The ingester service is responsible for writing sample data to long-term storage backends (DynamoDB, S3, Cassandra, etc.).
Samples from each timeseries are built up in "chunks" in memory inside each ingester, then flushed to the chunk store. By default each chunk is up to 12 hours long.
If an ingester process crashes or exits abruptly, all the data that has not yet been flushed will be lost. Cortex is usually configured to hold multiple (typically 3) replicas of each timeseries to mitigate this risk.
A hand-over process manages the state when ingesters are added, removed or replaced.
Write de-amplification
Ingesters store the last 12 hours worth of samples in order to perform write de-amplification, i.e. batching and compressing samples for the same series and flushing them out to the chunk store. Under normal operations, there should be many orders of magnitude fewer operations per second (OPS) worth of writes to the chunk store than to the ingesters.
Write de-amplification is the main source of Cortex's low total cost of ownership (TCO).
Ruler
Ruler executes PromQL queries for Recording Rules and Alerts. Ruler is configured from a database, so that different rules can be set for each tenant.
All the rules for one instance are executed as a group, then rescheduled to be executed again 15 seconds later. Execution is done by a 'worker' running on a goroutine - if you don't have enough workers then the ruler will lag behind.
Ruler can be scaled horizontally.
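A rough sketch of the worker model described above: rule groups are drained from a queue by a pool of goroutine workers and re-enqueued after the evaluation interval. The data structures, the 15-second interval, and the worker count here are illustrative, not the Ruler's actual implementation.

```go
// Sketch of the ruler's evaluation loop: goroutine workers evaluate rule
// groups pulled from a channel, and each group is rescheduled after the
// evaluation interval. If there are too few workers, groups pile up on the
// channel and evaluation lags behind.
package ruler

import (
	"sync"
	"time"
)

type RuleGroup struct {
	Tenant string
	Name   string
}

func runWorkers(groups <-chan RuleGroup, workers int, evaluate func(RuleGroup)) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for g := range groups {
				evaluate(g)
			}
		}()
	}
	wg.Wait()
}

// reschedule re-enqueues a group after the evaluation interval (e.g. 15s).
func reschedule(groups chan<- RuleGroup, g RuleGroup, interval time.Duration) {
	time.AfterFunc(interval, func() { groups <- g })
}
```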
AlertManager
AlertManager is responsible for accepting alert notifications from Ruler, grouping them, and passing them on to a notification channel such as email, PagerDuty, etc.
Like the Ruler, AlertManager is configured per-tenant in a database.
Query frontend
The query frontend is an optional service that accepts HTTP requests, queues them by tenant ID, and retries in case of errors.
The query frontend is completely optional; you can use queriers directly. To use the query frontend, direct incoming authenticated reads at it and set the -querier.frontend-address flag on the queriers.
Queueing
Queuing performs a number of functions for the query frontend:
- It ensures that large queries that cause an out-of-memory (OOM) error in the querier will be retried. This allows administrators to under-provision memory for queries, or optimistically run more small queries in parallel, which helps to reduce TCO.
- It prevents multiple large requests from being convoyed on a single querier by distributing them first-in/first-out (FIFO) across all queriers.
- It prevents a single tenant from denial-of-service-ing (DoSing) other tenants by fairly scheduling queries between tenants.
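One way to picture the fair scheduling described in the last point is a per-tenant FIFO queue with round-robin dequeue; this is a simplified sketch, not the query frontend's actual implementation.

```go
// Sketch of per-tenant FIFO queues with round-robin dequeue: each tenant's
// queries stay ordered, and no single tenant can monopolize the queriers.
package frontend

type Request struct {
	Tenant string
	Query  string
}

type Scheduler struct {
	tenants []string             // round-robin order of tenants
	queues  map[string][]Request // FIFO queue per tenant
	next    int
}

func NewScheduler() *Scheduler {
	return &Scheduler{queues: map[string][]Request{}}
}

func (s *Scheduler) Enqueue(r Request) {
	if _, ok := s.queues[r.Tenant]; !ok {
		s.tenants = append(s.tenants, r.Tenant)
	}
	s.queues[r.Tenant] = append(s.queues[r.Tenant], r)
}

// Dequeue returns the next request, visiting tenants in round-robin order.
func (s *Scheduler) Dequeue() (Request, bool) {
	for i := 0; i < len(s.tenants); i++ {
		t := s.tenants[(s.next+i)%len(s.tenants)]
		if q := s.queues[t]; len(q) > 0 {
			s.queues[t] = q[1:]
			s.next = (s.next + i + 1) % len(s.tenants)
			return q[0], true
		}
	}
	return Request{}, false
}
```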
Splitting
The query frontend splits multi-day queries into multiple single-day queries, executing these queries in parallel on downstream queriers and stitching the results back together again. This prevents large, multi-day queries from OOMing a single querier and helps them execute faster.
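A sketch of that splitting step, assuming millisecond timestamps and UTC day boundaries:

```go
// splitByDay breaks [start, end) into day-aligned sub-ranges so each piece
// can be executed on a different querier. Timestamps are Unix milliseconds.
func splitByDay(start, end int64) [][2]int64 {
	const dayMillis = 24 * 60 * 60 * 1000
	var ranges [][2]int64
	for s := start; s < end; {
		// End of the current UTC day, capped at the overall end.
		e := (s/dayMillis + 1) * dayMillis
		if e > end {
			e = end
		}
		ranges = append(ranges, [2]int64{s, e})
		s = e
	}
	return ranges
}
```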
Caching
The query frontend caches query results and reuses them on subsequent queries. If the cached results are incomplete, the query frontend calculates the required subqueries and executes them in parallel on downstream queriers. The query frontend can optionally align queries with their step parameter to improve the cacheability of the query results.
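Step alignment itself is a small calculation; one plausible sketch, again with millisecond timestamps, is to snap the start and end down to multiples of the step so that identical queries issued moments apart map onto the same cache entries:

```go
// alignToStep snaps the query start and end down to multiples of the step.
func alignToStep(start, end, step int64) (int64, int64) {
	return start - start%step, end - end%step
}
```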
Parallelism
The query frontend job accepts gRPC streaming requests from the queriers, which then "pull" requests from the frontend. For high availability it's recommended that you run multiple frontends; the queriers will connect to—and pull requests from—all of them. To reap the benefit of fair scheduling, it is recommended that you run fewer frontends than queriers. Two should suffice in most cases.
Querier
The querier service handles the actual PromQL evaluation of samples stored in long-term storage.
It embeds the chunk store client code for fetching data from long-term storage and communicates with ingesters for more recent data.
Chunk store
The chunk store is Cortex's long-term data store, designed to support interactive querying and sustained writing without the need for background maintenance tasks. It consists of:
- An index for the chunks. This index can be backed by DynamoDB from Amazon Web Services, Bigtable from Google Cloud Platform, or Apache Cassandra.
- A key-value (KV) store for the chunk data itself, which can again be DynamoDB, Bigtable, or Cassandra, or an object store such as Amazon S3.
Unlike the other core components of Cortex, the chunk store is not a separate service, job, or process, but rather a library embedded in the three services that need to access Cortex data: the ingester, querier, and ruler.
The chunk store relies on a unified interface to the "NoSQL" stores—DynamoDB, Bigtable, and Cassandra—that can be used to back the chunk store index. This interface assumes that the index is a collection of entries keyed by:
- A hash key. This is required for all reads and writes.
- A range key. This is required for writes and can be omitted for reads, which can be queried by prefix or range.
The interface works somewhat differently across the supported databases:
- DynamoDB supports range and hash keys natively. Index entries are thus modelled directly as DynamoDB entries, with the hash key as the distribution key and the range as the range key.
- For Bigtable and Cassandra, index entries are modelled as individual column values. The hash key becomes the row key and the range key becomes the column key.
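A minimal Go sketch of such a unified index interface follows; the method names and types are illustrative, not the actual chunk store client API.

```go
// Sketch of a unified index interface over DynamoDB, Bigtable and Cassandra.
// Entries are keyed by a required hash key plus a range key; reads can query
// by hash key alone or restrict the scan to a range-key prefix.
package index

import "context"

type Entry struct {
	HashKey  string
	RangeKey []byte
	Value    []byte
}

type Query struct {
	HashKey        string
	RangeKeyPrefix []byte // optional: restrict the scan to a prefix
}

type Client interface {
	// BatchWrite stores a set of entries; both hash and range keys are required.
	BatchWrite(ctx context.Context, entries []Entry) error
	// QueryPages streams matching entries to the callback until it returns false.
	QueryPages(ctx context.Context, q Query, callback func(Entry) (more bool)) error
}
```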
A set of schemas are used to map the matchers and label sets used on reads and writes to the chunk store into appropriate operations on the index. Schemas have been added as Cortex has evolved, mainly in an attempt to better load balance writes and improve query performance.
The current schema recommendation is the v10 schema.