diff --git a/.github/workflows/sbom-report.yml b/.github/workflows/sbom-report.yml index 8c5e437c4a3..7c4f664faed 100644 --- a/.github/workflows/sbom-report.yml +++ b/.github/workflows/sbom-report.yml @@ -14,7 +14,7 @@ jobs: uses: actions/checkout@v4 - name: Anchore SBOM Action - uses: anchore/sbom-action@v0.17.8 + uses: anchore/sbom-action@v0.17.9 with: artifact-name: ${{ github.event.repository.name }}-spdx.json diff --git a/CHANGELOG.md b/CHANGELOG.md index 6c331f65020..5dca672a8a7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,39 @@ # Changelog ## main / unreleased + +* [CHANGE] Distributor: OTLP and push handler replace all non-UTF8 characters with the unicode replacement character `\uFFFD` in error messages before propagating them. #10236 +* [ENHANCEMENT] Distributor: OTLP receiver now also converts metric metadata. See also https://github.com/prometheus/prometheus/pull/15416. #10168 +* [ENHANCEMENT] Distributor: discard float and histogram samples with duplicated timestamps from each timeseries in a request before the request is forwarded to ingesters. Discarded samples are tracked by the `cortex_discarded_samples_total` metric with reason `sample_duplicate_timestamp`. #10145 + +### Grafana Mimir + +* [BUGFIX] Distributor: Use a boolean to track changes while merging the ReplicaDesc components, rather than comparing the objects directly. #10185 + +### Mixin + +* [BUGFIX] Dashboards: fix how we switch between classic and native histograms. #10018 + +### Jsonnet + +* [CHANGE] Update rollout-operator version to 0.22.0. #10229 + +### Mimirtool + +* [BUGFIX] Fix issue where the `MIMIR_HTTP_PREFIX` environment variable was ignored and the value from `MIMIR_MIMIR_HTTP_PREFIX` was used instead. #10207 + +### Mimir Continuous Test + +### Query-tee + +### Documentation + +* [CHANGE] Add production tips related to cache size, heavy multi-tenancy and latency spikes. #9978 + +### Tools + +## 2.15.0-rc.0 ### Grafana Mimir @@ -16,12 +49,12 @@ * [CHANGE] Distributor: Drop experimental `-distributor.direct-otlp-translation-enabled` flag, since direct OTLP translation is well tested at this point. #9647 * [CHANGE] Ingester: Change `-initial-delay` for circuit breakers to begin when the first request is received, rather than at breaker activation. #9842 * [CHANGE] Query-frontend: apply query pruning before query sharding instead of after. #9913 -* [CHANGE] Ingester: remove experimental flags `-ingest-storage.kafka.ongoing-records-per-fetch` and `-ingest-storage.kafka.startup-records-per-fetch`. They are removed in favour of `-ingest-storage.kafka.max-buffered-bytes`. #9906 * [CHANGE] Ingester: Replace the `cortex_discarded_samples_total` reason label value `sample-out-of-bounds` with `sample-timestamp-too-old`. #9885 * [CHANGE] Ruler: the `/prometheus/config/v1/rules` endpoint no longer returns an error if a rule group is missing in the object storage after being successfully returned when listing the storage, because it could have been deleted in the meantime. #9936 * [CHANGE] Querier: The `.` pattern in regular expressions in PromQL matches newline characters. With this change, regular expressions like `.*` match strings that include `\n`. To maintain the old behaviour, you will have to change regular expressions by replacing all `.` patterns with `[^\n]`, e.g. `foo[^\n]*`. This upgrades PromQL compatibility from Prometheus 2.0 to 3.0. #9844 -* [CHANGE] Querier: Lookback and range selectors are left open and right closed (previously left closed and right closed). 
This change affects queries when the evaluation time perfectly aligns with the sample timestamps. For example assume querying a timeseries with evenly spaced samples exactly 1 minute apart. Previously, a range query with `5m` would usually return 5 samples, or 6 samples if the query evaluation aligns perfectly with a scrape. Now, queries like this will always return 5 samples. This upgrades PromQL compatibility from Prometheus 2.0 to 3.0. #9844 +* [CHANGE] Querier: Lookback and range selectors are left open and right closed (previously left closed and right closed). This change affects queries and subqueries when the evaluation time perfectly aligns with the sample timestamps. For example, assume querying a timeseries with evenly spaced samples exactly 1 minute apart. Previously, a range query with `5m` would usually return 5 samples, or 6 samples if the query evaluation aligns perfectly with a scrape. Now, queries like this will always return 5 samples. This upgrades PromQL compatibility from Prometheus 2.0 to 3.0. #9844 #10188 * [CHANGE] Querier: promql(native histograms): Introduce exponential interpolation. #9844 +* [CHANGE] Remove the deprecated `api.get-request-for-ingester-shutdown-enabled` setting, which was scheduled for removal in 2.15. #10197 * [FEATURE] Querier: add experimental streaming PromQL engine, enabled with `-querier.query-engine=mimir`. #10067 * [FEATURE] Distributor: Add support for `lz4` OTLP compression. #9763 * [FEATURE] Query-frontend: added experimental configuration options `query-frontend.cache-errors` and `query-frontend.results-cache-ttl-for-errors` to allow non-transient responses to be cached. When set to `true`, error responses from hitting limits or bad data are cached for a short TTL. #9028 @@ -43,6 +76,8 @@ * [FEATURE] PromQL: Add experimental `info` function. Experimental functions are disabled by default, but can be enabled by setting `-querier.promql-experimental-functions-enabled=true` in the query-frontend and querier. #9879 * [FEATURE] Distributor: Support promotion of OTel resource attributes to labels. #8271 * [FEATURE] Querier: Add experimental `double_exponential_smoothing` PromQL function. Experimental functions are disabled by default, but can be enabled by setting `-querier.promql-experimental-functions-enabled=true` in the query-frontend and querier. #9844 +* [FEATURE] Distributor: Add experimental `memberlist` KV store for ha_tracker. You can enable it using the `-distributor.ha-tracker.kvstore.store` flag. You can configure Memberlist parameters via the `-memberlist-*` flags. #10054 +* [FEATURE] Distributor: Add experimental `-distributor.otel-keep-identifying-resource-attributes` option to allow keeping `service.instance.id`, `service.name` and `service.namespace` in `target_info` on top of converting them to the `instance` and `job` labels. #10216 * [ENHANCEMENT] Query Frontend: Return server-side `bytes_processed` statistics following the Server-Timing format. #9645 #9985 * [ENHANCEMENT] mimirtool: Add bearer token support for mimirtool's analyze ruler/prometheus commands. #9587 * [ENHANCEMENT] Ruler: Support `exclude_alerts` parameter in `/api/v1/rules` endpoint. #9300 @@ -71,13 +106,13 @@ * [ENHANCEMENT] Compactor: refresh deletion marks when updating the bucket index concurrently. This speeds up updating the bucket index by up to 16 times when there is a lot of block churn (thousands of blocks churning every cleanup cycle). #9881 * [ENHANCEMENT] PromQL: make `sort_by_label` stable. 
#9879 * [ENHANCEMENT] Distributor: Initialize ha_tracker cache before ha_tracker and distributor reach running state and begin serving writes. #9826 #9976 -* [ENHANCEMENT] Ingester: `-ingest-storage.kafka.max-buffered-bytes` to limit the memory for buffered records when using concurrent fetching. #9892 * [ENHANCEMENT] Querier: improve performance and memory consumption of queries that select many series. #9914 -* [ENHANCEMENT] Ruler: Support OAuth2 and proxies in Alertmanager client #9945 +* [ENHANCEMENT] Ruler: Support OAuth2 and proxies in the Alertmanager client. #9945 #10030 * [ENHANCEMENT] Ingester: Add `-blocks-storage.tsdb.bigger-out-of-order-blocks-for-old-samples` to build 24h blocks for out-of-order data belonging to the previous days instead of building smaller 2h blocks. This reduces pressure on compactors and ingesters when the out-of-order samples span multiple days in the past. #9844 #10033 #10035 * [ENHANCEMENT] Distributor: allow a different limit for info series (series ending in `_info`) label count, via `-validation.max-label-names-per-info-series`. #10028 * [ENHANCEMENT] Ingester: do not reuse labels, samples and histograms slices in the write request if there are more entries than 10x the pre-allocated size. This should help to reduce the in-use memory in case of few requests with a very large number of labels, samples or histograms. #10040 * [ENHANCEMENT] Query-Frontend: prune ` and on() (vector(x)==y)` style queries and stop pruning ` < -Inf`. Triggered by https://github.com/prometheus/prometheus/pull/15245. #10026 +* [ENHANCEMENT] Query-Frontend: perform request format validation before processing the request. #10093 * [BUGFIX] Fix issue where functions such as `rate()` over native histograms could return incorrect values if a float stale marker was present in the selected range. #9508 * [BUGFIX] Fix issue where negation of native histograms (e.g. `-some_native_histogram_series`) did nothing. #9508 * [BUGFIX] Fix issue where the `metric might not be a counter, name does not end in _total/_sum/_count/_bucket` annotation would be emitted even if `rate` or `increase` did not have enough samples to compute a result. #9508 @@ -101,6 +136,15 @@ * [BUGFIX] Querier: Fix stddev+stdvar aggregations to treat Infinity consistently. #9844 * [BUGFIX] Ingester: Chunks could have one unnecessary zero byte at the end. #9844 * [BUGFIX] OTLP receiver: Preserve colons and combine multiple consecutive underscores into one when generating metric names in suffix adding mode (`-distributor.otel-metric-suffixes-enabled`). #10075 +* [BUGFIX] PromQL: Ignore native histograms in `clamp`, `clamp_max` and `clamp_min` functions. #10136 +* [BUGFIX] PromQL: Ignore native histograms in `max`, `min`, `stdvar`, `stddev` aggregation operators and instead return an info annotation. #10136 +* [BUGFIX] PromQL: Ignore native histograms when compared to float values with `==`, `!=`, `<`, `>`, `<=`, `>=` and instead return an info annotation. #10136 +* [BUGFIX] PromQL: Return an info annotation if the `quantile` function is used on a float series that does not have an `le` label. #10136 +* [BUGFIX] PromQL: Fix `count_values` to take native histograms into account. #10168 +* [BUGFIX] PromQL: Ignore native histograms in the time functions `day_of_month`, `day_of_week`, `day_of_year`, `days_in_month`, `hour`, `minute`, `month` and `year`, which means they no longer yield any value when encountering a native histogram series. 
#10188 +* [BUGFIX] PromQL: Ignore native histograms in `topk` and `bottomk` functions and return an info annotation instead. #10188 +* [BUGFIX] PromQL: Let `limitk` and `limit_ratio` include native histograms if applicable. #10188 +* [BUGFIX] PromQL: Fix `changes` and `resets` functions to count switches between float and native histogram sample types as changes and resets. #10188 ### Mixin @@ -142,6 +186,7 @@ ### Query-tee * [FEATURE] Added `-proxy.compare-skip-samples-before` to skip samples before the given time when comparing responses. The time can be in RFC3339 format, RFC3339 without the timezone and seconds, or date only. #9515 +* [FEATURE] Add `-backend.config-file` for a YAML configuration file for per-backend options. Currently, it only supports additional HTTP request headers. #10081 * [ENHANCEMENT] Added human-readable timestamps to comparison failure messages. #9665 ### Documentation @@ -252,7 +297,7 @@ * [ENHANCEMENT] Update runtime configuration to read gzip-compressed files with `.gz` extension. #9074 * [ENHANCEMENT] Ingester: add `cortex_lifecycler_read_only` metric which is set to 1 when ingester's lifecycler is set to read-only mode. #9095 * [ENHANCEMENT] Add a new field, `encode_time_seconds`, to query stats log messages, to record the amount of time it takes the query-frontend to encode a response. This does not include any serialization time for downstream components. #9062 -* [ENHANCEMENT] OTLP: If the flag `-distributor.otel-created-timestamp-zero-ingestion-enabled` is true, OTel start timestamps are converted to Prometheus zero samples to mark series start. #9131 #10053 +* [ENHANCEMENT] OTLP: If the flag `-distributor.otel-created-timestamp-zero-ingestion-enabled` is true, OTel start timestamps are converted to Prometheus zero samples to mark series start. #9131 * [ENHANCEMENT] Querier: attach logs emitted during query consistency check to trace span for query. #9213 * [ENHANCEMENT] Query-scheduler: Experimental `-query-scheduler.prioritize-query-components` flag enables the querier-worker queue priority algorithm to take precedence over tenant rotation when dequeuing requests. #9220 * [ENHANCEMENT] Add application credential arguments for Openstack Swift storage backend. #9181 diff --git a/Makefile b/Makefile index a963d7b5053..09a7bfb9cb4 100644 --- a/Makefile +++ b/Makefile @@ -478,6 +478,12 @@ lint: check-makefiles "google.golang.org/grpc/metadata.{FromIncomingContext}=google.golang.org/grpc/metadata.ValueFromIncomingContext" \ ./pkg/... ./cmd/... ./integration/... + # We don't use topic auto-creation because we don't control the num.partitions. + # As a result, the topic can be created with the wrong number of partitions. + faillint -paths \ + "github.com/twmb/franz-go/pkg/kgo.{AllowAutoTopicCreation}" \ + ./pkg/... ./cmd/... ./tools/... ./integration/... + format: ## Run gofmt and goimports. find . $(DONT_FIND) -name '*.pb.go' -prune -o -type f -name '*.go' -exec gofmt -w -s {} \; find . 
$(DONT_FIND) -name '*.pb.go' -prune -o -type f -name '*.go' -exec goimports -w -local github.com/grafana/mimir {} \; diff --git a/RELEASE.md b/RELEASE.md index 4691642a9ad..c0b194a76ca 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -24,7 +24,7 @@ The following table contains past releases and tentative dates for upcoming rele | 2.12.0 | 2024-03-11 | Yuri Nikolic | | 2.13.0 | 2024-06-17 | Dimitar Dimitrov | | 2.14.0 | 2024-10-07 | Vladimir Varankin | -| 2.15.0 | 2024-12-09 | _To be announced_ | +| 2.15.0 | 2024-12-12 | Casie Chen | | 2.16.0 | 2025-03-10 | _To be announced_ | ## Release shepherd responsibilities @@ -276,7 +276,11 @@ To publish a stable release: 1. Update the Mimir version in the following locations: - `operations/mimir/images.libsonnet` (`_images.mimir` and `_images.query_tee` fields) - `operations/mimir-rules-action/Dockerfile` (`grafana/mimirtool` image tag) + 1. Update Jsonnet tests: `make build-jsonnet-tests` + 1. Commit updated tests 1. Update dashboard screenshots + 1. Make sure that operations/mimir-mixin-tools/screenshots/.config is configured according to the directions in [operations/mimir-mixin-tools/screenshots/run.sh](https://github.com/grafana/mimir/blob/main/operations/mimir-mixin-tools/screenshots/run.sh) + 1. Make sure that operations/mimir-mixin-tools/serve/.config is configured according to the directions in [operations/mimir-mixin-tools/serve/run.sh](https://github.com/grafana/mimir/blob/main/operations/mimir-mixin-tools/serve/run.sh) 1. Run `make mixin-screenshots` 1. Review all updated screenshots and ensure no sensitive data is disclosed 1. Open a PR diff --git a/VERSION b/VERSION index b70ae75a88d..90edc54efc1 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.14.1 +2.15.0-rc.0 diff --git a/cmd/metaconvert/Dockerfile b/cmd/metaconvert/Dockerfile index 3bcec2e5e14..67358ebb282 100644 --- a/cmd/metaconvert/Dockerfile +++ b/cmd/metaconvert/Dockerfile @@ -3,7 +3,7 @@ # Provenance-includes-license: Apache-2.0 # Provenance-includes-copyright: The Cortex Authors. -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. diff --git a/cmd/mimir/Dockerfile.alpine b/cmd/mimir/Dockerfile.alpine index b8a14ca5966..156a419c827 100644 --- a/cmd/mimir/Dockerfile.alpine +++ b/cmd/mimir/Dockerfile.alpine @@ -3,7 +3,7 @@ # Provenance-includes-license: Apache-2.0 # Provenance-includes-copyright: The Cortex Authors. -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. diff --git a/cmd/mimir/Dockerfile.continuous-test b/cmd/mimir/Dockerfile.continuous-test index 1ab25be5b6e..65187bf7c3b 100644 --- a/cmd/mimir/Dockerfile.continuous-test +++ b/cmd/mimir/Dockerfile.continuous-test @@ -1,6 +1,6 @@ # SPDX-License-Identifier: AGPL-3.0-only -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. 
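The stable-release steps added to RELEASE.md above boil down to roughly the following sequence. This is a sketch, assuming you run it on the release branch after updating the image versions; the commit message is illustrative, and the `.config` files must be set up per the `run.sh` directions referenced above:

```sh
# Regenerate the Jsonnet tests after bumping the Mimir version, then commit them.
make build-jsonnet-tests
git commit -am "Update Jsonnet tests for 2.15.0-rc.0"

# With operations/mimir-mixin-tools/screenshots/.config and
# operations/mimir-mixin-tools/serve/.config configured per their run.sh files,
# regenerate the dashboard screenshots.
make mixin-screenshots

# Review the updated screenshots and ensure no sensitive data is disclosed.
git diff --stat -- docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/
```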
diff --git a/cmd/mimir/config-descriptor.json b/cmd/mimir/config-descriptor.json index 4e2d6cb681a..3ac7f5c294b 100644 --- a/cmd/mimir/config-descriptor.json +++ b/cmd/mimir/config-descriptor.json @@ -96,17 +96,6 @@ "fieldType": "boolean", "fieldCategory": "advanced" }, - { - "kind": "field", - "name": "get_request_for_ingester_shutdown_enabled", - "required": false, - "desc": "Enable GET requests to the /ingester/shutdown endpoint to trigger an ingester shutdown. This is a potentially dangerous operation and should only be enabled consciously.", - "fieldValue": null, - "fieldDefaultValue": false, - "fieldFlag": "api.get-request-for-ingester-shutdown-enabled", - "fieldType": "boolean", - "fieldCategory": "deprecated" - }, { "kind": "field", "name": "alertmanager_http_prefix", @@ -4795,6 +4784,17 @@ "fieldType": "string", "fieldCategory": "experimental" }, + { + "kind": "field", + "name": "otel_keep_identifying_resource_attributes", + "required": false, + "desc": "Whether to keep identifying OTel resource attributes in the target_info metric on top of converting to job and instance labels.", + "fieldValue": null, + "fieldDefaultValue": false, + "fieldFlag": "distributor.otel-keep-identifying-resource-attributes", + "fieldType": "boolean", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "ingest_storage_read_consistency", @@ -6711,7 +6711,7 @@ "kind": "field", "name": "auto_create_topic_enabled", "required": false, - "desc": "Enable auto-creation of Kafka topic if it doesn't exist.", + "desc": "Enable auto-creation of Kafka topic on startup if it doesn't exist. If creating the topic fails and the topic doesn't already exist, Mimir will fail to start.", "fieldValue": null, "fieldDefaultValue": true, "fieldFlag": "ingest-storage.kafka.auto-create-topic-enabled", @@ -6721,9 +6721,9 @@ "kind": "field", "name": "auto_create_topic_default_partitions", "required": false, - "desc": "When auto-creation of Kafka topic is enabled and this value is positive, Kafka's num.partitions configuration option is set on Kafka brokers with this value when Mimir component that uses Kafka starts. This configuration option specifies the default number of partitions that the Kafka broker uses for auto-created topics. Note that this is a Kafka-cluster wide setting, and applies to any auto-created topic. If the setting of num.partitions fails, Mimir proceeds anyways, but auto-created topics could have an incorrect number of partitions.", + "desc": "When auto-creation of Kafka topic is enabled and this value is positive, Mimir will create the topic with this number of partitions. When the value is -1 the Kafka broker will use the default number of partitions (num.partitions configuration).", "fieldValue": null, - "fieldDefaultValue": 0, + "fieldDefaultValue": -1, "fieldFlag": "ingest-storage.kafka.auto-create-topic-default-partitions", "fieldType": "int" }, @@ -6759,22 +6759,12 @@ }, { "kind": "field", - "name": "startup_fetch_concurrency", - "required": false, - "desc": "The number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. 0 to disable.", - "fieldValue": null, - "fieldDefaultValue": 0, - "fieldFlag": "ingest-storage.kafka.startup-fetch-concurrency", - "fieldType": "int" - }, - { - "kind": "field", - "name": "ongoing_fetch_concurrency", + "name": "fetch_concurrency_max", "required": false, - "desc": "The number of concurrent fetch requests that the ingester makes when reading data continuously from Kafka after startup. 
Is disabled unless ingest-storage.kafka.startup-fetch-concurrency is greater than 0. 0 to disable.", + "desc": "The maximum number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. Concurrent fetch requests are issued only when there is sufficient backlog of records to consume. 0 to disable.", "fieldValue": null, "fieldDefaultValue": 0, - "fieldFlag": "ingest-storage.kafka.ongoing-fetch-concurrency", + "fieldFlag": "ingest-storage.kafka.fetch-concurrency-max", "fieldType": "int" }, { @@ -12425,6 +12415,17 @@ "fieldDefaultValue": "", "fieldFlag": "ruler.alertmanager-client.oauth.scopes", "fieldType": "string" + }, + { + "kind": "field", + "name": "endpoint_params", + "required": false, + "desc": "Optional additional URL parameters to send to the token URL.", + "fieldValue": null, + "fieldDefaultValue": {}, + "fieldFlag": "ruler.alertmanager-client.oauth.endpoint-params", + "fieldType": "map of string to string", + "fieldCategory": "advanced" } ], "fieldValue": null, diff --git a/cmd/mimir/help-all.txt.tmpl b/cmd/mimir/help-all.txt.tmpl index 40dd0d5d3aa..08bc71314d3 100644 --- a/cmd/mimir/help-all.txt.tmpl +++ b/cmd/mimir/help-all.txt.tmpl @@ -359,8 +359,6 @@ Usage of ./cmd/mimir/mimir: [experimental] Enable UTF-8 strict mode. Allows UTF-8 characters in the matchers for routes and inhibition rules, in silences, and in the labels for alerts. It is recommended that all tenants run the `migrate-utf8` command in mimirtool before enabling this mode. Otherwise, some tenant configurations might fail to load. For more information, refer to [Enable UTF-8](https://grafana.com/docs/mimir//references/architecture/components/alertmanager/#enable-utf-8). Enabling and then disabling UTF-8 strict mode can break existing Alertmanager configurations if tenants added UTF-8 characters to their Alertmanager configuration while it was enabled. -alertmanager.web.external-url string The URL under which Alertmanager is externally reachable (eg. could be different than -http.alertmanager-http-prefix in case Alertmanager is served via a reverse proxy). This setting is used both to configure the internal requests router and to generate links in alert templates. If the external URL has a path portion, it will be used to prefix all HTTP endpoints served by Alertmanager, both the UI and API. (default http://localhost:8080/alertmanager) - -api.get-request-for-ingester-shutdown-enabled - [deprecated] Enable GET requests to the /ingester/shutdown endpoint to trigger an ingester shutdown. This is a potentially dangerous operation and should only be enabled consciously. -api.skip-label-count-validation-header-enabled Allows to disable enforcement of the label count limit "max_label_names_per_series" via X-Mimir-SkipLabelCountValidation header on the http write path. Allowing this for external clients allows any client to send invalid label counts. After enabling it, requests with a specific HTTP header set to true will not have label counts validated. -api.skip-label-name-validation-header-enabled @@ -1389,6 +1387,8 @@ Usage of ./cmd/mimir/mimir: [experimental] Enable metric relabeling for the tenant. This configuration option can be used to forcefully disable metric relabeling on a per-tenant basis. (default true) -distributor.otel-created-timestamp-zero-ingestion-enabled [experimental] Whether to enable translation of OTel start timestamps to Prometheus zero samples in the OTLP endpoint. 
+ -distributor.otel-keep-identifying-resource-attributes + [experimental] Whether to keep identifying OTel resource attributes in the target_info metric on top of converting to job and instance labels. -distributor.otel-metric-suffixes-enabled Whether to enable automatic suffixes to names of metrics ingested through OTLP. -distributor.otel-promote-resource-attributes comma-separated-list-of-strings @@ -1498,9 +1498,9 @@ Usage of ./cmd/mimir/mimir: -ingest-storage.kafka.address string The Kafka backend address. -ingest-storage.kafka.auto-create-topic-default-partitions int - When auto-creation of Kafka topic is enabled and this value is positive, Kafka's num.partitions configuration option is set on Kafka brokers with this value when Mimir component that uses Kafka starts. This configuration option specifies the default number of partitions that the Kafka broker uses for auto-created topics. Note that this is a Kafka-cluster wide setting, and applies to any auto-created topic. If the setting of num.partitions fails, Mimir proceeds anyways, but auto-created topics could have an incorrect number of partitions. + When auto-creation of Kafka topic is enabled and this value is positive, Mimir will create the topic with this number of partitions. When the value is -1 the Kafka broker will use the default number of partitions (num.partitions configuration). (default -1) -ingest-storage.kafka.auto-create-topic-enabled - Enable auto-creation of Kafka topic if it doesn't exist. (default true) + Enable auto-creation of Kafka topic on startup if it doesn't exist. If creating the topic fails and the topic doesn't already exist, Mimir will fail to start. (default true) -ingest-storage.kafka.client-id string The Kafka client ID. -ingest-storage.kafka.consume-from-position-at-startup string @@ -1513,6 +1513,8 @@ Usage of ./cmd/mimir/mimir: How frequently a consumer should commit the consumed offset to Kafka. The last committed offset is used at startup to continue the consumption from where it was left. (default 1s) -ingest-storage.kafka.dial-timeout duration The maximum time allowed to open a connection to a Kafka broker. (default 2s) + -ingest-storage.kafka.fetch-concurrency-max int + The maximum number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. Concurrent fetch requests are issued only when there is sufficient backlog of records to consume. 0 to disable. -ingest-storage.kafka.ingestion-concurrency-batch-size int The number of timeseries to batch together before ingesting to the TSDB head. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0. (default 150) -ingest-storage.kafka.ingestion-concurrency-estimated-bytes-per-sample int @@ -1531,8 +1533,6 @@ Usage of ./cmd/mimir/mimir: The maximum number of buffered records ready to be processed. This limit applies to the sum of all inflight requests. Set to 0 to disable the limit. (default 100000000) -ingest-storage.kafka.max-consumer-lag-at-startup duration The guaranteed maximum lag before a consumer is considered to have caught up reading from a partition at startup, becomes ACTIVE in the hash ring and passes the readiness check. Set both -ingest-storage.kafka.target-consumer-lag-at-startup and -ingest-storage.kafka.max-consumer-lag-at-startup to 0 to disable waiting for maximum consumer lag being honored at startup. 
(default 15s) - -ingest-storage.kafka.ongoing-fetch-concurrency int - The number of concurrent fetch requests that the ingester makes when reading data continuously from Kafka after startup. Is disabled unless ingest-storage.kafka.startup-fetch-concurrency is greater than 0. 0 to disable. -ingest-storage.kafka.producer-max-buffered-bytes int The maximum size of (uncompressed) buffered and unacknowledged produced records sent to Kafka. The produce request fails once this limit is reached. This limit is per Kafka client. 0 to disable the limit. (default 1073741824) -ingest-storage.kafka.producer-max-record-size-bytes int @@ -1541,8 +1541,6 @@ Usage of ./cmd/mimir/mimir: The password used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password. -ingest-storage.kafka.sasl-username string The username used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password. - -ingest-storage.kafka.startup-fetch-concurrency int - The number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. 0 to disable. -ingest-storage.kafka.target-consumer-lag-at-startup duration The best-effort maximum lag a consumer tries to achieve at startup. Set both -ingest-storage.kafka.target-consumer-lag-at-startup and -ingest-storage.kafka.max-consumer-lag-at-startup to 0 to disable waiting for maximum consumer lag being honored at startup. (default 2s) -ingest-storage.kafka.topic string @@ -2829,6 +2827,8 @@ Usage of ./cmd/mimir/mimir: OAuth2 client ID. Enables the use of OAuth2 for authenticating with Alertmanager. -ruler.alertmanager-client.oauth.client_secret string OAuth2 client secret. + -ruler.alertmanager-client.oauth.endpoint-params value + Optional additional URL parameters to send to the token URL. (default {}) -ruler.alertmanager-client.oauth.scopes comma-separated-list-of-strings Optional scopes to include with the token request. -ruler.alertmanager-client.oauth.token_url string diff --git a/cmd/mimir/help.txt.tmpl b/cmd/mimir/help.txt.tmpl index 665632c8491..f25ae81834b 100644 --- a/cmd/mimir/help.txt.tmpl +++ b/cmd/mimir/help.txt.tmpl @@ -398,9 +398,9 @@ Usage of ./cmd/mimir/mimir: -ingest-storage.kafka.address string The Kafka backend address. -ingest-storage.kafka.auto-create-topic-default-partitions int - When auto-creation of Kafka topic is enabled and this value is positive, Kafka's num.partitions configuration option is set on Kafka brokers with this value when Mimir component that uses Kafka starts. This configuration option specifies the default number of partitions that the Kafka broker uses for auto-created topics. Note that this is a Kafka-cluster wide setting, and applies to any auto-created topic. If the setting of num.partitions fails, Mimir proceeds anyways, but auto-created topics could have an incorrect number of partitions. + When auto-creation of Kafka topic is enabled and this value is positive, Mimir will create the topic with this number of partitions. When the value is -1 the Kafka broker will use the default number of partitions (num.partitions configuration). (default -1) -ingest-storage.kafka.auto-create-topic-enabled - Enable auto-creation of Kafka topic if it doesn't exist. (default true) + Enable auto-creation of Kafka topic on startup if it doesn't exist. If creating the topic fails and the topic doesn't already exist, Mimir will fail to start. (default true) -ingest-storage.kafka.client-id string The Kafka client ID. 
-ingest-storage.kafka.consume-from-position-at-startup string @@ -413,6 +413,8 @@ Usage of ./cmd/mimir/mimir: How frequently a consumer should commit the consumed offset to Kafka. The last committed offset is used at startup to continue the consumption from where it was left. (default 1s) -ingest-storage.kafka.dial-timeout duration The maximum time allowed to open a connection to a Kafka broker. (default 2s) + -ingest-storage.kafka.fetch-concurrency-max int + The maximum number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. Concurrent fetch requests are issued only when there is sufficient backlog of records to consume. 0 to disable. -ingest-storage.kafka.ingestion-concurrency-batch-size int The number of timeseries to batch together before ingesting to the TSDB head. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0. (default 150) -ingest-storage.kafka.ingestion-concurrency-estimated-bytes-per-sample int @@ -431,8 +433,6 @@ Usage of ./cmd/mimir/mimir: The maximum number of buffered records ready to be processed. This limit applies to the sum of all inflight requests. Set to 0 to disable the limit. (default 100000000) -ingest-storage.kafka.max-consumer-lag-at-startup duration The guaranteed maximum lag before a consumer is considered to have caught up reading from a partition at startup, becomes ACTIVE in the hash ring and passes the readiness check. Set both -ingest-storage.kafka.target-consumer-lag-at-startup and -ingest-storage.kafka.max-consumer-lag-at-startup to 0 to disable waiting for maximum consumer lag being honored at startup. (default 15s) - -ingest-storage.kafka.ongoing-fetch-concurrency int - The number of concurrent fetch requests that the ingester makes when reading data continuously from Kafka after startup. Is disabled unless ingest-storage.kafka.startup-fetch-concurrency is greater than 0. 0 to disable. -ingest-storage.kafka.producer-max-buffered-bytes int The maximum size of (uncompressed) buffered and unacknowledged produced records sent to Kafka. The produce request fails once this limit is reached. This limit is per Kafka client. 0 to disable the limit. (default 1073741824) -ingest-storage.kafka.producer-max-record-size-bytes int @@ -441,8 +441,6 @@ Usage of ./cmd/mimir/mimir: The password used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password. -ingest-storage.kafka.sasl-username string The username used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password. - -ingest-storage.kafka.startup-fetch-concurrency int - The number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. 0 to disable. -ingest-storage.kafka.target-consumer-lag-at-startup duration The best-effort maximum lag a consumer tries to achieve at startup. Set both -ingest-storage.kafka.target-consumer-lag-at-startup and -ingest-storage.kafka.max-consumer-lag-at-startup to 0 to disable waiting for maximum consumer lag being honored at startup. (default 2s) -ingest-storage.kafka.topic string @@ -715,6 +713,8 @@ Usage of ./cmd/mimir/mimir: OAuth2 client ID. Enables the use of OAuth2 for authenticating with Alertmanager. -ruler.alertmanager-client.oauth.client_secret string OAuth2 client secret. + -ruler.alertmanager-client.oauth.endpoint-params value + Optional additional URL parameters to send to the token URL. 
(default {}) -ruler.alertmanager-client.oauth.scopes comma-separated-list-of-strings Optional scopes to include with the token request. -ruler.alertmanager-client.oauth.token_url string diff --git a/cmd/mimirtool/Dockerfile b/cmd/mimirtool/Dockerfile index 4209a545e1b..73653218be3 100644 --- a/cmd/mimirtool/Dockerfile +++ b/cmd/mimirtool/Dockerfile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: Apache-2.0 -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. diff --git a/cmd/query-tee/Dockerfile b/cmd/query-tee/Dockerfile index 3c8e811b51f..8584716ef13 100644 --- a/cmd/query-tee/Dockerfile +++ b/cmd/query-tee/Dockerfile @@ -3,7 +3,7 @@ # Provenance-includes-license: Apache-2.0 # Provenance-includes-copyright: The Cortex Authors. -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. diff --git a/development/mimir-ingest-storage/dev.dockerfile b/development/mimir-ingest-storage/dev.dockerfile index 0b9b18e1652..2572deb6dab 100644 --- a/development/mimir-ingest-storage/dev.dockerfile +++ b/development/mimir-ingest-storage/dev.dockerfile @@ -3,7 +3,7 @@ FROM $BUILD_IMAGE ENV CGO_ENABLED=0 RUN go install github.com/go-delve/delve/cmd/dlv@v1.22.0 -FROM alpine:3.20.3 +FROM alpine:3.21.0 RUN mkdir /mimir WORKDIR /mimir diff --git a/development/mimir-ingest-storage/docker-compose.jsonnet b/development/mimir-ingest-storage/docker-compose.jsonnet index b120b416836..544dbdc729e 100644 --- a/development/mimir-ingest-storage/docker-compose.jsonnet +++ b/development/mimir-ingest-storage/docker-compose.jsonnet @@ -54,8 +54,7 @@ std.manifestYamlDoc({ '-ingester.ring.prefix=exclusive-prefix', '-ingest-storage.kafka.consume-from-position-at-startup=end', '-ingest-storage.kafka.consume-from-timestamp-at-startup=0', - '-ingest-storage.kafka.startup-fetch-concurrency=15', - '-ingest-storage.kafka.ongoing-fetch-concurrency=2', + '-ingest-storage.kafka.fetch-concurrency-max=2', '-ingest-storage.kafka.ingestion-concurrency-max=2', '-ingest-storage.kafka.ingestion-concurrency-batch-size=150', ], diff --git a/development/mimir-ingest-storage/docker-compose.yml b/development/mimir-ingest-storage/docker-compose.yml index acb9a6cbb93..39bf2e18692 100644 --- a/development/mimir-ingest-storage/docker-compose.yml +++ b/development/mimir-ingest-storage/docker-compose.yml @@ -322,7 +322,7 @@ "command": - "sh" - "-c" - - "exec ./mimir -config.file=./config/mimir.yaml -target=ingester -activity-tracker.filepath=/activity/mimir-write-zone-c-61 -ingester.ring.instance-availability-zone=zone-c -ingester.ring.instance-id=ingester-zone-c-61 -ingester.partition-ring.prefix=exclusive-prefix -ingester.ring.prefix=exclusive-prefix -ingest-storage.kafka.consume-from-position-at-startup=end -ingest-storage.kafka.consume-from-timestamp-at-startup=0 -ingest-storage.kafka.startup-fetch-concurrency=15 -ingest-storage.kafka.ongoing-fetch-concurrency=2 -ingest-storage.kafka.ingestion-concurrency-max=2 -ingest-storage.kafka.ingestion-concurrency-batch-size=150" + - "exec ./mimir -config.file=./config/mimir.yaml -target=ingester -activity-tracker.filepath=/activity/mimir-write-zone-c-61 -ingester.ring.instance-availability-zone=zone-c 
-ingester.ring.instance-id=ingester-zone-c-61 -ingester.partition-ring.prefix=exclusive-prefix -ingester.ring.prefix=exclusive-prefix -ingest-storage.kafka.consume-from-position-at-startup=end -ingest-storage.kafka.consume-from-timestamp-at-startup=0 -ingest-storage.kafka.fetch-concurrency-max=2 -ingest-storage.kafka.ingestion-concurrency-max=2 -ingest-storage.kafka.ingestion-concurrency-batch-size=150" "depends_on": "kafka_1": "condition": "service_healthy" diff --git a/development/mimir-microservices-mode/dev.dockerfile b/development/mimir-microservices-mode/dev.dockerfile index 5b4fa84cd13..63eea13f8e6 100644 --- a/development/mimir-microservices-mode/dev.dockerfile +++ b/development/mimir-microservices-mode/dev.dockerfile @@ -3,7 +3,7 @@ FROM $BUILD_IMAGE ENV CGO_ENABLED=0 RUN go install github.com/go-delve/delve/cmd/dlv@v1.23.0 -FROM alpine:3.20.3 +FROM alpine:3.21.0 RUN mkdir /mimir WORKDIR /mimir diff --git a/development/mimir-monolithic-mode-with-swift-storage/dev.dockerfile b/development/mimir-monolithic-mode-with-swift-storage/dev.dockerfile index 29cdb37ffa0..56b8fccb452 100644 --- a/development/mimir-monolithic-mode-with-swift-storage/dev.dockerfile +++ b/development/mimir-monolithic-mode-with-swift-storage/dev.dockerfile @@ -1,4 +1,4 @@ -FROM alpine:3.20.3 +FROM alpine:3.21.0 RUN mkdir /mimir WORKDIR /mimir diff --git a/development/mimir-monolithic-mode/config/runtime.yaml b/development/mimir-monolithic-mode/config/runtime.yaml index cf14c302761..9374fcb0233 100644 --- a/development/mimir-monolithic-mode/config/runtime.yaml +++ b/development/mimir-monolithic-mode/config/runtime.yaml @@ -1 +1,4 @@ # This file can be used to set overrides or other runtime config. +overrides: + anonymous: + otel_keep_identifying_resource_attributes: true diff --git a/development/mimir-monolithic-mode/dev.dockerfile b/development/mimir-monolithic-mode/dev.dockerfile index ae113971b7b..8f5ba3f70ee 100644 --- a/development/mimir-monolithic-mode/dev.dockerfile +++ b/development/mimir-monolithic-mode/dev.dockerfile @@ -1,4 +1,4 @@ -FROM alpine:3.20.3 +FROM alpine:3.21.0 RUN mkdir /mimir WORKDIR /mimir diff --git a/development/mimir-monolithic-mode/docker-compose.yml b/development/mimir-monolithic-mode/docker-compose.yml index 271a5835dce..664cb1226b8 100644 --- a/development/mimir-monolithic-mode/docker-compose.yml +++ b/development/mimir-monolithic-mode/docker-compose.yml @@ -32,7 +32,7 @@ services: environment: - GF_AUTH_ANONYMOUS_ENABLED=true - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin - image: grafana/grafana:11.3.1 + image: grafana/grafana:11.4.0 ports: - 3000:3000 volumes: @@ -53,7 +53,7 @@ services: grafana-alloy: profiles: - grafana-alloy - image: grafana/alloy:v1.5.0 + image: grafana/alloy:v1.5.1 command: ["run", "--server.http.listen-addr=0.0.0.0:9092", "--storage.path=/var/lib/alloy/data", "/etc/alloy/config.alloy"] volumes: - ./config/config.alloy:/etc/alloy/config.alloy @@ -111,7 +111,7 @@ services: - .data-mimir-2:/data:delegated otel-remote-write: - image: otel/opentelemetry-collector-contrib:0.114.0 + image: otel/opentelemetry-collector-contrib:0.115.1 profiles: - otel-collector-remote-write volumes: @@ -126,7 +126,7 @@ services: - 55679:55679 # zpages extension otel-otlp: - image: otel/opentelemetry-collector-contrib:0.114.0 + image: otel/opentelemetry-collector-contrib:0.115.1 profiles: - otel-collector-otlp-push volumes: diff --git a/development/mimir-read-write-mode/dev.dockerfile b/development/mimir-read-write-mode/dev.dockerfile index 0b9b18e1652..2572deb6dab 100644 --- 
a/development/mimir-read-write-mode/dev.dockerfile +++ b/development/mimir-read-write-mode/dev.dockerfile @@ -3,7 +3,7 @@ FROM $BUILD_IMAGE ENV CGO_ENABLED=0 RUN go install github.com/go-delve/delve/cmd/dlv@v1.22.0 -FROM alpine:3.20.3 +FROM alpine:3.21.0 RUN mkdir /mimir WORKDIR /mimir diff --git a/docs/sources/mimir/configure/about-versioning.md b/docs/sources/mimir/configure/about-versioning.md index 892ca5d0296..3eb5a0de095 100644 --- a/docs/sources/mimir/configure/about-versioning.md +++ b/docs/sources/mimir/configure/about-versioning.md @@ -88,6 +88,10 @@ The following features are currently experimental: - `-distributor.otel-created-timestamp-zero-ingestion-enabled` - Promote a certain set of OTel resource attributes to labels - `-distributor.promote-otel-resource-attributes` + - Add experimental `memberlist` key-value store for ha_tracker. Note that this feature is `experimental`, as the upper limits of propagation times have not yet been validated. Additionally, cleanup operations have not yet been implemented for the memberlist entries. + - `-distributor.ha-tracker.kvstore.store` + - Allow keeping OpenTelemetry `service.instance.id`, `service.name` and `service.namespace` resource attributes in `target_info` on top of converting them to the `instance` and `job` labels. + - `-distributor.otel-keep-identifying-resource-attributes` - Hash ring - Disabling ring heartbeat timeouts - `-distributor.ring.heartbeat-timeout=0` diff --git a/docs/sources/mimir/configure/configuration-parameters/index.md b/docs/sources/mimir/configure/configuration-parameters/index.md index bfd77794a8a..f9a147e7bca 100644 --- a/docs/sources/mimir/configure/configuration-parameters/index.md +++ b/docs/sources/mimir/configure/configuration-parameters/index.md @@ -149,12 +149,6 @@ api: # CLI flag: -api.skip-label-count-validation-header-enabled [skip_label_count_validation_header_enabled: | default = false] - # (deprecated) Enable GET requests to the /ingester/shutdown endpoint to - # trigger an ingester shutdown. This is a potentially dangerous operation and - # should only be enabled consciously. - # CLI flag: -api.get-request-for-ingester-shutdown-enabled - [get_request_for_ingester_shutdown_enabled: | default = false] - # (advanced) HTTP URL path under which the Alertmanager ui and api will be # served. # CLI flag: -http.alertmanager-http-prefix @@ -804,9 +798,8 @@ ha_tracker: # CLI flag: -distributor.ha-tracker.failover-timeout [ha_tracker_failover_timeout: | default = 30s] - # Backend storage to use for the ring. Please be aware that memberlist is not - # supported by the HA tracker since gossip propagation is too slow for HA - # purposes. + # Backend storage to use for the ring. Note that memberlist support is + # experimental. kvstore: # Backend storage to use for the ring. Supported values are: consul, etcd, # inmemory, memberlist, multi. @@ -1972,6 +1965,10 @@ alertmanager_client: # CLI flag: -ruler.alertmanager-client.oauth.scopes [scopes: | default = ""] + # (advanced) Optional additional URL parameters to send to the token URL. + # CLI flag: -ruler.alertmanager-client.oauth.endpoint-params + [endpoint_params: | default = {}] + # (advanced) Optional HTTP, HTTPS via CONNECT, or SOCKS5 proxy URL to route # requests through. Applies to all requests, including auxiliary traffic, such # as OAuth token requests. 
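To illustrate the new `endpoint-params` option documented above, a ruler could be launched along these lines. This is a hypothetical sketch: the URLs, credentials, and the `audience` parameter are made-up values, and the value syntax for the map-valued flag is an assumption based on the `{}` default shown above, so verify it against the configuration reference:

```sh
# Sketch: ruler Alertmanager client authenticating via OAuth2, passing an
# extra URL parameter to the token endpoint with the new endpoint-params flag.
# All values below are placeholders; the JSON-style map value is an assumption.
mimir -target=ruler \
  -ruler.alertmanager-url=https://alertmanager.example.com/alertmanager \
  -ruler.alertmanager-client.oauth.client_id=my-client-id \
  -ruler.alertmanager-client.oauth.client_secret=my-client-secret \
  -ruler.alertmanager-client.oauth.token_url=https://auth.example.com/oauth2/token \
  -ruler.alertmanager-client.oauth.scopes=alerts:write \
  -ruler.alertmanager-client.oauth.endpoint-params='{"audience": "alertmanager"}'
```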
@@ -3800,6 +3797,11 @@ The `limits` block configures default and per-tenant limits imposed by component # CLI flag: -distributor.otel-promote-resource-attributes [promote_otel_resource_attributes: | default = ""] +# (experimental) Whether to keep identifying OTel resource attributes in the +# target_info metric on top of converting to job and instance labels. +# CLI flag: -distributor.otel-keep-identifying-resource-attributes +[otel_keep_identifying_resource_attributes: | default = false] + # (experimental) The default consistency level to enforce for queries when using # the ingest storage. Supports values: strong, eventual. # CLI flag: -ingest-storage.read-consistency @@ -3910,20 +3912,18 @@ kafka: # CLI flag: -ingest-storage.kafka.max-consumer-lag-at-startup [max_consumer_lag_at_startup: | default = 15s] - # Enable auto-creation of Kafka topic if it doesn't exist. + # Enable auto-creation of Kafka topic on startup if it doesn't exist. If + # creating the topic fails and the topic doesn't already exist, Mimir will + # fail to start. # CLI flag: -ingest-storage.kafka.auto-create-topic-enabled [auto_create_topic_enabled: | default = true] # When auto-creation of Kafka topic is enabled and this value is positive, - # Kafka's num.partitions configuration option is set on Kafka brokers with - # this value when Mimir component that uses Kafka starts. This configuration - # option specifies the default number of partitions that the Kafka broker uses - # for auto-created topics. Note that this is a Kafka-cluster wide setting, and - # applies to any auto-created topic. If the setting of num.partitions fails, - # Mimir proceeds anyways, but auto-created topics could have an incorrect - # number of partitions. + # Mimir will create the topic with this number of partitions. When the value + # is -1 the Kafka broker will use the default number of partitions + # (num.partitions configuration). # CLI flag: -ingest-storage.kafka.auto-create-topic-default-partitions - [auto_create_topic_default_partitions: | default = 0] + [auto_create_topic_default_partitions: | default = -1] # The maximum size of a Kafka record data that should be generated by the # producer. An incoming write request larger than this size is split into @@ -3943,17 +3943,11 @@ kafka: # CLI flag: -ingest-storage.kafka.wait-strong-read-consistency-timeout [wait_strong_read_consistency_timeout: | default = 20s] - # The number of concurrent fetch requests that the ingester makes when reading - # data from Kafka during startup. 0 to disable. - # CLI flag: -ingest-storage.kafka.startup-fetch-concurrency - [startup_fetch_concurrency: | default = 0] - - # The number of concurrent fetch requests that the ingester makes when reading - # data continuously from Kafka after startup. Is disabled unless - # ingest-storage.kafka.startup-fetch-concurrency is greater than 0. 0 to - # disable. - # CLI flag: -ingest-storage.kafka.ongoing-fetch-concurrency - [ongoing_fetch_concurrency: | default = 0] + # The maximum number of concurrent fetch requests that the ingester makes when + # reading data from Kafka during startup. Concurrent fetch requests are issued + # only when there is sufficient backlog of records to consume. 0 to disable. + # CLI flag: -ingest-storage.kafka.fetch-concurrency-max + [fetch_concurrency_max: | default = 0] # When enabled, the fetch request MaxBytes field is computed using the # compressed size of previous records. 
When disabled, MaxBytes is computed diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager-resources/mimir-alertmanager-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager-resources/mimir-alertmanager-resources.png index 1d95ea056a1..a3306db4f89 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager-resources/mimir-alertmanager-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager-resources/mimir-alertmanager-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager/mimir-alertmanager.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager/mimir-alertmanager.png index b9700f2b33c..f8fcabb6a0b 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager/mimir-alertmanager.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/alertmanager/mimir-alertmanager.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor-resources/mimir-compactor-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor-resources/mimir-compactor-resources.png index 8be61a18f28..3d69e138ca2 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor-resources/mimir-compactor-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor-resources/mimir-compactor-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor/mimir-compactor.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor/mimir-compactor.png index 4a06de8c18e..4ab87041851 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor/mimir-compactor.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/compactor/mimir-compactor.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/config/mimir-config.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/config/mimir-config.png index 2973a77615d..97c61f61dc1 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/config/mimir-config.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/config/mimir-config.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/object-store/mimir-object-store.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/object-store/mimir-object-store.png index 6faa3207b60..85ddc5c4b5c 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/object-store/mimir-object-store.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/object-store/mimir-object-store.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overrides/mimir-overrides.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overrides/mimir-overrides.png index 641548e84a6..c73b4769d32 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overrides/mimir-overrides.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overrides/mimir-overrides.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-networking/mimir-overview-networking.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-networking/mimir-overview-networking.png index 
a22edf7c7eb..2523b0f8698 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-networking/mimir-overview-networking.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-networking/mimir-overview-networking.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-resources/mimir-overview-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-resources/mimir-overview-resources.png index b6b35ed6836..347d8c32e3a 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-resources/mimir-overview-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview-resources/mimir-overview-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview/mimir-overview.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview/mimir-overview.png index f8ad4cd3a27..8260b9fc261 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview/mimir-overview.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/overview/mimir-overview.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/queries/mimir-queries.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/queries/mimir-queries.png index 452f894d42e..ed9087eef72 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/queries/mimir-queries.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/queries/mimir-queries.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-networking/mimir-reads-networking.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-networking/mimir-reads-networking.png index 763b8ac8b2a..2e90a6cf784 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-networking/mimir-reads-networking.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-networking/mimir-reads-networking.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-resources/mimir-reads-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-resources/mimir-reads-resources.png index 4c598e94e7f..6c04029795e 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-resources/mimir-reads-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads-resources/mimir-reads-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads/mimir-reads.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads/mimir-reads.png index 8c1c35ed860..3d114c52a14 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads/mimir-reads.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/reads/mimir-reads.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-networking/mimir-remote-ruler-reads-networking.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-networking/mimir-remote-ruler-reads-networking.png index 8e68eccb0ca..12185df986f 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-networking/mimir-remote-ruler-reads-networking.png and 
b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-networking/mimir-remote-ruler-reads-networking.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-resources/mimir-remote-ruler-reads-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-resources/mimir-remote-ruler-reads-resources.png index 32437d0ddd8..4d227fab442 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-resources/mimir-remote-ruler-reads-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads-resources/mimir-remote-ruler-reads-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads/mimir-remote-ruler-reads.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads/mimir-remote-ruler-reads.png index 750cf59f962..7162abbe627 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads/mimir-remote-ruler-reads.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/remote-ruler-reads/mimir-remote-ruler-reads.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/rollout-progress/mimir-rollout-progress.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/rollout-progress/mimir-rollout-progress.png index 47e29658ff7..27ec7e70b53 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/rollout-progress/mimir-rollout-progress.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/rollout-progress/mimir-rollout-progress.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/ruler/mimir-ruler.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/ruler/mimir-ruler.png index acc25f6a9a8..eac4e73006b 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/ruler/mimir-ruler.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/ruler/mimir-ruler.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/scaling/mimir-scaling.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/scaling/mimir-scaling.png index 8d80bdeb254..27e76623d2b 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/scaling/mimir-scaling.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/scaling/mimir-scaling.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/tenants/mimir-tenants.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/tenants/mimir-tenants.png index e3733d9ca60..62c69a59ae8 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/tenants/mimir-tenants.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/tenants/mimir-tenants.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-networking/mimir-writes-networking.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-networking/mimir-writes-networking.png index 3f6ec7dd65f..301b2bc3875 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-networking/mimir-writes-networking.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-networking/mimir-writes-networking.png differ diff --git 
a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-resources/mimir-writes-resources.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-resources/mimir-writes-resources.png index 62284ba6ee0..bcc53ace439 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-resources/mimir-writes-resources.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes-resources/mimir-writes-resources.png differ diff --git a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes/mimir-writes.png b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes/mimir-writes.png index 5c213383841..316b3020b7a 100644 Binary files a/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes/mimir-writes.png and b/docs/sources/mimir/manage/monitor-grafana-mimir/dashboards/writes/mimir-writes.png differ diff --git a/docs/sources/mimir/manage/run-production-environment/production-tips/index.md b/docs/sources/mimir/manage/run-production-environment/production-tips/index.md index eae5143d4d0..2435d0493a5 100644 --- a/docs/sources/mimir/manage/run-production-environment/production-tips/index.md +++ b/docs/sources/mimir/manage/run-production-environment/production-tips/index.md @@ -147,6 +147,12 @@ The chunks caches store portions of time series samples fetched from object stor Entries in this cache tend to be large (several kilobytes) and are fetched in batches by the store-gateway components. This results in higher bandwidth usage compared to other caches. +### Cache size + +The Memcached [extstore](https://docs.memcached.org/features/flashstorage/) feature allows you to extend Memcached’s memory space onto flash (or similar) storage. + +For a real-world example, refer to [how we scaled Grafana Cloud Logs' Memcached cluster to 50TB and improved reliability](https://grafana.com/blog/2023/08/23/how-we-scaled-grafana-cloud-logs-memcached-cluster-to-50tb-and-improved-reliability/). + ## Security We recommend securing the Grafana Mimir cluster. @@ -176,3 +182,20 @@ To configure gRPC compression, use the following CLI flags or their YAML equival | `-ruler.query-frontend.grpc-client-config.grpc-compression` | `ruler.query_frontend.grpc_client_config.grpc_compression` | | `-alertmanager.alertmanager-client.grpc-compression` | `alertmanager.alertmanager_client.grpc_compression` | | `-ingester.client.grpc-compression` | `ingester_client.grpc_client_config.grpc_compression` | + +## Heavy multi-tenancy + +For each tenant, Mimir opens and maintains a TSDB in memory. If you have a significant number of tenants, the memory overhead might become prohibitive. +To reduce the associated overhead, consider the following: + +- Reduce `-blocks-storage.tsdb.head-chunks-write-buffer-size-bytes`, default `4MB`. For example, try `1MB` or `128KB`. +- Reduce `-blocks-storage.tsdb.stripe-size`, default `16384`. For example, try `256`, or even `64`. +- Configure [shuffle sharding](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/). + +## Periodic latency spikes when cutting blocks + +Depending on the workload, you might observe latency spikes when Mimir cuts blocks. +To reduce the impact of this behavior, consider the following: + +- Upgrade to Mimir `2.15` or later. +- Reduce `-blocks-storage.tsdb.block-ranges-period`, default `2h`. For example, try `1h`.
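The production tips added above name the tuning flags but leave the invocation implicit. A minimal sketch of applying them to an ingester follows; the concrete values are illustrative assumptions taken from the doc's "for example" suggestions, not tested recommendations:

```bash
# Sketch only: combines the heavy-multi-tenancy and block-cutting flags
# documented in the production tips above. The values mirror the doc's
# "for example" suggestions and should be tuned per workload.
mimir -target=ingester \
  -blocks-storage.tsdb.head-chunks-write-buffer-size-bytes=1048576 \
  -blocks-storage.tsdb.stripe-size=256 \
  -blocks-storage.tsdb.block-ranges-period=1h
```

Shuffle sharding, the third lever in the multi-tenancy list, is configured through per-tenant limits rather than a single flag, so it is not shown here.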
diff --git a/docs/sources/mimir/manage/tools/mimirtool.md b/docs/sources/mimir/manage/tools/mimirtool.md index ee78c7c7248..15e4a0f7a51 100644 --- a/docs/sources/mimir/manage/tools/mimirtool.md +++ b/docs/sources/mimir/manage/tools/mimirtool.md @@ -77,6 +77,15 @@ For Mimirtools to interact with Grafana Mimir, Grafana Enterprise Metrics, Prome | `MIMIR_API_KEY` | `--key` | Sets the basic auth password. If you're using Grafana Cloud, this variable is your API key. | | `MIMIR_TENANT_ID` | `--id` | Sets the tenant ID of the Grafana Mimir instance that Mimirtools interacts with. | +It is also possible to set TLS-related options with the following environment variables or CLI flags: + +| Environment variable | Flag | Description | +| -------------------------------- | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `MIMIR_TLS_CA_PATH` | `--tls-ca-path` | Sets the path to the CA certificate to use to verify the connection to the Grafana Mimir cluster. | +| `MIMIR_TLS_CERT_PATH` | `--tls-cert-path` | Sets the path to the client certificate to use to authenticate to the Grafana Mimir cluster. | +| `MIMIR_TLS_KEY_PATH` | `--tls-key-path` | Sets the path to the private key to use to authenticate to the Grafana Mimir cluster. | +| `MIMIR_TLS_INSECURE_SKIP_VERIFY` | `--tls-insecure-skip-verify` | If `true`, disables verification of the Grafana Mimir cluster's TLS certificate. This is insecure and not recommended. | + ## Commands The following sections outline the commands that you can run against Grafana Mimir and Grafana Cloud Metrics. diff --git a/go.mod b/go.mod index 2d1ddb3c8cd..7f5e7be64d7 100644 --- a/go.mod +++ b/go.mod @@ -22,11 +22,11 @@ require ( github.com/golang/snappy v0.0.4 github.com/google/gopacket v1.1.19 github.com/gorilla/mux v1.8.1 - github.com/grafana/dskit v0.0.0-20241125123840-77bb9ddddb0c + github.com/grafana/dskit v0.0.0-20241212153328-e27df29220ea github.com/grafana/e2e v0.1.2-0.20240118170847-db90b84177fc github.com/hashicorp/golang-lru v1.0.2 // indirect github.com/json-iterator/go v1.1.12 - github.com/minio/minio-go/v7 v7.0.81 + github.com/minio/minio-go/v7 v7.0.82 github.com/mitchellh/go-wordwrap v1.0.1 github.com/oklog/ulid v1.3.1 github.com/opentracing-contrib/go-grpc v0.1.0 @@ -45,9 +45,9 @@ require ( github.com/uber/jaeger-client-go v2.30.0+incompatible go.uber.org/atomic v1.11.0 go.uber.org/goleak v1.3.0 - golang.org/x/crypto v0.29.0 - golang.org/x/net v0.31.0 - golang.org/x/sync v0.9.0 + golang.org/x/crypto v0.31.0 + golang.org/x/net v0.32.0 + golang.org/x/sync v0.10.0 golang.org/x/time v0.8.0 google.golang.org/grpc v1.67.1 gopkg.in/yaml.v2 v2.4.0 @@ -79,7 +79,7 @@ require ( github.com/tjhop/slog-gokit v0.1.2 github.com/twmb/franz-go v1.18.0 github.com/twmb/franz-go/pkg/kadm v1.14.0 - github.com/twmb/franz-go/pkg/kfake v0.0.0-20241128200620-605b994f2744 + github.com/twmb/franz-go/pkg/kfake v0.0.0-20241202133023-293b7c4c56bb github.com/twmb/franz-go/pkg/kmsg v1.9.0 github.com/twmb/franz-go/plugin/kotel v1.5.0 github.com/twmb/franz-go/plugin/kprom v1.1.0 @@ -88,8 +88,8 @@ require ( go.opentelemetry.io/otel v1.32.0 go.opentelemetry.io/otel/trace v1.32.0 go.uber.org/multierr v1.11.0 - golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f - golang.org/x/term v0.26.0 + golang.org/x/exp v0.0.0-20241215155358-4a5509556b9e + golang.org/x/term v0.27.0 google.golang.org/api v0.209.0 google.golang.org/protobuf v1.35.2 sigs.k8s.io/kustomize/kyaml v0.18.1 @@ -130,6 +130,7 @@ 
require ( github.com/mitchellh/reflectwalk v1.0.0 // indirect github.com/pires/go-proxyproto v0.7.0 // indirect github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect + github.com/prometheus/sigv4 v0.1.0 // indirect github.com/ryanuber/go-glob v1.0.0 // indirect github.com/shopspring/decimal v1.2.0 // indirect github.com/spf13/cast v1.5.0 // indirect @@ -137,13 +138,13 @@ require ( github.com/tklauser/numcpus v0.6.1 // indirect github.com/yusufpapurcu/wmi v1.2.4 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.56.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.57.0 // indirect go.opentelemetry.io/otel/sdk/metric v1.30.0 // indirect gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect gopkg.in/mail.v2 v2.3.1 // indirect gopkg.in/telebot.v3 v3.2.1 // indirect - k8s.io/apimachinery v0.31.1 // indirect - k8s.io/client-go v0.31.1 // indirect + k8s.io/apimachinery v0.31.3 // indirect + k8s.io/client-go v0.31.3 // indirect k8s.io/klog/v2 v2.130.1 // indirect ) @@ -179,7 +180,7 @@ require ( github.com/dlclark/regexp2 v1.11.0 // indirect github.com/docker/go-connections v0.4.1-0.20210727194412-58542c764a11 // indirect github.com/docker/go-units v0.5.0 // indirect - github.com/efficientgo/core v1.0.0-rc.0.0.20221201130417-ba593f67d2a4 // indirect + github.com/efficientgo/core v1.0.0-rc.3 github.com/efficientgo/e2e v0.13.1-0.20220923082810-8fa9daa8af8a // indirect github.com/facette/natsort v0.0.0-20181210072756-2cd4dd1e2dcb // indirect github.com/fatih/color v1.16.0 // indirect @@ -267,24 +268,24 @@ require ( go.mongodb.org/mongo-driver v1.14.0 // indirect go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/collector/semconv v0.114.0 - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.57.0 // indirect go.opentelemetry.io/otel/metric v1.32.0 // indirect go.uber.org/zap v1.21.0 // indirect golang.org/x/mod v0.22.0 // indirect golang.org/x/oauth2 v0.24.0 // indirect - golang.org/x/sys v0.27.0 // indirect - golang.org/x/text v0.20.0 // indirect - golang.org/x/tools v0.27.0 // indirect + golang.org/x/sys v0.28.0 // indirect + golang.org/x/text v0.21.0 // indirect + golang.org/x/tools v0.28.0 // indirect google.golang.org/genproto v0.0.0-20241113202542-65e8d215514f // indirect google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20241118233622-e639e219e697 + google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect sigs.k8s.io/yaml v1.4.0 // indirect ) // Using a fork of Prometheus with Mimir-specific changes. 
-replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20241204081021-cb322705289f +replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20241210170917-0a0a41616520 // Replace memberlist with our fork which includes some fixes that haven't been // merged upstream yet: diff --git a/go.sum b/go.sum index 4dab136ee15..c484fe73d61 100644 --- a/go.sum +++ b/go.sum @@ -954,8 +954,8 @@ github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczC github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= -github.com/digitalocean/godo v1.128.0 h1:cGn/ibMSRZ9+8etbzMv2MnnCEPTTGlEnx3HHTPwdk1U= -github.com/digitalocean/godo v1.128.0/go.mod h1:PU8JB6I1XYkQIdHFop8lLAY9ojp6M0XcU0TWaQSxbrc= +github.com/digitalocean/godo v1.131.0 h1:0WHymufAV5avpodT0h5/pucUVfO4v7biquOIqhLeROY= +github.com/digitalocean/godo v1.131.0/go.mod h1:PU8JB6I1XYkQIdHFop8lLAY9ojp6M0XcU0TWaQSxbrc= github.com/distribution/reference v0.5.0 h1:/FUIFXtfc/x2gpa5/VGfiGLuOIdYa1t65IKK2OFGvA0= github.com/distribution/reference v0.5.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= @@ -974,8 +974,8 @@ github.com/ebitengine/purego v0.8.1 h1:sdRKd6plj7KYW33EH5As6YKfe8m9zbN9JMrOjNVF/ github.com/ebitengine/purego v0.8.1/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= github.com/edsrzf/mmap-go v1.2.0 h1:hXLYlkbaPzt1SaQk+anYwKSRNhufIDCchSPkUD6dD84= github.com/edsrzf/mmap-go v1.2.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q= -github.com/efficientgo/core v1.0.0-rc.0.0.20221201130417-ba593f67d2a4 h1:rydBwnBoywKQMjWF0z8SriYtQ+uUcaFsxuijMjJr5PI= -github.com/efficientgo/core v1.0.0-rc.0.0.20221201130417-ba593f67d2a4/go.mod h1:kQa0V74HNYMfuJH6jiPiwNdpWXl4xd/K4tzlrcvYDQI= +github.com/efficientgo/core v1.0.0-rc.3 h1:X6CdgycYWDcbYiJr1H1+lQGzx13o7bq3EUkbB9DsSPc= +github.com/efficientgo/core v1.0.0-rc.3/go.mod h1:FfGdkzWarkuzOlY04VY+bGfb1lWrjaL6x/GLcQ4vJps= github.com/efficientgo/e2e v0.13.1-0.20220923082810-8fa9daa8af8a h1:cnJajqeh/HjvJLhI3wPvWG9OQ4gU79+4pELRD5Pkih8= github.com/efficientgo/e2e v0.13.1-0.20220923082810-8fa9daa8af8a/go.mod h1:Hi+sz0REtlhVZ8zcdeTC3j6LUEEpJpPtNjOaOKuNcgI= github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= @@ -1081,8 +1081,8 @@ github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+ github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4= github.com/go-redis/redis/v8 v8.11.5 h1:AcZZR7igkdvfVmQTPnu9WE37LRrO/YrBH5zWyjDC0oI= github.com/go-redis/redis/v8 v8.11.5/go.mod h1:gREzHqY1hg6oD9ngVRbLStwAWKhA0FEgq8Jd4h5lpwo= -github.com/go-resty/resty/v2 v2.13.1 h1:x+LHXBI2nMB1vqndymf26quycC4aggYJ7DECYbiz03g= -github.com/go-resty/resty/v2 v2.13.1/go.mod h1:GznXlLxkq6Nh4sU59rPmUw3VtgpO3aS96ORAI6Q7d+0= +github.com/go-resty/resty/v2 v2.15.3 h1:bqff+hcqAflpiF591hhJzNdkRsFhlB96CYfBwSFvql8= +github.com/go-resty/resty/v2 v2.15.3/go.mod h1:0fHAoK7JoBy/Ch36N8VFeMsK7xQOHhvWaC3iOktwmIU= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-test/deep v1.1.0 h1:WOcxcdHcvdgThNXjw0t76K42FXTU7HpNQWHpA2HHNlg= 
github.com/go-test/deep v1.1.0/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE= @@ -1267,8 +1267,8 @@ github.com/grafana-tools/sdk v0.0.0-20220919052116-6562121319fc h1:PXZQA2WCxe85T github.com/grafana-tools/sdk v0.0.0-20220919052116-6562121319fc/go.mod h1:AHHlOEv1+GGQ3ktHMlhuTUwo3zljV3QJbC0+8o2kn+4= github.com/grafana/alerting v0.0.0-20241203173111-9d4ebec5f6b8 h1:77+Y8w2sXpMqTEyyyGE6WDk5U8v6ynCO9lBkMEqzyIo= github.com/grafana/alerting v0.0.0-20241203173111-9d4ebec5f6b8/go.mod h1:QsnoKX/iYZxA4Cv+H+wC7uxutBD8qi8ZW5UJvD2TYmU= -github.com/grafana/dskit v0.0.0-20241125123840-77bb9ddddb0c h1:6P6ypl4cRRCOMHbvvppecSTe1e6+f185lXp53e9FsJU= -github.com/grafana/dskit v0.0.0-20241125123840-77bb9ddddb0c/go.mod h1:SPLNCARd4xdjCkue0O6hvuoveuS1dGJjDnfxYe405YQ= +github.com/grafana/dskit v0.0.0-20241212153328-e27df29220ea h1:hchD5kBCIEx+BH6neVQkC/d4pwGlGDP74CFkrB/KUpA= +github.com/grafana/dskit v0.0.0-20241212153328-e27df29220ea/go.mod h1:SPLNCARd4xdjCkue0O6hvuoveuS1dGJjDnfxYe405YQ= github.com/grafana/e2e v0.1.2-0.20240118170847-db90b84177fc h1:BW+LjKJDz0So5LI8UZfW5neWeKpSkWqhmGjQFzcFfLM= github.com/grafana/e2e v0.1.2-0.20240118170847-db90b84177fc/go.mod h1:JVmqPBe8A/pZWwRoJW5ZjyALeY5OXMzPl7LrVXOdZAI= github.com/grafana/franz-go v0.0.0-20241009100846-782ba1442937 h1:fwwnG/NcygoS6XbAaEyK2QzMXI/BZIEJvQ3CD+7XZm8= @@ -1279,8 +1279,8 @@ github.com/grafana/gomemcache v0.0.0-20241016125027-0a5bcc5aef40 h1:1TeKhyS+pvzO github.com/grafana/gomemcache v0.0.0-20241016125027-0a5bcc5aef40/go.mod h1:IGRj8oOoxwJbHBYl1+OhS9UjQR0dv6SQOep7HqmtyFU= github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe h1:yIXAAbLswn7VNWBIvM71O2QsgfgW9fRXZNR0DXe6pDU= github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE= -github.com/grafana/mimir-prometheus v0.0.0-20241204081021-cb322705289f h1:osEg8aUQ8HtRV8fcJ1dliJ/pxDpBARJywkyOsgM6Rv8= -github.com/grafana/mimir-prometheus v0.0.0-20241204081021-cb322705289f/go.mod h1:SLFE244uZpIWNevAuaMLJmsy5QluDTSUwLKKGK3jkk0= +github.com/grafana/mimir-prometheus v0.0.0-20241210170917-0a0a41616520 h1:FADazl5oVYBARbfVMtLkPQ9IfIwhiE9lrPrKNPOHBV4= +github.com/grafana/mimir-prometheus v0.0.0-20241210170917-0a0a41616520/go.mod h1:NpYc1U0eC7m6xUh3t3Pq565KxaIc08Oaquiu71dEMi8= github.com/grafana/opentracing-contrib-go-stdlib v0.0.0-20230509071955-f410e79da956 h1:em1oddjXL8c1tL0iFdtVtPloq2hRPen2MJQKoAWpxu0= github.com/grafana/opentracing-contrib-go-stdlib v0.0.0-20230509071955-f410e79da956/go.mod h1:qtI1ogk+2JhVPIXVc6q+NHziSmy2W5GbdQZFUHADCBU= github.com/grafana/prometheus-alertmanager v0.25.1-0.20240930132144-b5e64e81e8d3 h1:6D2gGAwyQBElSrp3E+9lSr7k8gLuP3Aiy20rweLWeBw= @@ -1371,8 +1371,8 @@ github.com/hashicorp/vault/api/auth/kubernetes v0.8.0 h1:6jPcORq7OHwf+MCbaaUmiBv github.com/hashicorp/vault/api/auth/kubernetes v0.8.0/go.mod h1:nfl5sRUUork0ZSfV3xf+pgAFQSD5kSkL0k9axg523DM= github.com/hashicorp/vault/api/auth/userpass v0.8.0 h1:JFFzMld+VO/S1v8HQNJzcy+3o+xfx/iH49dsiQ1G5jk= github.com/hashicorp/vault/api/auth/userpass v0.8.0/go.mod h1:+XbsSnbbyo+yjySfKcIsyl28kO4C/c4Czo7og0XCtUo= -github.com/hetznercloud/hcloud-go/v2 v2.15.0 h1:6mpMJ/RuX1woZj+MCJdyKNEX9129KDkEIDeeyfr4GD4= -github.com/hetznercloud/hcloud-go/v2 v2.15.0/go.mod h1:h8sHav+27Xa+48cVMAvAUMELov5h298Ilg2vflyTHgg= +github.com/hetznercloud/hcloud-go/v2 v2.17.0 h1:ge0w2piey9SV6XGyU/wQ6HBR24QyMbJ3wLzezplqR68= +github.com/hetznercloud/hcloud-go/v2 v2.17.0/go.mod h1:zfyZ4Orx+mPpYDzWAxXR7DHGL50nnlZ5Edzgs1o6f/s= github.com/hexops/gotextdiff v1.0.3 
h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= @@ -1448,8 +1448,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs= github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII= -github.com/linode/linodego v1.42.0 h1:ZSbi4MtvwrfB9Y6bknesorvvueBGGilcmh2D5dq76RM= -github.com/linode/linodego v1.42.0/go.mod h1:2yzmY6pegPBDgx2HDllmt0eIk2IlzqcgK6NR0wFCFRY= +github.com/linode/linodego v1.43.0 h1:sGeBB3caZt7vKBoPS5p4AVzmlG4JoqQOdigIibx3egk= +github.com/linode/linodego v1.43.0/go.mod h1:n4TMFu1UVNala+icHqrTEFFaicYSF74cSAUG5zkTwfA= github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4= github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I= github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA= @@ -1494,8 +1494,8 @@ github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcs github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE= github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34= github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM= -github.com/minio/minio-go/v7 v7.0.81 h1:SzhMN0TQ6T/xSBu6Nvw3M5M8voM+Ht8RH3hE8S7zxaA= -github.com/minio/minio-go/v7 v7.0.81/go.mod h1:84gmIilaX4zcvAWWzJ5Z1WI5axN+hAbM5w25xf8xvC0= +github.com/minio/minio-go/v7 v7.0.82 h1:tWfICLhmp2aFPXL8Tli0XDTHj2VB/fNf0PC1f/i1gRo= +github.com/minio/minio-go/v7 v7.0.82/go.mod h1:84gmIilaX4zcvAWWzJ5Z1WI5axN+hAbM5w25xf8xvC0= github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI= github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ= @@ -1624,6 +1624,8 @@ github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4O github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= +github.com/prometheus/sigv4 v0.1.0 h1:FgxH+m1qf9dGQ4w8Dd6VkthmpFQfGTzUeavMoQeG1LA= +github.com/prometheus/sigv4 v0.1.0/go.mod h1:doosPW9dOitMzYe2I2BN0jZqUuBrGPbXrNsTScN18iU= github.com/rainycape/unidecode v0.0.0-20150907023854-cb7f23ec59be h1:ta7tUOvsPHVHGom5hKW5VXNc2xZIkfCKP8iaqOyYtUQ= github.com/rainycape/unidecode v0.0.0-20150907023854-cb7f23ec59be/go.mod h1:MIDFMn7db1kT65GmV94GzpX9Qdi7N/pQlwb+AN8wh+Q= github.com/redis/go-redis/v9 v9.6.1 h1:HHDteefn6ZkTtY5fGUE8tj8uy85AHk6zP7CpzIAM0y4= @@ -1726,8 +1728,8 @@ github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9f github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM= github.com/twmb/franz-go/pkg/kadm v1.14.0 h1:nAn1co1lXzJQocpzyIyOFOjUBf4WHWs5/fTprXy2IZs= github.com/twmb/franz-go/pkg/kadm 
v1.14.0/go.mod h1:XjOPz6ZaXXjrW2jVCfLuucP8H1w2TvD6y3PT2M+aAM4= -github.com/twmb/franz-go/pkg/kfake v0.0.0-20241128200620-605b994f2744 h1:GL3EJM8aCyIwpN44kNWaEmoTKoYa8/IYMwEhp8S+Jtw= -github.com/twmb/franz-go/pkg/kfake v0.0.0-20241128200620-605b994f2744/go.mod h1:nkBI/wGFp7t1NJnnCeJdS4sX5atPAqwCPpDXKuI7SC8= +github.com/twmb/franz-go/pkg/kfake v0.0.0-20241202133023-293b7c4c56bb h1:cNa7PaAvkAhYIOootH/5gRO9ljbI3MIObf5qU/PKFKY= +github.com/twmb/franz-go/pkg/kfake v0.0.0-20241202133023-293b7c4c56bb/go.mod h1:nkBI/wGFp7t1NJnnCeJdS4sX5atPAqwCPpDXKuI7SC8= github.com/twmb/franz-go/pkg/kmsg v1.9.0 h1:JojYUph2TKAau6SBtErXpXGC7E3gg4vGZMv9xFU/B6M= github.com/twmb/franz-go/pkg/kmsg v1.9.0/go.mod h1:CMbfazviCyY6HM0SXuG5t9vOwYDHRCSrJJyBAe5paqg= github.com/twmb/franz-go/plugin/kotel v1.5.0 h1:TiPfGUbQK384OO7ZYGdo7JuPCbJn+/8njQ/D9Je9CDE= @@ -1783,16 +1785,16 @@ go.opentelemetry.io/collector/semconv v0.114.0 h1:/eKcCJwZepQUtEuFuxa0thx2XIOvhF go.opentelemetry.io/collector/semconv v0.114.0/go.mod h1:zCJ5njhWpejR+A40kiEoeFm1xq1uzyZwMnRNX6/D82A= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 h1:r6I7RJCN86bpD/FQwedZ0vSixDpwuWREjW9oRMsmqDc= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0/go.mod h1:B9yO6b04uB80CzjedvewuqDhxJxi11s7/GtiGa8bAjI= -go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.56.0 h1:4BZHA+B1wXEQoGNHxW8mURaLhcdGwvRnmhGbm+odRbc= -go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.56.0/go.mod h1:3qi2EEwMgB4xnKgPLqsDP3j9qxnHDZeHsnAxfjQqTko= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0 h1:UP6IpuHFkUgOQL9FFQFrZ+5LiwhhYRbi7VZSIx6Nj5s= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0/go.mod h1:qxuZLtbq5QDtdeSHsS7bcf6EH6uO6jUAgk764zd3rhM= +go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.57.0 h1:7F3XCD6WYzDkwbi8I8N+oYJWquPVScnRosKGgqjsR8c= +go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.57.0/go.mod h1:Dk3C0BfIlZDZ5c6eVS7TYiH2vssuyUU3vUsgbrR+5V4= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.57.0 h1:DheMAlT6POBP+gh8RUH19EOTnQIor5QE0uSRPtzCpSw= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.57.0/go.mod h1:wZcGmeVO9nzP67aYSLDqXNWK87EZWhi7JWj1v7ZXf94= go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U= go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg= go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M= go.opentelemetry.io/otel/metric v1.32.0/go.mod h1:jH7CIbbK6SH2V2wE16W05BHCtIDzauciCRLoc/SyMv8= -go.opentelemetry.io/otel/sdk v1.31.0 h1:xLY3abVHYZ5HSfOg3l2E5LUj2Cwva5Y7yGxnSW9H5Gk= -go.opentelemetry.io/otel/sdk v1.31.0/go.mod h1:TfRbMdhvxIIr/B2N2LQW2S5v9m3gOQ/08KsbbO5BPT0= +go.opentelemetry.io/otel/sdk v1.32.0 h1:RNxepc9vK59A8XsgZQouW8ue8Gkb4jpWtJm9ge5lEG4= +go.opentelemetry.io/otel/sdk v1.32.0/go.mod h1:LqgegDBjKMmb2GC6/PrTnteJG39I8/vJCAP9LlJXEjU= go.opentelemetry.io/otel/sdk/metric v1.30.0 h1:QJLT8Pe11jyHBHfSAgYH7kEmT24eX792jZO1bo4BXkM= go.opentelemetry.io/otel/sdk/metric v1.30.0/go.mod h1:waS6P3YqFNzeP01kuo/MBBYqaoBJl7efRQHOaydhy1Y= go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM= @@ -1838,8 +1840,8 @@ golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1m golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= golang.org/x/crypto 
v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8= -golang.org/x/crypto v0.29.0 h1:L5SG1JTTXupVV3n6sUqMTeWbjAyfPwoda2DLX8J8FrQ= -golang.org/x/crypto v0.29.0/go.mod h1:+F4F4N5hv6v38hfeYwTdx20oUvLLc+QfrE9Ax9HtgRg= +golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U= +golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -1855,8 +1857,8 @@ golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u0 golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= golang.org/x/exp v0.0.0-20220827204233-334a2380cb91/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE= -golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f h1:XdNn9LlyWAhLVp6P/i8QYBW+hlyhrhei9uErw2B5GJo= -golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f/go.mod h1:D5SMRVC3C2/4+F/DB1wZsLRnSNimn2Sp/NPsCrsv8ak= +golang.org/x/exp v0.0.0-20241215155358-4a5509556b9e h1:4qufH0hlUYs6AO6XmZC3GqfDPGSXHVXUFR6OND+iJX4= +golang.org/x/exp v0.0.0-20241215155358-4a5509556b9e/go.mod h1:qj5a5QZpwLU2NLQudwIN5koi3beDhSAlJwa67PuM98c= golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= @@ -1975,8 +1977,8 @@ golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY= golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44= golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= -golang.org/x/net v0.31.0 h1:68CPQngjLL0r2AlUKiSxtQFKvzRVbnzLwMUn5SzcLHo= -golang.org/x/net v0.31.0/go.mod h1:P4fl1q7dY2hnZFxEk4pPSkDHF+QqjitcnDjUQyMM+pM= +golang.org/x/net v0.32.0 h1:ZqPmj8Kzc+Y6e0+skZsuACbx+wzMgo5MQsJh9Qd6aYI= +golang.org/x/net v0.32.0/go.mod h1:CwU0IoeOlnQQWJ6ioyFrfRuomB8GKF6KbYXZVyeXNfs= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -2028,8 +2030,8 @@ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= -golang.org/x/sync v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ= -golang.org/x/sync v0.9.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ= +golang.org/x/sync v0.10.0/go.mod 
h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -2140,8 +2142,8 @@ golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s= -golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA= +golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -2158,8 +2160,8 @@ golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY= -golang.org/x/term v0.26.0 h1:WEQa6V3Gja/BhNxg540hBip/kkaYtRg3cxg4oXSw4AU= -golang.org/x/term v0.26.0/go.mod h1:Si5m1o57C5nBNQo5z1iq+XDijt21BDBDp2bK0QI8e3E= +golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q= +golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -2179,8 +2181,8 @@ golang.org/x/text v0.10.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug= -golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4= +golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo= +golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -2256,8 +2258,8 @@ golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= golang.org/x/tools v0.9.1/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc= golang.org/x/tools v0.10.0/go.mod h1:UJwyiVBsOA2uwvK/e5OY3GTpDUJriEd+/YlqAwLPmyM= golang.org/x/tools 
v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= -golang.org/x/tools v0.27.0 h1:qEKojBykQkQ4EynWy4S8Weg69NumxKdn40Fce3uc/8o= -golang.org/x/tools v0.27.0/go.mod h1:sUi0ZgbwW9ZPAq26Ekut+weQPR5eIM6GQLQ1Yjm1H0Q= +golang.org/x/tools v0.28.0 h1:WuB6qZ4RPCQo5aP3WdKZS7i595EdWqWR8vqJTlwTVK8= +golang.org/x/tools v0.28.0/go.mod h1:dcIOrVd3mfQKTgrDVQHqCPMWy6lnhfhtX3hLXYVLfRw= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -2517,8 +2519,8 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5/go. google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY= google.golang.org/genproto/googleapis/rpc v0.0.0-20240521202816-d264139d666e/go.mod h1:EfXuqaE1J41VCDicxHzUDm+8rk+7ZdXzHV0IhO/I6s0= google.golang.org/genproto/googleapis/rpc v0.0.0-20240528184218-531527333157/go.mod h1:EfXuqaE1J41VCDicxHzUDm+8rk+7ZdXzHV0IhO/I6s0= -google.golang.org/genproto/googleapis/rpc v0.0.0-20241118233622-e639e219e697 h1:LWZqQOEjDyONlF1H6afSWpAL/znlREo2tHfLoe+8LMA= -google.golang.org/genproto/googleapis/rpc v0.0.0-20241118233622-e639e219e697/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU= +google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 h1:8ZmaLZE4XWrtU3MyClkYqqtl6Oegr3235h7jxsDyqCY= +google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU= google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc= google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= @@ -2582,12 +2584,12 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.1.3/go.mod h1:NgwopIslSNH47DimFoV78dnkksY2EFtX0ajyb3K/las= -k8s.io/api v0.31.1 h1:Xe1hX/fPW3PXYYv8BlozYqw63ytA92snr96zMW9gWTU= -k8s.io/api v0.31.1/go.mod h1:sbN1g6eY6XVLeqNsZGLnI5FwVseTrZX7Fv3O26rhAaI= -k8s.io/apimachinery v0.31.1 h1:mhcUBbj7KUjaVhyXILglcVjuS4nYXiwC+KKFBgIVy7U= -k8s.io/apimachinery v0.31.1/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= -k8s.io/client-go v0.31.1 h1:f0ugtWSbWpxHR7sjVpQwuvw9a3ZKLXX0u0itkFXufb0= -k8s.io/client-go v0.31.1/go.mod h1:sKI8871MJN2OyeqRlmA4W4KM9KBdBUpDLu/43eGemCg= +k8s.io/api v0.31.3 h1:umzm5o8lFbdN/hIXbrK9oRpOproJO62CV1zqxXrLgk8= +k8s.io/api v0.31.3/go.mod h1:UJrkIp9pnMOI9K2nlL6vwpxRzzEX5sWgn8kGQe92kCE= +k8s.io/apimachinery v0.31.3 h1:6l0WhcYgasZ/wk9ktLq5vLaoXJJr5ts6lkaQzgeYPq4= +k8s.io/apimachinery v0.31.3/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= +k8s.io/client-go v0.31.3 h1:CAlZuM+PH2cm+86LOBemaJI/lQ5linJ6UFxKX/SoG+4= +k8s.io/client-go v0.31.3/go.mod h1:2CgjPUTpv3fE5dNygAr2NcM8nhHzXvxB8KL5gYc3kJs= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= diff --git 
a/integration/configs.go b/integration/configs.go index 9c2e7ff3853..5fbf64334b5 100644 --- a/integration/configs.go +++ b/integration/configs.go @@ -249,6 +249,11 @@ blocks_storage: // Do not wait before switching an INACTIVE partition to ACTIVE. "-ingester.partition-ring.min-partition-owners-count": "0", "-ingester.partition-ring.min-partition-owners-duration": "0s", + + // Enable ingestion and fetch concurrency + "-ingest-storage.kafka.fetch-concurrency-max": "12", + "-ingest-storage.kafka.ingestion-concurrency-max": "8", + "-ingest-storage.kafka.auto-create-topic-default-partitions": "10", } } ) diff --git a/operations/helm/charts/mimir-distributed/Chart.yaml b/operations/helm/charts/mimir-distributed/Chart.yaml index ba340f596fe..fe980ccaca3 100644 --- a/operations/helm/charts/mimir-distributed/Chart.yaml +++ b/operations/helm/charts/mimir-distributed/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v2 -version: 5.6.0-weekly.319 -appVersion: r319 +version: 5.6.0-weekly.320 +appVersion: r320 description: "Grafana Mimir" home: https://grafana.com/docs/helm-charts/mimir-distributed/latest/ icon: https://grafana.com/static/img/logos/logo-mimir.svg diff --git a/operations/helm/charts/mimir-distributed/README.md b/operations/helm/charts/mimir-distributed/README.md index 950a03173d5..3d80986107b 100644 --- a/operations/helm/charts/mimir-distributed/README.md +++ b/operations/helm/charts/mimir-distributed/README.md @@ -4,7 +4,7 @@ Helm chart for deploying [Grafana Mimir](https://grafana.com/docs/mimir/latest/) For the full documentation, visit [Grafana mimir-distributed Helm chart documentation](https://grafana.com/docs/helm-charts/mimir-distributed/latest/). -> **Note:** The documentation version is derived from the Helm chart version which is 5.6.0-weekly.319. +> **Note:** The documentation version is derived from the Helm chart version which is 5.6.0-weekly.320. When upgrading from Helm chart version 4.X, please see [Migrate the Helm chart from version 4.x to 5.0](https://grafana.com/docs/helm-charts/mimir-distributed/latest/migration-guides/migrate-helm-chart-4.x-to-5.0/). When upgrading from Helm chart version 3.x, please see [Migrate from single zone to zone-aware replication with Helm](https://grafana.com/docs/helm-charts/mimir-distributed/latest/migration-guides/migrate-from-single-zone-with-helm/). 
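Looping back to the TLS options added to the mimirtool documentation earlier in this diff: a minimal usage sketch. The address, tenant ID, and certificate paths are hypothetical placeholders, not values from the patch:

```bash
# Hypothetical mimirtool session against a TLS-protected Mimir cluster.
# Every host and path below is a placeholder; the env vars mirror the new
# --tls-ca-path, --tls-cert-path, and --tls-key-path flags documented above.
export MIMIR_ADDRESS="https://mimir.example.com"
export MIMIR_TENANT_ID="team-a"
export MIMIR_TLS_CA_PATH="/etc/mimir/certs/ca.crt"
export MIMIR_TLS_CERT_PATH="/etc/mimir/certs/client.crt"
export MIMIR_TLS_KEY_PATH="/etc/mimir/certs/client.key"

# The TLS settings now apply to any mimirtool command, for example:
mimirtool rules list
```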
@@ -14,7 +14,7 @@ When upgrading from Helm chart version 2.1, please see [Upgrade the Grafana Mimi # mimir-distributed -![Version: 5.6.0-weekly.319](https://img.shields.io/badge/Version-5.6.0--weekly.319-informational?style=flat-square) ![AppVersion: r319](https://img.shields.io/badge/AppVersion-r319-informational?style=flat-square) +![Version: 5.6.0-weekly.320](https://img.shields.io/badge/Version-5.6.0--weekly.320-informational?style=flat-square) ![AppVersion: r320](https://img.shields.io/badge/AppVersion-r320-informational?style=flat-square) Grafana Mimir diff --git a/operations/helm/charts/mimir-distributed/templates/distributor/distributor-so.yaml b/operations/helm/charts/mimir-distributed/templates/distributor/distributor-so.yaml index fe17393246b..cfc82351dab 100644 --- a/operations/helm/charts/mimir-distributed/templates/distributor/distributor-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/distributor/distributor-so.yaml @@ -31,12 +31,15 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} - metadata: query: max_over_time(sum((sum by (pod) (container_memory_working_set_bytes{container="distributor",namespace="{{ .Release.Namespace }}"}) and max by (pod) (up{container="distributor",namespace="{{ .Release.Namespace }}"}) > 0) or vector(0))[15m:]) + sum(sum by (pod) (max_over_time(kube_pod_container_resource_requests{container="distributor",namespace="{{ .Release.Namespace }}", resource="memory"}[15m])) and max by (pod) (changes(kube_pod_container_status_restarts_total{container="distributor",namespace="{{ .Release.Namespace }}"}[15m]) > 0) and max by (pod) (kube_pod_container_status_last_terminated_reason{container="distributor",namespace="{{ .Release.Namespace }}", reason="OOMKilled"}) or vector(0)) @@ -46,11 +49,14 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .
}} + name: keda-trigger-auth {{- end }} {{- end }} diff --git a/operations/helm/charts/mimir-distributed/templates/querier/querier-so.yaml b/operations/helm/charts/mimir-distributed/templates/querier/querier-so.yaml index ec41bdc3544..60c9e1092ba 100644 --- a/operations/helm/charts/mimir-distributed/templates/querier/querier-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/querier/querier-so.yaml @@ -33,13 +33,16 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} name: cortex_querier_hpa_default type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} - metadata: query: sum(rate(cortex_querier_request_duration_seconds_sum{container="querier",namespace="{{ .Release.Namespace }}"}[1m])) @@ -48,13 +51,16 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} name: cortex_querier_hpa_default_requests_duration type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} {{- $autoscaling := .Values.querier.kedaAutoscaling -}} {{- if .Values.querier.kedaAutoscaling.predictiveScalingEnabled }} @@ -65,13 +71,16 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} name: cortex_querier_hpa_default_predictive type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .
}} + name: keda-trigger-auth {{- end }} {{- end }} {{- end }} diff --git a/operations/helm/charts/mimir-distributed/templates/query-frontend/query-frontend-so.yaml b/operations/helm/charts/mimir-distributed/templates/query-frontend/query-frontend-so.yaml index 8ff1e4a0094..49ab3a5e098 100644 --- a/operations/helm/charts/mimir-distributed/templates/query-frontend/query-frontend-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/query-frontend/query-frontend-so.yaml @@ -31,12 +31,15 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} - metadata: query: max_over_time(sum((sum by (pod) (container_memory_working_set_bytes{container="query-frontend",namespace="{{ .Release.Namespace }}"}) and max by (pod) (up{container="query-frontend",namespace="{{ .Release.Namespace }}"}) > 0) or vector(0))[15m:]) + sum(sum by (pod) (max_over_time(kube_pod_container_resource_requests{container="query-frontend",namespace="{{ .Release.Namespace }}", resource="memory"}[15m])) and max by (pod) (changes(kube_pod_container_status_restarts_total{container="query-frontend",namespace="{{ .Release.Namespace }}"}[15m]) > 0) and max by (pod) (kube_pod_container_status_last_terminated_reason{container="query-frontend",namespace="{{ .Release.Namespace }}", reason="OOMKilled"}) or vector(0)) @@ -46,11 +49,14 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .
}} + name: keda-trigger-auth {{- end }} {{- end }} diff --git a/operations/helm/charts/mimir-distributed/templates/ruler-querier/ruler-querier-so.yaml b/operations/helm/charts/mimir-distributed/templates/ruler-querier/ruler-querier-so.yaml index f2d68eaabc6..9ee51e99146 100644 --- a/operations/helm/charts/mimir-distributed/templates/ruler-querier/ruler-querier-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/ruler-querier/ruler-querier-so.yaml @@ -31,13 +31,16 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} name: cortex_querier_hpa_default type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} - metadata: query: sum(rate(cortex_querier_request_duration_seconds_sum{container="ruler-querier",namespace="{{ .Release.Namespace }}"}[1m])) @@ -46,13 +49,16 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} name: cortex_querier_hpa_default_requests_duration type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} {{- end }} {{- end }} \ No newline at end of file diff --git a/operations/helm/charts/mimir-distributed/templates/ruler-query-frontend/ruler-query-frontend-so.yaml b/operations/helm/charts/mimir-distributed/templates/ruler-query-frontend/ruler-query-frontend-so.yaml index b196d6516d9..648f60397a4 100644 --- a/operations/helm/charts/mimir-distributed/templates/ruler-query-frontend/ruler-query-frontend-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/ruler-query-frontend/ruler-query-frontend-so.yaml @@ -32,12 +32,15 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .
}} + name: keda-trigger-auth {{- end }} - metadata: query: max_over_time(sum((sum by (pod) (container_memory_working_set_bytes{container="ruler-query-frontend",namespace="{{ .Release.Namespace }}"}) and max by (pod) (up{container="ruler-query-frontend",namespace="{{ .Release.Namespace }}"}) > 0) or vector(0))[15m:]) + sum(sum by (pod) (max_over_time(kube_pod_container_resource_requests{container="ruler-query-frontend",namespace="{{ .Release.Namespace }}", resource="memory"}[15m])) and max by (pod) (changes(kube_pod_container_status_restarts_total{container="ruler-query-frontend",namespace="{{ .Release.Namespace }}"}[15m]) > 0) and max by (pod) (kube_pod_container_status_last_terminated_reason{container="ruler-query-frontend",namespace="{{ .Release.Namespace }}", reason="OOMKilled"}) or vector(0)) @@ -47,12 +50,15 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} {{- end }} {{- end }} diff --git a/operations/helm/charts/mimir-distributed/templates/ruler/ruler-so.yaml b/operations/helm/charts/mimir-distributed/templates/ruler/ruler-so.yaml index 2c1d82603a8..96f588e7dc9 100644 --- a/operations/helm/charts/mimir-distributed/templates/ruler/ruler-so.yaml +++ b/operations/helm/charts/mimir-distributed/templates/ruler/ruler-so.yaml @@ -31,12 +31,15 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .
}} + name: keda-trigger-auth {{- end }} - metadata: query: max_over_time(sum((sum by (pod) (container_memory_working_set_bytes{container="ruler",namespace="{{ .Release.Namespace }}"}) and max by (pod) (up{container="ruler",namespace="{{ .Release.Namespace }}"}) > 0) or vector(0))[15m:]) + sum(sum by (pod) (max_over_time(kube_pod_container_resource_requests{container="ruler",namespace="{{ .Release.Namespace }}", resource="memory"}[15m])) and max by (pod) (changes(kube_pod_container_status_restarts_total{container="ruler",namespace="{{ .Release.Namespace }}"}[15m]) > 0) and max by (pod) (kube_pod_container_status_last_terminated_reason{container="ruler",namespace="{{ .Release.Namespace }}", reason="OOMKilled"}) or vector(0)) @@ -46,11 +49,14 @@ spec: {{- if .Values.kedaAutoscaling.customHeaders }} customHeaders: {{ (include "mimir.lib.mapToCSVString" (dict "map" .Values.kedaAutoscaling.customHeaders)) | quote }} {{- end }} - ignoreNullValues: {{ .Values.kedaAutoscaling.ignoreNullValues }} - unsafeSsl: {{ .Values.kedaAutoscaling.ignoreNullValues }} + ignoreNullValues: "{{ .Values.kedaAutoscaling.ignoreNullValues }}" + unsafeSsl: "{{ .Values.kedaAutoscaling.unsafeSsl }}" + {{- if .Values.kedaAutoscaling.authentication.enabled }} + authModes: "{{ .Values.kedaAutoscaling.authentication.authModes }}" + {{- end }} type: prometheus {{- if .Values.kedaAutoscaling.authentication.enabled }} authenticationRef: - name: {{ include "mimir.resourceName" (dict "ctx" .) }} + name: keda-trigger-auth {{- end }} {{- end }} \ No newline at end of file diff --git a/operations/helm/charts/mimir-distributed/values.yaml b/operations/helm/charts/mimir-distributed/values.yaml index a394aecb8d5..07ebbb793c6 100644 --- a/operations/helm/charts/mimir-distributed/values.yaml +++ b/operations/helm/charts/mimir-distributed/values.yaml @@ -34,7 +34,7 @@ image: # -- Grafana Mimir container image repository. Note: for Grafana Enterprise Metrics use the value 'enterprise.image.repository' repository: grafana/mimir # -- Grafana Mimir container image tag. Note: for Grafana Enterprise Metrics use the value 'enterprise.image.tag' - tag: r319-fcf1f06 + tag: r320-de6d3fb # -- Container pull policy - shared between Grafana Mimir and Grafana Enterprise Metrics pullPolicy: IfNotPresent # -- Optionally specify an array of imagePullSecrets - shared between Grafana Mimir and Grafana Enterprise Metrics @@ -2371,7 +2371,7 @@ memcached: # -- Memcached Docker image repository repository: memcached # -- Memcached Docker image tag - tag: 1.6.32-alpine + tag: 1.6.33-alpine # -- Memcached Docker image pull policy pullPolicy: IfNotPresent @@ -3963,7 +3963,7 @@ enterprise: # -- Grafana Enterprise Metrics container image repository. Note: for Grafana Mimir use the value 'image.repository' repository: grafana/enterprise-metrics # -- Grafana Enterprise Metrics container image tag.
Note: for Grafana Mimir use the value 'image.tag' - tag: r319-9dc68a57 + tag: r320-f2d1bf91 # Note: pullPolicy and optional pullSecrets are set in toplevel 'image' section, not here # In order to use Grafana Enterprise Metrics features, you will need to provide the contents of your Grafana Enterprise Metrics diff --git a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index 47b80a0b0b4..1b3e22cede9 100644 --- a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -50,7 +50,7 @@ spec: secretName: tls-certs containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index c92502b2d2d..1d15f71e1c8 100644 --- a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -50,7 +50,7 @@ spec: secretName: tls-certs containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index 2677c9bc668..715f181799f 100644 --- a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ -50,7 +50,7 @@ spec: secretName: tls-certs containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 439b92c4870..b263d31fd8b 100644 --- a/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ b/operations/helm/tests/enterprise-https-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -50,7 +50,7 @@ spec: secretName: tls-certs containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git 
a/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-aggregation-cache/graphite-aggregation-cache-statefulset.yaml b/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-aggregation-cache/graphite-aggregation-cache-statefulset.yaml index 52878a1a54a..3eeb278f58b 100644 --- a/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-aggregation-cache/graphite-aggregation-cache-statefulset.yaml +++ b/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-aggregation-cache/graphite-aggregation-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-metric-name-cache/graphite-metric-name-cache-statefulset.yaml b/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-metric-name-cache/graphite-metric-name-cache-statefulset.yaml index 0c20c7d9e12..7daad2afd98 100644 --- a/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-metric-name-cache/graphite-metric-name-cache-statefulset.yaml +++ b/operations/helm/tests/graphite-enabled-values-generated/mimir-distributed/templates/graphite-proxy/graphite-metric-name-cache/graphite-metric-name-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/large-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/large-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index ed1f1b81e5a..0110f599113 100644 --- a/operations/helm/tests/large-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/large-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/large-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/large-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index bb36cbeae25..097164a9684 100644 --- a/operations/helm/tests/large-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/large-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/large-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml b/operations/helm/tests/large-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index 08b02bcfe6c..d0f0bd82c76 100644 --- 
a/operations/helm/tests/large-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ b/operations/helm/tests/large-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/large-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/large-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 3a604c884e6..8fc9f435555 100644 --- a/operations/helm/tests/large-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ b/operations/helm/tests/large-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/metamonitoring-values-generated/mimir-distributed/templates/metamonitoring/grafana-dashboards.yaml b/operations/helm/tests/metamonitoring-values-generated/mimir-distributed/templates/metamonitoring/grafana-dashboards.yaml index 369580608d4..e6f1f5dc60a 100644 --- a/operations/helm/tests/metamonitoring-values-generated/mimir-distributed/templates/metamonitoring/grafana-dashboards.yaml +++ b/operations/helm/tests/metamonitoring-values-generated/mimir-distributed/templates/metamonitoring/grafana-dashboards.yaml @@ -1175,13 +1175,13 @@ data: "span": 6, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 
-1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -1230,37 +1230,37 @@ data: "span": 6, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", 
route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -9577,7 +9577,7 @@ data: "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Writes", "range": true @@ -9587,7 +9587,7 @@ data: "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked 
yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Writes", "range": true @@ -9597,7 +9597,7 @@ data: "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Reads", "range": true @@ -9607,7 +9607,7 @@ data: "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n 
vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Reads", "range": true @@ -9864,13 +9864,13 @@ data: "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -9919,37 +9919,37 @@ data: "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": 
"(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", 
route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -10210,13 +10210,13 @@ data: "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -10265,37 +10265,37 @@ data: "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", 
route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -10553,23 +10553,23 @@ data: "span": 3, "targets": [ { - "expr": "sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendLink": null }, { - "expr": "sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendLink": null }, { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "other", "legendLink": null }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "other", "legendLink": null @@ -17922,13 +17922,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ 
-18004,13 +18004,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -18086,13 +18086,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -18168,13 +18168,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -18250,13 +18250,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -18485,13 +18485,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -18540,37 +18540,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) 
(cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 
1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -18620,14 +18620,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -19759,13 +19759,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", 
\"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -19814,37 +19814,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) 
(cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * 
sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -19894,14 +19894,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", 
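Note: every dashboard hunk in this diff applies the same rewrite, so one explanation may help the reviewer. The old expressions toggled between classic- and native-histogram targets by comparing sample values against `($latency_metrics * +Inf)` or `($latency_metrics * -Inf)`; since `x < +Inf` is false when `x` is `+Inf` or `NaN`, that form silently dropped such samples (for example a `histogram_quantile()` result over empty buckets). The new form gates on the existence of a one-sample vector instead, so sample values are never filtered. A minimal sketch of the pattern, assuming `$latency_metrics` is a dashboard variable set to `1` for classic histograms and `-1` for native histograms, with the label selectors shortened for readability:

    # Classic-histogram target: kept only when $latency_metrics == 1.
    (histogram_quantile(0.99, sum by (le) (rate(cortex_request_duration_seconds_bucket[$__rate_interval]))))
      and on() (vector($latency_metrics) == 1)

    # Native-histogram target: kept only when $latency_metrics == -1. Elsewhere in these hunks,
    # histogram_count()/histogram_sum() extract the observation count/sum from a native histogram.
    (histogram_quantile(0.99, sum(rate(cortex_request_duration_seconds[$__rate_interval]))))
      and on() (vector($latency_metrics) == -1)

`vector($latency_metrics) == 1` yields a single labelless sample when the variable is `1` and an empty vector otherwise; `and on()` then keeps or drops the entire left-hand result accordingly, so exactly one of the paired targets renders.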
"format": "time_series", "legendFormat": "", "legendLink": null @@ -20097,13 +20097,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -20152,37 +20152,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", 
route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -20232,14 +20232,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum 
by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -24968,13 +24968,13 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -25203,13 +25203,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n 
label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -25258,37 +25258,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * 
sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -25338,14 +25338,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -26884,7 +26884,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", 
route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -26894,7 +26894,7 @@ data: "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27006,7 +27006,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27016,7 +27016,7 @@ data: "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", 
route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27124,7 +27124,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27134,7 +27134,7 @@ data: "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27246,7 +27246,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27256,7 +27256,7 @@ data: "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27360,7 +27360,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27370,7 +27370,7 @@ data: "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27482,7 +27482,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27492,7 +27492,7 @@ data: "step": null }, { - "expr": 
"sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27600,7 +27600,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27610,7 +27610,7 @@ data: "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -27722,7 +27722,7 @@ data: "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) 
< ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -27732,7 +27732,7 @@ data: "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -28003,25 +28003,25 @@ data: }, "targets": [ { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", 
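Note: the `1 - (...)` targets in this panel compute a day-over-day latency comparison. Each divides the 1h-smoothed p99 from 24 hours ago (`offset 24h`, averaged via the `[1h:]` subquery) by the current 1h-smoothed p99, so the result is positive when latency is now higher than at the same time yesterday. A condensed sketch of the classic-histogram "writes" form, with the label matchers elided as `{...}`:

    1 - (
        avg_over_time(histogram_quantile(0.99, sum by (le) (
          cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{...} offset 24h))[1h:])
      /
        avg_over_time(histogram_quantile(0.99, sum by (le) (
          cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{...}))[1h:])
    )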
"legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "reads", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "reads", "legendLink": null @@ -39938,13 +39938,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", 
"legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -39993,37 +39993,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -40073,14 +40073,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", 
job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -40277,13 +40277,13 @@ data: "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -40332,37 +40332,37 @@ data: "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": 
"A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) 
/\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -40412,14 +40412,14 @@ data: "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/helm/tests/small-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/small-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index d939c6ad7d0..7c49fda594e 100644 --- a/operations/helm/tests/small-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/small-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/small-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/small-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index 1b44f570438..b625df2e31f 100644 --- a/operations/helm/tests/small-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/small-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/small-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml b/operations/helm/tests/small-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index c9ebcc2325d..e1d5724943a 100644 --- a/operations/helm/tests/small-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ 
b/operations/helm/tests/small-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/small-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/small-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 558f8ff30f0..738ee9ad36d 100644 --- a/operations/helm/tests/small-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ b/operations/helm/tests/small-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/admin-cache/admin-cache-statefulset.yaml b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/admin-cache/admin-cache-statefulset.yaml index fa888e490b5..d504cea0fd4 100644 --- a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/admin-cache/admin-cache-statefulset.yaml +++ b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/admin-cache/admin-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index dfb0787e907..5f00e5f60f7 100644 --- a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index 6a4c028e5cf..53fe3ca293e 100644 --- a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml 
b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index f640ca3073a..c5ca9c86eba 100644 --- a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 70d59232761..ea29bdadbeb 100644 --- a/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ b/operations/helm/tests/test-enterprise-configmap-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index 1b9d2dab8e6..3f672523de4 100644 --- a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index 5dcd1d28e8a..7e561cf3cd6 100644 --- a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index d6d5a450aad..cc115aa6179 100644 --- a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ 
-47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 1b1f3185e2b..604592f5072 100644 --- a/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-k8s-1.25-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -47,7 +47,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: diff --git a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml index 51bc498c3d8..59108feb245 100644 --- a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/chunks-cache/chunks-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml index e1cfc6ad8b1..0d352d271c8 100644 --- a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/index-cache/index-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml index a57d2a8f640..2cd4f0f7cef 100644 --- a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml +++ b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/metadata-cache/metadata-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml index 457d8fbe792..4fc6aecec18 100644 --- a/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml +++ 
b/operations/helm/tests/test-oss-values-generated/mimir-distributed/templates/results-cache/results-cache-statefulset.yaml @@ -48,7 +48,7 @@ spec: volumes: containers: - name: memcached - image: memcached:1.6.32-alpine + image: memcached:1.6.33-alpine imagePullPolicy: IfNotPresent resources: limits: null diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-alertmanager.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-alertmanager.json index 4d59d4f1fd7..b0f68178c16 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-alertmanager.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-alertmanager.json @@ -446,13 +446,13 @@ "span": 6, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -501,37 +501,37 @@ "span": 6, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", 
route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-overview.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-overview.json index a00e3e642cf..c93ec499af6 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-overview.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-overview.json @@ -81,7 +81,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Writes", "range": true @@ -91,7 +91,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has 
been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Writes", "range": true @@ -101,7 +101,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Reads", "range": true @@ -111,7 +111,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Reads", "range": true @@ -368,13 +368,13 @@ "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", 
\"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -423,37 +423,37 @@ "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + 
"expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", 
"legendFormat": "Average", "refId": "C_native" @@ -714,13 +714,13 @@ "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -769,37 +769,37 @@ "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", 
route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -1057,23 +1057,23 @@ "span": 3, "targets": [ { - "expr": "sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendLink": null }, { - "expr": "sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendLink": null }, { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "other", "legendLink": null }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "other", "legendLink": null diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-reads.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-reads.json index df9c48e42a0..21bb0b9220e 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-reads.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-reads.json @@ -91,13 +91,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -173,13 +173,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])) < 
($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -255,13 +255,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -337,13 +337,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -419,13 +419,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum 
(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -654,13 +654,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -709,37 +709,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) 
/\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -789,14 +789,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -1928,13 +1928,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n 
label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -1983,37 +1983,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", 
route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", 
route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -2063,14 +2063,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -2266,13 +2266,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n 
label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -2321,37 +2321,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) 
(cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -2401,14 +2401,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", 
job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-remote-ruler-reads.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-remote-ruler-reads.json index 872fd5a2e33..e2d7aefed6d 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-remote-ruler-reads.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-remote-ruler-reads.json @@ -91,13 +91,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -326,13 +326,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -381,37 +381,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", 
route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -461,14 +461,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-rollout-progress.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-rollout-progress.json index a741a8c9904..7979620523d 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-rollout-progress.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-rollout-progress.json @@ -242,7 +242,7 @@ "steppedLine": false, 
"targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -252,7 +252,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -364,7 +364,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -374,7 +374,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) 
/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -482,7 +482,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -492,7 +492,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -604,7 +604,7 @@ "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", 
route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -614,7 +614,7 @@ "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -718,7 +718,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -728,7 +728,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -840,7 +840,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": 
"(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -850,7 +850,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -958,7 +958,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -968,7 +968,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) 
/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -1080,7 +1080,7 @@ "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -1090,7 +1090,7 @@ "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -1361,25 +1361,25 @@ }, "targets": [ { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + 
"expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "reads", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "reads", "legendLink": null diff --git a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-writes.json b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-writes.json index 546e953d3e0..27211a36231 100644 --- a/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-writes.json +++ b/operations/mimir-mixin-compiled-baremetal/dashboards/mimir-writes.json @@ -623,13 +623,13 @@ "span": 4, "targets": [ { - 
"expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -678,37 +678,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" 
}, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) 
/\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -758,14 +758,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -962,13 +962,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", 
\"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -1017,37 +1017,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -1097,14 +1097,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,instance) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (instance) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-alertmanager.json b/operations/mimir-mixin-compiled/dashboards/mimir-alertmanager.json index 7f75a38a1b8..5c19ad2bd7f 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-alertmanager.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-alertmanager.json @@ -446,13 +446,13 @@ "span": 6, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", 
route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -501,37 +501,37 @@ "span": 6, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((alertmanager|cortex|mimir|mimir-backend.*))\", route=~\"/alertmanagerpb.Alertmanager/HandleRequest\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-overview.json b/operations/mimir-mixin-compiled/dashboards/mimir-overview.json index a00e3e642cf..c93ec499af6 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-overview.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-overview.json @@ -81,7 +81,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n 
vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Writes", "range": true @@ -91,7 +91,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Writes", "range": true @@ -101,7 +101,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n < ($latency_metrics * -Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", 
route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval])))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))\n) and on() (vector($latency_metrics) == -1)", "instant": false, "legendFormat": "Reads", "range": true @@ -111,7 +111,7 @@ "uid": "$datasource" }, "exemplar": false, - "expr": "(\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n < ($latency_metrics * +Inf)", + "expr": "((\n # gRPC errors are not tracked as 5xx but \"error\".\n sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\",status_code=~\"5.*|error\"}[$__rate_interval]))\n or\n # Handle the case no failure has been tracked yet.\n vector(0)\n)\n/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))\n) and on() (vector($latency_metrics) == 1)", "instant": false, "legendFormat": "Reads", "range": true @@ -368,13 +368,13 @@ "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -423,37 +423,37 @@ "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * 
sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -714,13 +714,13 @@ "span": 3, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -769,37 +769,37 @@ "span": 3, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * 
sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -1057,23 +1057,23 @@ "span": 3, "targets": [ { - "expr": "sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum by (route) (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendLink": null }, { - "expr": "sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum by (route) (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendLink": null }, { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "other", "legendLink": null }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_.*\",route!~\"(prometheus|api_prom)(_api_v1_query|_api_v1_query_range|_api_v1_labels|_api_v1_label_name_values|_api_v1_series|_api_v1_read|_api_v1_metadata|_api_v1_query_exemplars|_api_v1_cardinality_active_series|_api_v1_cardinality_label_names|_api_v1_cardinality_label_values)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "other", "legendLink": null diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-reads.json b/operations/mimir-mixin-compiled/dashboards/mimir-reads.json index 3a1d2abd4c2..bda01cfe52b 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-reads.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-reads.json 
@@ -91,13 +91,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval])) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval])) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query\"}[$__rate_interval]))) + sum(rate(cortex_prometheus_rule_evaluations_total{cluster=~\"$cluster\", job=~\"($namespace)/((ruler|cortex|mimir|mimir-backend.*))\"}[$__rate_interval]))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -173,13 +173,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_query_range\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -255,13 +255,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_labels\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -337,13 +337,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_label_name_values\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -419,13 +419,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\",route=~\"(prometheus|api_prom)_api_v1_series\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -654,13 +654,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -709,37 +709,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -789,14 +789,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", 
route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -1928,13 +1928,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\",route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -1983,37 +1983,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", 
route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}))\n) and on() (vector($latency_metrics) == -1)", 
"format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -2063,14 +2063,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=~\"/cortex.Ingester/(QueryStream|QueryExemplars|LabelValues|LabelNames|UserStats|AllUserStats|MetricsForLabelMatchers|MetricsMetadata|LabelNamesAndValues|LabelValuesCardinality|ActiveSeries)\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -2266,13 +2266,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\",route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -2321,37 +2321,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * 
sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -2401,14 +2401,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((store-gateway.*|cortex|mimir|mimir-backend.*))\", route=~\"/gatewaypb.StoreGateway/.*\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-remote-ruler-reads.json b/operations/mimir-mixin-compiled/dashboards/mimir-remote-ruler-reads.json index 27a6d1c4be2..8573fb334a4 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-remote-ruler-reads.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-remote-ruler-reads.json @@ -91,13 +91,13 @@ "steppedLine": false, "targets": [ { - "expr": "sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])) < 
($latency_metrics * +Inf)", + "expr": "(sum (rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "instant": true, "refId": "A_classic" }, { - "expr": "sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum (histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\",route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "instant": true, "refId": "A" @@ -326,13 +326,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -381,37 +381,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", 
job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -461,14 +461,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ruler-query-frontend.*))\", route=~\"/httpgrpc.HTTP/Handle|.*api_v1_query\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-rollout-progress.json b/operations/mimir-mixin-compiled/dashboards/mimir-rollout-progress.json index a741a8c9904..7979620523d 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-rollout-progress.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-rollout-progress.json @@ -242,7 +242,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -252,7 +252,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": 
"(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -364,7 +364,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -374,7 +374,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -482,7 +482,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval])) 
/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -492,7 +492,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -604,7 +604,7 @@ "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -614,7 +614,7 @@ "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -718,7 +718,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval])) 
/\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -728,7 +728,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"2.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -840,7 +840,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -850,7 +850,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"4.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, 
"interval": "", @@ -958,7 +958,7 @@ "steppedLine": false, "targets": [ { - "expr": "sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])) < ($latency_metrics * +Inf)", + "expr": "(sum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval])) /\nsum(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -968,7 +968,7 @@ "step": null }, { - "expr": "sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(sum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\", status_code=~\"5.+\"}[$__rate_interval]))) /\nsum(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -1080,7 +1080,7 @@ "steppedLine": false, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == 1)", "format": null, "instant": false, "interval": "", @@ -1090,7 +1090,7 @@ "step": null }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"})) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))) and on() (vector($latency_metrics) == -1)", "format": null, "instant": false, "interval": "", @@ -1361,25 +1361,25 @@ }, 
"targets": [ { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"api_(v1|prom)_push|otlp_v1_metrics\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "writes", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * +Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum by (le) 
(cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "reads", "legendLink": null }, { - "expr": "1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n < ($latency_metrics * -Inf)", + "expr": "(1 - (\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"} offset 24h))[1h:])\n /\n avg_over_time(histogram_quantile(0.99, sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((query-frontend.*|cortex|mimir|mimir-read.*))\", route=~\"(prometheus|api_prom)_api_v1_.+\"}))[1h:])\n)\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "reads", "legendLink": null diff --git a/operations/mimir-mixin-compiled/dashboards/mimir-writes.json b/operations/mimir-mixin-compiled/dashboards/mimir-writes.json index 47efe38590b..990aa780b32 100644 --- a/operations/mimir-mixin-compiled/dashboards/mimir-writes.json +++ b/operations/mimir-mixin-compiled/dashboards/mimir-writes.json @@ -623,13 +623,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", 
job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -678,37 +678,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": "histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * 
sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -758,14 +758,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": 
"(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((distributor.*|cortex|mimir|mimir-write.*))\", route=~\"/distributor.Distributor/Push|/httpgrpc.*|api_(v1|prom)_push|otlp_v1_metrics\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null @@ -962,13 +962,13 @@ "span": 4, "targets": [ { - "expr": "sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * +Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(rate(cortex_request_duration_seconds_count{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A_classic" }, { - "expr": "sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n < ($latency_metrics * -Inf)", + "expr": "(sum by (status) (\n label_replace(label_replace(histogram_count(rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])),\n \"status\", \"${1}xx\", \"status_code\", \"([0-9])..\"),\n \"status\", \"${1}\", \"status_code\", \"([a-zA-Z]+)\"))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "{{status}}", "refId": "A" @@ -1017,37 +1017,37 @@ "span": 4, "targets": [ { - "expr": "histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.99, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_classic" }, { - "expr": "histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "99th percentile", "refId": "A_native" }, { - "expr": 
"histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * +Inf)", + "expr": "(histogram_quantile(0.50, sum by (le) (cluster_job_route:cortex_request_duration_seconds_bucket:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_classic" }, { - "expr": "histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3 < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.50, sum (cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) * 1e3) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "50th percentile", "refId": "B_native" }, { - "expr": "1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n < ($latency_metrics * +Inf)", + "expr": "(1e3 * sum(cluster_job_route:cortex_request_duration_seconds_sum:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}) /\nsum(cluster_job_route:cortex_request_duration_seconds_count:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})\n) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "Average", "refId": "C_classic" }, { - "expr": "1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n < ($latency_metrics * -Inf)", + "expr": "(1e3 * sum(histogram_sum(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"})) /\nsum(histogram_count(cluster_job_route:cortex_request_duration_seconds:sum_rate{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}))\n) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "Average", "refId": "C_native" @@ -1097,14 +1097,14 @@ "targets": [ { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * +Inf)", + "expr": 
"(histogram_quantile(0.99, sum by (le,pod) (rate(cortex_request_duration_seconds_bucket{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == 1)", "format": "time_series", "legendFormat": "", "legendLink": null }, { "exemplar": true, - "expr": "histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval]))) < ($latency_metrics * -Inf)", + "expr": "(histogram_quantile(0.99, sum by (pod) (rate(cortex_request_duration_seconds{cluster=~\"$cluster\", job=~\"($namespace)/((ingester.*|cortex|mimir|mimir-write.*))\", route=\"/cortex.Ingester/Push\"}[$__rate_interval])))) and on() (vector($latency_metrics) == -1)", "format": "time_series", "legendFormat": "", "legendLink": null diff --git a/operations/mimir-mixin/jsonnetfile.lock.json b/operations/mimir-mixin/jsonnetfile.lock.json index e0ee06e1b63..d22ff4f1c47 100644 --- a/operations/mimir-mixin/jsonnetfile.lock.json +++ b/operations/mimir-mixin/jsonnetfile.lock.json @@ -8,7 +8,7 @@ "subdir": "grafana-builder" } }, - "version": "3f0a5b0eeb2f5dc381a420b35d27198bd9b72e8c", + "version": "767befa8fb46a07be516dec2777d7d89909a529d", "sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo=" }, { @@ -18,8 +18,8 @@ "subdir": "mixin-utils" } }, - "version": "33fd33f2cd59a045ce4580f0b5a7e06a620e8d37", - "sum": "mzLmCv9n3ldLChVGPfyRJOVKoBw+dfK40vU9792aHIM=" + "version": "767befa8fb46a07be516dec2777d7d89909a529d", + "sum": "SRElwa/XrKAN8aZA9zvdRUx8iebl2It7KNQ7VFvMcBA=" } ], "legacyImports": false diff --git a/operations/mimir-tests/test-automated-downscale-generated.yaml b/operations/mimir-tests/test-automated-downscale-generated.yaml index a19e281fa8a..39857d5465a 100644 --- a/operations/mimir-tests/test-automated-downscale-generated.yaml +++ b/operations/mimir-tests/test-automated-downscale-generated.yaml @@ -1026,7 +1026,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-automated-downscale-v2-generated.yaml b/operations/mimir-tests/test-automated-downscale-v2-generated.yaml index 4cd13a1bc0f..b864c1ce4ea 100644 --- a/operations/mimir-tests/test-automated-downscale-v2-generated.yaml +++ b/operations/mimir-tests/test-automated-downscale-v2-generated.yaml @@ -1085,7 +1085,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-compactor-concurrent-rollout-generated.yaml b/operations/mimir-tests/test-compactor-concurrent-rollout-generated.yaml index c6afcd3d152..00536419e6a 100644 --- a/operations/mimir-tests/test-compactor-concurrent-rollout-generated.yaml +++ b/operations/mimir-tests/test-compactor-concurrent-rollout-generated.yaml @@ -846,7 +846,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: 
IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-compactor-concurrent-rollout-max-unavailable-percent-generated.yaml b/operations/mimir-tests/test-compactor-concurrent-rollout-max-unavailable-percent-generated.yaml index 4bb1974f2f1..e1aa5c3531e 100644 --- a/operations/mimir-tests/test-compactor-concurrent-rollout-max-unavailable-percent-generated.yaml +++ b/operations/mimir-tests/test-compactor-concurrent-rollout-max-unavailable-percent-generated.yaml @@ -846,7 +846,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-consul-multi-zone-generated.yaml b/operations/mimir-tests/test-consul-multi-zone-generated.yaml index e6b60fc07ba..1c9c7438b8c 100644 --- a/operations/mimir-tests/test-consul-multi-zone-generated.yaml +++ b/operations/mimir-tests/test-consul-multi-zone-generated.yaml @@ -1397,7 +1397,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-deployment-mode-migration-generated.yaml b/operations/mimir-tests/test-deployment-mode-migration-generated.yaml index 877bf29104f..b7d757e86b7 100644 --- a/operations/mimir-tests/test-deployment-mode-migration-generated.yaml +++ b/operations/mimir-tests/test-deployment-mode-migration-generated.yaml @@ -1427,7 +1427,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-autoscaling-custom-stabilization-window-generated.yaml b/operations/mimir-tests/test-ingest-storage-autoscaling-custom-stabilization-window-generated.yaml index 6c0780198cd..998ce1439e2 100644 --- a/operations/mimir-tests/test-ingest-storage-autoscaling-custom-stabilization-window-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-autoscaling-custom-stabilization-window-generated.yaml @@ -1230,7 +1230,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-autoscaling-multiple-triggers-generated.yaml b/operations/mimir-tests/test-ingest-storage-autoscaling-multiple-triggers-generated.yaml index 41dcf23a9a5..9b0202be629 100644 --- a/operations/mimir-tests/test-ingest-storage-autoscaling-multiple-triggers-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-autoscaling-multiple-triggers-generated.yaml @@ -1230,7 +1230,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git 
a/operations/mimir-tests/test-ingest-storage-autoscaling-one-trigger-generated.yaml b/operations/mimir-tests/test-ingest-storage-autoscaling-one-trigger-generated.yaml index 32663acef16..7f0ae564978 100644 --- a/operations/mimir-tests/test-ingest-storage-autoscaling-one-trigger-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-autoscaling-one-trigger-generated.yaml @@ -1230,7 +1230,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-0-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-0-generated.yaml index e3a60d008fc..913bed8a585 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-0-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-0-generated.yaml @@ -1148,7 +1148,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-1-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-1-generated.yaml index 70354b7223c..3100561796e 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-1-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-1-generated.yaml @@ -1219,7 +1219,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-10-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-10-generated.yaml index 62695e33d49..3e50f36dc16 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-10-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-10-generated.yaml @@ -1207,7 +1207,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-11-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-11-generated.yaml index 524333cc6a0..d5095c24337 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-11-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-11-generated.yaml @@ -1207,7 +1207,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-2-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-2-generated.yaml index 655ee168055..d0af7042f3f 100644 --- 
a/operations/mimir-tests/test-ingest-storage-migration-step-2-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-2-generated.yaml @@ -1226,7 +1226,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-3-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-3-generated.yaml index 4f968e6020a..2fb491ec918 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-3-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-3-generated.yaml @@ -1237,7 +1237,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-4-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-4-generated.yaml index 353d23827f3..7aea3eac1b1 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-4-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-4-generated.yaml @@ -1236,7 +1236,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-5a-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-5a-generated.yaml index 41dc2fce71b..eb1272a7f27 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-5a-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-5a-generated.yaml @@ -1236,7 +1236,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-5b-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-5b-generated.yaml index 10b92ad726f..3fb03c6e853 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-5b-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-5b-generated.yaml @@ -1236,7 +1236,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-6-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-6-generated.yaml index 537415fa93f..779c584fb7e 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-6-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-6-generated.yaml @@ -1167,7 +1167,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - 
-zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-7-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-7-generated.yaml index 497fc7eb2b5..61e55c655f3 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-7-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-7-generated.yaml @@ -1171,7 +1171,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-8-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-8-generated.yaml index a068fceec16..189c31bdb8d 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-8-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-8-generated.yaml @@ -1171,7 +1171,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-9-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-9-generated.yaml index ff1a427be75..dfb7cb46642 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-9-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-9-generated.yaml @@ -1148,7 +1148,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-ingest-storage-migration-step-final-generated.yaml b/operations/mimir-tests/test-ingest-storage-migration-step-final-generated.yaml index f04144ae5a8..70b7bff78f9 100644 --- a/operations/mimir-tests/test-ingest-storage-migration-step-final-generated.yaml +++ b/operations/mimir-tests/test-ingest-storage-migration-step-final-generated.yaml @@ -1230,7 +1230,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-multi-zone-distributor-generated.yaml b/operations/mimir-tests/test-multi-zone-distributor-generated.yaml index c4960054547..c83f3bc6e9d 100644 --- a/operations/mimir-tests/test-multi-zone-distributor-generated.yaml +++ b/operations/mimir-tests/test-multi-zone-distributor-generated.yaml @@ -1184,7 +1184,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-multi-zone-etcd-generated.yaml 
b/operations/mimir-tests/test-multi-zone-etcd-generated.yaml index 7d74af5b71b..bb59b7ba08c 100644 --- a/operations/mimir-tests/test-multi-zone-etcd-generated.yaml +++ b/operations/mimir-tests/test-multi-zone-etcd-generated.yaml @@ -1026,7 +1026,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-multi-zone-generated.yaml b/operations/mimir-tests/test-multi-zone-generated.yaml index 4f69b77668e..fd203261b63 100644 --- a/operations/mimir-tests/test-multi-zone-generated.yaml +++ b/operations/mimir-tests/test-multi-zone-generated.yaml @@ -1026,7 +1026,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-multi-zone-with-ongoing-migration-generated.yaml b/operations/mimir-tests/test-multi-zone-with-ongoing-migration-generated.yaml index a21d0a58ae8..668c45f2e8b 100644 --- a/operations/mimir-tests/test-multi-zone-with-ongoing-migration-generated.yaml +++ b/operations/mimir-tests/test-multi-zone-with-ongoing-migration-generated.yaml @@ -1094,7 +1094,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-multi-zone-with-store-gateway-automated-downscaling-generated.yaml b/operations/mimir-tests/test-multi-zone-with-store-gateway-automated-downscaling-generated.yaml index 34ae76fd876..b4fde5e1493 100644 --- a/operations/mimir-tests/test-multi-zone-with-store-gateway-automated-downscaling-generated.yaml +++ b/operations/mimir-tests/test-multi-zone-with-store-gateway-automated-downscaling-generated.yaml @@ -1104,7 +1104,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-pvc-auto-deletion-generated.yaml b/operations/mimir-tests/test-pvc-auto-deletion-generated.yaml index cc830b4a71d..25c3cbf4bd8 100644 --- a/operations/mimir-tests/test-pvc-auto-deletion-generated.yaml +++ b/operations/mimir-tests/test-pvc-auto-deletion-generated.yaml @@ -960,7 +960,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-read-write-deployment-mode-s3-autoscaled-generated.yaml b/operations/mimir-tests/test-read-write-deployment-mode-s3-autoscaled-generated.yaml index a4d8cbea87b..e46c1b5c339 100644 --- a/operations/mimir-tests/test-read-write-deployment-mode-s3-autoscaled-generated.yaml +++ b/operations/mimir-tests/test-read-write-deployment-mode-s3-autoscaled-generated.yaml @@ -629,7 +629,7 @@ spec: - -kubernetes.namespace=default - 
-use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-read-write-deployment-mode-s3-caches-disabled-generated.yaml b/operations/mimir-tests/test-read-write-deployment-mode-s3-caches-disabled-generated.yaml index c4666410241..b18f725a094 100644 --- a/operations/mimir-tests/test-read-write-deployment-mode-s3-caches-disabled-generated.yaml +++ b/operations/mimir-tests/test-read-write-deployment-mode-s3-caches-disabled-generated.yaml @@ -492,7 +492,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-read-write-deployment-mode-s3-generated.yaml b/operations/mimir-tests/test-read-write-deployment-mode-s3-generated.yaml index 4aeddee521a..c8dd41eff83 100644 --- a/operations/mimir-tests/test-read-write-deployment-mode-s3-generated.yaml +++ b/operations/mimir-tests/test-read-write-deployment-mode-s3-generated.yaml @@ -630,7 +630,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir-tests/test-single-to-multi-zone-distributor-migration-generated.yaml b/operations/mimir-tests/test-single-to-multi-zone-distributor-migration-generated.yaml index d35066a9419..2183261af95 100644 --- a/operations/mimir-tests/test-single-to-multi-zone-distributor-migration-generated.yaml +++ b/operations/mimir-tests/test-single-to-multi-zone-distributor-migration-generated.yaml @@ -1315,7 +1315,7 @@ spec: - -kubernetes.namespace=default - -use-zone-tracker=true - -zone-tracker.config-map-name=rollout-operator-zone-tracker - image: grafana/rollout-operator:v0.20.0 + image: grafana/rollout-operator:v0.22.0 imagePullPolicy: IfNotPresent name: rollout-operator ports: diff --git a/operations/mimir/images.libsonnet b/operations/mimir/images.libsonnet index f8817714acc..354b6a4ed24 100644 --- a/operations/mimir/images.libsonnet +++ b/operations/mimir/images.libsonnet @@ -28,6 +28,6 @@ mimir_backend: self.mimir, // See: https://github.com/grafana/rollout-operator - rollout_operator: 'grafana/rollout-operator:v0.20.0', + rollout_operator: 'grafana/rollout-operator:v0.22.0', }, } diff --git a/pkg/api/api.go b/pkg/api/api.go index a0592f74f15..e2f6da5735c 100644 --- a/pkg/api/api.go +++ b/pkg/api/api.go @@ -51,9 +51,6 @@ type Config struct { SkipLabelNameValidationHeader bool `yaml:"skip_label_name_validation_header_enabled" category:"advanced"` SkipLabelCountValidationHeader bool `yaml:"skip_label_count_validation_header_enabled" category:"advanced"` - // TODO: Remove option in Mimir 2.15. 
- GETRequestForIngesterShutdownEnabled bool `yaml:"get_request_for_ingester_shutdown_enabled" category:"deprecated"` - AlertmanagerHTTPPrefix string `yaml:"alertmanager_http_prefix" category:"advanced"` PrometheusHTTPPrefix string `yaml:"prometheus_http_prefix" category:"advanced"` @@ -72,7 +69,6 @@ type Config struct { func (cfg *Config) RegisterFlags(f *flag.FlagSet) { f.BoolVar(&cfg.SkipLabelNameValidationHeader, "api.skip-label-name-validation-header-enabled", false, "Allows to skip label name validation via X-Mimir-SkipLabelNameValidation header on the http write path. Use with caution as it breaks PromQL. Allowing this for external clients allows any client to send invalid label names. After enabling it, requests with a specific HTTP header set to true will not have label names validated.") f.BoolVar(&cfg.SkipLabelCountValidationHeader, "api.skip-label-count-validation-header-enabled", false, "Allows to disable enforcement of the label count limit \"max_label_names_per_series\" via X-Mimir-SkipLabelCountValidation header on the http write path. Allowing this for external clients allows any client to send invalid label counts. After enabling it, requests with a specific HTTP header set to true will not have label counts validated.") - f.BoolVar(&cfg.GETRequestForIngesterShutdownEnabled, "api.get-request-for-ingester-shutdown-enabled", false, "Enable GET requests to the /ingester/shutdown endpoint to trigger an ingester shutdown. This is a potentially dangerous operation and should only be enabled consciously.") cfg.RegisterFlagsWithPrefix("", f) } @@ -314,9 +310,6 @@ func (a *API) RegisterIngester(i Ingester) { a.RegisterRoute("/ingester/prepare-instance-ring-downscale", http.HandlerFunc(i.PrepareInstanceRingDownscaleHandler), false, true, "GET", "POST", "DELETE") a.RegisterRoute("/ingester/unregister-on-shutdown", http.HandlerFunc(i.PrepareUnregisterHandler), false, false, "GET", "PUT", "DELETE") a.RegisterRoute("/ingester/shutdown", http.HandlerFunc(i.ShutdownHandler), false, true, "POST") - if a.cfg.GETRequestForIngesterShutdownEnabled { - a.RegisterDeprecatedRoute("/ingester/shutdown", http.HandlerFunc(i.ShutdownHandler), false, true, "GET") - } a.RegisterRoute("/ingester/tsdb_metrics", http.HandlerFunc(i.UserRegistryHandler), true, true, "GET") a.indexPage.AddLinks(defaultWeight, "Ingester", []IndexPageLink{ diff --git a/pkg/api/api_test.go b/pkg/api/api_test.go index 62f2af308d9..43c670243c5 100644 --- a/pkg/api/api_test.go +++ b/pkg/api/api_test.go @@ -201,24 +201,15 @@ func (mi MockIngester) ShutdownHandler(w http.ResponseWriter, _ *http.Request) { func TestApiIngesterShutdown(t *testing.T) { for _, tc := range []struct { name string - setFlag bool expectedStatusCode int }{ - { - name: "flag set to true, enable GET request for ingester shutdown", - setFlag: true, - expectedStatusCode: http.StatusNoContent, - }, { name: "flag not set (default), disable GET request for ingester shutdown", - setFlag: false, expectedStatusCode: http.StatusMethodNotAllowed, }, } { t.Run(tc.name, func(t *testing.T) { - cfg := Config{ - GETRequestForIngesterShutdownEnabled: tc.setFlag, - } + cfg := Config{} serverCfg := getServerConfig(t) federationCfg := tenantfederation.Config{} srv, err := server.New(serverCfg) diff --git a/pkg/blockbuilder/blockbuilder.go b/pkg/blockbuilder/blockbuilder.go index 728d6f56671..865455661be 100644 --- a/pkg/blockbuilder/blockbuilder.go +++ b/pkg/blockbuilder/blockbuilder.go @@ -92,7 +92,7 @@ func (b *BlockBuilder) starting(context.Context) (err error) { 
b.kafkaClient, err = ingest.NewKafkaReaderClient( b.cfg.Kafka, - ingest.NewKafkaReaderClientMetrics("block-builder", b.register), + ingest.NewKafkaReaderClientMetrics(ingest.ReaderMetricsPrefix, "block-builder", b.register), b.logger, ) if err != nil { diff --git a/pkg/blockbuilder/kafkautil_test.go b/pkg/blockbuilder/kafkautil_test.go index 3576bd1c7d2..1a99590b410 100644 --- a/pkg/blockbuilder/kafkautil_test.go +++ b/pkg/blockbuilder/kafkautil_test.go @@ -146,7 +146,6 @@ func TestKafkaMarshallingCommitMeta(t *testing.T) { func mustKafkaClient(t *testing.T, addrs ...string) *kgo.Client { writeClient, err := kgo.NewClient( kgo.SeedBrokers(addrs...), - kgo.AllowAutoTopicCreation(), // We will choose the partition of each record. kgo.RecordPartitioner(kgo.ManualPartitioner()), ) diff --git a/pkg/blockbuilder/scheduler/scheduler.go b/pkg/blockbuilder/scheduler/scheduler.go index efbce4b46b5..971752a9c7d 100644 --- a/pkg/blockbuilder/scheduler/scheduler.go +++ b/pkg/blockbuilder/scheduler/scheduler.go @@ -60,7 +60,7 @@ func New( func (s *BlockBuilderScheduler) starting(ctx context.Context) error { kc, err := ingest.NewKafkaReaderClient( s.cfg.Kafka, - ingest.NewKafkaReaderClientMetrics("block-builder-scheduler", s.register), + ingest.NewKafkaReaderClientMetrics(ingest.ReaderMetricsPrefix, "block-builder-scheduler", s.register), s.logger, ) if err != nil { diff --git a/pkg/blockbuilder/scheduler/scheduler_test.go b/pkg/blockbuilder/scheduler/scheduler_test.go index 1228618b961..cecd6d0c378 100644 --- a/pkg/blockbuilder/scheduler/scheduler_test.go +++ b/pkg/blockbuilder/scheduler/scheduler_test.go @@ -25,7 +25,6 @@ import ( func mustKafkaClient(t *testing.T, addrs ...string) *kgo.Client { writeClient, err := kgo.NewClient( kgo.SeedBrokers(addrs...), - kgo.AllowAutoTopicCreation(), // We will choose the partition of each record. kgo.RecordPartitioner(kgo.ManualPartitioner()), ) diff --git a/pkg/continuoustest/client.go b/pkg/continuoustest/client.go index 13f694e078b..9592ff67be3 100644 --- a/pkg/continuoustest/client.go +++ b/pkg/continuoustest/client.go @@ -20,7 +20,6 @@ import ( querierapi "github.com/grafana/mimir/pkg/querier/api" "github.com/grafana/mimir/pkg/util/chunkinfologger" "github.com/grafana/mimir/pkg/util/instrumentation" - util_math "github.com/grafana/mimir/pkg/util/math" ) const ( @@ -216,7 +215,7 @@ func (c *Client) WriteSeries(ctx context.Context, series []prompb.TimeSeries) (i // Honor the batch size. for len(series) > 0 { - end := util_math.Min(len(series), c.cfg.WriteBatchSize) + end := min(len(series), c.cfg.WriteBatchSize) batch := series[0:end] series = series[end:] diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go index 3723d46b669..7bf589f7bc4 100644 --- a/pkg/distributor/distributor.go +++ b/pkg/distributor/distributor.go @@ -53,6 +53,7 @@ import ( "github.com/grafana/mimir/pkg/querier/stats" "github.com/grafana/mimir/pkg/storage/ingest" "github.com/grafana/mimir/pkg/util" + "github.com/grafana/mimir/pkg/util/extract" "github.com/grafana/mimir/pkg/util/globalerror" mimir_limiter "github.com/grafana/mimir/pkg/util/limiter" util_math "github.com/grafana/mimir/pkg/util/math" @@ -182,8 +183,15 @@ type Distributor struct { // partitionsRing is the hash ring holding ingester partitions. It's used when ingest storage is enabled. partitionsRing *ring.PartitionInstanceRing + + // For testing functionality that relies on timing without having to sleep in unit tests. 
+	sleep func(time.Duration)
+	now   func() time.Time
 }

+func defaultSleep(d time.Duration) { time.Sleep(d) }
+func defaultNow() time.Time { return time.Now() }
+
 // OTelResourceAttributePromotionConfig contains methods for configuring OTel resource attribute promotion.
 type OTelResourceAttributePromotionConfig interface {
 	// PromoteOTelResourceAttributes returns which OTel resource attributes to promote for tenant ID.
@@ -437,6 +445,8 @@ func New(cfg Config, clientConfig ingester_client.Config, limits *validation.Ove
 		}),

 		PushMetrics: newPushMetrics(reg),
+		now:   defaultNow,
+		sleep: defaultSleep,
 	}

 	// Initialize expected rejected request labels
@@ -728,40 +738,102 @@ func (d *Distributor) checkSample(ctx context.Context, userID, cluster, replica
 	return true, nil
 }

-// Validates a single series from a write request.
+// validateSamples validates samples of a single timeseries and removes the ones with duplicated timestamps.
+// Returns an error explaining the first validation finding.
 // May alter timeseries data in-place.
 // The returned error may retain the series labels.
-// It uses the passed nowt time to observe the delay of sample timestamps.
-func (d *Distributor) validateSeries(nowt time.Time, ts *mimirpb.PreallocTimeseries, userID, group string, skipLabelValidation, skipLabelCountValidation bool, minExemplarTS, maxExemplarTS int64) error {
-	if err := validateLabels(d.sampleValidationMetrics, d.limits, userID, group, ts.Labels, skipLabelValidation, skipLabelCountValidation); err != nil {
-		return err
+func (d *Distributor) validateSamples(now model.Time, ts *mimirpb.PreallocTimeseries, userID, group string) error {
+	if len(ts.Samples) == 0 {
+		return nil
 	}

-	now := model.TimeFromUnixNano(nowt.UnixNano())
+	if len(ts.Samples) == 1 {
+		return validateSample(d.sampleValidationMetrics, now, d.limits, userID, group, ts.Labels, ts.Samples[0])
+	}

+	timestamps := make(map[int64]struct{}, min(len(ts.Samples), 100))
+	currPos := 0
+	duplicatesFound := false
 	for _, s := range ts.Samples {
+		if _, ok := timestamps[s.TimestampMs]; ok {
+			// A sample with the same timestamp has already been validated, so we skip it.
+			duplicatesFound = true
+			continue
+		}
+
+		timestamps[s.TimestampMs] = struct{}{}
 		if err := validateSample(d.sampleValidationMetrics, now, d.limits, userID, group, ts.Labels, s); err != nil {
 			return err
 		}
+
+		ts.Samples[currPos] = s
+		currPos++
+	}
+
+	if duplicatesFound {
+		ts.Samples = ts.Samples[:currPos]
+		ts.SamplesUpdated()
 	}

+	return nil
+}
+
+// validateHistograms validates histograms of a single timeseries and removes the ones with duplicated timestamps.
+// Returns an error explaining the first validation finding.
+// May alter timeseries data in-place.
+// The returned error may retain the series labels.
+func (d *Distributor) validateHistograms(now model.Time, ts *mimirpb.PreallocTimeseries, userID, group string) error {
+	if len(ts.Histograms) == 0 {
+		return nil
+	}
+
+	if len(ts.Histograms) == 1 {
+		updated, err := validateSampleHistogram(d.sampleValidationMetrics, now, d.limits, userID, group, ts.Labels, &ts.Histograms[0])
+		if err != nil {
+			return err
+		}
+		if updated {
+			ts.HistogramsUpdated()
+		}
+		return nil
+	}
+
+	timestamps := make(map[int64]struct{}, min(len(ts.Histograms), 100))
+	currPos := 0
 	histogramsUpdated := false
-	for i := range ts.Histograms {
-		updated, err := validateSampleHistogram(d.sampleValidationMetrics, now, d.limits, userID, group, ts.Labels, &ts.Histograms[i])
+	for idx := range ts.Histograms {
+		if _, ok := timestamps[ts.Histograms[idx].Timestamp]; ok {
+			// A sample with the same timestamp has already been validated, so we skip it.
+			histogramsUpdated = true
+			continue
+		}
+
+		timestamps[ts.Histograms[idx].Timestamp] = struct{}{}
+		updated, err := validateSampleHistogram(d.sampleValidationMetrics, now, d.limits, userID, group, ts.Labels, &ts.Histograms[idx])
 		if err != nil {
 			return err
 		}
 		histogramsUpdated = histogramsUpdated || updated
+
+		ts.Histograms[currPos] = ts.Histograms[idx]
+		currPos++
 	}
+
 	if histogramsUpdated {
+		ts.Histograms = ts.Histograms[:currPos]
 		ts.HistogramsUpdated()
 	}
+
+	return nil
+}
+
+// validateExemplars validates exemplars of a single timeseries.
+// May alter timeseries data in-place.
+func (d *Distributor) validateExemplars(ts *mimirpb.PreallocTimeseries, userID string, minExemplarTS, maxExemplarTS int64) {
 	if d.limits.MaxGlobalExemplarsPerUser(userID) == 0 {
 		ts.ClearExemplars()
-		return nil
+		return
 	}
-
 	allowedExemplars := d.limits.MaxExemplarsPerSeriesPerRequest(userID)
 	if allowedExemplars > 0 && len(ts.Exemplars) > allowedExemplars {
 		d.exemplarValidationMetrics.tooManyExemplars.WithLabelValues(userID).Add(float64(len(ts.Exemplars) - allowedExemplars))
@@ -793,7 +865,40 @@ func (d *Distributor) validateSeries(nowt time.Time, ts *mimirpb.PreallocTimeser
 	if !isInOrder {
 		ts.SortExemplars()
 	}
-	return nil
+}
+
+// Validates a single series from a write request.
+// May alter timeseries data in-place.
+// Returns a boolean stating if the timeseries should be removed from the request, and an error explaining the first validation finding.
+// The returned error may retain the series labels.
+// It uses the passed nowt time to observe the delay of sample timestamps.
+func (d *Distributor) validateSeries(nowt time.Time, ts *mimirpb.PreallocTimeseries, userID, group string, skipLabelValidation, skipLabelCountValidation bool, minExemplarTS, maxExemplarTS int64) (bool, error) {
+	if err := validateLabels(d.sampleValidationMetrics, d.limits, userID, group, ts.Labels, skipLabelValidation, skipLabelCountValidation); err != nil {
+		return true, err
+	}
+
+	now := model.TimeFromUnixNano(nowt.UnixNano())
+	totalSamplesAndHistograms := len(ts.Samples) + len(ts.Histograms)
+
+	if err := d.validateSamples(now, ts, userID, group); err != nil {
+		return true, err
+	}
+
+	if err := d.validateHistograms(now, ts, userID, group); err != nil {
+		return true, err
+	}
+
+	d.validateExemplars(ts, userID, minExemplarTS, maxExemplarTS)
+
+	deduplicatedSamplesAndHistograms := totalSamplesAndHistograms - len(ts.Samples) - len(ts.Histograms)
+
+	if deduplicatedSamplesAndHistograms > 0 {
+		d.sampleValidationMetrics.duplicateTimestamp.WithLabelValues(userID, group).Add(float64(deduplicatedSamplesAndHistograms))
+		unsafeMetricName, _ := extract.UnsafeMetricNameFromLabelAdapters(ts.Labels)
+		return false, fmt.Errorf(duplicateTimestampMsgFormat, deduplicatedSamplesAndHistograms, unsafeMetricName)
+	}
+
+	return false, nil
 }

 func (d *Distributor) labelValuesWithNewlines(labels []mimirpb.LabelAdapter) int {
 	count := 0
@@ -825,7 +930,9 @@ func (d *Distributor) wrapPushWithMiddlewares(next PushFunc) PushFunc {
 		next = middlewares[ix](next)
 	}

-	return next
+	// The delay middleware must take into account the total runtime of all other middlewares and the push func, which is why it wraps all the others.
+	return d.outerMaybeDelayMiddleware(next)
+
 }

 func (d *Distributor) prePushHaDedupeMiddleware(next PushFunc) PushFunc {
@@ -1026,12 +1133,12 @@
 		earliestSampleTimestampMs, latestSampleTimestampMs := int64(math.MaxInt64), int64(0)
 		for _, ts := range req.Timeseries {
 			for _, s := range ts.Samples {
-				earliestSampleTimestampMs = util_math.Min(earliestSampleTimestampMs, s.TimestampMs)
-				latestSampleTimestampMs = util_math.Max(latestSampleTimestampMs, s.TimestampMs)
+				earliestSampleTimestampMs = min(earliestSampleTimestampMs, s.TimestampMs)
+				latestSampleTimestampMs = max(latestSampleTimestampMs, s.TimestampMs)
 			}
 			for _, h := range ts.Histograms {
-				earliestSampleTimestampMs = util_math.Min(earliestSampleTimestampMs, h.Timestamp)
-				latestSampleTimestampMs = util_math.Max(latestSampleTimestampMs, h.Timestamp)
+				earliestSampleTimestampMs = min(earliestSampleTimestampMs, h.Timestamp)
+				latestSampleTimestampMs = max(latestSampleTimestampMs, h.Timestamp)
 			}
 		}
 		// Update this metric even in case of errors.
@@ -1073,7 +1180,7 @@
 			skipLabelCountValidation := d.cfg.SkipLabelCountValidation || req.GetSkipLabelCountValidation()

 			// Note that validateSeries may drop some data in ts.
-			validationErr := d.validateSeries(now, &req.Timeseries[tsIdx], userID, group, skipLabelValidation, skipLabelCountValidation, minExemplarTS, maxExemplarTS)
+			shouldRemove, validationErr := d.validateSeries(now, &req.Timeseries[tsIdx], userID, group, skipLabelValidation, skipLabelCountValidation, minExemplarTS, maxExemplarTS)

 			// Errors in validation are considered non-fatal, as one series in a request may contain
 			// invalid data but all the remaining series could be perfectly valid.
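Note on the deduplication pattern used by the new validateSamples and validateHistograms above: both make a single pass, remembering seen timestamps in a set and compacting kept elements toward the front of the slice, so the first occurrence of each timestamp wins and the slice is truncated rather than reallocated. A minimal, self-contained sketch of that technique follows; the Sample type and function names here are simplified illustrations, not Mimir's API:

package main

import "fmt"

type Sample struct {
	TimestampMs int64
	Value       float64
}

// dedupeByTimestamp keeps the first sample seen for each timestamp,
// compacting survivors in place and truncating the slice.
func dedupeByTimestamp(samples []Sample) []Sample {
	seen := make(map[int64]struct{}, len(samples))
	currPos := 0
	for _, s := range samples {
		if _, ok := seen[s.TimestampMs]; ok {
			// A sample with this timestamp was already kept; drop the duplicate.
			continue
		}
		seen[s.TimestampMs] = struct{}{}
		samples[currPos] = s
		currPos++
	}
	return samples[:currPos]
}

func main() {
	in := []Sample{{TimestampMs: 10, Value: 1}, {TimestampMs: 20, Value: 2}, {TimestampMs: 10, Value: 3}}
	fmt.Println(dedupeByTimestamp(in)) // [{10 1} {20 2}]
}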
@@ -1082,7 +1189,9 @@ func (d *Distributor) prePushValidationMiddleware(next PushFunc) PushFunc {
 				// The series are never retained by validationErr. This is guaranteed by the way the latter is built.
 				firstPartialErr = newValidationError(validationErr)
 			}
-			removeIndexes = append(removeIndexes, tsIdx)
+			if shouldRemove {
+				removeIndexes = append(removeIndexes, tsIdx)
+			}
 			continue
 		}

@@ -1192,6 +1301,28 @@ func (d *Distributor) metricsMiddleware(next PushFunc) PushFunc {
 	}
 }

+// outerMaybeDelayMiddleware is a middleware that may delay ingestion if configured.
+func (d *Distributor) outerMaybeDelayMiddleware(next PushFunc) PushFunc {
+	return func(ctx context.Context, pushReq *Request) error {
+		start := d.now()
+		// Execute the whole middleware chain.
+		err := next(ctx, pushReq)
+
+		userID, userErr := tenant.TenantID(ctx) // Log tenant ID if available.
+		if userErr == nil {
+			// Delay to apply = target delay minus the time spent processing the middleware chain, including the push.
+			// If the request took longer than the target delay, we don't delay at all: sleep returns immediately for a negative value.
+			if delay := d.limits.DistributorIngestionArtificialDelay(userID) - d.now().Sub(start); delay > 0 {
+				d.sleep(util.DurationWithJitter(delay, 0.10))
+			}
+			return err
+		}
+
+		level.Warn(d.log).Log("msg", "failed to get tenant ID while trying to delay ingestion", "err", userErr)
+		return err
+	}
+}
+
 type ctxKey int

 const requestStateKey ctxKey = 1
diff --git a/pkg/distributor/distributor_ingest_storage_test.go b/pkg/distributor/distributor_ingest_storage_test.go
index 3fb0274792f..ec0a99e2633 100644
--- a/pkg/distributor/distributor_ingest_storage_test.go
+++ b/pkg/distributor/distributor_ingest_storage_test.go
@@ -59,11 +59,11 @@ func TestDistributor_Push_ShouldSupportIngestStorage(t *testing.T) {
 	createRequest := func() *mimirpb.WriteRequest {
 		return &mimirpb.WriteRequest{
 			Timeseries: []mimirpb.PreallocTimeseries{
-				makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), makeExemplars([]string{"trace_id", "xxx"}, now.UnixMilli(), 1)),
-				makeTimeseries([]string{model.MetricNameLabel, "series_two"}, makeSamples(now.UnixMilli(), 2), nil),
-				makeTimeseries([]string{model.MetricNameLabel, "series_three"}, makeSamples(now.UnixMilli(), 3), nil),
-				makeTimeseries([]string{model.MetricNameLabel, "series_four"}, makeSamples(now.UnixMilli(), 4), nil),
-				makeTimeseries([]string{model.MetricNameLabel, "series_five"}, makeSamples(now.UnixMilli(), 5), nil),
+				makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil, makeExemplars([]string{"trace_id", "xxx"}, now.UnixMilli(), 1)),
+				makeTimeseries([]string{model.MetricNameLabel, "series_two"}, makeSamples(now.UnixMilli(), 2), nil, nil),
+				makeTimeseries([]string{model.MetricNameLabel, "series_three"}, makeSamples(now.UnixMilli(), 3), nil, nil),
+				makeTimeseries([]string{model.MetricNameLabel, "series_four"}, makeSamples(now.UnixMilli(), 4), nil, nil),
+				makeTimeseries([]string{model.MetricNameLabel, "series_five"}, makeSamples(now.UnixMilli(), 5), nil, nil),
 			},
 			Metadata: []*mimirpb.MetricMetadata{
 				{MetricFamilyName: "series_one", Type: mimirpb.COUNTER, Help: "Series one description"},
@@ -254,7 +254,7 @@ func TestDistributor_Push_ShouldReturnErrorMappedTo4xxStatusCodeIfWriteRequestCo
 	createWriteRequest := func() *mimirpb.WriteRequest {
 		return &mimirpb.WriteRequest{
 			Timeseries: []mimirpb.PreallocTimeseries{
-				makeTimeseries([]string{model.MetricNameLabel, strings.Repeat("x", hugeLabelValueLength)},
makeSamples(now.UnixMilli(), 1), nil), + makeTimeseries([]string{model.MetricNameLabel, strings.Repeat("x", hugeLabelValueLength)}, makeSamples(now.UnixMilli(), 1), nil, nil), }, } } @@ -315,11 +315,11 @@ func TestDistributor_Push_ShouldSupportWriteBothToIngestersAndPartitions(t *test createRequest := func() *mimirpb.WriteRequest { return &mimirpb.WriteRequest{ Timeseries: []mimirpb.PreallocTimeseries{ - makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil), - makeTimeseries([]string{model.MetricNameLabel, "series_two"}, makeSamples(now.UnixMilli(), 2), nil), - makeTimeseries([]string{model.MetricNameLabel, "series_three"}, makeSamples(now.UnixMilli(), 3), nil), - makeTimeseries([]string{model.MetricNameLabel, "series_four"}, makeSamples(now.UnixMilli(), 4), nil), - makeTimeseries([]string{model.MetricNameLabel, "series_five"}, makeSamples(now.UnixMilli(), 5), nil), + makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil, nil), + makeTimeseries([]string{model.MetricNameLabel, "series_two"}, makeSamples(now.UnixMilli(), 2), nil, nil), + makeTimeseries([]string{model.MetricNameLabel, "series_three"}, makeSamples(now.UnixMilli(), 3), nil, nil), + makeTimeseries([]string{model.MetricNameLabel, "series_four"}, makeSamples(now.UnixMilli(), 4), nil, nil), + makeTimeseries([]string{model.MetricNameLabel, "series_five"}, makeSamples(now.UnixMilli(), 5), nil, nil), }, } } @@ -519,7 +519,7 @@ func TestDistributor_Push_ShouldCleanupWriteRequestAfterWritingBothToIngestersAn // Send write request. _, err := distributors[0].Push(ctx, &mimirpb.WriteRequest{ Timeseries: []mimirpb.PreallocTimeseries{ - makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil), + makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil, nil), }, }) require.NoError(t, err) @@ -600,7 +600,7 @@ func TestDistributor_Push_ShouldGivePrecedenceToPartitionsErrorWhenWritingBothTo // Send write request. _, err := distributors[0].Push(ctx, &mimirpb.WriteRequest{ Timeseries: []mimirpb.PreallocTimeseries{ - makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil), + makeTimeseries([]string{model.MetricNameLabel, "series_one"}, makeSamples(now.UnixMilli(), 1), nil, nil), }, }) @@ -899,12 +899,12 @@ func TestDistributor_LabelValuesCardinality_AvailabilityAndConsistencyWithIngest var ( // Define fixtures used in tests. 
- series1 = makeTimeseries([]string{labels.MetricName, "series_1", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil) - series2 = makeTimeseries([]string{labels.MetricName, "series_2", "job", "job-b", "service", "service-1"}, makeSamples(0, 0), nil) - series3 = makeTimeseries([]string{labels.MetricName, "series_3", "job", "job-c", "service", "service-1"}, makeSamples(0, 0), nil) - series4 = makeTimeseries([]string{labels.MetricName, "series_4", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil) - series5 = makeTimeseries([]string{labels.MetricName, "series_5", "job", "job-a", "service", "service-2"}, makeSamples(0, 0), nil) - series6 = makeTimeseries([]string{labels.MetricName, "series_6", "job", "job-b" /* no service label */}, makeSamples(0, 0), nil) + series1 = makeTimeseries([]string{labels.MetricName, "series_1", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil, nil) + series2 = makeTimeseries([]string{labels.MetricName, "series_2", "job", "job-b", "service", "service-1"}, makeSamples(0, 0), nil, nil) + series3 = makeTimeseries([]string{labels.MetricName, "series_3", "job", "job-c", "service", "service-1"}, makeSamples(0, 0), nil, nil) + series4 = makeTimeseries([]string{labels.MetricName, "series_4", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil, nil) + series5 = makeTimeseries([]string{labels.MetricName, "series_5", "job", "job-a", "service", "service-2"}, makeSamples(0, 0), nil, nil) + series6 = makeTimeseries([]string{labels.MetricName, "series_6", "job", "job-b" /* no service label */}, makeSamples(0, 0), nil, nil) // To keep assertions simple, all tests push all series, and then request the cardinality of the same label names, // so we expect the same response from each successful test. 
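For context on the outerMaybeDelayMiddleware added in pkg/distributor/distributor.go above: it implements a target-latency pattern, measuring how long the wrapped middleware chain took and sleeping only for the unused remainder of the configured delay, so a request that already exceeded the target is not delayed further. A minimal sketch of that pattern, assuming a fixed targetDelay in place of Mimir's per-tenant limit and omitting the jitter:

package main

import (
	"fmt"
	"time"
)

// withArtificialDelay wraps fn so each call takes at least targetDelay,
// sleeping only for whatever time fn itself did not already consume.
func withArtificialDelay(targetDelay time.Duration, fn func() error) func() error {
	return func() error {
		start := time.Now()
		err := fn() // run the wrapped work first
		// Sleep for the unused remainder of the budget; a non-positive
		// remainder means fn already exceeded the target, so no sleep.
		if remaining := targetDelay - time.Since(start); remaining > 0 {
			time.Sleep(remaining)
		}
		return err
	}
}

func main() {
	push := withArtificialDelay(100*time.Millisecond, func() error {
		time.Sleep(30 * time.Millisecond) // simulated push work
		return nil
	})
	start := time.Now()
	_ = push()
	fmt.Println("total:", time.Since(start).Round(time.Millisecond)) // ~100ms
}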
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go index d6f17f22a7f..76a27fff797 100644 --- a/pkg/distributor/distributor_test.go +++ b/pkg/distributor/distributor_test.go @@ -64,7 +64,6 @@ import ( "github.com/grafana/mimir/pkg/util/extract" "github.com/grafana/mimir/pkg/util/globalerror" "github.com/grafana/mimir/pkg/util/limiter" - util_math "github.com/grafana/mimir/pkg/util/math" util_test "github.com/grafana/mimir/pkg/util/test" "github.com/grafana/mimir/pkg/util/testkafka" "github.com/grafana/mimir/pkg/util/validation" @@ -960,7 +959,7 @@ func TestDistributor_PushQuery(t *testing.T) { var expectedIngesters int if shuffleShardSize > 0 { - expectedIngesters = util_math.Min(shuffleShardSize, numIngesters) + expectedIngesters = min(shuffleShardSize, numIngesters) } else { expectedIngesters = numIngesters } @@ -1467,6 +1466,290 @@ func TestDistributor_Push_HistogramValidation(t *testing.T) { } } +func TestDistributor_SampleDuplicateTimestamp(t *testing.T) { + labels := []string{labels.MetricName, "series", "job", "job", "service", "service"} + + testCases := map[string]struct { + req *mimirpb.WriteRequest + expectedSamples []mimirpb.PreallocTimeseries + expectedErrors []error + expectedMetrics string + }{ + "do not deduplicate if there are no duplicated timestamps": { + req: makeWriteRequestWith( + makeTimeseries(labels, makeSamples(10, 1), nil, nil), + makeTimeseries(labels, makeSamples(20, 2), nil, nil), + ), + expectedSamples: []mimirpb.PreallocTimeseries{ + makeTimeseries(labels, makeSamples(10, 1), nil, nil), + makeTimeseries(labels, makeSamples(20, 2), nil, nil), + }, + }, + "deduplicate duplicated timestamps within a single timeseries, and return the first error encountered": { + req: makeWriteRequestWith( + makeTimeseries(labels, append(makeSamples(10, 1), append(makeSamples(20, 2), append(makeSamples(10, 3), makeSamples(20, 4)...)...)...), nil, nil), + makeTimeseries(labels, nil, append(makeHistograms(30, generateTestHistogram(0)), append(makeHistograms(40, generateTestHistogram(1)), append(makeHistograms(40, generateTestHistogram(2)), makeHistograms(30, generateTestHistogram(3))...)...)...), nil), + ), + expectedSamples: []mimirpb.PreallocTimeseries{ + makeTimeseries(labels, append(makeSamples(10, 1), makeSamples(20, 2)...), nil, nil), + makeTimeseries(labels, nil, append(makeHistograms(30, generateTestHistogram(0)), makeHistograms(40, generateTestHistogram(1))...), nil), + }, + expectedErrors: []error{ + fmt.Errorf("samples with duplicated timestamps have been discarded, discarded samples: %d series: '%.200s' (err-mimir-sample-duplicate-timestamp)", 2, "series"), + fmt.Errorf("samples with duplicated timestamps have been discarded, discarded samples: %d series: '%.200s' (err-mimir-sample-duplicate-timestamp)", 2, "series"), + }, + expectedMetrics: ` + # HELP cortex_discarded_samples_total The total number of samples that were discarded. 
+ # TYPE cortex_discarded_samples_total counter + cortex_discarded_samples_total{group="test-group",reason="sample_duplicate_timestamp",user="user"} 4 + `, + }, + "do not deduplicate duplicated timestamps in different timeseries": { + req: makeWriteRequestWith( + makeTimeseries(labels, append(makeSamples(10, 1), makeSamples(10, 2)...), makeHistograms(30, generateTestHistogram(0)), nil), + makeTimeseries(labels, makeSamples(10, 3), append(makeHistograms(20, generateTestHistogram(1)), makeHistograms(20, generateTestHistogram(2))...), nil), + makeTimeseries(labels, makeSamples(10, 4), append(makeHistograms(20, generateTestHistogram(3)), makeHistograms(30, generateTestHistogram(4))...), nil), + ), + expectedSamples: []mimirpb.PreallocTimeseries{ + makeTimeseries(labels, makeSamples(10, 1), makeHistograms(30, generateTestHistogram(0)), nil), + makeTimeseries(labels, makeSamples(10, 3), makeHistograms(20, generateTestHistogram(1)), nil), + makeTimeseries(labels, makeSamples(10, 4), append(makeHistograms(20, generateTestHistogram(3)), makeHistograms(30, generateTestHistogram(4))...), nil), + }, + expectedErrors: []error{ + fmt.Errorf("samples with duplicated timestamps have been discarded, discarded samples: %d series: '%.200s' (err-mimir-sample-duplicate-timestamp)", 1, "series"), + fmt.Errorf("samples with duplicated timestamps have been discarded, discarded samples: %d series: '%.200s' (err-mimir-sample-duplicate-timestamp)", 1, "series"), + nil, + }, + expectedMetrics: ` + # HELP cortex_discarded_samples_total The total number of samples that were discarded. + # TYPE cortex_discarded_samples_total counter + cortex_discarded_samples_total{group="test-group",reason="sample_duplicate_timestamp",user="user"} 2 + `, + }, + } + + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + limits := prepareDefaultLimits() + ds, _, regs, _ := prepare(t, prepConfig{ + limits: limits, + numDistributors: 1, + }) + + // Pre-condition check. 
+	require.Len(t, ds, 1)
+	require.Len(t, regs, 1)
+
+	now := mtime.Now()
+	for i, ts := range tc.req.Timeseries {
+		shouldRemove, err := ds[0].validateSeries(now, &ts, "user", "test-group", true, true, 0, 0)
+		require.False(t, shouldRemove)
+		if len(tc.expectedErrors) == 0 {
+			require.NoError(t, err)
+		} else {
+			if tc.expectedErrors[i] == nil {
+				require.NoError(t, err)
+			} else {
+				require.Error(t, err)
+				require.Equal(t, tc.expectedErrors[i], err)
+			}
+		}
+	}
+
+	assert.Equal(t, tc.expectedSamples, tc.req.Timeseries)
+	assert.NoError(t, testutil.GatherAndCompare(regs[0], strings.NewReader(tc.expectedMetrics), "cortex_discarded_samples_total"))
+		})
+	}
+}
+
+func BenchmarkDistributor_SampleDuplicateTimestamp(b *testing.B) {
+	const (
+		metricName              = "series"
+		testSize                = 80_000
+		numberOfDifferentValues = 40_000
+	)
+	labels := []string{labels.MetricName, metricName, "job", "job", "service", "service"}
+
+	now := mtime.Now()
+	timestamp := now.UnixMilli()
+
+	testCases := map[string]struct {
+		setup          func(int) [][]mimirpb.PreallocTimeseries
+		expectedErrors bool
+	}{
+		"one timeseries with one sample": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, makeSamples(timestamp, 1), nil, nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with one histogram": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, nil, makeHistograms(timestamp, generateTestHistogram(1)), nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with one sample and one histogram": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, makeSamples(timestamp-1, 1), makeHistograms(timestamp, generateTestHistogram(2)), nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with two samples": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, append(makeSamples(timestamp-1, 1), makeSamples(timestamp, 2)...), nil, nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with two histograms": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, nil, append(makeHistograms(timestamp-1, generateTestHistogram(1)), makeHistograms(timestamp, generateTestHistogram(2))...), nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with two samples and two histograms": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				ts := []mimirpb.PreallocTimeseries{
+					makeTimeseries(labels, append(makeSamples(timestamp-1, 1), makeSamples(timestamp, 2)...), append(makeHistograms(timestamp-1, generateTestHistogram(3)), makeHistograms(timestamp, generateTestHistogram(4))...), nil),
+				}
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					timeseries[i] = ts
+				}
+				return timeseries
+			},
+			expectedErrors: false,
+		},
+		"one timeseries with 80_000 samples with duplicated timestamps": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					samples := make([]mimirpb.Sample, testSize)
+					value := 0
+					for i := 0; i < testSize; i++ {
+						if i < numberOfDifferentValues {
+							value++
+						}
+						samples[i].TimestampMs = timestamp
+						samples[i].Value = float64(value)
+					}
+					timeseries[i] = []mimirpb.PreallocTimeseries{
+						makeTimeseries(labels, samples, nil, nil),
+					}
+				}
+				return timeseries
+			},
+			expectedErrors: true,
+		},
+		"one timeseries with 80_000 histograms with duplicated timestamps": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					histograms := make([]mimirpb.Histogram, testSize)
+					value := 0
+					for i := 0; i < testSize; i++ {
+						if i < numberOfDifferentValues {
+							value++
+						}
+						histograms[i] = makeHistograms(timestamp, generateTestHistogram(value))[0]
+					}
+					timeseries[i] = []mimirpb.PreallocTimeseries{
+						makeTimeseries(labels, nil, histograms, nil),
+					}
+				}
+				return timeseries
+			},
+			expectedErrors: true,
+		},
+		"one timeseries with 80_000 samples and 80_000 histograms with duplicated timestamps": {
+			setup: func(n int) [][]mimirpb.PreallocTimeseries {
+				timeseries := make([][]mimirpb.PreallocTimeseries, n)
+				for i := 0; i < n; i++ {
+					samples := make([]mimirpb.Sample, testSize)
+					histograms := make([]mimirpb.Histogram, testSize)
+					value := 0
+					for i := 0; i < testSize; i++ {
+						if i < numberOfDifferentValues {
+							value++
+						}
+						samples[i].TimestampMs = timestamp
+						samples[i].Value = float64(value)
+						histograms[i] = makeHistograms(timestamp, generateTestHistogram(value))[0]
+					}
+					timeseries[i] = []mimirpb.PreallocTimeseries{
+						makeTimeseries(labels, samples, histograms, nil),
+					}
+				}
+				return timeseries
+			},
+			expectedErrors: true,
+		},
+	}
+
+	limits := prepareDefaultLimits()
+	ds, _, regs, _ := prepare(b, prepConfig{
+		limits:          limits,
+		numDistributors: 1,
+	})
+
+	// Pre-condition check.
+	require.Len(b, ds, 1)
+	require.Len(b, regs, 1)
+
+	for name, tc := range testCases {
+		b.Run(name, func(b *testing.B) {
+			timeseries := tc.setup(b.N)
+			b.ReportAllocs()
+			b.ResetTimer()
+			for n := 0; n < b.N; n++ {
+				for _, ts := range timeseries[n] {
+					_, err := ds[0].validateSeries(now, &ts, "user", "test-group", true, true, 0, 0)
+					if !tc.expectedErrors && err != nil {
+						b.Fatal(err)
+					} else if tc.expectedErrors && err == nil {
+						b.Fatal("an error was expected")
+					}
+				}
+			}
+		})
+	}
+}
+
 func TestDistributor_ExemplarValidation(t *testing.T) {
 	tests := map[string]struct {
 		prepareConfig func(limits *validation.Limits)
@@ -1700,8 +1983,9 @@ func TestDistributor_ExemplarValidation(t *testing.T) {
 		require.Len(t, regs, 1)
 
 		for _, ts := range tc.req.Timeseries {
-			err := ds[0].validateSeries(now, &ts, "user", "test-group", false, false, tc.minExemplarTS, tc.maxExemplarTS)
+			shouldRemove, err := ds[0].validateSeries(now, &ts, "user", "test-group", false, false, tc.minExemplarTS, tc.maxExemplarTS)
 			assert.NoError(t, err)
+			assert.False(t, shouldRemove)
 		}
 
 		assert.Equal(t, tc.expectedExemplars, tc.req.Timeseries)
@@ -1807,11 +2091,13 @@ func TestDistributor_HistogramReduction(t *testing.T) {
 		require.Len(t, regs, 1)
 
 		for _, ts := range tc.req.Timeseries {
-			err := ds[0].validateSeries(now, &ts, "user", "test-group", false, false, 0, 0)
+			shouldRemove, err := ds[0].validateSeries(now, &ts, "user", "test-group", false, false, 0, 0)
 			if tc.expectedError != nil {
 				require.ErrorAs(t, err, &tc.expectedError)
+				require.True(t, shouldRemove)
 			} else {
 				assert.NoError(t, err)
+				assert.False(t, shouldRemove)
 			}
 		}
 		if tc.expectedError == nil {
@@ -4021,13 +4307,13 @@ func TestDistributor_LabelValuesCardinality(t *testing.T) {
 
 func TestDistributor_LabelValuesCardinality_AvailabilityAndConsistency(t *testing.T) {
 	var (
 		// Define fixtures used in tests.
-		series1 = makeTimeseries([]string{labels.MetricName, "series_1", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil)
-		series2 = makeTimeseries([]string{labels.MetricName, "series_2", "job", "job-b", "service", "service-1"}, makeSamples(0, 0), nil)
-		series3 = makeTimeseries([]string{labels.MetricName, "series_3", "job", "job-c", "service", "service-1"}, makeSamples(0, 0), nil)
-		series4 = makeTimeseries([]string{labels.MetricName, "series_4", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil)
-		series5 = makeTimeseries([]string{labels.MetricName, "series_5", "job", "job-a", "service", "service-2"}, makeSamples(0, 0), nil)
-		series6 = makeTimeseries([]string{labels.MetricName, "series_6", "job", "job-b" /* no service label */}, makeSamples(0, 0), nil)
-		other1  = makeTimeseries([]string{labels.MetricName, "other_1", "job", "job-1", "service", "service-1"}, makeSamples(0, 0), nil)
+		series1 = makeTimeseries([]string{labels.MetricName, "series_1", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil, nil)
+		series2 = makeTimeseries([]string{labels.MetricName, "series_2", "job", "job-b", "service", "service-1"}, makeSamples(0, 0), nil, nil)
+		series3 = makeTimeseries([]string{labels.MetricName, "series_3", "job", "job-c", "service", "service-1"}, makeSamples(0, 0), nil, nil)
+		series4 = makeTimeseries([]string{labels.MetricName, "series_4", "job", "job-a", "service", "service-1"}, makeSamples(0, 0), nil, nil)
+		series5 = makeTimeseries([]string{labels.MetricName, "series_5", "job", "job-a", "service", "service-2"}, makeSamples(0, 0), nil, nil)
+		series6 = makeTimeseries([]string{labels.MetricName, "series_6", "job", "job-b" /* no service label */}, makeSamples(0, 0), nil, nil)
+		other1  = makeTimeseries([]string{labels.MetricName, "other_1", "job", "job-1", "service", "service-1"}, makeSamples(0, 0), nil, nil)
 
 		// To keep assertions simple, all tests push all series, and then request the cardinality of the same label names,
 		// so we expect the same response from each successful test.
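// Illustrative aside, based on the updated helper signature in the makeTimeseries hunk
// further below: the helper now takes histograms as its third argument
// (labels, samples, histograms, exemplars), so float-only fixtures such as the ones above
// pass nil for both trailing slices. Hypothetical calls for a float-only and a
// histogram-only series:
//
//	floatSeries := makeTimeseries([]string{labels.MetricName, "series_1"}, makeSamples(0, 0), nil, nil)
//	histSeries := makeTimeseries([]string{labels.MetricName, "series_1"}, nil, makeHistograms(0, generateTestHistogram(1)), nil)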
@@ -4723,6 +5009,7 @@ func TestRelabelMiddleware(t *testing.T) {
 			},
 			makeSamples(123, 1.23),
 			nil,
+			nil,
 		)},
 		}},
 		expectedReqs: []*mimirpb.WriteRequest{{
@@ -4734,6 +5021,7 @@ func TestRelabelMiddleware(t *testing.T) {
 			},
 			makeSamples(123, 1.23),
 			nil,
+			nil,
 		)},
 		}},
 		expectErrs: []bool{false},
@@ -5401,6 +5689,7 @@ func makeWriteRequest(startTimestampMs int64, samples, metadata int, exemplars,
 			},
 			makeSamples(startTimestampMs+int64(i), float64(i)),
 			nil,
+			nil,
 		)
 
 		if exemplars {
@@ -5443,12 +5732,13 @@ func makeWriteRequestWith(series ...mimirpb.PreallocTimeseries) *mimirpb.WriteRe
 	return &mimirpb.WriteRequest{Timeseries: series}
 }
 
-func makeTimeseries(seriesLabels []string, samples []mimirpb.Sample, exemplars []mimirpb.Exemplar) mimirpb.PreallocTimeseries {
+func makeTimeseries(seriesLabels []string, samples []mimirpb.Sample, histograms []mimirpb.Histogram, exemplars []mimirpb.Exemplar) mimirpb.PreallocTimeseries {
 	return mimirpb.PreallocTimeseries{
 		TimeSeries: &mimirpb.TimeSeries{
-			Labels:    mimirpb.FromLabelsToLabelAdapters(labels.FromStrings(seriesLabels...)),
-			Samples:   samples,
-			Exemplars: exemplars,
+			Labels:     mimirpb.FromLabelsToLabelAdapters(labels.FromStrings(seriesLabels...)),
+			Samples:    samples,
+			Histograms: histograms,
+			Exemplars:  exemplars,
 		},
 	}
 }
@@ -7689,7 +7979,7 @@ func TestDistributor_Push_SendMessageMetadata(t *testing.T) {
 	req := &mimirpb.WriteRequest{
 		Timeseries: []mimirpb.PreallocTimeseries{
-			makeTimeseries([]string{model.MetricNameLabel, "test1"}, makeSamples(time.Now().UnixMilli(), 1), nil),
+			makeTimeseries([]string{model.MetricNameLabel, "test1"}, makeSamples(time.Now().UnixMilli(), 1), nil, nil),
 		},
 		Source: mimirpb.API,
 	}
@@ -8004,3 +8294,101 @@ func TestCheckStartedMiddleware(t *testing.T) {
 	require.NotNil(t, err)
 	require.ErrorContains(t, err, "rpc error: code = Internal desc = distributor is unavailable (current state: New)")
 }
+
+func Test_outerMaybeDelayMiddleware(t *testing.T) {
+	tests := []struct {
+		name          string
+		userID        string
+		delay         time.Duration
+		pushDuration  time.Duration
+		expectedSleep time.Duration
+	}{
+		{
+			name:          "No delay configured",
+			userID:        "user1",
+			delay:         0,
+			pushDuration:  500 * time.Millisecond,
+			expectedSleep: 0,
+		},
+		{
+			name:          "Delay configured but request took longer than delay",
+			userID:        "user2",
+			delay:         500 * time.Millisecond,
+			pushDuration:  1 * time.Second,
+			expectedSleep: 0,
+		},
+		{
+			name:          "Delay configured and request took less than delay",
+			userID:        "user3",
+			delay:         500 * time.Millisecond,
+			pushDuration:  50 * time.Millisecond,
+			expectedSleep: 450 * time.Millisecond,
+		},
+		{
+			name:          "Failed to extract a tenantID",
+			userID:        "",
+			delay:         500 * time.Millisecond,
+			pushDuration:  50 * time.Millisecond,
+			expectedSleep: 0,
+		},
+	}
+
+	for _, tc := range tests {
+		t.Run(tc.name, func(t *testing.T) {
+			limits := validation.NewMockTenantLimits(map[string]*validation.Limits{
+				tc.userID: {
+					IngestionArtificialDelay: model.Duration(tc.delay),
+				},
+			})
+			overrides, err := validation.NewOverrides(*prepareDefaultLimits(), limits)
+			require.NoError(t, err)
+
+			// Mock to capture sleep and advance time.
+			timeSource := &MockTimeSource{CurrentTime: time.Now()}
+
+			distributor := &Distributor{
+				log:    log.NewNopLogger(),
+				limits: overrides,
+				sleep:  timeSource.Sleep,
+				now:    timeSource.Now,
+			}
+
+			// The fake push advances the mocked clock, simulating time spent in the push path.
+			p := func(_ context.Context, _ *Request) error {
+				timeSource.Add(tc.pushDuration)
+				return nil
+			}
+
+			ctx := context.Background()
+			if tc.userID != "" {
+				ctx = user.InjectOrgID(ctx, tc.userID)
+			}
+			wrappedPush := distributor.outerMaybeDelayMiddleware(p)
+			err = wrappedPush(ctx, NewParsedRequest(&mimirpb.WriteRequest{}))
+			require.NoError(t, err)
+
+			// Because of the 10% jitter, the slept duration is not deterministic; allow a matching tolerance.
+			difference := timeSource.Slept - tc.expectedSleep
+			require.LessOrEqual(t, difference.Abs(), tc.expectedSleep/10)
+		})
+	}
+}
+
+type MockTimeSource struct {
+	CurrentTime time.Time
+	Slept       time.Duration
+}
+
+func (m *MockTimeSource) Now() time.Time {
+	return m.CurrentTime
+}
+
+func (m *MockTimeSource) Sleep(d time.Duration) {
+	if d > 0 {
+		m.Slept += d
+	}
+}
+
+func (m *MockTimeSource) Add(d time.Duration) {
+	m.CurrentTime = m.CurrentTime.Add(d)
+}
diff --git a/pkg/distributor/ha_tracker.go b/pkg/distributor/ha_tracker.go
index 1db18d603c2..1edb93720b1 100644
--- a/pkg/distributor/ha_tracker.go
+++ b/pkg/distributor/ha_tracker.go
@@ -34,7 +34,6 @@ import (
 var (
 	errNegativeUpdateTimeoutJitterMax = errors.New("HA tracker max update timeout jitter shouldn't be negative")
 	errInvalidFailoverTimeout         = "HA Tracker failover timeout (%v) must be at least 1s greater than update timeout - max jitter (%v)"
-	errMemberlistUnsupported          = errors.New("memberlist is not supported by the HA tracker since gossip propagation is too slow for HA purposes")
 )
 
 type haTrackerLimits interface {
@@ -82,27 +81,32 @@ func (r *ReplicaDesc) mergeWithTime(mergeable memberlist.Mergeable) (memberlist.
 		return nil, nil
 	}
 
+	changed := false
 	if other.Replica == r.Replica {
 		// Keeping the one with the most recent receivedAt timestamp
 		if other.ReceivedAt > r.ReceivedAt {
 			*r = *other
+			changed = true
 		} else if r.ReceivedAt == other.ReceivedAt && r.DeletedAt == 0 && other.DeletedAt != 0 {
 			*r = *other
+			changed = true
 		}
 	} else {
 		// keep the most recent ElectedAt to reach consistency
 		if other.ElectedAt > r.ElectedAt {
 			*r = *other
+			changed = true
 		} else if other.ElectedAt == r.ElectedAt {
 			// if the timestamps are equal we compare ReceivedAt
 			if other.ReceivedAt > r.ReceivedAt {
 				*r = *other
+				changed = true
 			}
 		}
 	}
 
 	// No changes
-	if *r != *other {
+	if !changed {
 		return nil, nil
 	}
 
@@ -147,7 +151,7 @@ type HATrackerConfig struct {
 	// more than this duration
 	FailoverTimeout time.Duration `yaml:"ha_tracker_failover_timeout" category:"advanced"`
 
-	KVStore kv.Config `yaml:"kvstore" doc:"description=Backend storage to use for the ring. Please be aware that memberlist is not supported by the HA tracker since gossip propagation is too slow for HA purposes."`
+	KVStore kv.Config `yaml:"kvstore" doc:"description=Backend storage to use for the ring. Note that memberlist support is experimental."`
 }
 
 // RegisterFlags adds the flags required to config this to the given FlagSet.
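// Illustrative sketch of the Validate() change in the next hunk, assuming a config built the
// same way as in TestHATrackerConfig_Validate below; the explicit Store assignment is an
// assumption for illustration. A memberlist-backed KV store is no longer rejected:
//
//	cfg := HATrackerConfig{}
//	flagext.DefaultValues(&cfg)
//	cfg.KVStore.Store = "memberlist"
//	err := cfg.Validate() // previously errMemberlistUnsupported, now nil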
@@ -175,10 +179,6 @@ func (cfg *HATrackerConfig) Validate() error {
 		return fmt.Errorf(errInvalidFailoverTimeout, cfg.FailoverTimeout, minFailureTimeout)
 	}
 
-	if cfg.KVStore.Store == "memberlist" {
-		return errMemberlistUnsupported
-	}
-
 	return nil
 }
diff --git a/pkg/distributor/ha_tracker_test.go b/pkg/distributor/ha_tracker_test.go
index f596e5e4c4e..68ef61e417b 100644
--- a/pkg/distributor/ha_tracker_test.go
+++ b/pkg/distributor/ha_tracker_test.go
@@ -223,6 +223,13 @@ func TestReplicaDescMerge(t *testing.T) {
 			}(),
 			expectedChange: nil,
 		},
+		{
+			name:           "Merge should return no change when replica is the same",
+			rDesc1:         firstReplica(),
+			rDesc2:         firstReplica(),
+			expectedRDesc:  firstReplica(),
+			expectedChange: nil,
+		},
 	}
 
 	for _, tt := range testsMerge {
@@ -423,7 +430,7 @@ func TestHATrackerConfig_Validate(t *testing.T) {
 			}(),
 			expectedErr: nil,
 		},
-		"should fail if KV backend is set to memberlist": {
+		"should pass if KV backend is set to memberlist": {
 			cfg: func() HATrackerConfig {
 				cfg := HATrackerConfig{}
 				flagext.DefaultValues(&cfg)
 
 				return cfg
 			}(),
-			expectedErr: errMemberlistUnsupported,
+			expectedErr: nil,
 		},
 	}
diff --git a/pkg/distributor/otel.go b/pkg/distributor/otel.go
index 2055e7544d4..6c110043a9e 100644
--- a/pkg/distributor/otel.go
+++ b/pkg/distributor/otel.go
@@ -51,6 +51,7 @@ type OTLPHandlerLimits interface {
 	OTelMetricSuffixesEnabled(id string) bool
 	OTelCreatedTimestampZeroIngestionEnabled(id string) bool
 	PromoteOTelResourceAttributes(id string) []string
+	OTelKeepIdentifyingResourceAttributes(id string) bool
 }
 
 // OTLPHandler is an http.Handler accepting OTLP write requests.
@@ -174,12 +175,13 @@ func OTLPHandler(
 			resourceAttributePromotionConfig = limits
 		}
 		promoteResourceAttributes := resourceAttributePromotionConfig.PromoteOTelResourceAttributes(tenantID)
+		keepIdentifyingResourceAttributes := limits.OTelKeepIdentifyingResourceAttributes(tenantID)
 
 		pushMetrics.IncOTLPRequest(tenantID)
 		pushMetrics.ObserveUncompressedBodySize(tenantID, float64(uncompressedBodySize))
 
 		var metrics []mimirpb.PreallocTimeseries
-		metrics, err = otelMetricsToTimeseries(ctx, tenantID, addSuffixes, enableCTZeroIngestion, promoteResourceAttributes, discardedDueToOtelParseError, spanLogger, otlpReq.Metrics())
+		metrics, err = otelMetricsToTimeseries(ctx, tenantID, addSuffixes, enableCTZeroIngestion, promoteResourceAttributes, keepIdentifyingResourceAttributes, discardedDueToOtelParseError, spanLogger, otlpReq.Metrics())
 		if err != nil {
 			return err
 		}
@@ -317,14 +319,15 @@ func httpRetryableToOTLPRetryable(httpStatusCode int) int {
 // writeErrorToHTTPResponseBody converts the given error into a grpc status and marshals it into a byte slice, in order to be written to the response body.
 // See doc https://opentelemetry.io/docs/specs/otlp/#failures-1
 func writeErrorToHTTPResponseBody(reqCtx context.Context, w http.ResponseWriter, httpCode int, grpcCode codes.Code, msg string, logger log.Logger) {
+	validUTF8Msg := validUTF8Message(msg)
 	w.Header().Set("Content-Type", "application/octet-stream")
 	w.Header().Set("X-Content-Type-Options", "nosniff")
 	if server.IsHandledByHttpgrpcServer(reqCtx) {
-		w.Header().Set(server.ErrorMessageHeaderKey, msg) // If httpgrpc Server wants to convert this HTTP response into error, use this error message, instead of using response body.
+		w.Header().Set(server.ErrorMessageHeaderKey, validUTF8Msg) // If httpgrpc Server wants to convert this HTTP response into error, use this error message, instead of using response body.
 	}
 	w.WriteHeader(httpCode)
-	respBytes, err := proto.Marshal(status.New(grpcCode, msg).Proto())
+	respBytes, err := proto.Marshal(status.New(grpcCode, validUTF8Msg).Proto())
 	if err != nil {
 		level.Error(logger).Log("msg", "otlp response marshal failed", "err", err)
 		writeResponseFailedBody, _ := proto.Marshal(status.New(codes.Internal, "failed to marshal OTLP response").Proto())
@@ -408,12 +411,13 @@ func otelMetricsToMetadata(addSuffixes bool, md pmetric.Metrics) []*mimirpb.Metr
 	return metadata
 }
 
-func otelMetricsToTimeseries(ctx context.Context, tenantID string, addSuffixes, enableCTZeroIngestion bool, promoteResourceAttributes []string, discardedDueToOtelParseError *prometheus.CounterVec, logger log.Logger, md pmetric.Metrics) ([]mimirpb.PreallocTimeseries, error) {
+func otelMetricsToTimeseries(ctx context.Context, tenantID string, addSuffixes, enableCTZeroIngestion bool, promoteResourceAttributes []string, keepIdentifyingResourceAttributes bool, discardedDueToOtelParseError *prometheus.CounterVec, logger log.Logger, md pmetric.Metrics) ([]mimirpb.PreallocTimeseries, error) {
 	converter := otlp.NewMimirConverter()
 	_, errs := converter.FromMetrics(ctx, md, otlp.Settings{
 		AddMetricSuffixes:                   addSuffixes,
 		EnableCreatedTimestampZeroIngestion: enableCTZeroIngestion,
 		PromoteResourceAttributes:           promoteResourceAttributes,
+		KeepIdentifyingResourceAttributes:   keepIdentifyingResourceAttributes,
 	}, utillog.SlogFromGoKit(logger))
 	mimirTS := converter.TimeSeries()
 	if errs != nil {
diff --git a/pkg/distributor/otel_test.go b/pkg/distributor/otel_test.go
index deffa2d6ae7..c381660a76b 100644
--- a/pkg/distributor/otel_test.go
+++ b/pkg/distributor/otel_test.go
@@ -26,7 +26,6 @@ import (
 	"github.com/prometheus/common/model"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/prompb"
-	prometheustranslator "github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"go.opentelemetry.io/collector/pdata/pcommon"
@@ -50,6 +49,7 @@ func TestOTelMetricsToTimeSeries(t *testing.T) {
 	}, []string{tenantID, "group"})
 	resourceAttrs := map[string]string{
 		"service.name":        "service name",
+		"service.namespace":   "service namespace",
 		"service.instance.id": "service ID",
 		"existent-attr":       "resource value",
 		// This one is for testing conflict with metric attribute.
@@ -59,32 +59,6 @@
 		// This one is for testing conflict with auto-generated instance attribute.
"instance": "resource value", } - expTargetInfoLabels := []mimirpb.LabelAdapter{ - { - Name: labels.MetricName, - Value: "target_info", - }, - } - for k, v := range resourceAttrs { - switch k { - case "service.name": - k = "job" - case "service.instance.id": - k = "instance" - case "job", "instance": - // Ignore, as these labels are generated from service.name and service.instance.id - continue - default: - k = prometheustranslator.NormalizeLabel(k, false) - } - expTargetInfoLabels = append(expTargetInfoLabels, mimirpb.LabelAdapter{ - Name: k, - Value: v, - }) - } - slices.SortStableFunc(expTargetInfoLabels, func(a, b mimirpb.LabelAdapter) int { - return strings.Compare(a.Name, b.Name) - }) md := pmetric.NewMetrics() { @@ -102,9 +76,11 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { } testCases := []struct { - name string - promoteResourceAttributes []string - expectedLabels []mimirpb.LabelAdapter + name string + promoteResourceAttributes []string + keepIdentifyingResourceAttributes bool + expectedLabels []mimirpb.LabelAdapter + expectedInfoLabels []mimirpb.LabelAdapter }{ { name: "Successful conversion without resource attribute promotion", @@ -120,13 +96,92 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { }, { Name: "job", - Value: "service name", + Value: "service namespace/service name", }, { Name: "metric_attr", Value: "metric value", }, }, + expectedInfoLabels: []mimirpb.LabelAdapter{ + { + Name: "__name__", + Value: "target_info", + }, + { + Name: "existent_attr", + Value: "resource value", + }, + { + Name: "metric_attr", + Value: "resource value", + }, + { + Name: "job", + Value: "service namespace/service name", + }, + { + Name: "instance", + Value: "service ID", + }, + }, + }, + { + name: "Successful conversion without resource attribute promotion, and keep identifying resource attributes", + promoteResourceAttributes: nil, + keepIdentifyingResourceAttributes: true, + expectedLabels: []mimirpb.LabelAdapter{ + { + Name: "__name__", + Value: "test_metric", + }, + { + Name: "instance", + Value: "service ID", + }, + { + Name: "job", + Value: "service namespace/service name", + }, + { + Name: "metric_attr", + Value: "metric value", + }, + }, + expectedInfoLabels: []mimirpb.LabelAdapter{ + { + Name: "__name__", + Value: "target_info", + }, + { + Name: "existent_attr", + Value: "resource value", + }, + { + Name: "metric_attr", + Value: "resource value", + }, + { + Name: "job", + Value: "service namespace/service name", + }, + { + Name: "instance", + Value: "service ID", + }, + { + Name: "service_instance_id", + Value: "service ID", + }, + { + Name: "service_name", + Value: "service name", + }, + { + Name: "service_namespace", + Value: "service namespace", + }, + }, }, { name: "Successful conversion with resource attribute promotion", @@ -142,7 +197,7 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { }, { Name: "job", - Value: "service name", + Value: "service namespace/service name", }, { Name: "metric_attr", @@ -153,6 +208,28 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { Value: "resource value", }, }, + expectedInfoLabels: []mimirpb.LabelAdapter{ + { + Name: "__name__", + Value: "target_info", + }, + { + Name: "existent_attr", + Value: "resource value", + }, + { + Name: "metric_attr", + Value: "resource value", + }, + { + Name: "job", + Value: "service namespace/service name", + }, + { + Name: "instance", + Value: "service ID", + }, + }, }, { name: "Successful conversion with resource attribute promotion, conflicting resource attributes are ignored", @@ -168,7 +245,7 @@ 
func TestOTelMetricsToTimeSeries(t *testing.T) { }, { Name: "job", - Value: "service name", + Value: "service namespace/service name", }, { Name: "existent_attr", @@ -179,12 +256,34 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { Value: "metric value", }, }, + expectedInfoLabels: []mimirpb.LabelAdapter{ + { + Name: "__name__", + Value: "target_info", + }, + { + Name: "existent_attr", + Value: "resource value", + }, + { + Name: "instance", + Value: "service ID", + }, + { + Name: "job", + Value: "service namespace/service name", + }, + { + Name: "metric_attr", + Value: "resource value", + }, + }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { mimirTS, err := otelMetricsToTimeseries( - context.Background(), tenantID, true, false, tc.promoteResourceAttributes, discardedDueToOTelParseError, log.NewNopLogger(), md, + context.Background(), tenantID, true, false, tc.promoteResourceAttributes, tc.keepIdentifyingResourceAttributes, discardedDueToOTelParseError, log.NewNopLogger(), md, ) require.NoError(t, err) require.Len(t, mimirTS, 2) @@ -205,7 +304,7 @@ func TestOTelMetricsToTimeSeries(t *testing.T) { } assert.ElementsMatch(t, ts.Labels, tc.expectedLabels) - assert.ElementsMatch(t, targetInfo.Labels, expTargetInfoLabels) + assert.ElementsMatch(t, targetInfo.Labels, tc.expectedInfoLabels) }) } } diff --git a/pkg/distributor/otlp/helper_generated.go b/pkg/distributor/otlp/helper_generated.go index 5bbbcab93dc..58e1720b63d 100644 --- a/pkg/distributor/otlp/helper_generated.go +++ b/pkg/distributor/otlp/helper_generated.go @@ -160,7 +160,10 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting // map ensures no duplicate label names. l := make(map[string]string, maxLabelCount) for _, label := range labels { - var finalKey = prometheustranslator.NormalizeLabel(label.Name, settings.AllowUTF8) + finalKey := label.Name + if !settings.AllowUTF8 { + finalKey = prometheustranslator.NormalizeLabel(finalKey) + } if existingValue, alreadyExists := l[finalKey]; alreadyExists { l[finalKey] = existingValue + ";" + label.Value } else { @@ -169,7 +172,10 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting } for _, lbl := range promotedAttrs { - normalized := prometheustranslator.NormalizeLabel(lbl.Name, settings.AllowUTF8) + normalized := lbl.Name + if !settings.AllowUTF8 { + normalized = prometheustranslator.NormalizeLabel(normalized) + } if _, exists := l[normalized]; !exists { l[normalized] = lbl.Value } @@ -207,8 +213,8 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting log.Println("label " + name + " is overwritten. Check if Prometheus reserved labels are used.") } // internal labels should be maintained - if !(len(name) > 4 && name[:2] == "__" && name[len(name)-2:] == "__") { - name = prometheustranslator.NormalizeLabel(name, settings.AllowUTF8) + if !settings.AllowUTF8 && !(len(name) > 4 && name[:2] == "__" && name[len(name)-2:] == "__") { + name = prometheustranslator.NormalizeLabel(name) } l[name] = extras[i+1] } @@ -591,10 +597,9 @@ const defaultIntervalForStartTimestamps = int64(300_000) // handleStartTime adds a zero sample at startTs only if startTs is within validIntervalForStartTimestamps of the sample timestamp. // The reason for doing this is that PRW v1 doesn't support Created Timestamps. After switching to PRW v2's direct CT support, // make use of its direct support fort Created Timestamps instead. -// See https://github.com/prometheus/prometheus/issues/14600 for context. 
 // See https://opentelemetry.io/docs/specs/otel/metrics/data-model/#resets-and-gaps to know more about how OTel handles
 // resets for cumulative metrics.
-func (c *MimirConverter) handleStartTime(startTs, ts int64, labels []mimirpb.LabelAdapter, settings Settings, typ string, val float64, logger *slog.Logger) {
+func (c *MimirConverter) handleStartTime(startTs, ts int64, labels []mimirpb.LabelAdapter, settings Settings, typ string, value float64, logger *slog.Logger) {
 	if !settings.EnableCreatedTimestampZeroIngestion {
 		return
 	}
@@ -616,9 +621,10 @@ func (c *MimirConverter) handleStartTime(startTs, ts int64, labels []mimirpb.Lab
 		return
 	}
 
-	logger.Debug("adding zero value at start_ts", "type", typ, "labels", labelsStringer(labels), "start_ts", startTs, "sample_ts", ts, "sample_value", val)
+	logger.Debug("adding zero value at start_ts", "type", typ, "labels", labelsStringer(labels), "start_ts", startTs, "sample_ts", ts, "sample_value", value)
 
-	c.addSample(&mimirpb.Sample{TimestampMs: startTs, Value: math.Float64frombits(value.QuietZeroNaN)}, labels)
+	// See https://github.com/prometheus/prometheus/issues/14600 for context.
+	c.addSample(&mimirpb.Sample{TimestampMs: startTs}, labels)
 }
 
 // handleHistogramStartTime is similar to the method above, but for native histograms.
@@ -677,6 +683,10 @@ func addResourceTargetInfo(resource pcommon.Resource, settings Settings, timesta
 	}
 	settings.PromoteResourceAttributes = nil
 
+	if settings.KeepIdentifyingResourceAttributes {
+		// Do not pass identifying attributes as ignoreAttrs below.
+		identifyingAttrs = nil
+	}
 	labels := createAttributes(resource, attributes, settings, identifyingAttrs, false, model.MetricNameLabel, name)
 	haveIdentifier := false
 	for _, l := range labels {
diff --git a/pkg/distributor/otlp/metrics_to_prw_generated.go b/pkg/distributor/otlp/metrics_to_prw_generated.go
index 4aeafeb11ce..5eb1391dadd 100644
--- a/pkg/distributor/otlp/metrics_to_prw_generated.go
+++ b/pkg/distributor/otlp/metrics_to_prw_generated.go
@@ -38,14 +38,17 @@ import (
 )
 
 type Settings struct {
-	Namespace                 string
-	ExternalLabels            map[string]string
-	DisableTargetInfo         bool
-	ExportCreatedMetric       bool
-	AddMetricSuffixes         bool
-	SendMetadata              bool
-	AllowUTF8                 bool
-	PromoteResourceAttributes []string
+	Namespace                         string
+	ExternalLabels                    map[string]string
+	DisableTargetInfo                 bool
+	ExportCreatedMetric               bool
+	AddMetricSuffixes                 bool
+	SendMetadata                      bool
+	AllowUTF8                         bool
+	PromoteResourceAttributes         []string
+	KeepIdentifyingResourceAttributes bool
+
+	// Mimir specifics.
 	EnableCreatedTimestampZeroIngestion        bool
 	ValidIntervalCreatedTimestampZeroIngestion time.Duration
 }
@@ -61,6 +64,7 @@ type MimirConverter struct {
 	unique    map[uint64]*mimirpb.TimeSeries
 	conflicts map[uint64][]*mimirpb.TimeSeries
 	everyN    everyNTimes
+	metadata  []mimirpb.MetricMetadata
 }
 
 func NewMimirConverter() *MimirConverter {
@@ -74,6 +78,16 @@ func (c *MimirConverter) FromMetrics(ctx context.Context, md pmetric.Metrics, settings Settings, logger *slog.Logger) (annots annotations.Annotations, errs error) {
 	c.everyN = everyNTimes{n: 128}
 	resourceMetricsSlice := md.ResourceMetrics()
+
+	numMetrics := 0
+	for i := 0; i < resourceMetricsSlice.Len(); i++ {
+		scopeMetricsSlice := resourceMetricsSlice.At(i).ScopeMetrics()
+		for j := 0; j < scopeMetricsSlice.Len(); j++ {
+			numMetrics += scopeMetricsSlice.At(j).Metrics().Len()
+		}
+	}
+	c.metadata = make([]mimirpb.MetricMetadata, 0, numMetrics)
+
 	for i := 0; i < resourceMetricsSlice.Len(); i++ {
 		resourceMetrics := resourceMetricsSlice.At(i)
 		resource := resourceMetrics.Resource()
@@ -100,6 +114,12 @@ func (c *MimirConverter) FromMetrics(ctx context.Context, md pmetric.Metrics, se
 			}
 
 			promName := prometheustranslator.BuildCompliantName(metric, settings.Namespace, settings.AddMetricSuffixes, settings.AllowUTF8)
+			c.metadata = append(c.metadata, mimirpb.MetricMetadata{
+				Type:             otelMetricTypeToPromMetricType(metric),
+				MetricFamilyName: promName,
+				Help:             metric.Description(),
+				Unit:             metric.Unit(),
+			})
 
 			// handle individual metrics based on type
 			//exhaustive:enforce
diff --git a/pkg/distributor/otlp/otlp_to_openmetrics_metadata_generated.go b/pkg/distributor/otlp/otlp_to_openmetrics_metadata_generated.go
index 2c46a2ecf56..0717c88c58e 100644
--- a/pkg/distributor/otlp/otlp_to_openmetrics_metadata_generated.go
+++ b/pkg/distributor/otlp/otlp_to_openmetrics_metadata_generated.go
@@ -21,8 +21,6 @@ package otlp
 import (
 	"go.opentelemetry.io/collector/pdata/pmetric"
 
-	prometheustranslator "github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus"
-
 	"github.com/grafana/mimir/pkg/mimirpb"
 )
@@ -45,36 +43,3 @@ func otelMetricTypeToPromMetricType(otelMetric pmetric.Metric) mimirpb.MetricMet
 	}
 	return mimirpb.UNKNOWN
 }
-
-func OtelMetricsToMetadata(md pmetric.Metrics, addMetricSuffixes, allowUTF8 bool) []*mimirpb.MetricMetadata {
-	resourceMetricsSlice := md.ResourceMetrics()
-
-	metadataLength := 0
-	for i := 0; i < resourceMetricsSlice.Len(); i++ {
-		scopeMetricsSlice := resourceMetricsSlice.At(i).ScopeMetrics()
-		for j := 0; j < scopeMetricsSlice.Len(); j++ {
-			metadataLength += scopeMetricsSlice.At(j).Metrics().Len()
-		}
-	}
-
-	var metadata = make([]*mimirpb.MetricMetadata, 0, metadataLength)
-	for i := 0; i < resourceMetricsSlice.Len(); i++ {
-		resourceMetrics := resourceMetricsSlice.At(i)
-		scopeMetricsSlice := resourceMetrics.ScopeMetrics()
-
-		for j := 0; j < scopeMetricsSlice.Len(); j++ {
-			scopeMetrics := scopeMetricsSlice.At(j)
-			for k := 0; k < scopeMetrics.Metrics().Len(); k++ {
-				metric := scopeMetrics.Metrics().At(k)
-				entry := mimirpb.MetricMetadata{
-					Type:             otelMetricTypeToPromMetricType(metric),
-					MetricFamilyName: prometheustranslator.BuildCompliantName(metric, "", addMetricSuffixes, allowUTF8),
-					Help:             metric.Description(),
-				}
-				metadata = append(metadata, &entry)
-			}
-		}
-	}
-
-	return metadata
-}
diff --git a/pkg/distributor/push.go b/pkg/distributor/push.go
index 5f1cb8c0c72..812250eb2a3 100644
--- a/pkg/distributor/push.go
+++ b/pkg/distributor/push.go
@@ -221,7 +221,7 @@ func handler(
 				level.Error(logger).Log(msgs...)
 			}
 			addHeaders(w, err, r, code, retryCfg)
-			http.Error(w, msg, code)
+			http.Error(w, validUTF8Message(msg), code)
 		}
 	})
 }
diff --git a/pkg/distributor/push_test.go b/pkg/distributor/push_test.go
index 3e985ad6ebf..192769b1141 100644
--- a/pkg/distributor/push_test.go
+++ b/pkg/distributor/push_test.go
@@ -1244,6 +1244,26 @@ func TestOTLPPushHandlerErrorsAreReportedCorrectlyViaHttpgrpc(t *testing.T) {
 			expectedGrpcErrorMessage: "rpc error: code = Code(400) desc = ReadObjectCB: expect { or n, but found i, error found in #1 byte of ...|invalid|..., bigger context ...|invalid|...",
 		},
+		"invalid JSON with non-utf8 characters request returns 400": {
+			request: &httpgrpc.HTTPRequest{
+				Method: "POST",
+				Headers: []*httpgrpc.Header{
+					{Key: "Content-Type", Values: []string{"application/json"}},
+				},
+				Url: "/otlp",
+				// \xf6 and \xd3 are not valid UTF8 characters, and they should be replaced with \uFFFD in the output.
+				Body: []byte("\n\xf6\x16\n\xd3\x02\n\x1d\n\x11container.runtime\x12\x08\n\x06docker\n'\n\x12container.h"),
+			},
+			expectedResponse: &httpgrpc.HTTPResponse{Code: 400,
+				Headers: []*httpgrpc.Header{
+					{Key: "Content-Type", Values: []string{"application/octet-stream"}},
+					{Key: "X-Content-Type-Options", Values: []string{"nosniff"}},
+				},
+				Body: mustMarshalStatus(t, 400, "ReadObjectCB: expect { or n, but found \ufffd, error found in #2 byte of ...|\n\ufffd\u0016\n\ufffd\u0002\n\u001d\n\u0011co|..., bigger context ...|\n\ufffd\u0016\n\ufffd\u0002\n\u001d\n\u0011container.runtime\u0012\u0008\n\u0006docker\n'\n\u0012container.h|..."),
+			},
+			expectedGrpcErrorMessage: "rpc error: code = Code(400) desc = ReadObjectCB: expect { or n, but found \ufffd, error found in #2 byte of ...|\n\ufffd\u0016\n\ufffd\u0002\n\u001d\n\u0011co|..., bigger context ...|\n\ufffd\u0016\n\ufffd\u0002\n\u001d\n\u0011container.runtime\u0012\u0008\n\u0006docker\n'\n\u0012container.h|...",
+		},
+
 		"empty JSON is good request, with 200 status code": {
 			request: &httpgrpc.HTTPRequest{
 				Method: "POST",
@@ -1359,6 +1379,10 @@ func (o otlpLimitsMock) PromoteOTelResourceAttributes(string) []string {
 	return nil
 }
 
+func (o otlpLimitsMock) OTelKeepIdentifyingResourceAttributes(string) bool {
+	return false
+}
+
 func promToMimirHistogram(h *prompb.Histogram) mimirpb.Histogram {
 	pSpans := make([]mimirpb.BucketSpan, 0, len(h.PositiveSpans))
 	for _, span := range h.PositiveSpans {
diff --git a/pkg/distributor/query_test.go b/pkg/distributor/query_test.go
index 49f10ee3416..9f47672938d 100644
--- a/pkg/distributor/query_test.go
+++ b/pkg/distributor/query_test.go
@@ -37,10 +37,10 @@ func TestDistributor_QueryExemplars(t *testing.T) {
 	fixtures := []mimirpb.PreallocTimeseries{
 		// Note: it's important to write at least a sample, otherwise the exemplar timestamp validation doesn't pass.
- makeTimeseries([]string{labels.MetricName, "series_1", "namespace", "a"}, makeSamples(int64(now), 1), makeExemplars([]string{"trace_id", "A"}, int64(now), 0)), - makeTimeseries([]string{labels.MetricName, "series_1", "namespace", "b"}, makeSamples(int64(now), 2), makeExemplars([]string{"trace_id", "B"}, int64(now), 0)), - makeTimeseries([]string{labels.MetricName, "series_2", "namespace", "a"}, makeSamples(int64(now), 3), makeExemplars([]string{"trace_id", "C"}, int64(now), 0)), - makeTimeseries([]string{labels.MetricName, "series_2", "namespace", "b"}, makeSamples(int64(now), 4), makeExemplars([]string{"trace_id", "D"}, int64(now), 0)), + makeTimeseries([]string{labels.MetricName, "series_1", "namespace", "a"}, makeSamples(int64(now), 1), nil, makeExemplars([]string{"trace_id", "A"}, int64(now), 0)), + makeTimeseries([]string{labels.MetricName, "series_1", "namespace", "b"}, makeSamples(int64(now), 2), nil, makeExemplars([]string{"trace_id", "B"}, int64(now), 0)), + makeTimeseries([]string{labels.MetricName, "series_2", "namespace", "a"}, makeSamples(int64(now), 3), nil, makeExemplars([]string{"trace_id", "C"}, int64(now), 0)), + makeTimeseries([]string{labels.MetricName, "series_2", "namespace", "b"}, makeSamples(int64(now), 4), nil, makeExemplars([]string{"trace_id", "D"}, int64(now), 0)), } tests := map[string]struct { @@ -229,7 +229,7 @@ func TestDistributor_QueryStream_ShouldReturnErrorIfMaxChunksPerQueryLimitIsReac writeReq = &mimirpb.WriteRequest{} for i := 0; i < limit; i++ { writeReq.Timeseries = append(writeReq.Timeseries, - makeTimeseries([]string{model.MetricNameLabel, fmt.Sprintf("another_series_%d", i)}, makeSamples(0, 0), nil), + makeTimeseries([]string{model.MetricNameLabel, fmt.Sprintf("another_series_%d", i)}, makeSamples(0, 0), nil, nil), ) } @@ -308,7 +308,7 @@ func TestDistributor_QueryStream_ShouldReturnErrorIfMaxSeriesPerQueryLimitIsReac } // Push more series to exceed the limit once we'll query back all series. - writeReq = makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series"}, makeSamples(0, 0), nil)) + writeReq = makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series"}, makeSamples(0, 0), nil, nil)) writeRes, err = ds[0].Push(userCtx, writeReq) assert.Equal(t, &mimirpb.WriteResponse{}, writeRes) @@ -354,7 +354,7 @@ func TestDistributor_QueryStream_ShouldReturnErrorIfMaxChunkBytesPerQueryLimitIs labels.MustNewMatcher(labels.MatchRegexp, model.MetricNameLabel, ".+"), } // Push a single series to allow us to calculate the chunk size to calculate the limit for the test. - writeReq := makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series"}, makeSamples(0, 0), nil)) + writeReq := makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series"}, makeSamples(0, 0), nil, nil)) writeRes, err := ds[0].Push(ctx, writeReq) assert.Equal(t, &mimirpb.WriteResponse{}, writeRes) assert.Nil(t, err) @@ -383,7 +383,7 @@ func TestDistributor_QueryStream_ShouldReturnErrorIfMaxChunkBytesPerQueryLimitIs assert.Len(t, queryRes.Chunkseries, seriesToAdd) // Push another series to exceed the chunk bytes limit once we'll query back all series. 
- writeReq = makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series_1"}, makeSamples(0, 0), nil)) + writeReq = makeWriteRequestWith(makeTimeseries([]string{model.MetricNameLabel, "another_series_1"}, makeSamples(0, 0), nil, nil)) writeRes, err = ds[0].Push(ctx, writeReq) assert.Equal(t, &mimirpb.WriteResponse{}, writeRes) diff --git a/pkg/distributor/validate.go b/pkg/distributor/validate.go index fffc943b6c0..bbb6717bc96 100644 --- a/pkg/distributor/validate.go +++ b/pkg/distributor/validate.go @@ -43,6 +43,7 @@ var ( reasonDuplicateLabelNames = globalerror.SeriesWithDuplicateLabelNames.LabelValue() reasonTooFarInFuture = globalerror.SampleTooFarInFuture.LabelValue() reasonTooFarInPast = globalerror.SampleTooFarInPast.LabelValue() + reasonDuplicateTimestamp = globalerror.SampleDuplicateTimestamp.LabelValue() // Discarded exemplars reasons. reasonExemplarLabelsMissing = globalerror.ExemplarLabelsMissing.LabelValue() @@ -98,6 +99,7 @@ var ( "received a sample whose timestamp is too far in the past, timestamp: %d series: '%.200s'", validation.PastGracePeriodFlag, ) + duplicateTimestampMsgFormat = globalerror.SampleDuplicateTimestamp.Message("samples with duplicated timestamps have been discarded, discarded samples: %d series: '%.200s'") exemplarEmptyLabelsMsgFormat = globalerror.ExemplarLabelsMissing.Message( "received an exemplar with no valid labels, timestamp: %d series: %s labels: %s", ) @@ -143,6 +145,7 @@ type sampleValidationMetrics struct { duplicateLabelNames *prometheus.CounterVec tooFarInFuture *prometheus.CounterVec tooFarInPast *prometheus.CounterVec + duplicateTimestamp *prometheus.CounterVec } func (m *sampleValidationMetrics) deleteUserMetrics(userID string) { @@ -160,6 +163,7 @@ func (m *sampleValidationMetrics) deleteUserMetrics(userID string) { m.duplicateLabelNames.DeletePartialMatch(filter) m.tooFarInFuture.DeletePartialMatch(filter) m.tooFarInPast.DeletePartialMatch(filter) + m.duplicateTimestamp.DeletePartialMatch(filter) } func (m *sampleValidationMetrics) deleteUserMetricsForGroup(userID, group string) { @@ -176,6 +180,7 @@ func (m *sampleValidationMetrics) deleteUserMetricsForGroup(userID, group string m.duplicateLabelNames.DeleteLabelValues(userID, group) m.tooFarInFuture.DeleteLabelValues(userID, group) m.tooFarInPast.DeleteLabelValues(userID, group) + m.duplicateTimestamp.DeleteLabelValues(userID, group) } func newSampleValidationMetrics(r prometheus.Registerer) *sampleValidationMetrics { @@ -193,6 +198,7 @@ func newSampleValidationMetrics(r prometheus.Registerer) *sampleValidationMetric duplicateLabelNames: validation.DiscardedSamplesCounter(r, reasonDuplicateLabelNames), tooFarInFuture: validation.DiscardedSamplesCounter(r, reasonTooFarInFuture), tooFarInPast: validation.DiscardedSamplesCounter(r, reasonTooFarInPast), + duplicateTimestamp: validation.DiscardedSamplesCounter(r, reasonDuplicateTimestamp), } } @@ -424,7 +430,7 @@ func validateLabels(m *sampleValidationMetrics, cfg labelValidationConfig, userI return fmt.Errorf(labelNameTooLongMsgFormat, l.Name, mimirpb.FromLabelAdaptersToString(ls)) } else if !skipLabelValidation && !model.LabelValue(l.Value).IsValid() { m.invalidLabelValue.WithLabelValues(userID, group).Inc() - return fmt.Errorf(invalidLabelValueMsgFormat, l.Name, strings.ToValidUTF8(l.Value, ""), unsafeMetricName) + return fmt.Errorf(invalidLabelValueMsgFormat, l.Name, validUTF8Message(l.Value), unsafeMetricName) } else if len(l.Value) > maxLabelValueLength { m.labelValueTooLong.WithLabelValues(userID, group).Inc() 
 		return fmt.Errorf(labelValueTooLongMsgFormat, l.Name, l.Value, mimirpb.FromLabelAdaptersToString(ls))
@@ -506,3 +512,15 @@ func getMetricAndEllipsis(ls []mimirpb.LabelAdapter) (string, string) {
 	}
 	return metric, ellipsis
 }
+
+// validUTF8Message ensures that the given message contains only valid utf8 characters.
+// The presence of non-utf8 characters in some errors might break some crucial parts of distributor's logic.
+// For example, if httpgrpc.HTTPServer.Handle() returns a httpgrpc.Error containing a non-utf8 character,
+// this error will not be propagated to httpgrpc.HTTPClient as a httpgrpc.Error, but as a generic error,
+// which might break some of Mimir's internal logic.
+// This is because golang's proto.Marshal(), which is used by gRPC internally, fails when it marshals an
+// httpgrpc.Error containing a non-utf8 character produced by httpgrpc.HTTPServer.Handle(), making the resulting
+// error lose some important properties.
+func validUTF8Message(msg string) string {
+	return strings.ToValidUTF8(msg, string(utf8.RuneError))
+}
diff --git a/pkg/distributor/validate_test.go b/pkg/distributor/validate_test.go
index df4de2dd60f..0dc670d6e5c 100644
--- a/pkg/distributor/validate_test.go
+++ b/pkg/distributor/validate_test.go
@@ -8,16 +8,22 @@ package distributor
 import (
 	"errors"
 	"fmt"
+	"net/http"
 	"strings"
 	"testing"
 	"time"
 	"unicode/utf8"
 
+	"github.com/gogo/protobuf/proto"
+	"github.com/grafana/dskit/grpcutil"
+	"github.com/grafana/dskit/httpgrpc"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/testutil"
 	"github.com/prometheus/common/model"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+	grpcstatus "google.golang.org/grpc/status"
+	golangproto "google.golang.org/protobuf/proto"
 
 	"github.com/grafana/mimir/pkg/mimirpb"
 	"github.com/grafana/mimir/pkg/util/validation"
@@ -212,7 +218,7 @@ func TestValidateLabels(t *testing.T) {
 			skipLabelCountValidation: false,
 			err: fmt.Errorf(
 				invalidLabelValueMsgFormat,
-				"label1", "abcdef", "foo",
+				"label1", "abc\ufffddef", "foo",
 			),
 		},
 		{
@@ -671,3 +677,61 @@ func tooManyLabelsArgs(series []mimirpb.LabelAdapter, limit int) []any {
 
 	return []any{len(series), limit, metric, ellipsis}
 }
+
+func TestValidUTF8Message(t *testing.T) {
+	testCases := map[string]struct {
+		body                      []byte
+		containsNonUTF8Characters bool
+	}{
+		"valid message returns no error": {
+			body:                      []byte("valid message"),
+			containsNonUTF8Characters: false,
+		},
+		"message containing only UTF8 characters returns no error": {
+			body:                      []byte("\n\ufffd\u0016\n\ufffd\u0002\n\u001D\n\u0011container.runtime\u0012\b\n\u0006docker\n'\n\u0012container.h"),
+			containsNonUTF8Characters: false,
+		},
+		"message containing non-UTF8 character returns an error": {
+			// \xf6 and \xd3 are not valid UTF8 characters.
+			body:                      []byte("\n\xf6\x1a\n\xd3\x02\n\x1d\n\x11container.runtime\x12\x08\n\x06docker\n'\n\x12container.h"),
+			containsNonUTF8Characters: true,
+		},
+	}
+
+	for name, tc := range testCases {
+		for _, withValidation := range []bool{false, true} {
+			t.Run(fmt.Sprintf("%s withValidation: %v", name, withValidation), func(t *testing.T) {
+				msg := string(tc.body)
+				if withValidation {
+					msg = validUTF8Message(msg)
+				}
+				httpgrpcErr := httpgrpc.Error(http.StatusBadRequest, msg)
+
+				// gogo's proto.Marshal() correctly processes both httpgrpc errors with and without non-utf8 characters.
+				st, ok := grpcutil.ErrorToStatus(httpgrpcErr)
+				require.True(t, ok)
+				stBytes, err := proto.Marshal(st.Proto())
+				require.NoError(t, err)
+				require.NotNil(t, stBytes)
+
+				//lint:ignore faillint We want to explicitly use grpcstatus.FromError()
+				grpcSt, ok := grpcstatus.FromError(httpgrpcErr)
+				require.True(t, ok)
+				stBytes, err = golangproto.Marshal(grpcSt.Proto())
+				if withValidation {
+					// Ensure that errors with validated messages can always be correctly marshaled.
+					require.NoError(t, err)
+					require.NotNil(t, stBytes)
+				} else {
+					if tc.containsNonUTF8Characters {
+						// Ensure that errors with non-validated non-utf8 messages cannot be correctly marshaled.
+						require.Error(t, err)
+					} else {
+						require.NoError(t, err)
+						require.NotNil(t, stBytes)
+					}
+				}
+			})
+		}
+	}
+}
diff --git a/pkg/frontend/querymiddleware/codec.go b/pkg/frontend/querymiddleware/codec.go
index c3948072c72..b3e2c318070 100644
--- a/pkg/frontend/querymiddleware/codec.go
+++ b/pkg/frontend/querymiddleware/codec.go
@@ -8,9 +8,9 @@ package querymiddleware
 import (
 	"bytes"
 	"context"
+	"errors"
 	"fmt"
 	"io"
-	"math"
 	"net/http"
 	"net/url"
 	"sort"
@@ -33,6 +33,7 @@ import (
 	"golang.org/x/exp/slices"
 
 	apierror "github.com/grafana/mimir/pkg/api/error"
+	"github.com/grafana/mimir/pkg/cardinality"
 	"github.com/grafana/mimir/pkg/mimirpb"
 	"github.com/grafana/mimir/pkg/querier/api"
 	"github.com/grafana/mimir/pkg/querier/stats"
@@ -78,24 +79,24 @@ type Codec interface {
 	Merger
 	// DecodeMetricsQueryRequest decodes a MetricsQueryRequest from an http request.
 	DecodeMetricsQueryRequest(context.Context, *http.Request) (MetricsQueryRequest, error)
-	// DecodeLabelsQueryRequest decodes a LabelsQueryRequest from an http request.
-	DecodeLabelsQueryRequest(context.Context, *http.Request) (LabelsQueryRequest, error)
+	// DecodeLabelsSeriesQueryRequest decodes a LabelsSeriesQueryRequest from an http request.
+	DecodeLabelsSeriesQueryRequest(context.Context, *http.Request) (LabelsSeriesQueryRequest, error)
 	// DecodeMetricsQueryResponse decodes a Response from an http response.
 	// The original request is also passed as a parameter; this is useful for implementations that need the request
 	// to merge or build the result correctly.
 	DecodeMetricsQueryResponse(context.Context, *http.Response, MetricsQueryRequest, log.Logger) (Response, error)
-	// DecodeLabelsQueryResponse decodes a Response from an http response.
+	// DecodeLabelsSeriesQueryResponse decodes a Response from an http response.
 	// The original request is also passed as a parameter; this is useful for implementations that need the request
 	// to merge or build the result correctly.
-	DecodeLabelsQueryResponse(context.Context, *http.Response, LabelsQueryRequest, log.Logger) (Response, error)
+	DecodeLabelsSeriesQueryResponse(context.Context, *http.Response, LabelsSeriesQueryRequest, log.Logger) (Response, error)
 
 	// EncodeMetricsQueryRequest encodes a MetricsQueryRequest into an http request.
 	EncodeMetricsQueryRequest(context.Context, MetricsQueryRequest) (*http.Request, error)
-	// EncodeLabelsQueryRequest encodes a LabelsQueryRequest into an http request.
-	EncodeLabelsQueryRequest(context.Context, LabelsQueryRequest) (*http.Request, error)
+	// EncodeLabelsSeriesQueryRequest encodes a LabelsSeriesQueryRequest into an http request.
+	EncodeLabelsSeriesQueryRequest(context.Context, LabelsSeriesQueryRequest) (*http.Request, error)
 	// EncodeMetricsQueryResponse encodes a Response from a MetricsQueryRequest into an http response.
 	EncodeMetricsQueryResponse(context.Context, *http.Request, Response) (*http.Response, error)
-	// EncodeLabelsQueryResponse encodes a Response from a LabelsQueryRequest into an http response.
-	EncodeLabelsQueryResponse(context.Context, *http.Request, Response, bool) (*http.Response, error)
+	// EncodeLabelsSeriesQueryResponse encodes a Response from a LabelsSeriesQueryRequest into an http response.
+	EncodeLabelsSeriesQueryResponse(context.Context, *http.Request, Response, bool) (*http.Response, error)
 }
 
 // Merger is used by middlewares making multiple requests to merge back all responses into a single one.
@@ -153,8 +154,8 @@ type MetricsQueryRequest interface {
 	AddSpanTags(opentracing.Span)
 }
 
-// LabelsQueryRequest represents a label names or values query request that can be processed by middlewares.
-type LabelsQueryRequest interface {
+// LabelsSeriesQueryRequest represents a label names, label values, or series query request that can be processed by middlewares.
+type LabelsSeriesQueryRequest interface {
 	// GetLabelName returns the label name param from a Label Values request `/api/v1/label/<label_name>/values`
 	// or an empty string for a Label Names request `/api/v1/labels`
 	GetLabelName() string
@@ -178,11 +179,11 @@
 	// GetHeaders returns the HTTP headers in the request.
 	GetHeaders() []*PrometheusHeader
 	// WithLabelName clones the current request with a different label name param.
-	WithLabelName(string) (LabelsQueryRequest, error)
+	WithLabelName(string) (LabelsSeriesQueryRequest, error)
 	// WithLabelMatcherSets clones the current request with different label matchers.
-	WithLabelMatcherSets([]string) (LabelsQueryRequest, error)
+	WithLabelMatcherSets([]string) (LabelsSeriesQueryRequest, error)
 	// WithHeaders clones the current request with different headers.
-	WithHeaders([]*PrometheusHeader) (LabelsQueryRequest, error)
+	WithHeaders([]*PrometheusHeader) (LabelsSeriesQueryRequest, error)
 	// AddSpanTags writes information about this request to an OpenTracing span
 	AddSpanTags(opentracing.Span)
 }
@@ -356,7 +357,7 @@ func (c prometheusCodec) decodeInstantQueryRequest(r *http.Request) (MetricsQuer
 		return nil, apierror.New(apierror.TypeBadData, err.Error())
 	}
 
-	time, err := DecodeInstantQueryTimeParams(&reqValues, time.Now)
+	time, err := DecodeInstantQueryTimeParams(&reqValues)
 	if err != nil {
 		return nil, DecorateWithParamName(err, "time")
 	}
@@ -388,16 +389,18 @@ func httpHeadersToProm(httpH http.Header) []*PrometheusHeader {
 	return headers
 }
 
-func (prometheusCodec) DecodeLabelsQueryRequest(_ context.Context, r *http.Request) (LabelsQueryRequest, error) {
+func (prometheusCodec) DecodeLabelsSeriesQueryRequest(_ context.Context, r *http.Request) (LabelsSeriesQueryRequest, error) {
 	if !IsLabelsQuery(r.URL.Path) && !IsSeriesQuery(r.URL.Path) {
-		return nil, fmt.Errorf("unknown labels query API endpoint %s", r.URL.Path)
+		return nil, fmt.Errorf("unknown labels or series query API endpoint %s", r.URL.Path)
 	}
 
 	reqValues, err := util.ParseRequestFormWithoutConsumingBody(r)
 	if err != nil {
 		return nil, apierror.New(apierror.TypeBadData, err.Error())
 	}
-	start, end, err := DecodeLabelsQueryTimeParams(&reqValues, false)
+	// see DecodeLabelsSeriesQueryTimeParams for notes on time param parsing compatibility
+	// between label names, label values, and series requests
+	start, end, err := DecodeLabelsSeriesQueryTimeParams(&reqValues)
 	if err != nil {
 		return nil, err
 	}
@@ -445,26 +448,80 @@ func (prometheusCodec) DecodeLabelsQueryRequest(_ context.Context, r *http.Reque
 	}, nil
 }
 
+// TimeParamType enumerates the types of time parameters in Prometheus API.
+// https://prometheus.io/docs/prometheus/latest/querying/api/
+type TimeParamType int
+
+const (
+	// RFC3339OrUnixMS represents the <rfc3339 | unix_timestamp> type in Prometheus Querying API docs
+	RFC3339OrUnixMS TimeParamType = iota
+	// DurationMS represents the <duration> type in Prometheus Querying API docs
+	DurationMS
+	// DurationMSOrFloatMS represents the <duration | float> type in Prometheus Querying API docs
+	DurationMSOrFloatMS
+)
+
+// PromTimeParamDecoder provides common functionality for decoding Prometheus time parameters.
+type PromTimeParamDecoder struct {
+	paramName     string
+	timeType      TimeParamType
+	isOptional    bool
+	defaultMSFunc func() int64
+}
+
+func (p PromTimeParamDecoder) Decode(reqValues *url.Values) (int64, error) {
+	rawValue := reqValues.Get(p.paramName)
+	if rawValue == "" {
+		if p.isOptional {
+			if p.defaultMSFunc != nil {
+				return p.defaultMSFunc(), nil
+			}
+			return 0, nil
+		}
+		return 0, apierror.New(apierror.TypeBadData, fmt.Sprintf("missing required parameter %q", p.paramName))
+	}
+
+	var t int64
+	var err error
+	switch p.timeType {
+	case RFC3339OrUnixMS:
+		t, err = util.ParseTime(rawValue)
+	case DurationMS, DurationMSOrFloatMS:
+		t, err = util.ParseDurationMS(rawValue)
+	default:
+		return 0, apierror.New(apierror.TypeInternal, fmt.Sprintf("unknown time type %v", p.timeType))
+	}
+	if err != nil {
+		return 0, DecorateWithParamName(err, p.paramName)
+	}
+
+	return t, nil
+}
+
+var rangeStartParamDecodable = PromTimeParamDecoder{"start", RFC3339OrUnixMS, false, nil}
+var rangeEndParamDecodable = PromTimeParamDecoder{"end", RFC3339OrUnixMS, false, nil}
+var rangeStepEndParamDecodable = PromTimeParamDecoder{"step", DurationMSOrFloatMS, false, nil}
+
 // DecodeRangeQueryTimeParams encapsulates Prometheus range query time param parsing,
 // emulating the logic in prometheus/prometheus/web/api/v1#API.query_range.
 func DecodeRangeQueryTimeParams(reqValues *url.Values) (start, end, step int64, err error) {
-	start, err = util.ParseTime(reqValues.Get("start"))
+	start, err = rangeStartParamDecodable.Decode(reqValues)
 	if err != nil {
-		return 0, 0, 0, DecorateWithParamName(err, "start")
+		return 0, 0, 0, err
 	}
 
-	end, err = util.ParseTime(reqValues.Get("end"))
+	end, err = rangeEndParamDecodable.Decode(reqValues)
 	if err != nil {
-		return 0, 0, 0, DecorateWithParamName(err, "end")
+		return 0, 0, 0, err
 	}
 
 	if end < start {
 		return 0, 0, 0, errEndBeforeStart
 	}
 
-	step, err = parseDurationMs(reqValues.Get("step"))
+	step, err = rangeStepEndParamDecodable.Decode(reqValues)
 	if err != nil {
-		return 0, 0, 0, DecorateWithParamName(err, "step")
+		return 0, 0, 0, err
 	}
 
 	if step <= 0 {
@@ -480,61 +537,86 @@ func DecodeRangeQueryTimeParams(reqValues *url.Values) (start, end, step int64,
 	return start, end, step, nil
 }
 
+func instantTimeParamNow() int64 {
+	return time.Now().UTC().UnixMilli()
+}
+
+var instantTimeParamDecodable = PromTimeParamDecoder{"time", RFC3339OrUnixMS, true, instantTimeParamNow}
+
 // DecodeInstantQueryTimeParams encapsulates Prometheus instant query time param parsing,
 // emulating the logic in prometheus/prometheus/web/api/v1#API.query.
-func DecodeInstantQueryTimeParams(reqValues *url.Values, defaultNow func() time.Time) (time int64, err error) {
-	timeVal := reqValues.Get("time")
-	if timeVal == "" {
-		time = defaultNow().UnixMilli()
-	} else {
-		time, err = util.ParseTime(timeVal)
-		if err != nil {
-			return 0, DecorateWithParamName(err, "time")
-		}
+func DecodeInstantQueryTimeParams(reqValues *url.Values) (time int64, err error) {
+	time, err = instantTimeParamDecodable.Decode(reqValues)
+	if err != nil {
+		return 0, err
 	}
 
 	return time, err
 }
 
-// DecodeLabelsQueryTimeParams encapsulates Prometheus label names and label values query time param parsing,
-// emulating the logic in prometheus/prometheus/web/api/v1#API.labelNames and v1#API.labelValues.
-//
-// Setting `usePromDefaults` true will set missing timestamp params to the Prometheus default
-// min and max query timestamps; false will default to 0 for missing timestamp params.
-func DecodeLabelsQueryTimeParams(reqValues *url.Values, usePromDefaults bool) (start, end int64, err error) {
-	var defaultStart, defaultEnd int64
-	if usePromDefaults {
-		defaultStart = v1.MinTime.UnixMilli()
-		defaultEnd = v1.MaxTime.UnixMilli()
-	}
-
-	startVal := reqValues.Get("start")
-	if startVal == "" {
-		start = defaultStart
-	} else {
-		start, err = util.ParseTime(startVal)
-		if err != nil {
-			return 0, 0, DecorateWithParamName(err, "start")
-		}
+// The label names, label values, and series codecs apply the prometheus/web/api/v1.MinTime and MaxTime defaults on read
+// with GetStartOrDefault/GetEndOrDefault, so we don't need to apply them with a defaultMSFunc here.
+// This allows the object to be symmetrically decoded and encoded to and from the http request format,
+// as well as indicating when an optional time parameter was not included in the original request.
+var labelsStartParamDecodable = PromTimeParamDecoder{"start", RFC3339OrUnixMS, true, nil}
+var labelsEndParamDecodable = PromTimeParamDecoder{"end", RFC3339OrUnixMS, true, nil}
+
+// DecodeLabelsSeriesQueryTimeParams encapsulates Prometheus query time param parsing
+// for label names, label values, and series endpoints, emulating prometheus/prometheus/web/api/v1.
+// Note: the Prometheus HTTP API spec claims that the series endpoint `start` and `end` parameters
+// are not optional, but the Prometheus implementation allows them to be optional.
+// Until this changes we can reuse the same PromTimeParamDecoder structs as the label names and values endpoints.
+func DecodeLabelsSeriesQueryTimeParams(reqValues *url.Values) (start, end int64, err error) {
+	start, err = labelsStartParamDecodable.Decode(reqValues)
+	if err != nil {
+		return 0, 0, err
 	}
 
-	endVal := reqValues.Get("end")
-	if endVal == "" {
-		end = defaultEnd
-	} else {
-		end, err = util.ParseTime(endVal)
-		if err != nil {
-			return 0, 0, DecorateWithParamName(err, "end")
-		}
+	end, err = labelsEndParamDecodable.Decode(reqValues)
+	if err != nil {
+		return 0, 0, err
 	}
 
-	if endVal != "" && end < start {
+	if end != 0 && end < start {
 		return 0, 0, errEndBeforeStart
 	}
 
 	return start, end, err
 }
 
+// DecodeCardinalityQueryParams strictly handles validation for cardinality API endpoint parameters.
+// The current decoding of the cardinality requests is handled in the cardinality package
+// which is not yet compatible with the codec's approach of using interfaces
+// and multiple concrete proto implementations to represent different query types.
+func DecodeCardinalityQueryParams(r *http.Request) (any, error) {
+	var err error
+
+	reqValues, err := util.ParseRequestFormWithoutConsumingBody(r)
+	if err != nil {
+		return nil, apierror.New(apierror.TypeBadData, err.Error())
+	}
+
+	var parsedReq any
+	switch {
+	case strings.HasSuffix(r.URL.Path, cardinalityLabelNamesPathSuffix):
+		parsedReq, err = cardinality.DecodeLabelNamesRequestFromValues(reqValues)
+
+	case strings.HasSuffix(r.URL.Path, cardinalityLabelValuesPathSuffix):
+		parsedReq, err = cardinality.DecodeLabelValuesRequestFromValues(reqValues)
+
+	case strings.HasSuffix(r.URL.Path, cardinalityActiveSeriesPathSuffix):
+		parsedReq, err = cardinality.DecodeActiveSeriesRequestFromValues(reqValues)
+
+	default:
+		return nil, errors.New("unknown cardinality API endpoint")
+	}
+
+	if err != nil {
+		return nil, apierror.New(apierror.TypeBadData, err.Error())
+	}
+	return parsedReq, nil
+}
+
 func decodeQueryMinMaxTime(queryExpr parser.Expr, start, end, step int64, lookbackDelta time.Duration) (minTime, maxTime int64) {
 	evalStmt := &parser.EvalStmt{
 		Expr: queryExpr,
@@ -649,7 +731,7 @@ func (c prometheusCodec) EncodeMetricsQueryRequest(ctx context.Context, r Metric
 	return req.WithContext(ctx), nil
 }
 
-func (c prometheusCodec) EncodeLabelsQueryRequest(ctx context.Context, req LabelsQueryRequest) (*http.Request, error) {
+func (c prometheusCodec) EncodeLabelsSeriesQueryRequest(ctx context.Context, req LabelsSeriesQueryRequest) (*http.Request, error) {
 	var u *url.URL
 	switch req := req.(type) {
 	case *PrometheusLabelNamesQueryRequest:
@@ -823,7 +905,7 @@ func (c prometheusCodec) DecodeMetricsQueryResponse(ctx context.Context, r *http
 	return resp, nil
 }
 
-func (c prometheusCodec) DecodeLabelsQueryResponse(ctx context.Context, r *http.Response, lr LabelsQueryRequest, logger log.Logger) (Response, error) {
+func (c prometheusCodec) DecodeLabelsSeriesQueryResponse(ctx context.Context, r *http.Response, lr LabelsSeriesQueryRequest, logger log.Logger) (Response, error) {
 	spanlog := spanlogger.FromContext(ctx, logger)
 	buf, err := readResponseBody(r)
 	if err != nil {
@@ -959,7 +1041,7 @@
 	return &resp, nil
 }
 
-func (c prometheusCodec) EncodeLabelsQueryResponse(ctx context.Context, req *http.Request, res Response, isSeriesResponse bool) (*http.Response, error) {
+func (c prometheusCodec) EncodeLabelsSeriesQueryResponse(ctx context.Context, req *http.Request, res Response, isSeriesResponse bool) (*http.Response, error) {
 	sp, _ := opentracing.StartSpanFromContext(ctx, "APIResponse.ToHTTPResponse")
 	defer sp.Finish()
@@ -1156,20 +1238,6 @@ func readResponseBody(res *http.Response) ([]byte, error) {
 	return buf.Bytes(), nil
 }
 
-func parseDurationMs(s string) (int64, error) {
-	if d, err := strconv.ParseFloat(s, 64); err == nil {
-		ts := d * float64(time.Second/time.Millisecond)
-		if ts > float64(math.MaxInt64) || ts < float64(math.MinInt64) {
-			return 0, apierror.Newf(apierror.TypeBadData, "cannot parse %q to a valid duration. It overflows int64", s)
-		}
-		return int64(ts), nil
-	}
-	if d, err := model.ParseDuration(s); err == nil {
-		return int64(d) / int64(time.Millisecond/time.Nanosecond), nil
-	}
-	return 0, apierror.Newf(apierror.TypeBadData, "cannot parse %q to a valid duration", s)
-}
-
 func encodeTime(t int64) string {
 	f := float64(t) / 1.0e3
 	return strconv.FormatFloat(f, 'f', -1, 64)
diff --git a/pkg/frontend/querymiddleware/codec_json_test.go b/pkg/frontend/querymiddleware/codec_json_test.go
index a8201d39b78..eb8d4bfd668 100644
--- a/pkg/frontend/querymiddleware/codec_json_test.go
+++ b/pkg/frontend/querymiddleware/codec_json_test.go
@@ -242,7 +242,7 @@ func TestPrometheusCodec_JSONResponse_Labels(t *testing.T) {
 	for _, tc := range []struct {
 		name             string
-		request          LabelsQueryRequest
+		request          LabelsSeriesQueryRequest
 		isSeriesResponse bool
 		responseHeaders  http.Header
 		resp             prometheusAPIResponse
@@ -308,7 +308,7 @@ func TestPrometheusCodec_JSONResponse_Labels(t *testing.T) {
 			Body:          io.NopCloser(bytes.NewBuffer(body)),
 			ContentLength: int64(len(body)),
 		}
-		decoded, err := codec.DecodeLabelsQueryResponse(context.Background(), httpResponse, tc.request, log.NewNopLogger())
+		decoded, err := codec.DecodeLabelsSeriesQueryResponse(context.Background(), httpResponse, tc.request, log.NewNopLogger())
 		if err != nil || tc.expectedErr != nil {
 			require.Equal(t, tc.expectedErr, err)
 			return
@@ -328,7 +328,7 @@ func TestPrometheusCodec_JSONResponse_Labels(t *testing.T) {
 			Body:          io.NopCloser(bytes.NewBuffer(body)),
 			ContentLength: int64(len(body)),
 		}
-		encoded, err := codec.EncodeLabelsQueryResponse(context.Background(), httpRequest, decoded, tc.isSeriesResponse)
+		encoded, err := codec.EncodeLabelsSeriesQueryResponse(context.Background(), httpRequest, decoded, tc.isSeriesResponse)
 		require.NoError(t, err)
 
 		expectedJSON, err := readResponseBody(httpResponse)
@@ -554,7 +554,7 @@ func TestPrometheusCodec_JSONEncoding_Labels(t *testing.T) {
 			Header: http.Header{"Accept": []string{jsonMimeType}},
 		}
 
-		encoded, err := codec.EncodeLabelsQueryResponse(context.Background(), httpRequest, tc.response, tc.isSeriesResponse)
+		encoded, err := codec.EncodeLabelsSeriesQueryResponse(context.Background(), httpRequest, tc.response, tc.isSeriesResponse)
 		require.NoError(t, err)
 		require.Equal(t, http.StatusOK, encoded.StatusCode)
 		require.Equal(t, "application/json", encoded.Header.Get("Content-Type"))
diff --git a/pkg/frontend/querymiddleware/codec_test.go b/pkg/frontend/querymiddleware/codec_test.go
index b04b09a5c0c..6d2b7686259 100644
--- a/pkg/frontend/querymiddleware/codec_test.go
+++ b/pkg/frontend/querymiddleware/codec_test.go
@@ -475,13 +475,13 @@ func TestMetricsQuery_WithQuery_WithExpr_TransformConsistency(t *testing.T) {
 	}
 }
 
-func TestPrometheusCodec_EncodeLabelsQueryRequest(t *testing.T) {
+func TestPrometheusCodec_DecodeEncodeLabelsQueryRequest(t *testing.T) {
 	for _, testCase := range []struct {
 		name             string
 		propagateHeaders []string
 		url              string
 		headers          http.Header
-		expectedStruct   LabelsQueryRequest
+		expectedStruct   LabelsSeriesQueryRequest
 		expectedGetLabelName      string
 		expectedGetStartOrDefault int64
 		expectedGetEndOrDefault   int64
@@ -709,7 +709,7 @@
 			}
 
 			codec := newTestPrometheusCodecWithHeaders(testCase.propagateHeaders)
-			reqDecoded, err := codec.DecodeLabelsQueryRequest(ctx, r)
+			reqDecoded, err := codec.DecodeLabelsSeriesQueryRequest(ctx, r)
 			if err != nil || testCase.expectedErr != "" {
 				require.EqualError(t, err, testCase.expectedErr)
 				return
@@ -720,7 +720,7 @@ func TestPrometheusCodec_EncodeLabelsQueryRequest(t *testing.T) {
 			require.EqualValues(t, testCase.expectedGetEndOrDefault, reqDecoded.GetEndOrDefault())
 			require.EqualValues(t, testCase.expectedLimit, reqDecoded.GetLimit())
 
-			reqEncoded, err := codec.EncodeLabelsQueryRequest(context.Background(), reqDecoded)
+			reqEncoded, err := codec.EncodeLabelsSeriesQueryRequest(context.Background(), reqDecoded)
 			require.NoError(t, err)
 			require.EqualValues(t, testCase.url, reqEncoded.RequestURI)
 		})
@@ -1729,7 +1729,7 @@ func TestPrometheusCodec_DecodeEncodeMultipleTimes_Labels(t *testing.T) {
 	for _, tc := range []struct {
 		name     string
 		queryURL string
-		request  LabelsQueryRequest
+		request  LabelsSeriesQueryRequest
 	}{
 		{
 			name: "label names - minimal",
@@ -1819,25 +1819,25 @@ func TestPrometheusCodec_DecodeEncodeMultipleTimes_Labels(t *testing.T) {
 			require.NoError(t, err)
 			expected.Body = http.NoBody
 			expected.Header = make(http.Header)
-			// This header is set by EncodeLabelsQueryRequest according to the codec's config, so we
+			// This header is set by EncodeLabelsSeriesQueryRequest according to the codec's config, so we
 			// should always expect it to be present on the re-encoded request.
 			expected.Header.Set("Accept", "application/json")
 
 			ctx := context.Background()
-			decoded, err := codec.DecodeLabelsQueryRequest(ctx, expected)
+			decoded, err := codec.DecodeLabelsSeriesQueryRequest(ctx, expected)
 			require.NoError(t, err)
 			assert.Equal(t, tc.request, decoded)
 
-			encoded, err := codec.EncodeLabelsQueryRequest(ctx, decoded)
+			encoded, err := codec.EncodeLabelsSeriesQueryRequest(ctx, decoded)
 			require.NoError(t, err)
 			assert.Equal(t, expected.URL, encoded.URL)
 			assert.Equal(t, expected.Header, encoded.Header)
 
-			decoded, err = codec.DecodeLabelsQueryRequest(ctx, encoded)
+			decoded, err = codec.DecodeLabelsSeriesQueryRequest(ctx, encoded)
 			require.NoError(t, err)
 			assert.Equal(t, tc.request, decoded)
 
-			encoded, err = codec.EncodeLabelsQueryRequest(ctx, decoded)
+			encoded, err = codec.EncodeLabelsSeriesQueryRequest(ctx, decoded)
 			require.NoError(t, err)
 			assert.Equal(t, expected.URL, encoded.URL)
 			assert.Equal(t, expected.Header, encoded.Header)
diff --git a/pkg/frontend/querymiddleware/experimental_functions.go b/pkg/frontend/querymiddleware/experimental_functions.go
index 1de5fee429c..df91492ec09 100644
--- a/pkg/frontend/querymiddleware/experimental_functions.go
+++ b/pkg/frontend/querymiddleware/experimental_functions.go
@@ -100,10 +100,7 @@ func containedExperimentalFunctions(expr parser.Expr) map[string]struct{} {
 		}
 		agg, ok := node.(*parser.AggregateExpr)
 		if ok {
-			// Note that unlike most PromQL functions, the experimental nature of the aggregation functions are manually
-			// defined and enforced, so they have to be hardcoded here and updated along with changes in Prometheus.
- switch agg.Op { - case parser.LIMITK, parser.LIMIT_RATIO: + if agg.Op.IsExperimentalAggregator() { expFuncNames[agg.Op.String()] = struct{}{} } } diff --git a/pkg/frontend/querymiddleware/labels_query_cache.go b/pkg/frontend/querymiddleware/labels_query_cache.go index bfac9ee4612..309d6821c9f 100644 --- a/pkg/frontend/querymiddleware/labels_query_cache.go +++ b/pkg/frontend/querymiddleware/labels_query_cache.go @@ -53,7 +53,7 @@ func (c *labelsQueryTTL) ttl(userID string) time.Duration { } func (g DefaultCacheKeyGenerator) LabelValues(r *http.Request) (*GenericQueryCacheKey, error) { - labelValuesReq, err := g.codec.DecodeLabelsQueryRequest(r.Context(), r) + labelValuesReq, err := g.codec.DecodeLabelsSeriesQueryRequest(r.Context(), r) if err != nil { return nil, err } diff --git a/pkg/frontend/querymiddleware/model_extra.go b/pkg/frontend/querymiddleware/model_extra.go index c38e65b0ef1..9cf07648338 100644 --- a/pkg/frontend/querymiddleware/model_extra.go +++ b/pkg/frontend/querymiddleware/model_extra.go @@ -510,12 +510,12 @@ func (r *PrometheusSeriesQueryRequest) GetHeaders() []*PrometheusHeader { } // WithLabelName clones the current `PrometheusLabelNamesQueryRequest` with a new label name param. -func (r *PrometheusLabelNamesQueryRequest) WithLabelName(string) (LabelsQueryRequest, error) { +func (r *PrometheusLabelNamesQueryRequest) WithLabelName(string) (LabelsSeriesQueryRequest, error) { panic("PrometheusLabelNamesQueryRequest.WithLabelName is not implemented") } // WithLabelName clones the current `PrometheusLabelValuesQueryRequest` with a new label name param. -func (r *PrometheusLabelValuesQueryRequest) WithLabelName(name string) (LabelsQueryRequest, error) { +func (r *PrometheusLabelValuesQueryRequest) WithLabelName(name string) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.Path = labelValuesPathSuffix.ReplaceAllString(r.Path, `/api/v1/label/`+name+`/values`) newRequest.LabelName = name @@ -523,12 +523,12 @@ func (r *PrometheusLabelValuesQueryRequest) WithLabelName(name string) (LabelsQu } // WithLabelName clones the current `PrometheusSeriesQueryRequest` with a new label name param. -func (r *PrometheusSeriesQueryRequest) WithLabelName(string) (LabelsQueryRequest, error) { +func (r *PrometheusSeriesQueryRequest) WithLabelName(string) (LabelsSeriesQueryRequest, error) { panic("PrometheusSeriesQueryRequest.WithLabelName is not implemented") } // WithLabelMatcherSets clones the current `PrometheusLabelNamesQueryRequest` with new label matcher sets. -func (r *PrometheusLabelNamesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsQueryRequest, error) { +func (r *PrometheusLabelNamesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.LabelMatcherSets = make([]string, len(labelMatcherSets)) copy(newRequest.LabelMatcherSets, labelMatcherSets) @@ -536,7 +536,7 @@ func (r *PrometheusLabelNamesQueryRequest) WithLabelMatcherSets(labelMatcherSets } // WithLabelMatcherSets clones the current `PrometheusLabelValuesQueryRequest` with new label matcher sets. 
-func (r *PrometheusLabelValuesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsQueryRequest, error) { +func (r *PrometheusLabelValuesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.LabelMatcherSets = make([]string, len(labelMatcherSets)) copy(newRequest.LabelMatcherSets, labelMatcherSets) @@ -544,7 +544,7 @@ func (r *PrometheusLabelValuesQueryRequest) WithLabelMatcherSets(labelMatcherSet } // WithLabelMatcherSets clones the current `PrometheusSeriesQueryRequest` with new label matcher sets. -func (r *PrometheusSeriesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsQueryRequest, error) { +func (r *PrometheusSeriesQueryRequest) WithLabelMatcherSets(labelMatcherSets []string) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.LabelMatcherSets = make([]string, len(labelMatcherSets)) copy(newRequest.LabelMatcherSets, labelMatcherSets) @@ -552,21 +552,21 @@ func (r *PrometheusSeriesQueryRequest) WithLabelMatcherSets(labelMatcherSets []s } // WithHeaders clones the current `PrometheusLabelNamesQueryRequest` with new headers. -func (r *PrometheusLabelNamesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsQueryRequest, error) { +func (r *PrometheusLabelNamesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.Headers = cloneHeaders(headers) return &newRequest, nil } // WithHeaders clones the current `PrometheusLabelValuesQueryRequest` with new headers. -func (r *PrometheusLabelValuesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsQueryRequest, error) { +func (r *PrometheusLabelValuesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.Headers = cloneHeaders(headers) return &newRequest, nil } // WithHeaders clones the current `PrometheusSeriesQueryRequest` with new headers. -func (r *PrometheusSeriesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsQueryRequest, error) { +func (r *PrometheusSeriesQueryRequest) WithHeaders(headers []*PrometheusHeader) (LabelsSeriesQueryRequest, error) { newRequest := *r newRequest.Headers = cloneHeaders(headers) return &newRequest, nil diff --git a/pkg/frontend/querymiddleware/querysharding.go b/pkg/frontend/querymiddleware/querysharding.go index 1fe6a186de1..7fede857cb9 100644 --- a/pkg/frontend/querymiddleware/querysharding.go +++ b/pkg/frontend/querymiddleware/querysharding.go @@ -30,7 +30,6 @@ import ( "github.com/grafana/mimir/pkg/querier/stats" "github.com/grafana/mimir/pkg/storage/lazyquery" "github.com/grafana/mimir/pkg/util" - util_math "github.com/grafana/mimir/pkg/util/math" "github.com/grafana/mimir/pkg/util/spanlogger" "github.com/grafana/mimir/pkg/util/validation" ) @@ -345,7 +344,7 @@ func (s *querySharding) getShardsForQuery(ctx context.Context, tenantIDs []strin prevTotalShards := totalShards // If an estimate for query cardinality is available, use it to limit the number // of shards based on linear interpolation. 
- totalShards = util_math.Min(totalShards, int(seriesCount.EstimatedSeriesCount/s.maxSeriesPerShard)+1) + totalShards = min(totalShards, int(seriesCount.EstimatedSeriesCount/s.maxSeriesPerShard)+1) if prevTotalShards != totalShards { spanLog.DebugLog( @@ -380,7 +379,7 @@ func (s *querySharding) getShardsForQuery(ctx context.Context, tenantIDs []strin } prevTotalShards := totalShards - totalShards = util_math.Max(1, util_math.Min(totalShards, (maxShardedQueries/int(hints.TotalQueries))/numShardableLegs)) + totalShards = max(1, min(totalShards, (maxShardedQueries/int(hints.TotalQueries))/numShardableLegs)) if prevTotalShards != totalShards { spanLog.DebugLog( @@ -498,7 +497,7 @@ func longestRegexpMatcherBytes(expr parser.Expr) int { continue } - longest = util_math.Max(longest, len(matcher.Value)) + longest = max(longest, len(matcher.Value)) } } diff --git a/pkg/frontend/querymiddleware/querysharding_test.go b/pkg/frontend/querymiddleware/querysharding_test.go index 24aefe429f4..323c50f7858 100644 --- a/pkg/frontend/querymiddleware/querysharding_test.go +++ b/pkg/frontend/querymiddleware/querysharding_test.go @@ -988,15 +988,8 @@ func TestQuerySharding_FunctionCorrectness(t *testing.T) { testsForBoth := []queryShardingFunctionCorrectnessTest{ {fn: "count_over_time", rangeQuery: true}, - {fn: "days_in_month"}, - {fn: "day_of_month"}, - {fn: "day_of_week"}, - {fn: "day_of_year"}, {fn: "delta", rangeQuery: true}, - {fn: "hour"}, {fn: "increase", rangeQuery: true}, - {fn: "minute"}, - {fn: "month"}, {fn: "rate", rangeQuery: true}, {fn: "resets", rangeQuery: true}, {fn: "sort"}, @@ -1006,10 +999,6 @@ func TestQuerySharding_FunctionCorrectness(t *testing.T) { {fn: "last_over_time", rangeQuery: true}, {fn: "present_over_time", rangeQuery: true}, {fn: "timestamp"}, - {fn: "year"}, - {fn: "clamp", args: []string{"5", "10"}}, - {fn: "clamp_max", args: []string{"5"}}, - {fn: "clamp_min", args: []string{"5"}}, {fn: "label_replace", args: []string{`"fuzz"`, `"$1"`, `"foo"`, `"b(.*)"`}}, {fn: "label_join", args: []string{`"fuzz"`, `","`, `"foo"`, `"bar"`}}, } @@ -1017,10 +1006,18 @@ func TestQuerySharding_FunctionCorrectness(t *testing.T) { {fn: "abs"}, {fn: "avg_over_time", rangeQuery: true}, {fn: "ceil"}, + {fn: "clamp", args: []string{"5", "10"}}, + {fn: "clamp_max", args: []string{"5"}}, + {fn: "clamp_min", args: []string{"5"}}, {fn: "changes", rangeQuery: true}, + {fn: "days_in_month"}, + {fn: "day_of_month"}, + {fn: "day_of_week"}, + {fn: "day_of_year"}, {fn: "deriv", rangeQuery: true}, {fn: "exp"}, {fn: "floor"}, + {fn: "hour"}, {fn: "idelta", rangeQuery: true}, {fn: "irate", rangeQuery: true}, {fn: "ln"}, @@ -1028,6 +1025,8 @@ func TestQuerySharding_FunctionCorrectness(t *testing.T) { {fn: "log2"}, {fn: "max_over_time", rangeQuery: true}, {fn: "min_over_time", rangeQuery: true}, + {fn: "minute"}, + {fn: "month"}, {fn: "round", args: []string{"20"}}, {fn: "sqrt"}, {fn: "deg"}, @@ -1055,6 +1054,7 @@ func TestQuerySharding_FunctionCorrectness(t *testing.T) { {fn: "double_exponential_smoothing", args: []string{"0.5", "0.7"}, rangeQuery: true}, // holt_winters is a backwards compatible alias for double_exponential_smoothing. 
{fn: "holt_winters", args: []string{"0.5", "0.7"}, rangeQuery: true}, + {fn: "year"}, } testsForNativeHistogramsOnly := []queryShardingFunctionCorrectnessTest{ {fn: "histogram_count"}, diff --git a/pkg/frontend/querymiddleware/request_validation.go b/pkg/frontend/querymiddleware/request_validation.go new file mode 100644 index 00000000000..3d05c5f2f74 --- /dev/null +++ b/pkg/frontend/querymiddleware/request_validation.go @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: AGPL-3.0-only + +package querymiddleware + +import ( + "context" + "net/http" + + "github.com/grafana/dskit/cancellation" +) + +const requestValidationFailedFmt = "request validation failed for " + +var errMetricsQueryRequestValidationFailed = cancellation.NewErrorf( + requestValidationFailedFmt + "metrics query", +) +var errLabelsQueryRequestValidationFailed = cancellation.NewErrorf( + requestValidationFailedFmt + "labels query", +) +var errCardinalityQueryRequestValidationFailed = cancellation.NewErrorf( + requestValidationFailedFmt + "cardinality query", +) + +type MetricsQueryRequestValidationRoundTripper struct { + codec Codec + next http.RoundTripper +} + +func NewMetricsQueryRequestValidationRoundTripper(codec Codec, next http.RoundTripper) http.RoundTripper { + return MetricsQueryRequestValidationRoundTripper{ + codec: codec, + next: next, + } +} + +func (rt MetricsQueryRequestValidationRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) { + ctx, cancel := context.WithCancelCause(r.Context()) + defer cancel(errMetricsQueryRequestValidationFailed) + r = r.WithContext(ctx) + + _, err := rt.codec.DecodeMetricsQueryRequest(ctx, r) + if err != nil { + return nil, err + } + return rt.next.RoundTrip(r) +} + +type LabelsQueryRequestValidationRoundTripper struct { + codec Codec + next http.RoundTripper +} + +func NewLabelsQueryRequestValidationRoundTripper(codec Codec, next http.RoundTripper) http.RoundTripper { + return LabelsQueryRequestValidationRoundTripper{ + codec: codec, + next: next, + } +} + +func (rt LabelsQueryRequestValidationRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) { + ctx, cancel := context.WithCancelCause(r.Context()) + defer cancel(errLabelsQueryRequestValidationFailed) + r = r.WithContext(ctx) + + _, err := rt.codec.DecodeLabelsSeriesQueryRequest(ctx, r) + if err != nil { + return nil, err + } + return rt.next.RoundTrip(r) +} + +type CardinalityQueryRequestValidationRoundTripper struct { + next http.RoundTripper +} + +func NewCardinalityQueryRequestValidationRoundTripper(next http.RoundTripper) http.RoundTripper { + return CardinalityQueryRequestValidationRoundTripper{ + next: next, + } +} + +func (rt CardinalityQueryRequestValidationRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) { + ctx, cancel := context.WithCancelCause(r.Context()) + defer cancel(errCardinalityQueryRequestValidationFailed) + r = r.WithContext(ctx) + + _, err := DecodeCardinalityQueryParams(r) + if err != nil { + return nil, err + } + return rt.next.RoundTrip(r) +} diff --git a/pkg/frontend/querymiddleware/request_validation_test.go b/pkg/frontend/querymiddleware/request_validation_test.go new file mode 100644 index 00000000000..419e8663a97 --- /dev/null +++ b/pkg/frontend/querymiddleware/request_validation_test.go @@ -0,0 +1,246 @@ +// SPDX-License-Identifier: AGPL-3.0-only + +package querymiddleware + +import ( + "net/http" + "net/http/httptest" + "strconv" + "testing" + + "github.com/grafana/dskit/middleware" + "github.com/stretchr/testify/assert" + + apierror 
"github.com/grafana/mimir/pkg/api/error" +) + +// TestMetricsQueryRequestValidationRoundTripper only checks for expected API error types being returned, +// as the goal is to apply codec parsing early in the request cycle and return 400 errors for invalid requests. +// The exact error message for each different parse failure is tested extensively in the codec tests. +func TestMetricsQueryRequestValidationRoundTripper(t *testing.T) { + srv := httptest.NewServer( + middleware.AuthenticateUser.Wrap( + http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + var err error + _, err = w.Write(nil) + + if err != nil { + t.Fatal(err) + } + }), + ), + ) + defer srv.Close() + + rt := NewMetricsQueryRequestValidationRoundTripper(newTestPrometheusCodec(), http.DefaultTransport) + + for i, tc := range []struct { + url string + expectedErrType apierror.Type + }{ + { + url: queryRangePathSuffix + "?query=up{&start=123&end=456&step=60s", + expectedErrType: apierror.TypeBadData, + }, + { + url: queryRangePathSuffix + "?query=up{}&start=foo&end=456&step=60s", + expectedErrType: apierror.TypeBadData, + }, + { + url: queryRangePathSuffix + "?query=up{}&start=123&end=bar&step=60s", + expectedErrType: apierror.TypeBadData, + }, + { + url: queryRangePathSuffix + "?query=up{}&start=123&end=456&step=baz", + expectedErrType: apierror.TypeBadData, + }, + { + url: queryRangePathSuffix + "?query=up{}&start=123&end=456&step=60s", + expectedErrType: "", + }, + { + url: instantQueryPathSuffix + "?query=up{", + expectedErrType: apierror.TypeBadData, + }, + { + url: instantQueryPathSuffix + "?query=up{}&time=foo", + expectedErrType: apierror.TypeBadData, + }, + { + url: instantQueryPathSuffix + "?query=up&start=123&end=456&step=60s", + expectedErrType: "", + }, + } { + t.Run(strconv.Itoa(i), func(t *testing.T) { + req, err := http.NewRequest(http.MethodGet, srv.URL+tc.url, nil) + if err != nil { + t.Fatal(err) + } + + _, err = rt.RoundTrip(req) + if tc.expectedErrType == "" { + assert.NoError(t, err) + return + } + + apiErr := &apierror.APIError{} + + assert.ErrorAs(t, err, &apiErr) + assert.Equal(t, tc.expectedErrType, apiErr.Type) + }) + } +} + +// TestLabelsQueryRequestValidationRoundTripper only checks for expected API error types being returned, +// as the goal is to apply codec parsing early in the request cycle and return 400 errors for invalid requests. +// The exact error message for each different parse failure is tested extensively in the codec tests. 
+func TestLabelsQueryRequestValidationRoundTripper(t *testing.T) {
+	srv := httptest.NewServer(
+		middleware.AuthenticateUser.Wrap(
+			http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+				var err error
+				_, err = w.Write(nil)
+
+				if err != nil {
+					t.Fatal(err)
+				}
+			}),
+		),
+	)
+	defer srv.Close()
+
+	rt := NewLabelsQueryRequestValidationRoundTripper(newTestPrometheusCodec(), http.DefaultTransport)
+
+	for i, tc := range []struct {
+		url             string
+		expectedErrType apierror.Type
+	}{
+		{
+			url:             labelNamesPathSuffix + "?start=foo",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             labelNamesPathSuffix + "?end=foo",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             labelNamesPathSuffix + "?limit=foo",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             labelNamesPathSuffix + "",
+			expectedErrType: "",
+		},
+		{
+			url:             labelNamesPathSuffix + "?match=up{}&start=123&end=456&limit=100",
+			expectedErrType: "",
+		},
+	} {
+		t.Run(strconv.Itoa(i), func(t *testing.T) {
+			req, err := http.NewRequest(http.MethodGet, srv.URL+tc.url, nil)
+			if err != nil {
+				t.Fatal(err)
+			}
+
+			_, err = rt.RoundTrip(req)
+			if tc.expectedErrType == "" {
+				assert.NoError(t, err)
+				return
+			}
+
+			apiErr := &apierror.APIError{}
+
+			assert.ErrorAs(t, err, &apiErr)
+			assert.Equal(t, tc.expectedErrType, apiErr.Type)
+		})
+	}
+}
+
+// TestCardinalityQueryRequestValidationRoundTripper only checks for expected API error types being returned,
+// as the goal is to apply codec parsing early in the request cycle and return 400 errors for invalid requests.
+// The exact error message for each different parse failure is tested extensively in the pkg/cardinality/request tests.
+func TestCardinalityQueryRequestValidationRoundTripper(t *testing.T) {
+	srv := httptest.NewServer(
+		middleware.AuthenticateUser.Wrap(
+			http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+				var err error
+				_, err = w.Write(nil)
+
+				if err != nil {
+					t.Fatal(err)
+				}
+			}),
+		),
+	)
+	defer srv.Close()
+
+	rt := NewCardinalityQueryRequestValidationRoundTripper(http.DefaultTransport)
+	for i, tc := range []struct {
+		url             string
+		expectedErrType apierror.Type
+	}{
+		{
+			url:             cardinalityLabelNamesPathSuffix + "?selector=up{",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelNamesPathSuffix + "?limit=0&limit=10",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelNamesPathSuffix + "?count_method=foo",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelNamesPathSuffix,
+			expectedErrType: "",
+		},
+		{
+			url:             cardinalityLabelValuesPathSuffix + "?selector=up{",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelValuesPathSuffix + "?limit=0&limit=10",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelValuesPathSuffix + "?count_method=foo",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			// non-utf8 label name will be rejected even when we transition to UTF-8 label names
+			url:             cardinalityLabelValuesPathSuffix + "?label_names[]=\\xbd\\xb2\\x3d\\xbc\\x20\\xe2\\x8c\\x98",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityLabelValuesPathSuffix + "?label_names[]=foo",
+			expectedErrType: "",
+		},
+		{
+			url:             cardinalityActiveSeriesPathSuffix + "?selector=up{",
+			expectedErrType: apierror.TypeBadData,
+		},
+		{
+			url:             cardinalityActiveSeriesPathSuffix + "?selector=up{}",
+			expectedErrType: "",
+		},
+	} {
+		t.Run(strconv.Itoa(i), func(t *testing.T) {
+			req, err := http.NewRequest(http.MethodGet, 
srv.URL+tc.url, nil) + if err != nil { + t.Fatal(err) + } + + _, err = rt.RoundTrip(req) + if tc.expectedErrType == "" { + assert.NoError(t, err) + return + } + + apiErr := &apierror.APIError{} + + assert.ErrorAs(t, err, &apiErr) + assert.Equal(t, tc.expectedErrType, apiErr.Type) + }) + } +} diff --git a/pkg/frontend/querymiddleware/results_cache.go b/pkg/frontend/querymiddleware/results_cache.go index 9a6c65612b1..b13699c30c5 100644 --- a/pkg/frontend/querymiddleware/results_cache.go +++ b/pkg/frontend/querymiddleware/results_cache.go @@ -34,7 +34,6 @@ import ( "github.com/grafana/mimir/pkg/mimirpb" "github.com/grafana/mimir/pkg/util" - "github.com/grafana/mimir/pkg/util/math" ) const ( @@ -394,11 +393,11 @@ func mergeCacheExtentsForRequest(ctx context.Context, r MetricsQueryRequest, mer if accumulator.QueryTimestampMs > 0 && extents[i].QueryTimestampMs > 0 { // Keep older (minimum) timestamp. - accumulator.QueryTimestampMs = math.Min(accumulator.QueryTimestampMs, extents[i].QueryTimestampMs) + accumulator.QueryTimestampMs = min(accumulator.QueryTimestampMs, extents[i].QueryTimestampMs) } else { // Some old extents may have zero timestamps. In that case we keep the non-zero one. // (Hopefully one of them is not zero, since we're only merging if there are some new extents.) - accumulator.QueryTimestampMs = math.Max(accumulator.QueryTimestampMs, extents[i].QueryTimestampMs) + accumulator.QueryTimestampMs = max(accumulator.QueryTimestampMs, extents[i].QueryTimestampMs) } } diff --git a/pkg/frontend/querymiddleware/roundtrip.go b/pkg/frontend/querymiddleware/roundtrip.go index 2df612e6994..af3009d76d9 100644 --- a/pkg/frontend/querymiddleware/roundtrip.go +++ b/pkg/frontend/querymiddleware/roundtrip.go @@ -140,16 +140,16 @@ type MetricsQueryHandler interface { } // LabelsHandlerFunc is like http.HandlerFunc, but for LabelsQueryHandler. -type LabelsHandlerFunc func(context.Context, LabelsQueryRequest) (Response, error) +type LabelsHandlerFunc func(context.Context, LabelsSeriesQueryRequest) (Response, error) // Do implements LabelsQueryHandler. -func (q LabelsHandlerFunc) Do(ctx context.Context, req LabelsQueryRequest) (Response, error) { +func (q LabelsHandlerFunc) Do(ctx context.Context, req LabelsSeriesQueryRequest) (Response, error) { return q(ctx, req) } // LabelsQueryHandler is like http.Handler, but specifically for Prometheus label names and values calls. type LabelsQueryHandler interface { - Do(context.Context, LabelsQueryRequest) (Response, error) + Do(context.Context, LabelsSeriesQueryRequest) (Response, error) } // MetricsQueryMiddlewareFunc is like http.HandlerFunc, but for MetricsQueryMiddleware. 
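Before the tripperware wiring below: all three validation round trippers introduced in request_validation.go follow the same decorator pattern. They decode the request eagerly, return the resulting apierror (rendered as a 400) before the request reaches caching, sharding, or the downstream, and attach a cancellation cause so anything observing the request context can tell why it ended. A minimal sketch of how such a round tripper composes with a downstream transport (the codec value is assumed to be an already-constructed Codec; the wiring in newQueryTripperware below is the authoritative version):

// Illustrative composition only; names mirror this diff.
func newValidatedLabelsTransport(codec Codec, downstream http.RoundTripper) http.RoundTripper {
	// Malformed label/series requests fail here with an apierror of
	// TypeBadData and never reach the downstream round tripper.
	return NewLabelsQueryRequestValidationRoundTripper(codec, downstream)
}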
@@ -273,6 +273,7 @@ func newQueryTripperware( activeSeries := next activeNativeHistogramMetrics := next labels := next + series := next if cfg.ShardActiveSeriesQueries { activeSeries = newShardActiveSeriesMiddleware(activeSeries, cfg.UseActiveSeriesDecoder, limits, log) @@ -289,16 +290,24 @@ func newQueryTripperware( activeSeries = newReadConsistencyRoundTripper(activeSeries, ingestStorageTopicOffsetsReader, limits, log, metrics) activeNativeHistogramMetrics = newReadConsistencyRoundTripper(activeNativeHistogramMetrics, ingestStorageTopicOffsetsReader, limits, log, metrics) labels = newReadConsistencyRoundTripper(labels, ingestStorageTopicOffsetsReader, limits, log, metrics) + series = newReadConsistencyRoundTripper(series, ingestStorageTopicOffsetsReader, limits, log, metrics) remoteRead = newReadConsistencyRoundTripper(remoteRead, ingestStorageTopicOffsetsReader, limits, log, metrics) next = newReadConsistencyRoundTripper(next, ingestStorageTopicOffsetsReader, limits, log, metrics) } - // Look up cache as first thing. + // Look up cache as first thing after validation. if cfg.CacheResults { cardinality = newCardinalityQueryCacheRoundTripper(c, cacheKeyGenerator, limits, cardinality, log, registerer) labels = newLabelsQueryCacheRoundTripper(c, cacheKeyGenerator, limits, labels, log, registerer) } + // Validate the request before any processing. + queryrange = NewMetricsQueryRequestValidationRoundTripper(codec, queryrange) + instant = NewMetricsQueryRequestValidationRoundTripper(codec, instant) + labels = NewLabelsQueryRequestValidationRoundTripper(codec, labels) + series = NewLabelsQueryRequestValidationRoundTripper(codec, series) + cardinality = NewCardinalityQueryRequestValidationRoundTripper(cardinality) + return RoundTripFunc(func(r *http.Request) (*http.Response, error) { switch { case IsRangeQuery(r.URL.Path): @@ -313,6 +322,8 @@ func newQueryTripperware( return activeNativeHistogramMetrics.RoundTrip(r) case IsLabelsQuery(r.URL.Path): return labels.RoundTrip(r) + case IsSeriesQuery(r.URL.Path): + return series.RoundTrip(r) case IsRemoteReadQuery(r.URL.Path): return remoteRead.RoundTrip(r) default: diff --git a/pkg/frontend/querymiddleware/roundtrip_test.go b/pkg/frontend/querymiddleware/roundtrip_test.go index aeb78323b9f..e9672054291 100644 --- a/pkg/frontend/querymiddleware/roundtrip_test.go +++ b/pkg/frontend/querymiddleware/roundtrip_test.go @@ -186,26 +186,6 @@ func TestTripperware_InstantQuery(t *testing.T) { }, res) }) - t.Run("specific time param with form being already parsed", func(t *testing.T) { - ts := time.Date(2021, 1, 2, 3, 4, 5, 0, time.UTC) - - formParserRoundTripper := RoundTripFunc(func(r *http.Request) (*http.Response, error) { - assert.NoError(t, r.ParseForm()) - return tripper.RoundTrip(r) - }) - queryClient, err := api.NewClient(api.Config{Address: "http://localhost", RoundTripper: formParserRoundTripper}) - require.NoError(t, err) - api := v1.NewAPI(queryClient) - - res, _, err := api.Query(ctx, `sum(increase(we_dont_care_about_this[1h])) by (foo)`, ts) - require.NoError(t, err) - require.IsType(t, model.Vector{}, res) - require.NotEmpty(t, res.(model.Vector)) - - resultTime := res.(model.Vector)[0].Timestamp.Time() - require.Equal(t, ts.Unix(), resultTime.Unix()) - }) - t.Run("default time param happy case", func(t *testing.T) { queryClient, err := api.NewClient(api.Config{Address: "http://localhost", RoundTripper: tripper}) require.NoError(t, err) @@ -220,24 +200,6 @@ func TestTripperware_InstantQuery(t *testing.T) { require.InDelta(t, time.Now().Unix(), 
resultTime.Unix(), 1) }) - t.Run("default time param with form being already parsed", func(t *testing.T) { - formParserRoundTripper := RoundTripFunc(func(r *http.Request) (*http.Response, error) { - assert.NoError(t, r.ParseForm()) - return tripper.RoundTrip(r) - }) - queryClient, err := api.NewClient(api.Config{Address: "http://localhost", RoundTripper: formParserRoundTripper}) - require.NoError(t, err) - api := v1.NewAPI(queryClient) - - res, _, err := api.Query(ctx, `sum(increase(we_dont_care_about_this[1h])) by (foo)`, time.Time{}) - require.NoError(t, err) - require.IsType(t, model.Vector{}, res) - require.NotEmpty(t, res.(model.Vector)) - - resultTime := res.(model.Vector)[0].Timestamp.Time() - require.InDelta(t, time.Now().Unix(), resultTime.Unix(), 1) - }) - t.Run("post form time param takes precedence over query time param ", func(t *testing.T) { postFormTimeParam := time.Date(2021, 1, 2, 3, 4, 5, 0, time.UTC) @@ -834,7 +796,7 @@ func TestTripperware_ShouldSupportReadConsistencyOffsetsInjection(t *testing.T) }, "cardinality label values": { makeRequest: func() *http.Request { - return httptest.NewRequest("GET", cardinalityLabelValuesPathSuffix, nil) + return httptest.NewRequest("GET", cardinalityLabelValuesPathSuffix+"?label_names[]=foo", nil) }, }, "cardinality active series": { diff --git a/pkg/frontend/v2/frontend_scheduler_adapter.go b/pkg/frontend/v2/frontend_scheduler_adapter.go index a43a8afd15c..23a8ea96baf 100644 --- a/pkg/frontend/v2/frontend_scheduler_adapter.go +++ b/pkg/frontend/v2/frontend_scheduler_adapter.go @@ -75,7 +75,7 @@ func (a *frontendToSchedulerAdapter) extractAdditionalQueueDimensions( return a.queryComponentQueueDimensionFromTimeParams(tenantIDs, minT, maxT, now), nil case querymiddleware.IsLabelsQuery(httpRequest.URL.Path): - decodedRequest, err := a.codec.DecodeLabelsQueryRequest(httpRequest.Context(), httpRequest) + decodedRequest, err := a.codec.DecodeLabelsSeriesQueryRequest(httpRequest.Context(), httpRequest) if err != nil { return nil, err } diff --git a/pkg/ingester/ingester.go b/pkg/ingester/ingester.go index 4ae60217a4b..08edb6ab54c 100644 --- a/pkg/ingester/ingester.go +++ b/pkg/ingester/ingester.go @@ -1847,7 +1847,7 @@ func (i *Ingester) MetricsForLabelMatchers(ctx context.Context, req *client.Metr Metric: make([]*mimirpb.Metric, 0), } - mergedSet := storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge) + mergedSet := storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge) for mergedSet.Next() { // Interrupt if the context has been canceled. if ctx.Err() != nil { @@ -2964,7 +2964,7 @@ func (i *Ingester) minTsdbHeadTimestamp() float64 { minTime := int64(math.MaxInt64) for _, db := range i.tsdbs { - minTime = util_math.Min(minTime, db.db.Head().MinTime()) + minTime = min(minTime, db.db.Head().MinTime()) } if minTime == math.MaxInt64 { @@ -2980,7 +2980,7 @@ func (i *Ingester) maxTsdbHeadTimestamp() float64 { maxTime := int64(math.MinInt64) for _, db := range i.tsdbs { - maxTime = util_math.Max(maxTime, db.db.Head().MaxTime()) + maxTime = max(maxTime, db.db.Head().MaxTime()) } if maxTime == math.MinInt64 { @@ -3248,7 +3248,7 @@ func (i *Ingester) compactBlocksToReduceInMemorySeries(ctx context.Context, now // Estimate the number of series that would be dropped from the TSDB Head if we would // compact the head up until "now - active series idle timeout". 
totalActiveSeries, _, _ := db.activeSeries.Active() - estimatedSeriesReduction := util_math.Max(0, int64(userMemorySeries)-int64(totalActiveSeries)) + estimatedSeriesReduction := max(0, int64(userMemorySeries)-int64(totalActiveSeries)) estimations = append(estimations, seriesReductionEstimation{ userID: userID, estimatedCount: estimatedSeriesReduction, diff --git a/pkg/ingester/ingester_early_compaction_test.go b/pkg/ingester/ingester_early_compaction_test.go index 7e34dc399b3..531d8a673f0 100644 --- a/pkg/ingester/ingester_early_compaction_test.go +++ b/pkg/ingester/ingester_early_compaction_test.go @@ -28,7 +28,6 @@ import ( "golang.org/x/exp/slices" "github.com/grafana/mimir/pkg/ingester/client" - util_math "github.com/grafana/mimir/pkg/util/math" util_test "github.com/grafana/mimir/pkg/util/test" ) @@ -602,7 +601,7 @@ func TestIngester_compactBlocksToReduceInMemorySeries_Concurrency(t *testing.T) // Find the lowest sample written. We compact up until that timestamp. writerTimesMx.Lock() for _, ts := range writerTimes { - lowestWriterTimeMilli = util_math.Min(lowestWriterTimeMilli, ts) + lowestWriterTimeMilli = min(lowestWriterTimeMilli, ts) } writerTimesMx.Unlock() diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go index e92b19b7f75..6d03bc83535 100644 --- a/pkg/ingester/ingester_test.go +++ b/pkg/ingester/ingester_test.go @@ -70,7 +70,6 @@ import ( "github.com/grafana/mimir/pkg/usagestats" "github.com/grafana/mimir/pkg/util" "github.com/grafana/mimir/pkg/util/globalerror" - util_math "github.com/grafana/mimir/pkg/util/math" util_test "github.com/grafana/mimir/pkg/util/test" "github.com/grafana/mimir/pkg/util/validation" ) @@ -4978,7 +4977,7 @@ func createIngesterWithSeries(t testing.TB, userID string, numSeries, numSamples for ts := startTimestamp; ts < startTimestamp+(step*int64(numSamplesPerSeries)); ts += step { for o := 0; o < numSeries; o += maxBatchSize { - batchSize := util_math.Min(maxBatchSize, numSeries-o) + batchSize := min(maxBatchSize, numSeries-o) // Generate metrics and samples (1 for each series). metrics := make([][]mimirpb.LabelAdapter, 0, batchSize) diff --git a/pkg/ingester/limiter.go b/pkg/ingester/limiter.go index 74f81c545ac..f743cb82a7e 100644 --- a/pkg/ingester/limiter.go +++ b/pkg/ingester/limiter.go @@ -11,7 +11,6 @@ import ( "github.com/grafana/dskit/ring" "github.com/grafana/mimir/pkg/util" - util_math "github.com/grafana/mimir/pkg/util/math" ) // limiterTenantLimits provides access to limits used by Limiter. @@ -168,7 +167,7 @@ func (is *ingesterRingLimiterStrategy) convertGlobalToLocalLimit(userID string, // expected number of ingesters per sharded zone, then we should honor the latter because series/metadata // cannot be written to more ingesters than that. 
if userShardSize > 0 { - ingestersInZoneCount = util_math.Min(ingestersInZoneCount, util.ShuffleShardExpectedInstancesPerZone(userShardSize, zonesCount)) + ingestersInZoneCount = min(ingestersInZoneCount, util.ShuffleShardExpectedInstancesPerZone(userShardSize, zonesCount)) } // This may happen, for example when the total number of ingesters is asynchronously updated, or @@ -190,7 +189,7 @@ func (is *ingesterRingLimiterStrategy) getShardSize(userID string) int { func (is *ingesterRingLimiterStrategy) getZonesCount() int { if is.zoneAwarenessEnabled { - return util_math.Max(is.ring.ZonesCount(), 1) + return max(is.ring.ZonesCount(), 1) } return 1 } diff --git a/pkg/ingester/user_tsdb.go b/pkg/ingester/user_tsdb.go index 95bfe9840e2..5a3ed82c28c 100644 --- a/pkg/ingester/user_tsdb.go +++ b/pkg/ingester/user_tsdb.go @@ -261,7 +261,7 @@ func nextForcedHeadCompactionRange(blockDuration, headMinTime, headMaxTime, forc // By default we try to compact the whole head, honoring the forcedMaxTime. minTime = headMinTime - maxTime = util_math.Min(headMaxTime, forcedMaxTime) + maxTime = min(headMaxTime, forcedMaxTime) // Due to the forcedMaxTime, the range may be empty. In that case we just skip it. if maxTime < minTime { diff --git a/pkg/mimir/modules.go b/pkg/mimir/modules.go index c66d15c6474..697501af98f 100644 --- a/pkg/mimir/modules.go +++ b/pkg/mimir/modules.go @@ -711,7 +711,7 @@ func (t *Mimir) initQueryFrontendTopicOffsetsReader() (services.Service, error) var err error - kafkaMetrics := ingest.NewKafkaReaderClientMetrics("query-frontend", t.Registerer) + kafkaMetrics := ingest.NewKafkaReaderClientMetrics(ingest.ReaderMetricsPrefix, "query-frontend", t.Registerer) kafkaClient, err := ingest.NewKafkaReaderClient(t.Cfg.IngestStorage.KafkaConfig, kafkaMetrics, util_log.Logger) if err != nil { return nil, err @@ -1023,6 +1023,7 @@ func (t *Mimir) initMemberlistKV() (services.Service, error) { // Append to the list of codecs instead of overwriting the value to allow third parties to inject their own codecs. t.Cfg.MemberlistKV.Codecs = append(t.Cfg.MemberlistKV.Codecs, ring.GetCodec()) t.Cfg.MemberlistKV.Codecs = append(t.Cfg.MemberlistKV.Codecs, ring.GetPartitionRingCodec()) + t.Cfg.MemberlistKV.Codecs = append(t.Cfg.MemberlistKV.Codecs, distributor.GetReplicaDescCodec()) dnsProviderReg := prometheus.WrapRegistererWithPrefix( "cortex_", @@ -1037,6 +1038,7 @@ func (t *Mimir) initMemberlistKV() (services.Service, error) { // Update the config. 
t.Cfg.Distributor.DistributorRing.Common.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV + t.Cfg.Distributor.HATrackerConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV t.Cfg.Ingester.IngesterRing.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV t.Cfg.Ingester.IngesterPartitionRing.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV t.Cfg.StoreGateway.ShardingRing.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV diff --git a/pkg/mimir/modules_test.go b/pkg/mimir/modules_test.go index c8ecc7b8daf..d71c8715717 100644 --- a/pkg/mimir/modules_test.go +++ b/pkg/mimir/modules_test.go @@ -193,6 +193,7 @@ func TestMultiKVSetup(t *testing.T) { Distributor: func(t *testing.T, c Config) { require.NotNil(t, c.Distributor.DistributorRing.Common.KVStore.Multi.ConfigProvider) + require.NotNil(t, c.Distributor.HATrackerConfig.KVStore.MemberlistKV) require.NotNil(t, c.Ingester.IngesterRing.KVStore.Multi.ConfigProvider) require.NotNil(t, c.Ingester.IngesterPartitionRing.KVStore.Multi.ConfigProvider) }, diff --git a/pkg/mimirpb/timeseries.go b/pkg/mimirpb/timeseries.go index 43eb0dd2a72..9ddefa711c4 100644 --- a/pkg/mimirpb/timeseries.go +++ b/pkg/mimirpb/timeseries.go @@ -182,6 +182,10 @@ func (p *PreallocTimeseries) ResizeExemplars(newSize int) { p.clearUnmarshalData() } +func (p *PreallocTimeseries) SamplesUpdated() { + p.clearUnmarshalData() +} + func (p *PreallocTimeseries) HistogramsUpdated() { p.clearUnmarshalData() } diff --git a/pkg/mimirtool/commands/env_var.go b/pkg/mimirtool/commands/env_var.go index 3e6187ecdb9..3cb6bea9ced 100644 --- a/pkg/mimirtool/commands/env_var.go +++ b/pkg/mimirtool/commands/env_var.go @@ -30,7 +30,7 @@ func NewEnvVarsWithPrefix(prefix string) EnvVarNames { useLegacyRoutes = "USE_LEGACY_ROUTES" authToken = "AUTH_TOKEN" extraHeaders = "EXTRA_HEADERS" - mimirHTTPPrefix = "MIMIR_HTTP_PREFIX" + mimirHTTPPrefix = "HTTP_PREFIX" ) if len(prefix) > 0 && prefix[len(prefix)-1] != '_' { diff --git a/pkg/querier/blocks_store_queryable.go b/pkg/querier/blocks_store_queryable.go index a6f17182594..1871cc5e8e0 100644 --- a/pkg/querier/blocks_store_queryable.go +++ b/pkg/querier/blocks_store_queryable.go @@ -488,7 +488,7 @@ func (q *blocksStoreQuerier) selectSorted(ctx context.Context, sp *storage.Selec } return series.NewSeriesSetWithWarnings( - storage.NewMergeSeriesSet(resSeriesSets, storage.ChainedSeriesMerge), + storage.NewMergeSeriesSet(resSeriesSets, 0, storage.ChainedSeriesMerge), resWarnings) } diff --git a/pkg/querier/cardinality_analysis_handler.go b/pkg/querier/cardinality_analysis_handler.go index 9a6d57081bd..90ecec58005 100644 --- a/pkg/querier/cardinality_analysis_handler.go +++ b/pkg/querier/cardinality_analysis_handler.go @@ -19,7 +19,6 @@ import ( "github.com/grafana/mimir/pkg/querier/api" "github.com/grafana/mimir/pkg/querier/worker" "github.com/grafana/mimir/pkg/util" - util_math "github.com/grafana/mimir/pkg/util/math" "github.com/grafana/mimir/pkg/util/validation" ) @@ -196,7 +195,7 @@ func toLabelNamesCardinalityResponse(response *ingester_client.LabelNamesAndValu labelsWithValues := response.Items sortByValuesCountAndName(labelsWithValues) valuesCountTotal := getValuesCountTotal(labelsWithValues) - items := make([]*api.LabelNamesCardinalityItem, util_math.Min(len(labelsWithValues), limit)) + items := make([]*api.LabelNamesCardinalityItem, min(len(labelsWithValues), limit)) for i := 0; i < len(items); i++ { items[i] = &api.LabelNamesCardinalityItem{LabelName: labelsWithValues[i].LabelName, LabelValuesCount: 
len(labelsWithValues[i].Values)} } diff --git a/pkg/querier/distributor_queryable.go b/pkg/querier/distributor_queryable.go index 3a3c4b315aa..2fa028dd914 100644 --- a/pkg/querier/distributor_queryable.go +++ b/pkg/querier/distributor_queryable.go @@ -200,7 +200,7 @@ func (q *distributorQuerier) streamingSelect(ctx context.Context, minT, maxT int return sets[0] } // Sets need to be sorted. Both series.NewConcreteSeriesSetFromUnsortedSeries and newTimeSeriesSeriesSet take care of that. - return storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge) + return storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge) } func (q *distributorQuerier) LabelValues(ctx context.Context, name string, _ *storage.LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) { diff --git a/pkg/querier/metadata_handler.go b/pkg/querier/metadata_handler.go index ba6f8178b6f..1a56f01910a 100644 --- a/pkg/querier/metadata_handler.go +++ b/pkg/querier/metadata_handler.go @@ -48,28 +48,34 @@ type metadataErrorResult struct { // Mimir for a given tenant. It is kept and returned as a set. func NewMetadataHandler(m MetadataSupplier) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - limit := -1 + limit := int32(-1) if s := r.FormValue("limit"); s != "" { - var err error - if limit, err = strconv.Atoi(s); err != nil { + parsed, err := strconv.ParseInt(s, 10, 32) + if err != nil { w.WriteHeader(http.StatusBadRequest) util.WriteJSONResponse(w, metadataErrorResult{Status: statusError, Error: "limit must be a number"}) return } + + limit = int32(parsed) } - limitPerMetric := -1 + + limitPerMetric := int32(-1) if s := r.FormValue("limit_per_metric"); s != "" { - var err error - if limitPerMetric, err = strconv.Atoi(s); err != nil { + parsed, err := strconv.ParseInt(s, 10, 32) + if err != nil { w.WriteHeader(http.StatusBadRequest) util.WriteJSONResponse(w, metadataErrorResult{Status: statusError, Error: "limit_per_metric must be a number"}) return } + + limitPerMetric = int32(parsed) } + metric := r.FormValue("metric") req := &client.MetricsMetadataRequest{ - Limit: int32(limit), - LimitPerMetric: int32(limitPerMetric), + Limit: limit, + LimitPerMetric: limitPerMetric, Metric: metric, } @@ -87,11 +93,11 @@ func NewMetadataHandler(m MetadataSupplier) http.Handler { // We enforce this both here and in the ingesters. Doing it in the ingesters is // more efficient as it is earlier in the process, but since that one is per user, // we still need to do it here after all the results are merged. - if limitPerMetric > 0 && len(ms) >= limitPerMetric { + if limitPerMetric > 0 && len(ms) >= int(limitPerMetric) { continue } if !ok { - if limit >= 0 && len(metrics) >= limit { + if limit >= 0 && len(metrics) >= int(limit) { break } // Most metrics will only hold 1 copy of the same metadata. 
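One subtlety in the metadata handler change above: parsing with strconv.ParseInt(s, 10, 32) rejects out-of-range values at parse time, while the old strconv.Atoi followed by an int32() conversion silently wrapped on 64-bit platforms, turning a huge limit into a negative one. A small self-contained illustration:

// Sketch: why ParseInt with bitSize 32 is safer than Atoi + int32().
func limitParsingDifference() {
	_, err := strconv.ParseInt("3000000000", 10, 32)
	fmt.Println(err != nil) // true: out of int32 range, rejected at parse time

	n, _ := strconv.Atoi("3000000000") // succeeds on 64-bit platforms
	fmt.Println(int32(n))              // -1294967296: silently wrapped negative "limit"
}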
diff --git a/pkg/querier/querier.go b/pkg/querier/querier.go index cae5138ff86..8c44c89cf65 100644 --- a/pkg/querier/querier.go +++ b/pkg/querier/querier.go @@ -35,7 +35,6 @@ import ( "github.com/grafana/mimir/pkg/util" "github.com/grafana/mimir/pkg/util/activitytracker" "github.com/grafana/mimir/pkg/util/limiter" - "github.com/grafana/mimir/pkg/util/math" "github.com/grafana/mimir/pkg/util/spanlogger" "github.com/grafana/mimir/pkg/util/validation" ) @@ -550,7 +549,7 @@ func (mq multiQuerier) mergeSeriesSets(sets []storage.SeriesSet) storage.SeriesS } if len(chunks) == 0 { - return storage.NewMergeSeriesSet(otherSets, storage.ChainedSeriesMerge) + return storage.NewMergeSeriesSet(otherSets, 0, storage.ChainedSeriesMerge) } // partitionChunks returns set with sorted series, so it can be used by NewMergeSeriesSet @@ -561,7 +560,7 @@ func (mq multiQuerier) mergeSeriesSets(sets []storage.SeriesSet) storage.SeriesS } otherSets = append(otherSets, chunksSet) - return storage.NewMergeSeriesSet(otherSets, storage.ChainedSeriesMerge) + return storage.NewMergeSeriesSet(otherSets, 0, storage.ChainedSeriesMerge) } type sliceSeriesSet struct { @@ -617,7 +616,7 @@ func clampMaxTime(spanLog *spanlogger.SpanLogger, maxT int64, refT int64, limitD // limits equal to 0 are considered to not be enabled return maxT } - clampedT := math.Min(maxT, refT+limitDelta.Milliseconds()) + clampedT := min(maxT, refT+limitDelta.Milliseconds()) if clampedT != maxT { logClampEvent(spanLog, maxT, clampedT, "max", limitName) @@ -640,7 +639,7 @@ func clampMinTime(spanLog *spanlogger.SpanLogger, minT int64, refT int64, limitD // limits equal to 0 are considered to not be enabled return minT } - clampedT := math.Max(minT, refT+limitDelta.Milliseconds()) + clampedT := max(minT, refT+limitDelta.Milliseconds()) if clampedT != minT { logClampEvent(spanLog, minT, clampedT, "min", limitName) diff --git a/pkg/querier/tenantfederation/merge_queryable.go b/pkg/querier/tenantfederation/merge_queryable.go index c01da5e2871..c5e6a664d20 100644 --- a/pkg/querier/tenantfederation/merge_queryable.go +++ b/pkg/querier/tenantfederation/merge_queryable.go @@ -378,7 +378,7 @@ func (m *mergeQuerier) Select(ctx context.Context, sortSeries bool, hints *stora return storage.ErrSeriesSet(err) } - return storage.NewMergeSeriesSet(seriesSets, storage.ChainedSeriesMerge) + return storage.NewMergeSeriesSet(seriesSets, 0, storage.ChainedSeriesMerge) } type addLabelsSeriesSet struct { diff --git a/pkg/querier/worker/worker.go b/pkg/querier/worker/worker.go index dcf22a10e9f..2b9459a116e 100644 --- a/pkg/querier/worker/worker.go +++ b/pkg/querier/worker/worker.go @@ -25,7 +25,6 @@ import ( "github.com/grafana/mimir/pkg/scheduler/schedulerdiscovery" "github.com/grafana/mimir/pkg/util/grpcencoding/s2" - "github.com/grafana/mimir/pkg/util/math" ) type Config struct { @@ -353,7 +352,7 @@ func (w *querierWorker) getDesiredConcurrency() map[string]int { ) // new adjusted minimum to ensure that each in-use instance has at least MinConcurrencyPerRequestQueue connections. - maxConcurrentWithMinPerInstance := math.Max( + maxConcurrentWithMinPerInstance := max( w.maxConcurrentRequests, MinConcurrencyPerRequestQueue*numInUse, ) if maxConcurrentWithMinPerInstance > w.maxConcurrentRequests { diff --git a/pkg/ruler/api.go b/pkg/ruler/api.go index 9eb571db470..a287dc9421c 100644 --- a/pkg/ruler/api.go +++ b/pkg/ruler/api.go @@ -519,7 +519,7 @@ func (a *API) ListRules(w http.ResponseWriter, req *http.Request) { // so their content is empty). 
numRuleGroupsBeforeFiltering := len(rgs) tenantRuleGroups := map[string]rulespb.RuleGroupList{userID: rgs} - tenantRuleGroups = filterRuleGroupsByNotMissing(tenantRuleGroups, missing, a.logger) + tenantRuleGroups = FilterRuleGroupsByNotMissing(tenantRuleGroups, missing, a.logger) var tenantFound bool rgs, tenantFound = tenantRuleGroups[userID] diff --git a/pkg/ruler/notifier.go b/pkg/ruler/notifier.go index 348aa7b9067..621133611c9 100644 --- a/pkg/ruler/notifier.go +++ b/pkg/ruler/notifier.go @@ -27,6 +27,7 @@ import ( "github.com/grafana/mimir/pkg/util" util_log "github.com/grafana/mimir/pkg/util/log" + "github.com/grafana/mimir/pkg/util/validation" ) var ( @@ -51,10 +52,11 @@ func (cfg *NotifierConfig) RegisterFlags(f *flag.FlagSet) { } type OAuth2Config struct { - ClientID string `yaml:"client_id"` - ClientSecret flagext.Secret `yaml:"client_secret"` - TokenURL string `yaml:"token_url"` - Scopes flagext.StringSliceCSV `yaml:"scopes,omitempty"` + ClientID string `yaml:"client_id"` + ClientSecret flagext.Secret `yaml:"client_secret"` + TokenURL string `yaml:"token_url"` + Scopes flagext.StringSliceCSV `yaml:"scopes,omitempty"` + EndpointParams validation.LimitsMap[string] `yaml:"endpoint_params" category:"advanced"` } func (cfg *OAuth2Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) { @@ -62,6 +64,10 @@ func (cfg *OAuth2Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) f.Var(&cfg.ClientSecret, prefix+"client_secret", "OAuth2 client secret.") f.StringVar(&cfg.TokenURL, prefix+"token_url", "", "Endpoint used to fetch access token.") f.Var(&cfg.Scopes, prefix+"scopes", "Optional scopes to include with the token request.") + if !cfg.EndpointParams.IsInitialized() { + cfg.EndpointParams = validation.NewLimitsMap[string](nil) + } + f.Var(&cfg.EndpointParams, prefix+"endpoint-params", "Optional additional URL parameters to send to the token URL.") } func (cfg *OAuth2Config) IsEnabled() bool { @@ -226,6 +232,10 @@ func amConfigWithSD(rulerConfig *Config, url *url.URL, sdConfig discovery.Config Scopes: rulerConfig.Notifier.OAuth2.Scopes, } + if rulerConfig.Notifier.OAuth2.EndpointParams.IsInitialized() { + amConfig.HTTPClientConfig.OAuth2.EndpointParams = rulerConfig.Notifier.OAuth2.EndpointParams.Read() + } + if rulerConfig.Notifier.ProxyURL != "" { url, err := url.Parse(rulerConfig.Notifier.ProxyURL) if err != nil { diff --git a/pkg/ruler/notifier_test.go b/pkg/ruler/notifier_test.go index 54c56f46fee..146de3515d5 100644 --- a/pkg/ruler/notifier_test.go +++ b/pkg/ruler/notifier_test.go @@ -6,7 +6,10 @@ package ruler import ( + "bytes" "errors" + "flag" + "maps" "net/url" "testing" "time" @@ -17,9 +20,11 @@ import ( "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/discovery" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/grafana/mimir/pkg/util" + "github.com/grafana/mimir/pkg/util/validation" ) func TestBuildNotifierConfig(t *testing.T) { @@ -373,6 +378,51 @@ func TestBuildNotifierConfig(t *testing.T) { }, }, }, + { + name: "with OAuth2 and optional endpoint params", + cfg: &Config{ + AlertmanagerURL: "dnssrv+https://_http._tcp.alertmanager-0.default.svc.cluster.local/alertmanager", + Notifier: NotifierConfig{ + OAuth2: OAuth2Config{ + ClientID: "oauth2-client-id", + ClientSecret: flagext.SecretWithValue("test"), + TokenURL: "https://oauth2-token-endpoint.local/token", + EndpointParams: validation.NewLimitsMapWithData[string]( + map[string]string{ + "param1": 
"value1", + "param2": "value2", + }, + nil, + ), + }, + }, + }, + ncfg: &config.Config{ + AlertingConfig: config.AlertingConfig{ + AlertmanagerConfigs: []*config.AlertmanagerConfig{ + { + HTTPClientConfig: config_util.HTTPClientConfig{ + OAuth2: &config_util.OAuth2{ + ClientID: "oauth2-client-id", + ClientSecret: "test", + TokenURL: "https://oauth2-token-endpoint.local/token", + EndpointParams: map[string]string{"param1": "value1", "param2": "value2"}, + }, + }, + APIVersion: "v2", + Scheme: "https", + PathPrefix: "/alertmanager", + ServiceDiscoveryConfigs: discovery.Configs{ + dnsServiceDiscovery{ + Host: "_http._tcp.alertmanager-0.default.svc.cluster.local", + QType: dns.SRV, + }, + }, + }, + }, + }, + }, + }, { name: "with OAuth2 and proxy_url simultaneously, inheriting proxy", cfg: &Config{ @@ -498,6 +548,42 @@ func TestBuildNotifierConfig(t *testing.T) { } } +func TestOAuth2Config_ValidateEndpointParams(t *testing.T) { + for name, tc := range map[string]struct { + args []string + expected validation.LimitsMap[string] + error string + }{ + "basic test": { + args: []string{"-map-flag", "{\"param1\": \"value1\" }"}, + expected: validation.NewLimitsMapWithData(map[string]string{ + "param1": "value1", + }, nil), + }, + "parsing error": { + args: []string{"-map-flag", "{\"hello\": ..."}, + error: "invalid value \"{\\\"hello\\\": ...\" for flag -map-flag: invalid character '.' looking for beginning of value", + }, + } { + t.Run(name, func(t *testing.T) { + v := validation.NewLimitsMap[string](nil) + + fs := flag.NewFlagSet("test", flag.ContinueOnError) + fs.SetOutput(&bytes.Buffer{}) // otherwise errors would go to stderr. + fs.Var(v, "map-flag", "Map flag, you can pass JSON into this") + err := fs.Parse(tc.args) + + if tc.error != "" { + require.NotNil(t, err) + assert.Equal(t, tc.error, err.Error()) + } else { + assert.NoError(t, err) + assert.True(t, maps.Equal(tc.expected.Read(), v.Read())) + } + }) + } +} + func urlMustParse(t *testing.T, raw string) *url.URL { t.Helper() u, err := url.Parse(raw) diff --git a/pkg/ruler/ruler.go b/pkg/ruler/ruler.go index 5f9e27f435a..3e6493b0285 100644 --- a/pkg/ruler/ruler.go +++ b/pkg/ruler/ruler.go @@ -638,7 +638,7 @@ func (r *Ruler) loadRuleGroupsToSync(ctx context.Context, configs map[string]rul // cached for a short period of time. This means that some rule groups discovered by listing // the bucket (cached) may no longer exist because deleted in the meanwhile. For this reason, // we filter out any missing rule group, not considering it as an hard error. - configs = filterRuleGroupsByNotMissing(configs, missing, r.logger) + configs = FilterRuleGroupsByNotMissing(configs, missing, r.logger) return configs, nil } @@ -888,11 +888,11 @@ func filterRuleGroupByEnabled(group *rulespb.RuleGroupDesc, recordingEnabled, al return filtered, removedRules } -// filterRuleGroupsByNotMissing filters out from the input configs all the rules groups which are in the missing list. +// FilterRuleGroupsByNotMissing filters out from the input configs all the rules groups which are in the missing list. // // This function doesn't modify the input configs in place (even if it could) in order to reduce the likelihood of introducing // future bugs, in case the rule groups will be cached in memory. 
-func filterRuleGroupsByNotMissing(configs map[string]rulespb.RuleGroupList, missing rulespb.RuleGroupList, logger log.Logger) (filtered map[string]rulespb.RuleGroupList) { +func FilterRuleGroupsByNotMissing(configs map[string]rulespb.RuleGroupList, missing rulespb.RuleGroupList, logger log.Logger) (filtered map[string]rulespb.RuleGroupList) { // Nothing to do if there are no missing rule groups. if len(missing) == 0 { return configs diff --git a/pkg/ruler/ruler_test.go b/pkg/ruler/ruler_test.go index 390d147295e..b8e35af12a5 100644 --- a/pkg/ruler/ruler_test.go +++ b/pkg/ruler/ruler_test.go @@ -1939,7 +1939,7 @@ func TestFilterRuleGroupsByNotMissing(t *testing.T) { t.Run(testName, func(t *testing.T) { logger := log.NewNopLogger() - actual := filterRuleGroupsByNotMissing(testData.configs, testData.missing, logger) + actual := FilterRuleGroupsByNotMissing(testData.configs, testData.missing, logger) assert.Equal(t, testData.expected, actual) }) } diff --git a/pkg/scheduler/scheduler.go b/pkg/scheduler/scheduler.go index 2aed911103f..10d5aa6ac4e 100644 --- a/pkg/scheduler/scheduler.go +++ b/pkg/scheduler/scheduler.go @@ -19,6 +19,7 @@ import ( "github.com/go-kit/log/level" "github.com/grafana/dskit/cancellation" "github.com/grafana/dskit/grpcclient" + "github.com/grafana/dskit/grpcutil" "github.com/grafana/dskit/httpgrpc" "github.com/grafana/dskit/middleware" "github.com/grafana/dskit/ring" @@ -508,6 +509,11 @@ func (s *Scheduler) forwardRequestToQuerier(querier schedulerpb.SchedulerForQuer QueueTimeNanos: queueTime.Nanoseconds(), }) + if grpcutil.IsCanceled(err) { + // The querier abruptly closing the connection should be the only reason we'd get a cancellation error here. + err = fmt.Errorf("querier disconnected ungracefully") + } + if err != nil { errCh <- fmt.Errorf("failed to send query to querier '%v': %w", querierID, err) span.LogFields(otlog.Message("sending query to querier failed"), otlog.Error(err)) @@ -519,6 +525,12 @@ func (s *Scheduler) forwardRequestToQuerier(querier schedulerpb.SchedulerForQuer span.Finish() _, err = querier.Recv() + + if grpcutil.IsCanceled(err) { + // The querier abruptly closing the connection should be the only reason we'd get a cancellation error here. + err = fmt.Errorf("querier disconnected ungracefully") + } + if err != nil { errCh <- fmt.Errorf("failed to receive response from querier '%v': %w", querierID, err) return diff --git a/pkg/storage/ingest/config.go b/pkg/storage/ingest/config.go index 546e11b7a05..21fc5564d98 100644 --- a/pkg/storage/ingest/config.go +++ b/pkg/storage/ingest/config.go @@ -22,9 +22,9 @@ const ( consumeFromEnd = "end" consumeFromTimestamp = "timestamp" - kafkaConfigFlagPrefix = "ingest-storage.kafka" - targetConsumerLagAtStartupFlag = kafkaConfigFlagPrefix + ".target-consumer-lag-at-startup" - maxConsumerLagAtStartupFlag = kafkaConfigFlagPrefix + ".max-consumer-lag-at-startup" + kafkaConfigFlagPrefix = "ingest-storage.kafka." 
+	targetConsumerLagAtStartupFlag = kafkaConfigFlagPrefix + "target-consumer-lag-at-startup"
+	maxConsumerLagAtStartupFlag    = kafkaConfigFlagPrefix + "max-consumer-lag-at-startup"
 )
 
 var (
@@ -38,6 +38,7 @@ var (
 	ErrInconsistentSASLCredentials       = fmt.Errorf("the SASL username and password must be both configured to enable SASL authentication")
 	ErrInvalidIngestionConcurrencyMax    = errors.New("ingest-storage.kafka.ingestion-concurrency-max must either be set to 0 or to a value greater than 0")
 	ErrInvalidIngestionConcurrencyParams = errors.New("ingest-storage.kafka.ingestion-concurrency-queue-capacity, ingest-storage.kafka.ingestion-concurrency-estimated-bytes-per-sample, ingest-storage.kafka.ingestion-concurrency-batch-size and ingest-storage.kafka.ingestion-concurrency-target-flushes-per-shard must be greater than 0")
+	ErrInvalidAutoCreateTopicParams      = errors.New("ingest-storage.kafka.auto-create-topic-default-partitions must be -1 or greater than 0 when ingest-storage.kafka.auto-create-topic-enabled is set to true")
 
 	consumeFromPositionOptions = []string{consumeFromLastOffset, consumeFromStart, consumeFromEnd, consumeFromTimestamp}
@@ -58,7 +59,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
 	f.BoolVar(&cfg.Enabled, "ingest-storage.enabled", false, "True to enable the ingestion via object storage.")
 
 	cfg.KafkaConfig.RegisterFlagsWithPrefix(kafkaConfigFlagPrefix, f)
-	cfg.Migration.RegisterFlagsWithPrefix("ingest-storage.migration", f)
+	cfg.Migration.RegisterFlagsWithPrefix("ingest-storage.migration.", f)
 }
 
 // Validate the config.
@@ -109,8 +110,7 @@ type KafkaConfig struct {
 	// Used when logging unsampled client errors. Set from ingester's ErrorSampleRate.
 	FallbackClientErrorSampleRate int64 `yaml:"-"`
 
-	StartupFetchConcurrency int `yaml:"startup_fetch_concurrency"`
-	OngoingFetchConcurrency int `yaml:"ongoing_fetch_concurrency"`
+	FetchConcurrencyMax int `yaml:"fetch_concurrency_max"`
 
 	UseCompressedBytesAsFetchMaxBytes bool `yaml:"use_compressed_bytes_as_fetch_max_bytes"`
 	MaxBufferedBytes                  int  `yaml:"max_buffered_bytes"`
@@ -145,47 +145,46 @@ func (cfg *KafkaConfig) RegisterFlags(f *flag.FlagSet) {
 func (cfg *KafkaConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
 	cfg.concurrentFetchersFetchBackoffConfig = defaultFetchBackoffConfig
 
-	f.StringVar(&cfg.Address, prefix+".address", "", "The Kafka backend address.")
-	f.StringVar(&cfg.Topic, prefix+".topic", "", "The Kafka topic name.")
-	f.StringVar(&cfg.ClientID, prefix+".client-id", "", "The Kafka client ID.")
-	f.DurationVar(&cfg.DialTimeout, prefix+".dial-timeout", 2*time.Second, "The maximum time allowed to open a connection to a Kafka broker.")
-	f.DurationVar(&cfg.WriteTimeout, prefix+".write-timeout", 10*time.Second, "How long to wait for an incoming write request to be successfully committed to the Kafka backend.")
-	f.IntVar(&cfg.WriteClients, prefix+".write-clients", 1, "The number of Kafka clients used by producers. When the configured number of clients is greater than 1, partitions are sharded among Kafka clients. 
A higher number of clients may provide higher write throughput at the cost of additional Metadata requests pressure to Kafka.")
+	f.StringVar(&cfg.Address, prefix+"address", "", "The Kafka backend address.")
+	f.StringVar(&cfg.Topic, prefix+"topic", "", "The Kafka topic name.")
+	f.StringVar(&cfg.ClientID, prefix+"client-id", "", "The Kafka client ID.")
+	f.DurationVar(&cfg.DialTimeout, prefix+"dial-timeout", 2*time.Second, "The maximum time allowed to open a connection to a Kafka broker.")
+	f.DurationVar(&cfg.WriteTimeout, prefix+"write-timeout", 10*time.Second, "How long to wait for an incoming write request to be successfully committed to the Kafka backend.")
+	f.IntVar(&cfg.WriteClients, prefix+"write-clients", 1, "The number of Kafka clients used by producers. When the configured number of clients is greater than 1, partitions are sharded among Kafka clients. A higher number of clients may provide higher write throughput at the cost of additional Metadata request pressure to Kafka.")
 
-	f.StringVar(&cfg.SASLUsername, prefix+".sasl-username", "", "The username used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password.")
-	f.Var(&cfg.SASLPassword, prefix+".sasl-password", "The password used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password.")
+	f.StringVar(&cfg.SASLUsername, prefix+"sasl-username", "", "The username used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password.")
+	f.Var(&cfg.SASLPassword, prefix+"sasl-password", "The password used to authenticate to Kafka using the SASL plain mechanism. To enable SASL, configure both the username and password.")
 
-	f.StringVar(&cfg.ConsumerGroup, prefix+".consumer-group", "", "The consumer group used by the consumer to track the last consumed offset. The consumer group must be different for each ingester. If the configured consumer group contains the '<partition>' placeholder, it is replaced with the actual partition ID owned by the ingester. When empty (recommended), Mimir uses the ingester instance ID to guarantee uniqueness.")
-	f.DurationVar(&cfg.ConsumerGroupOffsetCommitInterval, prefix+".consumer-group-offset-commit-interval", time.Second, "How frequently a consumer should commit the consumed offset to Kafka. The last committed offset is used at startup to continue the consumption from where it was left.")
+	f.StringVar(&cfg.ConsumerGroup, prefix+"consumer-group", "", "The consumer group used by the consumer to track the last consumed offset. The consumer group must be different for each ingester. If the configured consumer group contains the '<partition>' placeholder, it is replaced with the actual partition ID owned by the ingester. When empty (recommended), Mimir uses the ingester instance ID to guarantee uniqueness.")
+	f.DurationVar(&cfg.ConsumerGroupOffsetCommitInterval, prefix+"consumer-group-offset-commit-interval", time.Second, "How frequently a consumer should commit the consumed offset to Kafka. 
The last committed offset is used at startup to continue the consumption from where it was left.") - f.DurationVar(&cfg.LastProducedOffsetPollInterval, prefix+".last-produced-offset-poll-interval", time.Second, "How frequently to poll the last produced offset, used to enforce strong read consistency.") - f.DurationVar(&cfg.LastProducedOffsetRetryTimeout, prefix+".last-produced-offset-retry-timeout", 10*time.Second, "How long to retry a failed request to get the last produced offset.") + f.DurationVar(&cfg.LastProducedOffsetPollInterval, prefix+"last-produced-offset-poll-interval", time.Second, "How frequently to poll the last produced offset, used to enforce strong read consistency.") + f.DurationVar(&cfg.LastProducedOffsetRetryTimeout, prefix+"last-produced-offset-retry-timeout", 10*time.Second, "How long to retry a failed request to get the last produced offset.") - f.StringVar(&cfg.ConsumeFromPositionAtStartup, prefix+".consume-from-position-at-startup", consumeFromLastOffset, fmt.Sprintf("From which position to start consuming the partition at startup. Supported options: %s.", strings.Join(consumeFromPositionOptions, ", "))) - f.Int64Var(&cfg.ConsumeFromTimestampAtStartup, prefix+".consume-from-timestamp-at-startup", 0, fmt.Sprintf("Milliseconds timestamp after which the consumption of the partition starts at startup. Only applies when consume-from-position-at-startup is %s", consumeFromTimestamp)) + f.StringVar(&cfg.ConsumeFromPositionAtStartup, prefix+"consume-from-position-at-startup", consumeFromLastOffset, fmt.Sprintf("From which position to start consuming the partition at startup. Supported options: %s.", strings.Join(consumeFromPositionOptions, ", "))) + f.Int64Var(&cfg.ConsumeFromTimestampAtStartup, prefix+"consume-from-timestamp-at-startup", 0, fmt.Sprintf("Milliseconds timestamp after which the consumption of the partition starts at startup. Only applies when consume-from-position-at-startup is %s", consumeFromTimestamp)) howToDisableConsumerLagAtStartup := fmt.Sprintf("Set both -%s and -%s to 0 to disable waiting for maximum consumer lag being honored at startup.", targetConsumerLagAtStartupFlag, maxConsumerLagAtStartupFlag) f.DurationVar(&cfg.TargetConsumerLagAtStartup, targetConsumerLagAtStartupFlag, 2*time.Second, "The best-effort maximum lag a consumer tries to achieve at startup. "+howToDisableConsumerLagAtStartup) f.DurationVar(&cfg.MaxConsumerLagAtStartup, maxConsumerLagAtStartupFlag, 15*time.Second, "The guaranteed maximum lag before a consumer is considered to have caught up reading from a partition at startup, becomes ACTIVE in the hash ring and passes the readiness check. "+howToDisableConsumerLagAtStartup) - f.BoolVar(&cfg.AutoCreateTopicEnabled, prefix+".auto-create-topic-enabled", true, "Enable auto-creation of Kafka topic if it doesn't exist.") - f.IntVar(&cfg.AutoCreateTopicDefaultPartitions, prefix+".auto-create-topic-default-partitions", 0, "When auto-creation of Kafka topic is enabled and this value is positive, Kafka's num.partitions configuration option is set on Kafka brokers with this value when Mimir component that uses Kafka starts. This configuration option specifies the default number of partitions that the Kafka broker uses for auto-created topics. Note that this is a Kafka-cluster wide setting, and applies to any auto-created topic. 
If the setting of num.partitions fails, Mimir proceeds anyways, but auto-created topics could have an incorrect number of partitions.")
+	f.BoolVar(&cfg.AutoCreateTopicEnabled, prefix+"auto-create-topic-enabled", true, "Enable auto-creation of Kafka topic on startup if it doesn't exist. If creating the topic fails and the topic doesn't already exist, Mimir will fail to start.")
+	f.IntVar(&cfg.AutoCreateTopicDefaultPartitions, prefix+"auto-create-topic-default-partitions", -1, "When auto-creation of Kafka topic is enabled and this value is positive, Mimir will create the topic with this number of partitions. When the value is -1 the Kafka broker will use the default number of partitions (num.partitions configuration).")
 
-	f.IntVar(&cfg.ProducerMaxRecordSizeBytes, prefix+".producer-max-record-size-bytes", maxProducerRecordDataBytesLimit, "The maximum size of a Kafka record data that should be generated by the producer. An incoming write request larger than this size is split into multiple Kafka records. We strongly recommend to not change this setting unless for testing purposes.")
-	f.Int64Var(&cfg.ProducerMaxBufferedBytes, prefix+".producer-max-buffered-bytes", 1024*1024*1024, "The maximum size of (uncompressed) buffered and unacknowledged produced records sent to Kafka. The produce request fails once this limit is reached. This limit is per Kafka client. 0 to disable the limit.")
+	f.IntVar(&cfg.ProducerMaxRecordSizeBytes, prefix+"producer-max-record-size-bytes", maxProducerRecordDataBytesLimit, "The maximum size of a Kafka record's data that should be generated by the producer. An incoming write request larger than this size is split into multiple Kafka records. We strongly recommend not changing this setting except for testing purposes.")
+	f.Int64Var(&cfg.ProducerMaxBufferedBytes, prefix+"producer-max-buffered-bytes", 1024*1024*1024, "The maximum size of (uncompressed) buffered and unacknowledged produced records sent to Kafka. The produce request fails once this limit is reached. This limit is per Kafka client. 0 to disable the limit.")
 
-	f.DurationVar(&cfg.WaitStrongReadConsistencyTimeout, prefix+".wait-strong-read-consistency-timeout", 20*time.Second, "The maximum allowed for a read requests processed by an ingester to wait until strong read consistency is enforced. 0 to disable the timeout.")
+	f.DurationVar(&cfg.WaitStrongReadConsistencyTimeout, prefix+"wait-strong-read-consistency-timeout", 20*time.Second, "The maximum time a read request processed by an ingester is allowed to wait until strong read consistency is enforced. 0 to disable the timeout.")
 
-	f.IntVar(&cfg.StartupFetchConcurrency, prefix+".startup-fetch-concurrency", 0, "The number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. 0 to disable.")
-	f.IntVar(&cfg.OngoingFetchConcurrency, prefix+".ongoing-fetch-concurrency", 0, "The number of concurrent fetch requests that the ingester makes when reading data continuously from Kafka after startup. Is disabled unless "+prefix+".startup-fetch-concurrency is greater than 0. 0 to disable.")
-	f.BoolVar(&cfg.UseCompressedBytesAsFetchMaxBytes, prefix+".use-compressed-bytes-as-fetch-max-bytes", true, "When enabled, the fetch request MaxBytes field is computed using the compressed size of previous records. When disabled, MaxBytes is computed using uncompressed bytes. 
Different Kafka implementations interpret MaxBytes differently.") - f.IntVar(&cfg.MaxBufferedBytes, prefix+".max-buffered-bytes", 100_000_000, "The maximum number of buffered records ready to be processed. This limit applies to the sum of all inflight requests. Set to 0 to disable the limit.") + f.IntVar(&cfg.FetchConcurrencyMax, prefix+"fetch-concurrency-max", 0, "The maximum number of concurrent fetch requests that the ingester makes when reading data from Kafka during startup. Concurrent fetch requests are issued only when there is sufficient backlog of records to consume. 0 to disable.") + f.BoolVar(&cfg.UseCompressedBytesAsFetchMaxBytes, prefix+"use-compressed-bytes-as-fetch-max-bytes", true, "When enabled, the fetch request MaxBytes field is computed using the compressed size of previous records. When disabled, MaxBytes is computed using uncompressed bytes. Different Kafka implementations interpret MaxBytes differently.") + f.IntVar(&cfg.MaxBufferedBytes, prefix+"max-buffered-bytes", 100_000_000, "The maximum number of buffered records ready to be processed. This limit applies to the sum of all inflight requests. Set to 0 to disable the limit.") - f.IntVar(&cfg.IngestionConcurrencyMax, prefix+".ingestion-concurrency-max", 0, "The maximum number of concurrent ingestion streams to the TSDB head. Every tenant has their own set of streams. 0 to disable.") - f.IntVar(&cfg.IngestionConcurrencyBatchSize, prefix+".ingestion-concurrency-batch-size", 150, "The number of timeseries to batch together before ingesting to the TSDB head. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") - f.IntVar(&cfg.IngestionConcurrencyQueueCapacity, prefix+".ingestion-concurrency-queue-capacity", 5, "The number of batches to prepare and queue to ingest to the TSDB head. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") - f.IntVar(&cfg.IngestionConcurrencyTargetFlushesPerShard, prefix+".ingestion-concurrency-target-flushes-per-shard", 80, "The expected number of times to ingest timeseries to the TSDB head after batching. With fewer flushes, the overhead of splitting up the work is higher than the benefit of parallelization. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") - f.IntVar(&cfg.IngestionConcurrencyEstimatedBytesPerSample, prefix+".ingestion-concurrency-estimated-bytes-per-sample", 500, "The estimated number of bytes a sample has at time of ingestion. This value is used to estimate the timeseries without decompressing them. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") + f.IntVar(&cfg.IngestionConcurrencyMax, prefix+"ingestion-concurrency-max", 0, "The maximum number of concurrent ingestion streams to the TSDB head. Every tenant has their own set of streams. 0 to disable.") + f.IntVar(&cfg.IngestionConcurrencyBatchSize, prefix+"ingestion-concurrency-batch-size", 150, "The number of timeseries to batch together before ingesting to the TSDB head. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") + f.IntVar(&cfg.IngestionConcurrencyQueueCapacity, prefix+"ingestion-concurrency-queue-capacity", 5, "The number of batches to prepare and queue to ingest to the TSDB head. 
Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") + f.IntVar(&cfg.IngestionConcurrencyTargetFlushesPerShard, prefix+"ingestion-concurrency-target-flushes-per-shard", 80, "The expected number of times to ingest timeseries to the TSDB head after batching. With fewer flushes, the overhead of splitting up the work is higher than the benefit of parallelization. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") + f.IntVar(&cfg.IngestionConcurrencyEstimatedBytesPerSample, prefix+"ingestion-concurrency-estimated-bytes-per-sample", 500, "The estimated number of bytes a sample has at time of ingestion. This value is used to estimate the timeseries without decompressing them. Only use this setting when -ingest-storage.kafka.ingestion-concurrency-max is greater than 0.") } func (cfg *KafkaConfig) Validate() error { @@ -221,12 +220,8 @@ func (cfg *KafkaConfig) Validate() error { return ErrInvalidMaxConsumerLagAtStartup } - if cfg.StartupFetchConcurrency < 0 { - return fmt.Errorf("ingest-storage.kafka.startup-fetch-concurrency must be greater or equal to 0") - } - - if cfg.OngoingFetchConcurrency > 0 && cfg.StartupFetchConcurrency <= 0 { - return fmt.Errorf("ingest-storage.kafka.startup-fetch-concurrency must be greater than 0 when ingest-storage.kafka.ongoing-fetch-concurrency is greater than 0") + if cfg.FetchConcurrencyMax < 0 { + return fmt.Errorf("ingest-storage.kafka.fetch-concurrency-max must be greater than or equal to 0") } if cfg.MaxBufferedBytes >= math.MaxInt32 { @@ -247,6 +242,10 @@ func (cfg *KafkaConfig) Validate() error { } } + if partitions := cfg.AutoCreateTopicDefaultPartitions; cfg.AutoCreateTopicEnabled && (partitions != -1 && partitions < 1) { + return ErrInvalidAutoCreateTopicParams + } + return nil } @@ -270,5 +269,5 @@ func (cfg *MigrationConfig) RegisterFlags(f *flag.FlagSet) { } func (cfg *MigrationConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) { - f.BoolVar(&cfg.DistributorSendToIngestersEnabled, prefix+".distributor-send-to-ingesters-enabled", false, "When both this option and ingest storage are enabled, distributors write to both Kafka and ingesters. A write request is considered successful only when written to both backends.") + f.BoolVar(&cfg.DistributorSendToIngestersEnabled, prefix+"distributor-send-to-ingesters-enabled", false, "When both this option and ingest storage are enabled, distributors write to both Kafka and ingesters. 
A write request is considered successful only when written to both backends.")
 }
diff --git a/pkg/storage/ingest/config_test.go b/pkg/storage/ingest/config_test.go
index 7ea959d9639..6ee7b218f9b 100644
--- a/pkg/storage/ingest/config_test.go
+++ b/pkg/storage/ingest/config_test.go
@@ -210,6 +210,25 @@ func TestConfig_Validate(t *testing.T) {
 			},
 			expectedErr: ErrInvalidIngestionConcurrencyParams,
 		},
+		"should fail when auto create topic default partitions is lower than 1 and not -1": {
+			setup: func(cfg *Config) {
+				cfg.Enabled = true
+				cfg.KafkaConfig.Address = "localhost"
+				cfg.KafkaConfig.Topic = "test"
+				cfg.KafkaConfig.AutoCreateTopicEnabled = true
+				cfg.KafkaConfig.AutoCreateTopicDefaultPartitions = -100
+			},
+			expectedErr: ErrInvalidAutoCreateTopicParams,
+		},
+		"should pass when auto create topic default partitions is -1 (using Kafka broker's default)": {
+			setup: func(cfg *Config) {
+				cfg.Enabled = true
+				cfg.KafkaConfig.Address = "localhost"
+				cfg.KafkaConfig.Topic = "test"
+				cfg.KafkaConfig.AutoCreateTopicEnabled = true
+				cfg.KafkaConfig.AutoCreateTopicDefaultPartitions = -1
+			},
+		},
 	}
 
 	for testName, testData := range tests {
diff --git a/pkg/storage/ingest/fetcher.go b/pkg/storage/ingest/fetcher.go
index 96d3c8e8f62..902c6930e58 100644
--- a/pkg/storage/ingest/fetcher.go
+++ b/pkg/storage/ingest/fetcher.go
@@ -53,9 +53,6 @@ type fetcher interface {
 	// record and use the returned context when doing something that is common to all records.
 	PollFetches(context.Context) (kgo.Fetches, context.Context)
 
-	// Update updates the fetcher with the given concurrency.
-	Update(ctx context.Context, concurrency int)
-
 	// Stop stops the fetcher.
 	Stop()
 
@@ -400,15 +397,6 @@ func (r *concurrentFetchers) Stop() {
 	level.Info(r.logger).Log("msg", "stopped concurrent fetchers", "last_returned_offset", r.lastReturnedOffset)
 }
 
-// Update implements fetcher
-func (r *concurrentFetchers) Update(ctx context.Context, concurrency int) {
-	r.Stop()
-	r.done = make(chan struct{})
-
-	r.wg.Add(1)
-	go r.start(ctx, r.lastReturnedOffset+1, concurrency)
-}
-
 // PollFetches implements fetcher
 func (r *concurrentFetchers) PollFetches(ctx context.Context) (kgo.Fetches, context.Context) {
 	waitStartTime := time.Now()
@@ -863,6 +851,10 @@ func (r *concurrentFetchers) start(ctx context.Context, startOffset int64, concu
 			// When there isn't a fetch in flight the HWM will never be updated, so we dispatch the next fetchWant even if that means it's above the HWM.
 			dispatchNextWant = wants
 		}
+
+		// Periodically update our estimation, so it's exported as a metric. 
+ r.estimatedBytesPerRecord.Store(int64(nextFetch.estimatedBytesPerRecord)) + select { case <-r.done: return diff --git a/pkg/storage/ingest/fetcher_test.go b/pkg/storage/ingest/fetcher_test.go index 5e3cec8a16d..34fe74dd89c 100644 --- a/pkg/storage/ingest/fetcher_test.go +++ b/pkg/storage/ingest/fetcher_test.go @@ -570,188 +570,6 @@ func TestConcurrentFetchers(t *testing.T) { } }) - t.Run("concurrency can be updated", func(t *testing.T) { - t.Parallel() - - ctx, cancel := context.WithCancel(context.Background()) - defer cancel() - rec1 := []byte("record-1") - rec2 := []byte("record-2") - rec3 := []byte("record-3") - - _, clusterAddr := testkafka.CreateCluster(t, partitionID+1, topicName) - client := newKafkaProduceClient(t, clusterAddr) - fetchers := createConcurrentFetchers(ctx, t, client, topicName, partitionID, 0, concurrency, 0) - - produceRecordAndAssert := func(record []byte) { - producedOffset := produceRecord(ctx, t, client, topicName, partitionID, record) - // verify that the record is fetched. - - fetches := longPollFetches(fetchers, 1, 5*time.Second) - - require.Equal(t, fetches.Records()[0].Value, record) - require.Equal(t, fetches.Records()[0].Offset, producedOffset) - } - - // Ensure that the fetchers work with the initial concurrency. - produceRecordAndAssert(rec1) - - // Now, update the concurrency. - fetchers.Update(ctx, 1) - - // Ensure that the fetchers work with the updated concurrency. - produceRecordAndAssert(rec2) - - // Update and verify again. - fetchers.Update(ctx, 10) - produceRecordAndAssert(rec3) - - // We expect no more records returned by PollFetches() and no buffered records. - pollFetchesAndAssertNoRecords(t, fetchers) - }) - - t.Run("update concurrency with continuous production", func(t *testing.T) { - t.Parallel() - - const ( - testDuration = 10 * time.Second - produceInterval = 10 * time.Millisecond - initialConcurrency = 2 - ) - - ctx, cancel := context.WithTimeout(context.Background(), testDuration) - defer cancel() - - produceCtx, cancelProduce := context.WithTimeout(context.Background(), testDuration) - defer cancelProduce() - - _, clusterAddr := testkafka.CreateCluster(t, partitionID+1, topicName) - client := newKafkaProduceClient(t, clusterAddr) - - producedCount := atomic.NewInt64(0) - - // Start producing records continuously - go func() { - ticker := time.NewTicker(produceInterval) - defer ticker.Stop() - - for { - select { - case <-produceCtx.Done(): - return - case <-ticker.C: - count := producedCount.Inc() - record := fmt.Sprintf("record-%d", count) - // Use context.Background() so that we don't race with the test context being cancelled. - produceRecord(context.Background(), t, client, topicName, partitionID, []byte(record)) - } - } - }() - - fetchers := createConcurrentFetchers(ctx, t, client, topicName, partitionID, 0, initialConcurrency, 0) - - fetchedRecords := make([]*kgo.Record, 0) - - // Initial fetch with starting concurrency - fetches := longPollFetches(fetchers, math.MaxInt, 2*time.Second) - initialFetched := fetches.NumRecords() - - // Update to higher concurrency - fetchers.Update(ctx, 4) - fetches = longPollFetches(fetchers, math.MaxInt, 3*time.Second) - highConcurrencyFetched := fetches.NumRecords() - - // Update to lower concurrency - fetchers.Update(ctx, 1) - fetches = longPollFetches(fetchers, math.MaxInt, 3*time.Second) - lowerConcurrentFetched := fetches.NumRecords() - - cancelProduce() - // Consume everything that's left now. 
- fetches = longPollFetches(fetchers, math.MaxInt, 2*time.Second) - finalFetched := fetches.NumRecords() - totalProduced := producedCount.Load() - totalFetched := initialFetched + highConcurrencyFetched + lowerConcurrentFetched + finalFetched - - // Verify fetched records - assert.True(t, totalFetched > 0, "Expected to fetch some records") - assert.Equal(t, int64(totalFetched), totalProduced, "Should not fetch more records than produced") - assert.True(t, highConcurrencyFetched > initialFetched, "Expected to fetch more records with higher concurrency") - - // Verify record contents - for i, record := range fetchedRecords { - expectedContent := fmt.Sprintf("record-%d", i+1) - assert.Equal(t, expectedContent, string(record.Value), - "Record %d has unexpected content: %s", i, string(record.Value)) - } - - // We expect no more records returned by PollFetches() and no buffered records. - pollFetchesAndAssertNoRecords(t, fetchers) - - // Log some statistics - t.Logf("Total produced: %d, Total fetched: %d", totalProduced, totalFetched) - t.Logf("Fetched with initial concurrency: %d", initialFetched) - t.Logf("Fetched with high concurrency: %d", highConcurrencyFetched) - t.Logf("Fetched with low concurrency: %d", totalFetched-initialFetched-highConcurrencyFetched) - }) - - t.Run("consume from end and update immediately", func(t *testing.T) { - t.Parallel() - - const ( - initialRecords = 100 - additionalRecords = 50 - initialConcurrency = 2 - updatedConcurrency = 4 - ) - - ctx := context.Background() - - _, clusterAddr := testkafka.CreateCluster(t, partitionID+1, topicName) - client := newKafkaProduceClient(t, clusterAddr) - - // Produce initial records - for i := 0; i < initialRecords; i++ { - record := fmt.Sprintf("initial-record-%d", i+1) - produceRecord(ctx, t, client, topicName, partitionID, []byte(record)) - } - - // Start concurrent fetchers from the end - fetchers := createConcurrentFetchers(ctx, t, client, topicName, partitionID, kafkaOffsetEnd, initialConcurrency, 0) - - // Immediately update concurrency - fetchers.Update(ctx, updatedConcurrency) - - // Produce additional records - for i := 0; i < additionalRecords; i++ { - record := fmt.Sprintf("additional-record-%d", i+1) - produceRecord(ctx, t, client, topicName, partitionID, []byte(record)) - } - - // Fetch records - fetches := longPollFetches(fetchers, additionalRecords, 5*time.Second) - fetchedRecords := fetches.Records() - - // Verify fetched records - assert.LessOrEqual(t, len(fetchedRecords), additionalRecords, - "Should not fetch more records than produced after start") - - // Verify record contents - for i, record := range fetchedRecords { - expectedContent := fmt.Sprintf("additional-record-%d", i+1) - assert.Equal(t, expectedContent, string(record.Value), - "Record %d has unexpected content: %s", i, string(record.Value)) - } - - // We expect no more records returned by PollFetches() and no buffered records. 
- pollFetchesAndAssertNoRecords(t, fetchers) - - // Log some statistics - t.Logf("Total records produced: %d", initialRecords+additionalRecords) - t.Logf("Records produced after start: %d", additionalRecords) - t.Logf("Records fetched: %d", len(fetchedRecords)) - }) - t.Run("staggered production", func(t *testing.T) { t.Parallel() diff --git a/pkg/storage/ingest/reader.go b/pkg/storage/ingest/reader.go index f021d43e880..4abe1932138 100644 --- a/pkg/storage/ingest/reader.go +++ b/pkg/storage/ingest/reader.go @@ -39,6 +39,9 @@ const ( // This is usually used when there aren't enough records available to fulfil MinBytes, so the broker waits for more records to be produced. // Warpstream clamps this between 5s and 30s. defaultMinBytesMaxWaitTime = 5 * time.Second + + // ReaderMetricsPrefix is the reader metrics prefix used by the ingest storage. + ReaderMetricsPrefix = "cortex_ingest_storage_reader" ) var ( @@ -132,11 +135,6 @@ func (r *PartitionReader) Stop() { // Given the partition reader has no concurrency it doesn't support stopping anything. } -// Update implements fetcher -func (r *PartitionReader) Update(_ context.Context, _ int) { - // Given the partition reader has no concurrency it doesn't support updates. -} - func (r *PartitionReader) BufferedRecords() int64 { var fcount, ccount int64 @@ -175,7 +173,9 @@ func (r *PartitionReader) EstimatedBytesPerRecord() int64 { func (r *PartitionReader) start(ctx context.Context) (returnErr error) { if r.kafkaCfg.AutoCreateTopicEnabled { - setDefaultNumberOfPartitionsForAutocreatedTopics(r.kafkaCfg, r.logger) + if err := CreateTopic(r.kafkaCfg, r.logger); err != nil { + return err + } } // Stop dependencies if the start() fails. @@ -231,7 +231,7 @@ func (r *PartitionReader) start(ctx context.Context) (returnErr error) { return errors.Wrap(err, "starting service manager") } - if r.kafkaCfg.StartupFetchConcurrency > 0 { + if r.kafkaCfg.FetchConcurrencyMax > 0 { // When concurrent fetch is enabled we manually fetch from the partition so we don't want the Kafka // client to buffer any record. 
However, we still want to configure partition consumption so that // the partition metadata is kept updated by the client (our concurrent fetcher requires metadata to @@ -246,7 +246,7 @@ func (r *PartitionReader) start(ctx context.Context) (returnErr error) { r.kafkaCfg.Topic: {r.partitionID: kgo.NewOffset().At(startOffset)}, }) - f, err := newConcurrentFetchers(ctx, r.client.Load(), r.logger, r.kafkaCfg.Topic, r.partitionID, startOffset, r.kafkaCfg.StartupFetchConcurrency, int32(r.kafkaCfg.MaxBufferedBytes), r.kafkaCfg.UseCompressedBytesAsFetchMaxBytes, r.concurrentFetchersMinBytesMaxWaitTime, offsetsClient, startOffsetReader, r.kafkaCfg.concurrentFetchersFetchBackoffConfig, &r.metrics) + f, err := newConcurrentFetchers(ctx, r.client.Load(), r.logger, r.kafkaCfg.Topic, r.partitionID, startOffset, r.kafkaCfg.FetchConcurrencyMax, int32(r.kafkaCfg.MaxBufferedBytes), r.kafkaCfg.UseCompressedBytesAsFetchMaxBytes, r.concurrentFetchersMinBytesMaxWaitTime, offsetsClient, startOffsetReader, r.kafkaCfg.concurrentFetchersFetchBackoffConfig, &r.metrics) if err != nil { return errors.Wrap(err, "creating concurrent fetchers during startup") } @@ -300,8 +300,6 @@ func (r *PartitionReader) stopDependencies() error { } func (r *PartitionReader) run(ctx context.Context) error { - r.switchToOngoingFetcher(ctx) - for ctx.Err() == nil { err := r.processNextFetches(ctx, r.metrics.receiveDelayWhenRunning) if err != nil && !errors.Is(err, context.Canceled) { @@ -313,64 +311,6 @@ func (r *PartitionReader) run(ctx context.Context) error { return nil } -// switchToOngoingFetcher switches to the configured ongoing fetcher. This function could be -// called multiple times. -func (r *PartitionReader) switchToOngoingFetcher(ctx context.Context) { - if r.kafkaCfg.StartupFetchConcurrency == r.kafkaCfg.OngoingFetchConcurrency { - // we're already using the same settings, no need to switch - return - } - - if r.kafkaCfg.StartupFetchConcurrency > 0 && r.kafkaCfg.OngoingFetchConcurrency > 0 { - // No need to switch the fetcher, just update the concurrency. - r.getFetcher().Update(ctx, r.kafkaCfg.OngoingFetchConcurrency) - return - } - - if r.kafkaCfg.StartupFetchConcurrency > 0 && r.kafkaCfg.OngoingFetchConcurrency == 0 { - if r.getFetcher() == r { - // This method has been called before, no need to switch the fetcher. - return - } - - level.Info(r.logger).Log("msg", "partition reader is switching to non-concurrent fetcher") - - // Stop the current fetcher before replacing it. - r.getFetcher().Stop() - - // We need to switch to franz-go for ongoing fetches. - // If we've already fetched some records, we should discard them from franz-go and start from the last consumed offset. - r.setFetcher(r) - - lastConsumed := r.consumedOffsetWatcher.LastConsumedOffset() - if lastConsumed == -1 { - // We haven't consumed any records yet with the other fetcher. - // - // The franz-go client is initialized to start consuming from the same place as the other fetcher. - // We can just use the client, but we have to resume the fetching because it was previously paused. - r.client.Load().ResumeFetchPartitions(map[string][]int32{ - r.kafkaCfg.Topic: {r.partitionID}, - }) - return - } - - // The consumption is paused so in theory the client shouldn't have any buffered records. However, to start - // from a clean setup and have the guarantee that we're not going to read any previously buffered record, - // we do remove the partition consumption (this clears the buffer), then we resume the fetching and finally - // we add the consumption back. 
- r.client.Load().RemoveConsumePartitions(map[string][]int32{ - r.kafkaCfg.Topic: {r.partitionID}, - }) - r.client.Load().ResumeFetchPartitions(map[string][]int32{ - r.kafkaCfg.Topic: {r.partitionID}, - }) - r.client.Load().AddConsumePartitions(map[string]map[int32]kgo.Offset{ - // Resume from the next unconsumed offset. - r.kafkaCfg.Topic: {r.partitionID: kgo.NewOffset().At(lastConsumed + 1)}, - }) - } -} - func (r *PartitionReader) processNextFetches(ctx context.Context, delayObserver prometheus.Observer) error { fetches, fetchCtx := r.getFetcher().PollFetches(ctx) // Propagate the fetching span to consuming the records. @@ -421,10 +361,6 @@ func (r *PartitionReader) processNextFetchesUntilTargetOrMaxLagHonored(ctx conte timedCtx, cancel := context.WithTimeoutCause(ctx, 2*maxLag, errWaitTargetLagDeadlineExceeded) defer cancel() - // Don't use timedCtx because we want the fetchers to continue running - // At this point we're close enough to the end of the partition that we should switch to the more responsive fetcher. - r.switchToOngoingFetcher(ctx) - return r.processNextFetchesUntilLagHonored(timedCtx, targetLag, logger) }, @@ -1099,7 +1035,7 @@ func newReaderMetrics(partitionID int32, reg prometheus.Registerer, metricsSourc }), strongConsistencyInstrumentation: NewStrongReadConsistencyInstrumentation[struct{}](component, reg), lastConsumedOffset: lastConsumedOffset, - kprom: NewKafkaReaderClientMetrics(component, reg), + kprom: NewKafkaReaderClientMetrics(ReaderMetricsPrefix, component, reg), missedRecords: promauto.With(reg).NewCounter(prometheus.CounterOpts{ Name: "cortex_ingest_storage_reader_missed_records_total", Help: "The number of offsets that were never consumed by the reader because they weren't fetched.", diff --git a/pkg/storage/ingest/reader_client.go b/pkg/storage/ingest/reader_client.go index 628fa545fac..ef5fda2b5b8 100644 --- a/pkg/storage/ingest/reader_client.go +++ b/pkg/storage/ingest/reader_client.go @@ -36,8 +36,8 @@ func NewKafkaReaderClient(cfg KafkaConfig, metrics *kprom.Metrics, logger log.Lo return client, nil } -func NewKafkaReaderClientMetrics(component string, reg prometheus.Registerer) *kprom.Metrics { - return kprom.NewMetrics("cortex_ingest_storage_reader", +func NewKafkaReaderClientMetrics(prefix, component string, reg prometheus.Registerer) *kprom.Metrics { + return kprom.NewMetrics(prefix, kprom.Registerer(prometheus.WrapRegistererWith(prometheus.Labels{"component": component}, reg)), // Do not export the client ID, because we use it to specify options to the backend. kprom.FetchAndProduceDetail(kprom.Batches, kprom.Records, kprom.CompressedBytes, kprom.UncompressedBytes)) diff --git a/pkg/storage/ingest/reader_test.go b/pkg/storage/ingest/reader_test.go index 2243683ab2d..22cef17a544 100644 --- a/pkg/storage/ingest/reader_test.go +++ b/pkg/storage/ingest/reader_test.go @@ -113,10 +113,8 @@ func TestPartitionReader_ConsumerError(t *testing.T) { // We want to run this test with different concurrency config. 
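The test matrices below shrink to a single knob because, with Update() gone, the reader commits to one fetcher implementation when it starts. A minimal standalone sketch of that decision under those assumptions (simplified types and hypothetical names; not the actual reader code):

```go
package main

import "fmt"

// fetcher mirrors the trimmed-down interface earlier in this diff: with
// Update() removed, an implementation is picked once and kept.
type fetcher interface{ describe() string }

type clientFetcher struct{} // plain franz-go client path

func (clientFetcher) describe() string { return "franz-go client (no extra concurrency)" }

type concurrentFetcher struct{ max int } // concurrentFetchers path

func (f concurrentFetcher) describe() string { return fmt.Sprintf("concurrent fetchers, max=%d", f.max) }

// pickFetcher models the simplified start() decision: one
// fetch-concurrency-max knob replaces the startup/ongoing pair, and there is
// no later switch between implementations.
func pickFetcher(fetchConcurrencyMax int) fetcher {
	if fetchConcurrencyMax > 0 {
		return concurrentFetcher{max: fetchConcurrencyMax}
	}
	return clientFetcher{}
}

func main() {
	fmt.Println(pickFetcher(0).describe()) // franz-go client (no extra concurrency)
	fmt.Println(pickFetcher(2).describe()) // concurrent fetchers, max=2
}
```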
concurrencyVariants := map[string][]readerTestCfgOpt{
-		"without concurrency":      {withStartupConcurrency(0), withOngoingConcurrency(0)},
-		"with startup concurrency": {withStartupConcurrency(2), withOngoingConcurrency(0)},
-		"with startup and ongoing concurrency (same settings)":      {withStartupConcurrency(2), withOngoingConcurrency(2)},
-		"with startup and ongoing concurrency (different settings)": {withStartupConcurrency(2), withOngoingConcurrency(4)},
+		"without concurrency":    {withFetchConcurrency(0)},
+		"with fetch concurrency": {withFetchConcurrency(2)},
 	}
 
 	for concurrencyName, concurrencyVariant := range concurrencyVariants {
@@ -169,10 +167,8 @@ func TestPartitionReader_ConsumerStopping(t *testing.T) {
 
 	// We want to run this test with different concurrency config.
 	concurrencyVariants := map[string][]readerTestCfgOpt{
-		"without concurrency":      {withStartupConcurrency(0), withOngoingConcurrency(0)},
-		"with startup concurrency": {withStartupConcurrency(2), withOngoingConcurrency(0)},
-		"with startup and ongoing concurrency (same settings)":      {withStartupConcurrency(2), withOngoingConcurrency(2)},
-		"with startup and ongoing concurrency (different settings)": {withStartupConcurrency(2), withOngoingConcurrency(4)},
+		"without concurrency":    {withFetchConcurrency(0)},
+		"with fetch concurrency": {withFetchConcurrency(2)},
 	}
 
 	for concurrencyName, concurrencyVariant := range concurrencyVariants {
@@ -517,10 +513,8 @@ func TestPartitionReader_ConsumeAtStartup(t *testing.T) {
 
 	// We want to run all these tests with different concurrency config.
 	concurrencyVariants := map[string][]readerTestCfgOpt{
-		"without concurrency":      {withStartupConcurrency(0), withOngoingConcurrency(0)},
-		"with startup concurrency": {withStartupConcurrency(2), withOngoingConcurrency(0)},
-		"with startup and ongoing concurrency (same settings)":      {withStartupConcurrency(2), withOngoingConcurrency(2)},
-		"with startup and ongoing concurrency (different settings)": {withStartupConcurrency(2), withOngoingConcurrency(4)},
+		"without concurrency":    {withFetchConcurrency(0)},
+		"with fetch concurrency": {withFetchConcurrency(2)},
 	}
 
 	t.Run("should immediately switch to Running state if partition is empty", func(t *testing.T) {
@@ -1721,8 +1715,7 @@ func TestPartitionReader_ConsumeAtStartup(t *testing.T) {
 			withTargetAndMaxConsumerLagAtStartup(time.Second, 2*time.Second),
 			withRegistry(reg),
 			withLogger(log.NewLogfmtLogger(logs)),
-			withStartupConcurrency(2),
-			withOngoingConcurrency(0))
+			withFetchConcurrency(2))
 
 		require.NoError(t, reader.StartAsync(ctx))
 		t.Cleanup(func() {
@@ -1809,19 +1802,13 @@ func TestPartitionReader_ShouldNotBufferRecordsInTheKafkaClientWhenDone(t *testi
 		expectedBufferedRecordsFromClient int
 	}{
 		"without concurrency": {
-			concurrencyVariant:                []readerTestCfgOpt{withStartupConcurrency(0), withOngoingConcurrency(0)},
+			concurrencyVariant:                []readerTestCfgOpt{withFetchConcurrency(0)},
 			expectedBufferedRecords:           1,
 			expectedBufferedBytes:             8,
 			expectedBufferedRecordsFromClient: 1,
 		},
-		"with startup concurrency": {
-			concurrencyVariant:                []readerTestCfgOpt{withStartupConcurrency(2), withOngoingConcurrency(0)},
-			expectedBufferedRecords:           1,
-			expectedBufferedBytes:             8,
-			expectedBufferedRecordsFromClient: 1,
-		},
-		"with startup and ongoing concurrency": {
-			concurrencyVariant: []readerTestCfgOpt{withStartupConcurrency(2), withOngoingConcurrency(2)},
+		"with fetch concurrency": {
+			concurrencyVariant: []readerTestCfgOpt{withFetchConcurrency(2)},
 			expectedBufferedRecords: 1,
 			// only one fetcher is active because we don't fetch over 
the HWM. // That fetcher should fetch 1MB. It's padded with 5% to account for underestimation of the record size. @@ -1829,8 +1816,8 @@ func TestPartitionReader_ShouldNotBufferRecordsInTheKafkaClientWhenDone(t *testi expectedBufferedBytes: 1_050_000, expectedBufferedRecordsFromClient: 0, }, - "with startup and ongoing concurrency (different settings)": { - concurrencyVariant: []readerTestCfgOpt{withStartupConcurrency(2), withOngoingConcurrency(4)}, + "with higher fetch concurrency": { + concurrencyVariant: []readerTestCfgOpt{withFetchConcurrency(4)}, expectedBufferedRecords: 1, // There is one fetcher fetching. That fetcher should fetch 500KB. // But we clamp the MaxBytes to at least 1MB, so that we don't underfetch when the absolute volume of data is low. @@ -2129,8 +2116,7 @@ func TestPartitionReader_ShouldNotMissRecordsIfFetchRequestContainPartialFailure withTargetAndMaxConsumerLagAtStartup(time.Second, 2*time.Second), withLogger(log.NewNopLogger()), withMaxBufferedBytes(maxBufferedBytes), - withStartupConcurrency(concurrency), - withOngoingConcurrency(concurrency), + withFetchConcurrency(concurrency), } reader := createReader(t, clusterAddr, topicName, partitionID, consumer, readerOpts...) @@ -2167,10 +2153,8 @@ func TestPartitionReader_ShouldNotMissRecordsIfKafkaReturnsAFetchBothWithAnError // We want to run all these tests with different concurrency config. concurrencyVariants := map[string][]readerTestCfgOpt{ - "without concurrency": {withStartupConcurrency(0), withOngoingConcurrency(0)}, - "with startup concurrency": {withStartupConcurrency(2), withOngoingConcurrency(0)}, - "with startup and ongoing concurrency (same settings)": {withStartupConcurrency(2), withOngoingConcurrency(2)}, - "with startup and ongoing concurrency (different settings)": {withStartupConcurrency(2), withOngoingConcurrency(4)}, + "without concurrency": {withFetchConcurrency(0)}, + "with fetch concurrency": {withFetchConcurrency(2)}, } for concurrencyName, concurrencyVariant := range concurrencyVariants { @@ -2705,15 +2689,9 @@ func withLogger(logger log.Logger) func(cfg *readerTestCfg) { } } -func withStartupConcurrency(i int) readerTestCfgOpt { - return func(cfg *readerTestCfg) { - cfg.kafka.StartupFetchConcurrency = i - } -} - -func withOngoingConcurrency(i int) readerTestCfgOpt { +func withFetchConcurrency(i int) readerTestCfgOpt { return func(cfg *readerTestCfg) { - cfg.kafka.OngoingFetchConcurrency = i + cfg.kafka.FetchConcurrencyMax = i } } diff --git a/pkg/storage/ingest/util.go b/pkg/storage/ingest/util.go index 8aa05e8ccf5..f0387f6eca3 100644 --- a/pkg/storage/ingest/util.go +++ b/pkg/storage/ingest/util.go @@ -4,6 +4,7 @@ package ingest import ( "context" + "errors" "fmt" "strconv" "time" @@ -12,6 +13,7 @@ import ( "github.com/go-kit/log/level" "github.com/grafana/regexp" "github.com/twmb/franz-go/pkg/kadm" + "github.com/twmb/franz-go/pkg/kerr" "github.com/twmb/franz-go/pkg/kgo" "github.com/twmb/franz-go/pkg/kmsg" "github.com/twmb/franz-go/pkg/sasl/plain" @@ -34,7 +36,7 @@ func IngesterPartitionID(ingesterID string) (int32, error) { } // Parse the ingester sequence number. - ingesterSeq, err := strconv.Atoi(match[1]) + ingesterSeq, err := strconv.ParseInt(match[1], 10, 32) if err != nil { return 0, fmt.Errorf("no ingester sequence number in ingester ID %s", ingesterID) } @@ -94,10 +96,6 @@ func commonKafkaClientOptions(cfg KafkaConfig, metrics *kprom.Metrics, logger lo }), } - if cfg.AutoCreateTopicEnabled { - opts = append(opts, kgo.AllowAutoTopicCreation()) - } - // SASL plain auth. 
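A small self-contained illustration of why the IngesterPartitionID change above switches from strconv.Atoi to strconv.ParseInt with an explicit 32-bit size (standalone example, not Mimir code):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// bitSize 32 rejects sequence numbers that would overflow int32, which
	// Atoi (bitSize 0 => int) silently accepts on 64-bit platforms.
	seq := "4294967296" // 2^32, outside int32 range
	if _, err := strconv.ParseInt(seq, 10, 32); err != nil {
		fmt.Println("ParseInt(bitSize=32) rejects it:", err)
	}
	n, _ := strconv.Atoi(seq) // succeeds on 64-bit platforms
	fmt.Println("Atoi returns:", n, "-> casting to int32 would give", int32(n))
}
```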
if cfg.SASLUsername != "" && cfg.SASLPassword.String() != "" { opts = append(opts, kgo.SASL(plain.Plain(func(_ context.Context) (plain.Auth, error) { @@ -154,45 +152,44 @@ func (w *resultPromise[T]) wait(ctx context.Context) (T, error) { } } -// setDefaultNumberOfPartitionsForAutocreatedTopics tries to set num.partitions config option on brokers. -// This is best-effort, if setting the option fails, error is logged, but not returned. -func setDefaultNumberOfPartitionsForAutocreatedTopics(cfg KafkaConfig, logger log.Logger) { - if cfg.AutoCreateTopicDefaultPartitions <= 0 { - return - } +// CreateTopic creates the topic in the Kafka cluster. If creating the topic fails, then an error is returned. +// If the topic already exists, then the function logs a message and returns nil. +func CreateTopic(cfg KafkaConfig, logger log.Logger) error { + logger = log.With(logger, "task", "autocreate_topic") cl, err := kgo.NewClient(commonKafkaClientOptions(cfg, nil, logger)...) if err != nil { - level.Error(logger).Log("msg", "failed to create kafka client", "err", err) - return + return fmt.Errorf("failed to create kafka client: %w", err) } adm := kadm.NewClient(cl) defer adm.Close() + ctx := context.Background() - defaultNumberOfPartitions := fmt.Sprintf("%d", cfg.AutoCreateTopicDefaultPartitions) - responses, err := adm.AlterBrokerConfigsState(context.Background(), []kadm.AlterConfig{ - { - Op: kadm.SetConfig, - Name: "num.partitions", - Value: &defaultNumberOfPartitions, - }, - }) - - // Check if any error has been returned as part of the response. + // As of kafka 2.4 we can pass -1 and the broker will use its default configuration. + const defaultReplication = -1 + resp, err := adm.CreateTopic(ctx, int32(cfg.AutoCreateTopicDefaultPartitions), defaultReplication, nil, cfg.Topic) if err == nil { - for _, res := range responses { - if res.Err != nil { - err = res.Err - break - } - } + err = resp.Err } - if err != nil { - level.Error(logger).Log("msg", "failed to alter default number of partitions", "err", err) - return + if errors.Is(err, kerr.TopicAlreadyExists) { + level.Info(logger).Log( + "msg", "topic already exists", + "topic", resp.Topic, + "num_partitions", resp.NumPartitions, + "replication_factor", resp.ReplicationFactor, + ) + return nil + } + return fmt.Errorf("failed to create topic %s: %w", cfg.Topic, err) } - level.Info(logger).Log("msg", "configured Kafka-wide default number of partitions for auto-created topics (num.partitions)", "value", cfg.AutoCreateTopicDefaultPartitions) + level.Info(logger).Log( + "msg", "successfully created topic", + "topic", resp.Topic, + "num_partitions", resp.NumPartitions, + "replication_factor", resp.ReplicationFactor, + ) + return nil } diff --git a/pkg/storage/ingest/util_test.go b/pkg/storage/ingest/util_test.go index c2224072dc7..e750f2ab133 100644 --- a/pkg/storage/ingest/util_test.go +++ b/pkg/storage/ingest/util_test.go @@ -10,10 +10,8 @@ import ( "time" "github.com/go-kit/log" - "github.com/grafana/dskit/concurrency" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "github.com/twmb/franz-go/pkg/kerr" "github.com/twmb/franz-go/pkg/kfake" "github.com/twmb/franz-go/pkg/kmsg" ) @@ -130,7 +128,7 @@ func TestResultPromise(t *testing.T) { }) } -func TestSetDefaultNumberOfPartitionsForAutocreatedTopics(t *testing.T) { +func TestCreateTopic(t *testing.T) { createKafkaCluster := func(t *testing.T) (string, *kfake.Cluster) { cluster, err := kfake.NewCluster(kfake.NumBrokers(1)) require.NoError(t, err) @@ -146,36 +144,41 @@ func 
TestSetDefaultNumberOfPartitionsForAutocreatedTopics(t *testing.T) {
 		var (
 			addr, cluster = createKafkaCluster(t)
 			cfg           = KafkaConfig{
+				Topic:                            "topic-name",
 				Address:                          addr,
 				AutoCreateTopicDefaultPartitions: 100,
 			}
 		)
 
-		cluster.ControlKey(kmsg.AlterConfigs.Int16(), func(request kmsg.Request) (kmsg.Response, error, bool) {
-			r := request.(*kmsg.AlterConfigsRequest)
-
-			require.Len(t, r.Resources, 1)
-			res := r.Resources[0]
-			require.Equal(t, kmsg.ConfigResourceTypeBroker, res.ResourceType)
-			require.Len(t, res.Configs, 1)
-			cfg := res.Configs[0]
-			require.Equal(t, "num.partitions", cfg.Name)
-			require.NotNil(t, *cfg.Value)
-			require.Equal(t, "100", *cfg.Value)
-
-			return &kmsg.AlterConfigsResponse{
+		cluster.ControlKey(kmsg.CreateTopics.Int16(), func(request kmsg.Request) (kmsg.Response, error, bool) {
+			r := request.(*kmsg.CreateTopicsRequest)
+
+			assert.Len(t, r.Topics, 1)
+			res := r.Topics[0]
+			assert.Equal(t, cfg.Topic, res.Topic)
+			assert.Equal(t, int32(100), res.NumPartitions)
+			assert.Equal(t, int16(-1), res.ReplicationFactor)
+			assert.Empty(t, res.Configs)
+			assert.Empty(t, res.ReplicaAssignment)
+
+			return &kmsg.CreateTopicsResponse{
 				Version: r.Version,
+				Topics: []kmsg.CreateTopicsResponseTopic{
+					{
+						Topic:             cfg.Topic,
+						NumPartitions:     res.NumPartitions,
+						ReplicationFactor: 3,
+					},
+				},
 			}, nil, true
 		})
 
-		logs := concurrency.SyncBuffer{}
-		logger := log.NewLogfmtLogger(&logs)
+		logger := log.NewNopLogger()
 
-		setDefaultNumberOfPartitionsForAutocreatedTopics(cfg, logger)
-		require.NotContains(t, logs.String(), "err")
+		require.NoError(t, CreateTopic(cfg, logger))
 	})
 
-	t.Run("should log an error if the request fails", func(t *testing.T) {
+	t.Run("should return an error if the request fails", func(t *testing.T) {
 		var (
 			addr, cluster = createKafkaCluster(t)
 			cfg           = KafkaConfig{
@@ -184,48 +187,67 @@ func TestSetDefaultNumberOfPartitionsForAutocreatedTopics(t *testing.T) {
 			}
 		)
 
-		cluster.ControlKey(kmsg.AlterConfigs.Int16(), func(_ kmsg.Request) (kmsg.Response, error, bool) {
-			return &kmsg.AlterConfigsResponse{}, errors.New("failed request"), true
+		cluster.ControlKey(kmsg.CreateTopics.Int16(), func(_ kmsg.Request) (kmsg.Response, error, bool) {
+			return &kmsg.CreateTopicsResponse{}, errors.New("failed request"), true
 		})
 
-		logs := concurrency.SyncBuffer{}
-		logger := log.NewLogfmtLogger(&logs)
+		logger := log.NewNopLogger()
 
-		setDefaultNumberOfPartitionsForAutocreatedTopics(cfg, logger)
-		require.Contains(t, logs.String(), "err")
+		require.Error(t, CreateTopic(cfg, logger))
 	})
 
-	t.Run("should log an error if the request succeeds but the response contains an error", func(t *testing.T) {
+	t.Run("should return an error if the request succeeds but the response contains an error", func(t *testing.T) {
 		var (
 			addr, cluster = createKafkaCluster(t)
 			cfg           = KafkaConfig{
+				Topic:                            "topic-name",
 				Address:                          addr,
 				AutoCreateTopicDefaultPartitions: 100,
 			}
 		)
 
-		cluster.ControlKey(kmsg.AlterConfigs.Int16(), func(kReq kmsg.Request) (kmsg.Response, error, bool) {
-			req := kReq.(*kmsg.AlterConfigsRequest)
-			require.Len(t, req.Resources, 1)
-
-			return &kmsg.AlterConfigsResponse{
+		cluster.ControlKey(kmsg.CreateTopics.Int16(), func(kReq kmsg.Request) (kmsg.Response, error, bool) {
+			req := kReq.(*kmsg.CreateTopicsRequest)
+			assert.Len(t, req.Topics, 1)
+			res := req.Topics[0]
+			assert.Equal(t, cfg.Topic, res.Topic)
+			assert.Equal(t, int32(100), res.NumPartitions)
+			assert.Equal(t, int16(-1), res.ReplicationFactor)
+			assert.Empty(t, res.Configs)
+			assert.Empty(t, res.ReplicaAssignment)
+
+			return &kmsg.CreateTopicsResponse{
 				Version: req.Version,
-				Resources: []kmsg.AlterConfigsResponseResource{
+				Topics: []kmsg.CreateTopicsResponseTopic{
 					{
-						ResourceType: req.Resources[0].ResourceType,
-						ResourceName: req.Resources[0].ResourceName,
-						ErrorCode:    
kerr.InvalidRequest.Code,
-						ErrorMessage: pointerOf(kerr.InvalidRequest.Message),
+						Topic:             cfg.Topic,
+						ErrorCode:         42, // 42 = INVALID_REQUEST; a literal keeps the test self-contained after the kerr import removal above.
+						NumPartitions:     res.NumPartitions,
+						ReplicationFactor: 3,
 					},
 				},
 			}, nil, true
 		})
 
-		logs := concurrency.SyncBuffer{}
-		logger := log.NewLogfmtLogger(&logs)
+		logger := log.NewNopLogger()
 
-		setDefaultNumberOfPartitionsForAutocreatedTopics(cfg, logger)
-		require.Contains(t, logs.String(), "err")
+		require.Error(t, CreateTopic(cfg, logger))
 	})
 
+	t.Run("should not return error when topic already exists", func(t *testing.T) {
+		var (
+			addr, _ = createKafkaCluster(t)
+			cfg     = KafkaConfig{
+				Topic:                            "topic-name",
+				Address:                          addr,
+				AutoCreateTopicDefaultPartitions: 100,
+			}
+			logger = log.NewNopLogger()
+		)
+
+		// First call should create the topic
+		assert.NoError(t, CreateTopic(cfg, logger))
+
+		// Second call should succeed because topic already exists
+		assert.NoError(t, CreateTopic(cfg, logger))
+	})
 }
diff --git a/pkg/storage/ingest/writer.go b/pkg/storage/ingest/writer.go
index bf36b3298e0..46d0bf131a5 100644
--- a/pkg/storage/ingest/writer.go
+++ b/pkg/storage/ingest/writer.go
@@ -107,7 +107,7 @@ func NewWriter(kafkaCfg KafkaConfig, logger log.Logger, reg prometheus.Registere
 func (w *Writer) starting(_ context.Context) error {
 	if w.kafkaCfg.AutoCreateTopicEnabled {
-		setDefaultNumberOfPartitionsForAutocreatedTopics(w.kafkaCfg, w.logger)
+		return CreateTopic(w.kafkaCfg, w.logger)
 	}
 	return nil
 }
diff --git a/pkg/storage/ingest/writer_test.go b/pkg/storage/ingest/writer_test.go
index 41791ae9b1e..55bd5fdae31 100644
--- a/pkg/storage/ingest/writer_test.go
+++ b/pkg/storage/ingest/writer_test.go
@@ -1099,8 +1099,7 @@ func createTestKafkaConfig(clusterAddr, topicName string) KafkaConfig {
 	cfg.Address = clusterAddr
 	cfg.Topic = topicName
 	cfg.WriteTimeout = 2 * time.Second
-	cfg.StartupFetchConcurrency = 2
-	cfg.OngoingFetchConcurrency = 0
+	cfg.FetchConcurrencyMax = 2
 
 	cfg.concurrentFetchersFetchBackoffConfig = fastFetchBackoffConfig
 
 	return cfg
diff --git a/pkg/storegateway/bucket_chunk_reader.go b/pkg/storegateway/bucket_chunk_reader.go
index 5e78cb94f57..b483ca19d7b 100644
--- a/pkg/storegateway/bucket_chunk_reader.go
+++ b/pkg/storegateway/bucket_chunk_reader.go
@@ -25,7 +25,6 @@ import (
 
 	"github.com/grafana/mimir/pkg/storage/tsdb"
 	"github.com/grafana/mimir/pkg/storegateway/storepb"
-	util_math "github.com/grafana/mimir/pkg/util/math"
 	"github.com/grafana/mimir/pkg/util/pool"
 )
 
@@ -68,7 +67,7 @@ func (r *bucketChunkReader) addLoad(id chunks.ChunkRef, seriesEntry, chunkEntry
 	}
 	r.toLoad[seq] = append(r.toLoad[seq], loadIdx{
 		offset:      off,
-		length:      util_math.Max(varint.MaxLen32, length), // If the length is 0, we need to at least fetch the length of the chunk.
+		length:      max(varint.MaxLen32, length), // If the length is 0, we need to at least fetch the length of the chunk.
 		seriesEntry: seriesEntry,
 		chunkEntry:  chunkEntry,
 	})
diff --git a/pkg/storegateway/bucket_index_postings.go b/pkg/storegateway/bucket_index_postings.go
index 70d25628420..6111374c6cc 100644
--- a/pkg/storegateway/bucket_index_postings.go
+++ b/pkg/storegateway/bucket_index_postings.go
@@ -21,7 +21,6 @@ import (
 	"github.com/grafana/mimir/pkg/storage/tsdb"
 	"github.com/grafana/mimir/pkg/storegateway/indexheader"
 	streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
-	util_math "github.com/grafana/mimir/pkg/util/math"
 )
 
 // rawPostingGroup keeps posting keys for single matcher. 
It is raw because there is no guarantee @@ -458,7 +457,7 @@ func (w labelValuesPostingsStrategy) selectPostings(matchersGroups []postingGrou completeMatchersPlusSeriesSize := completeMatchersSize + maxPossibleSeriesSize partialMatchersPlusSeriesSize := postingGroupsTotalSize(partialMatchersGroups) + maxPossibleSeriesSize - if util_math.Min(completeMatchersPlusSeriesSize, completeMatchersPlusLabelValuesSize) < partialMatchersPlusSeriesSize { + if min(completeMatchersPlusSeriesSize, completeMatchersPlusLabelValuesSize) < partialMatchersPlusSeriesSize { return matchersGroups, nil } return partialMatchersGroups, omittedMatchersGroups diff --git a/pkg/storegateway/bucket_index_reader.go b/pkg/storegateway/bucket_index_reader.go index 413527ff9a2..f97d1507235 100644 --- a/pkg/storegateway/bucket_index_reader.go +++ b/pkg/storegateway/bucket_index_reader.go @@ -36,7 +36,6 @@ import ( "github.com/grafana/mimir/pkg/storegateway/indexheader" streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index" "github.com/grafana/mimir/pkg/util" - util_math "github.com/grafana/mimir/pkg/util/math" "github.com/grafana/mimir/pkg/util/pool" "github.com/grafana/mimir/pkg/util/spanlogger" ) @@ -455,7 +454,7 @@ func (r *bucketIndexReader) fetchPostings(ctx context.Context, keys []labels.Lab "labels_key", cachedLabelsKey, "block", r.block.meta.ULID, "bytes_len", len(b), - "bytes_head_hex", hex.EncodeToString(b[:util_math.Min(8, len(b))]), + "bytes_head_hex", hex.EncodeToString(b[:min(8, len(b))]), ) } diff --git a/pkg/storegateway/indexheader/encoding/encoding.go b/pkg/storegateway/indexheader/encoding/encoding.go index 5f560219f44..9a6ebc92441 100644 --- a/pkg/storegateway/indexheader/encoding/encoding.go +++ b/pkg/storegateway/indexheader/encoding/encoding.go @@ -12,8 +12,6 @@ import ( "github.com/dennwc/varint" "github.com/pkg/errors" - - "github.com/grafana/mimir/pkg/util/math" ) var ( @@ -49,7 +47,7 @@ func (d *Decbuf) CheckCrc32(castagnoliTable *crc32.Table) { rawBuf := make([]byte, maxChunkSize) for bytesToRead > 0 { - chunkSize := math.Min(bytesToRead, maxChunkSize) + chunkSize := min(bytesToRead, maxChunkSize) chunkBuf := rawBuf[0:chunkSize] err := d.r.readInto(chunkBuf) diff --git a/pkg/storegateway/series_chunks.go b/pkg/storegateway/series_chunks.go index f6a80aa73a3..f5d3648deed 100644 --- a/pkg/storegateway/series_chunks.go +++ b/pkg/storegateway/series_chunks.go @@ -14,7 +14,6 @@ import ( "github.com/prometheus/prometheus/model/labels" "github.com/grafana/mimir/pkg/storegateway/storepb" - util_math "github.com/grafana/mimir/pkg/util/math" "github.com/grafana/mimir/pkg/util/pool" "github.com/grafana/mimir/pkg/util/spanlogger" ) @@ -378,7 +377,7 @@ func (c *loadingSeriesChunksSetIterator) Next() (retHasNext bool) { // Pre-allocate the series slice using the expected batchSize even if nextUnloaded has less elements, // so that there's a higher chance the slice will be reused once released. - nextSet := newSeriesChunksSet(util_math.Max(c.fromBatchSize, nextUnloaded.len()), true) + nextSet := newSeriesChunksSet(max(c.fromBatchSize, nextUnloaded.len()), true) // Release the set if an error occurred. 
 	defer func() {
diff --git a/pkg/streamingpromql/engine_test.go b/pkg/streamingpromql/engine_test.go
index a987f0df4b0..0cce9cc6071 100644
--- a/pkg/streamingpromql/engine_test.go
+++ b/pkg/streamingpromql/engine_test.go
@@ -2670,6 +2670,8 @@ func TestCompareVariousMixedMetricsFunctions(t *testing.T) {
 		expressions = append(expressions, fmt.Sprintf(`histogram_fraction(scalar(series{label="i"}), scalar(series{label="i"}), series{label=~"(%s)"})`, labelRegex))
 		expressions = append(expressions, fmt.Sprintf(`histogram_quantile(0.8, series{label=~"(%s)"})`, labelRegex))
 		expressions = append(expressions, fmt.Sprintf(`histogram_quantile(scalar(series{label="i"}), series{label=~"(%s)"})`, labelRegex))
+		expressions = append(expressions, fmt.Sprintf(`histogram_stddev(series{label=~"(%s)"})`, labelRegex))
+		expressions = append(expressions, fmt.Sprintf(`histogram_stdvar(series{label=~"(%s)"})`, labelRegex))
 		expressions = append(expressions, fmt.Sprintf(`histogram_sum(series{label=~"(%s)"})`, labelRegex))
 	}
diff --git a/pkg/streamingpromql/functions.go b/pkg/streamingpromql/functions.go
index 95267bbf9fd..e4229de9175 100644
--- a/pkg/streamingpromql/functions.go
+++ b/pkg/streamingpromql/functions.go
@@ -371,6 +371,8 @@ var instantVectorFunctionOperatorFactories = map[string]InstantVectorFunctionOpe
 	"histogram_count":    InstantVectorTransformationFunctionOperatorFactory("histogram_count", functions.HistogramCount),
 	"histogram_fraction": HistogramFractionFunctionOperatorFactory(),
 	"histogram_quantile": HistogramQuantileFunctionOperatorFactory(),
+	"histogram_stddev":   InstantVectorTransformationFunctionOperatorFactory("histogram_stddev", functions.HistogramStdDevStdVar(true)),
+	"histogram_stdvar":   InstantVectorTransformationFunctionOperatorFactory("histogram_stdvar", functions.HistogramStdDevStdVar(false)),
 	"histogram_sum":      InstantVectorTransformationFunctionOperatorFactory("histogram_sum", functions.HistogramSum),
 	"increase":           FunctionOverRangeVectorOperatorFactory("increase", functions.Increase),
 	"label_replace":      LabelReplaceFunctionOperatorFactory(),
diff --git a/pkg/streamingpromql/operators/functions/histogram_quantile.go b/pkg/streamingpromql/operators/functions/histogram_quantile.go
index a30f666d121..4407d648173 100644
--- a/pkg/streamingpromql/operators/functions/histogram_quantile.go
+++ b/pkg/streamingpromql/operators/functions/histogram_quantile.go
@@ -77,7 +77,7 @@ var bucketGroupPool = zeropool.New(func() *bucketGroup {
 })
 
 var pointBucketPool = types.NewLimitingBucketedPool(
-	pool.NewBucketedPool(1, types.MaxExpectedPointsPerSeries, types.PointsPerSeriesBucketFactor, func(size int) []buckets {
+	pool.NewBucketedPool(types.MaxExpectedPointsPerSeries, func(size int) []buckets {
 		return make([]buckets, 0, size)
 	}),
 	uint64(unsafe.Sizeof(buckets{})),
@@ -98,7 +98,7 @@ func mangleBuckets(b buckets) buckets {
 const maxExpectedBucketsPerHistogram = 64 // There isn't much science to this
 
 var bucketSliceBucketedPool = types.NewLimitingBucketedPool(
-	pool.NewBucketedPool(1, maxExpectedBucketsPerHistogram, 2, func(size int) []bucket {
+	pool.NewBucketedPool(maxExpectedBucketsPerHistogram, func(size int) []bucket {
 		return make([]bucket, 0, size)
 	}),
 	uint64(unsafe.Sizeof(bucket{})),
diff --git a/pkg/streamingpromql/operators/functions/native_histograms.go b/pkg/streamingpromql/operators/functions/native_histograms.go
index 46c1795e6ac..65db6f158c7 100644
--- a/pkg/streamingpromql/operators/functions/native_histograms.go
+++ b/pkg/streamingpromql/operators/functions/native_histograms.go
@@ -4,23 +4,25 @@
 package functions
 
 import (
 	"errors"
+	"math"
 
 	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/promql"
 	"github.com/prometheus/prometheus/util/annotations"
 
+	"github.com/grafana/mimir/pkg/streamingpromql/floats"
 	"github.com/grafana/mimir/pkg/streamingpromql/limiting"
 	"github.com/grafana/mimir/pkg/streamingpromql/types"
 )
 
 func HistogramAvg(seriesData types.InstantVectorSeriesData, _ []types.ScalarData, _ types.QueryTimeRange, memoryConsumptionTracker *limiting.MemoryConsumptionTracker) (types.InstantVectorSeriesData, error) {
-	floats, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
+	fPoints, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
 	if err != nil {
 		return types.InstantVectorSeriesData{}, err
 	}
 
 	data := types.InstantVectorSeriesData{
-		Floats: floats,
+		Floats: fPoints,
 	}
 
 	for _, histogram := range seriesData.Histograms {
@@ -36,13 +38,13 @@ func HistogramAvg(seriesData types.InstantVectorSeriesData, _ []types.ScalarData
 }
 
 func HistogramCount(seriesData types.InstantVectorSeriesData, _ []types.ScalarData, _ types.QueryTimeRange, memoryConsumptionTracker *limiting.MemoryConsumptionTracker) (types.InstantVectorSeriesData, error) {
-	floats, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
+	fPoints, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
 	if err != nil {
 		return types.InstantVectorSeriesData{}, err
 	}
 
 	data := types.InstantVectorSeriesData{
-		Floats: floats,
+		Floats: fPoints,
 	}
 
 	for _, histogram := range seriesData.Histograms {
@@ -58,13 +60,13 @@ func HistogramCount(seriesData types.InstantVectorSeriesData, _ []types.ScalarDa
 }
 
 func HistogramFraction(seriesData types.InstantVectorSeriesData, scalarArgsData []types.ScalarData, timeRange types.QueryTimeRange, memoryConsumptionTracker *limiting.MemoryConsumptionTracker) (types.InstantVectorSeriesData, error) {
-	floats, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
+	fPoints, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
 	if err != nil {
 		return types.InstantVectorSeriesData{}, err
 	}
 
 	data := types.InstantVectorSeriesData{
-		Floats: floats,
+		Floats: fPoints,
 	}
 
 	lower := scalarArgsData[0]
@@ -87,6 +89,58 @@ func HistogramFraction(seriesData types.InstantVectorSeriesData, scalarArgsData
 	return data, nil
 }
 
+func HistogramStdDevStdVar(isStdDev bool) InstantVectorSeriesFunction {
+	// returns either the standard deviation, or standard variance of a native histogram.
+	// Float values are ignored.
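	// For intuition, a sketch of the estimator below (annotation, not part of the
	// patch itself): each bucket contributes its observation count at the bucket's
	// geometric midpoint, val = sqrt(lower * upper), negated when upper < 0, and
	// val = 0 for buckets that cross zero. The variance estimate is then the
	// count-weighted sum of squared deviations from the histogram mean
	// (sum / count), accumulated with Kahan summation for numerical stability:
	//
	//	variance = (sum over buckets of count_b * (val_b - mean)^2) / count
	//
	// histogram_stddev takes sqrt(variance); histogram_stdvar reports it as is.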
+	return func(seriesData types.InstantVectorSeriesData, _ []types.ScalarData, _ types.QueryTimeRange, memoryConsumptionTracker *limiting.MemoryConsumptionTracker) (types.InstantVectorSeriesData, error) {
+		fPoints, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
+		if err != nil {
+			return types.InstantVectorSeriesData{}, err
+		}
+
+		data := types.InstantVectorSeriesData{
+			Floats: fPoints,
+		}
+
+		for _, histogram := range seriesData.Histograms {
+			mean := histogram.H.Sum / histogram.H.Count
+			var variance, cVariance float64
+			it := histogram.H.AllBucketIterator()
+			for it.Next() {
+				bucket := it.At()
+				if bucket.Count == 0 {
+					continue
+				}
+				var val float64
+				if bucket.Lower <= 0 && 0 <= bucket.Upper {
+					val = 0
+				} else {
+					val = math.Sqrt(bucket.Upper * bucket.Lower)
+					if bucket.Upper < 0 {
+						val = -val
+					}
+				}
+				delta := val - mean
+				variance, cVariance = floats.KahanSumInc(bucket.Count*delta*delta, variance, cVariance)
+			}
+			variance += cVariance
+			variance /= histogram.H.Count
+			if isStdDev {
+				variance = math.Sqrt(variance)
+			}
+
+			data.Floats = append(data.Floats, promql.FPoint{
+				T: histogram.T,
+				F: variance,
+			})
+		}
+
+		types.PutInstantVectorSeriesData(seriesData, memoryConsumptionTracker)
+
+		return data, nil
+	}
+}
+
 func HistogramSum(seriesData types.InstantVectorSeriesData, _ []types.ScalarData, _ types.QueryTimeRange, memoryConsumptionTracker *limiting.MemoryConsumptionTracker) (types.InstantVectorSeriesData, error) {
 	floats, err := types.FPointSlicePool.Get(len(seriesData.Histograms), memoryConsumptionTracker)
 	if err != nil {
diff --git a/pkg/streamingpromql/operators/functions/range_vectors.go b/pkg/streamingpromql/operators/functions/range_vectors.go
index 9f19f7e6476..dea48555c67 100644
--- a/pkg/streamingpromql/operators/functions/range_vectors.go
+++ b/pkg/streamingpromql/operators/functions/range_vectors.go
@@ -338,95 +338,133 @@ func avgHistograms(head, tail []promql.HPoint) (*histogram.FloatHistogram, error
 
 var Changes = FunctionOverRangeVectorDefinition{
 	SeriesMetadataFunction: DropSeriesName,
-	StepFunc:               changes,
+	StepFunc:               resetsChanges(false),
 }
 
-func changes(step *types.RangeVectorStepData, _ float64, _ types.EmitAnnotationFunc) (float64, bool, *histogram.FloatHistogram, error) {
-	fHead, fTail := step.Floats.UnsafePoints()
+var Resets = FunctionOverRangeVectorDefinition{
+	SeriesMetadataFunction: DropSeriesName,
+	StepFunc:               resetsChanges(true),
+}
 
-	haveFloats := len(fHead) > 0 || len(fTail) > 0
+func resetsChanges(isReset bool) RangeVectorStepFunction {
+	return func(step *types.RangeVectorStepData, _ float64, _ types.EmitAnnotationFunc) (float64, bool, *histogram.FloatHistogram, error) {
+		fHead, fTail := step.Floats.UnsafePoints()
+		hHead, hTail := step.Histograms.UnsafePoints()
 
-	if !haveFloats {
-		// Prometheus' engine doesn't support histogram for `changes` function yet,
-		// therefore we won't add that yet too.
-		return 0, false, nil, nil
-	}
+		if len(fHead) == 0 && len(hHead) == 0 {
+			// No points, nothing to do
+			return 0, false, nil, nil
+		}
 
-	if len(fHead) == 0 && len(fTail) == 0 {
-		return 0, true, nil, nil
-	}
+		count := 0.0
+		var prevFloat float64
+		var prevHist *histogram.FloatHistogram
 
-	changes := 0.0
-	prev := fHead[0].F
+		fIdx, hIdx := 0, 0
 
-	// Comparing the point with the point before it.
-	accumulate := func(points []promql.FPoint) {
-		for _, sample := range points {
-			current := sample.F
-			if current != prev && !(math.IsNaN(current) && math.IsNaN(prev)) {
-				changes++
+		loadFloat := func(idx int) (float64, int64, bool, int) {
+			if idx < len(fHead) {
+				point := fHead[idx]
+				return point.F, point.T, true, idx + 1
+			} else if idx-len(fHead) < len(fTail) {
+				point := fTail[idx-len(fHead)]
+				return point.F, point.T, true, idx + 1
 			}
-			prev = current
+			return 0, 0, false, idx
 		}
-	}
-
-	accumulate(fHead[1:])
-	accumulate(fTail)
-
-	return changes, true, nil, nil
-}
-
-var Resets = FunctionOverRangeVectorDefinition{
-	SeriesMetadataFunction: DropSeriesName,
-	StepFunc:               resets,
-}
+		loadHist := func(idx int) (*histogram.FloatHistogram, int64, bool, int) {
+			if idx < len(hHead) {
+				point := hHead[idx]
+				return point.H, point.T, true, idx + 1
+			} else if idx-len(hHead) < len(hTail) {
+				point := hTail[idx-len(hHead)]
+				return point.H, point.T, true, idx + 1
+			}
+			return nil, 0, false, idx
+		}
 
-func resets(step *types.RangeVectorStepData, _ float64, _ types.EmitAnnotationFunc) (float64, bool, *histogram.FloatHistogram, error) {
-	fHead, fTail := step.Floats.UnsafePoints()
-	hHead, hTail := step.Histograms.UnsafePoints()
+		var fValue float64
+		var hValue *histogram.FloatHistogram
+		var fTime, hTime int64
+		var fOk, hOk bool
 
-	// There is no need to check xTail length because xHead slice will always be populated first if there is at least 1 point.
-	haveFloats := len(fHead) > 0
-	haveHistograms := len(hHead) > 0
+		fValue, fTime, fOk, fIdx = loadFloat(fIdx)
+		hValue, hTime, hOk, hIdx = loadHist(hIdx)
 
-	if !haveFloats && !haveHistograms {
-		return 0, false, nil, nil
-	}
+		if fOk && hOk {
+			if fTime <= hTime {
+				prevFloat = fValue
+				prevHist = nil
+			} else {
+				prevHist = hValue
+				prevFloat = 0
+			}
+		} else if fOk {
+			prevFloat = fValue
+			prevHist = nil
+		} else if hOk {
+			prevHist = hValue
+			prevFloat = 0
+		} else {
+			return 0, false, nil, nil
+		}
 
-	resets := 0.0
+		for fOk || hOk {
+			var currentFloat float64
+			var currentHist *histogram.FloatHistogram
+
+			if fOk && (!hOk || fTime <= hTime) {
+				currentFloat = fValue
+				currentHist = nil
+				fValue, fTime, fOk, fIdx = loadFloat(fIdx)
+			} else if hOk {
+				currentHist = hValue
+				currentFloat = 0
+				hValue, hTime, hOk, hIdx = loadHist(hIdx)
+			}
 
-	if haveFloats {
-		prev := fHead[0].F
-		accumulate := func(points []promql.FPoint) {
-			for _, sample := range points {
-				current := sample.F
-				if current < prev {
-					resets++
+			if prevHist == nil {
+				if currentHist == nil {
+					if isReset {
+						if currentFloat < prevFloat {
+							count++
+						}
+					} else {
+						if currentFloat != prevFloat && !(math.IsNaN(currentFloat) && math.IsNaN(prevFloat)) {
+							count++
+						}
+					}
+				} else {
+					count++ // Type change from float to histogram
+				}
+			} else {
+				if currentHist == nil {
+					count++ // Type change from histogram to float
+				} else {
+					if isReset {
+						if currentHist.DetectReset(prevHist) {
+							count++
+						}
+					} else {
+						if !currentHist.Equals(prevHist) {
+							count++
+						}
+					}
 				}
-				prev = current
 			}
-		}
-		accumulate(fHead[1:])
-		accumulate(fTail)
-	}
-
-	if haveHistograms {
-		prev := hHead[0].H
-		accumulate := func(points []promql.HPoint) {
-			for _, sample := range points {
-				current := sample.H
-				if current.DetectReset(prev) {
-					resets++
-				}
-				prev = current
+			if currentHist == nil {
+				prevFloat = currentFloat
+				prevHist = nil
+			} else {
+				prevHist = currentHist
+				prevFloat = 0
 			}
 		}
-		accumulate(hHead[1:])
-		accumulate(hTail)
-	}
 
-	return resets, true, nil, nil
+		return count, true, nil, nil
+	}
 }
 
 var Deriv = FunctionOverRangeVectorDefinition{
diff --git a/pkg/streamingpromql/operators/selectors/selector.go b/pkg/streamingpromql/operators/selectors/selector.go
index c75c1c37a12..1cd6f13e4a2 100644
--- a/pkg/streamingpromql/operators/selectors/selector.go
+++ b/pkg/streamingpromql/operators/selectors/selector.go
@@ -57,7 +57,7 @@ func (s *Selector) SeriesMetadata(ctx context.Context) ([]types.SeriesMetadata,
 	// Apply lookback delta, range and offset after adjusting for timestamp from @ modifier.
 	rangeMilliseconds := s.Range.Milliseconds()
-	startTimestamp = startTimestamp - s.LookbackDelta.Milliseconds() - rangeMilliseconds - s.Offset - 1 // -1 to exclude samples on the lower boundary of the range.
+	startTimestamp = startTimestamp - s.LookbackDelta.Milliseconds() - rangeMilliseconds - s.Offset + 1 // +1 to exclude samples on the lower boundary of the range (queriers work with closed intervals, we use left-open).
 	endTimestamp = endTimestamp - s.Offset
 
 	hints := &storage.SelectHints{
diff --git a/pkg/streamingpromql/operators/selectors/selector_test.go b/pkg/streamingpromql/operators/selectors/selector_test.go
index 0487ca9d247..bc4f73ad8d5 100644
--- a/pkg/streamingpromql/operators/selectors/selector_test.go
+++ b/pkg/streamingpromql/operators/selectors/selector_test.go
@@ -3,13 +3,17 @@
 package selectors
 
 import (
+	"context"
 	"fmt"
 	"strconv"
 	"testing"
+	"time"
 
 	"github.com/prometheus/prometheus/model/labels"
+	"github.com/prometheus/prometheus/model/timestamp"
 	"github.com/prometheus/prometheus/storage"
 	"github.com/prometheus/prometheus/tsdb/chunkenc"
+	"github.com/prometheus/prometheus/util/annotations"
 	"github.com/stretchr/testify/require"
 
 	"github.com/grafana/mimir/pkg/streamingpromql/types"
@@ -97,3 +101,83 @@ func (m mockSeries) Labels() labels.Labels {
 
 func (m mockSeries) Iterator(chunkenc.Iterator) chunkenc.Iterator {
 	panic("mockSeries: Iterator() not supported")
 }
+
+func TestSelector_QueryRanges(t *testing.T) {
+	start := time.Date(2024, 12, 11, 3, 12, 45, 0, time.UTC)
+	end := start.Add(time.Hour)
+	timeRange := types.NewRangeQueryTimeRange(start, end, time.Minute)
+
+	t.Run("instant vector selector", func(t *testing.T) {
+		queryable := &mockQueryable{}
+		lookbackDelta := 5 * time.Minute
+		s := &Selector{
+			Queryable:     queryable,
+			TimeRange:     timeRange,
+			LookbackDelta: lookbackDelta,
+		}
+
+		_, err := s.SeriesMetadata(context.Background())
+		require.NoError(t, err)
+
+		expectedMinT := timestamp.FromTime(start.Add(-lookbackDelta).Add(time.Millisecond)) // Add a millisecond to exclude the beginning of the range.
+		expectedMaxT := timestamp.FromTime(end)
+		require.Equal(t, expectedMinT, queryable.mint)
+		require.Equal(t, expectedMaxT, queryable.maxt)
+		require.Equal(t, expectedMinT, queryable.hints.Start)
+		require.Equal(t, expectedMaxT, queryable.hints.End)
+	})
+
+	t.Run("range vector selector", func(t *testing.T) {
+		queryable := &mockQueryable{}
+		selectorRange := 15 * time.Minute
+		s := &Selector{
+			Queryable: queryable,
+			TimeRange: timeRange,
+			Range:     selectorRange,
+		}
+
+		_, err := s.SeriesMetadata(context.Background())
+		require.NoError(t, err)
+
+		expectedMinT := timestamp.FromTime(start.Add(-selectorRange).Add(time.Millisecond)) // Add a millisecond to exclude the beginning of the range.
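		// For intuition (annotation, not part of the patch): with the left-open
		// convention introduced in selector.go above, a selector with lookback or
		// range D evaluated at time T selects samples in (T-D, T]. The querier API
		// works with closed intervals, so the minimum timestamp becomes T-D+1ms,
		// which is exactly what expectedMinT asserts in both subtests here.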
+		expectedMaxT := timestamp.FromTime(end)
+		require.Equal(t, expectedMinT, queryable.mint)
+		require.Equal(t, expectedMaxT, queryable.maxt)
+		require.Equal(t, expectedMinT, queryable.hints.Start)
+		require.Equal(t, expectedMaxT, queryable.hints.End)
+	})
+}
+
+type mockQueryable struct {
+	mint, maxt int64
+	hints      *storage.SelectHints
+}
+
+func (m *mockQueryable) Querier(mint, maxt int64) (storage.Querier, error) {
+	m.mint = mint
+	m.maxt = maxt
+
+	return &mockQuerier{m}, nil
+}
+
+type mockQuerier struct {
+	q *mockQueryable
+}
+
+func (m *mockQuerier) Select(_ context.Context, _ bool, hints *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet {
+	m.q.hints = hints
+
+	return storage.EmptySeriesSet()
+}
+
+func (m *mockQuerier) LabelValues(context.Context, string, *storage.LabelHints, ...*labels.Matcher) ([]string, annotations.Annotations, error) {
+	panic("not supported")
+}
+
+func (m *mockQuerier) LabelNames(context.Context, *storage.LabelHints, ...*labels.Matcher) ([]string, annotations.Annotations, error) {
+	panic("not supported")
+}
+
+func (m *mockQuerier) Close() error {
+	panic("not supported")
+}
diff --git a/pkg/streamingpromql/operators/subquery.go b/pkg/streamingpromql/operators/subquery.go
index 716b8ab7757..ef5e8e7c505 100644
--- a/pkg/streamingpromql/operators/subquery.go
+++ b/pkg/streamingpromql/operators/subquery.go
@@ -69,8 +69,15 @@ func (s *Subquery) NextSeries(ctx context.Context) error {
 	}
 
 	s.nextStepT = s.ParentQueryTimeRange.StartT
-	s.floats.Use(data.Floats)
-	s.histograms.Use(data.Histograms)
+
+	if err := s.floats.Use(data.Floats); err != nil {
+		return err
+	}
+
+	if err := s.histograms.Use(data.Histograms); err != nil {
+		return err
+	}
+
 	return nil
 }
diff --git a/pkg/streamingpromql/testdata/ours/binary_operators.test b/pkg/streamingpromql/testdata/ours/binary_operators.test
index 7a0bcb69d05..72f3f130d1a 100644
--- a/pkg/streamingpromql/testdata/ours/binary_operators.test
+++ b/pkg/streamingpromql/testdata/ours/binary_operators.test
@@ -276,16 +276,16 @@ load 1m
   another_mixed{job="test"} 10 {{schema:0 sum:12 count:6 buckets:[2 4 6]}} {{schema:0 sum:12 count:6 buckets:[2 4 6]}} 4 5 {{schema:0 sum:12 count:6 buckets:[2 4 6]}}
 
 # @0 @5m @10m @15m @20m @25m
-eval range from 0 to 5m step 1m mixed_metric + another_mixed
+eval_info range from 0 to 5m step 1m mixed_metric + another_mixed
   {job="test"} 20 11 12 7 _ {{schema:0 sum:24 count:12 buckets:[4 8 12]}}
 
-eval range from 0 to 5m step 1m mixed_metric - another_mixed
+eval_info range from 0 to 5m step 1m mixed_metric - another_mixed
   {job="test"} 0 -9 -8 -1 _ {{schema:0 sum:0 count:0}}
 
-eval range from 0 to 5m step 1m mixed_metric * another_mixed
+eval_info range from 0 to 5m step 1m mixed_metric * another_mixed
  {job="test"} 100 10 20 12 {{schema:0 sum:30 count:15 buckets:[5 10 15]}} _
 
-eval range from 0 to 5m step 1m mixed_metric / another_mixed
+eval_info range from 0 to 5m step 1m mixed_metric / another_mixed
   {job="test"} 1 0.1 0.2 0.75 {{schema:0 sum:1.2 count:0.6 buckets:[0.2 0.4 0.6]}} _
 
 clear
@@ -317,18 +317,18 @@ eval range from 0m to 24m step 6m my_metric / 2
 
 # Scalar on left side
 # Note that positive scalar / histogram == nothing.
-eval range from 0m to 24m step 6m 2 / my_metric
+eval_info range from 0m to 24m step 6m 2 / my_metric
   {histograms="both"} 0.4 0.2 _
   {job="foo"} 2 1 0.6666666666666667 0.5 0.4
   {job="bar"} 0.2 0.1 0.06666666666666667 _ 0.04
 
 # Test other arithmetic operations.
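# (Annotation, not part of the patch: addition or subtraction between a scalar and
# a native histogram yields no result, which is why `_` appears at histogram
# positions in the expected results below, while multiplying or dividing a
# histogram by a float scales the histogram, as the mixed_metric cases above show.
# Prometheus 3 surfaces the dropped points through info annotations, hence the
# eval -> eval_info switches.)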
-eval range from 0m to 24m step 6m my_metric + 2
+eval_info range from 0m to 24m step 6m my_metric + 2
   {histograms="both"} 7 12 _
   {job="foo"} 3 4 5 6 7
   {job="bar"} 12 22 32 _ 52
 
-eval range from 0m to 24m step 6m my_metric - 2
+eval_info range from 0m to 24m step 6m my_metric - 2
   {histograms="both"} 3 8
   {job="foo"} -1 0 1 2 3
   {job="bar"} 8 18 28 _ 48
@@ -552,7 +552,7 @@ eval_info range from 0 to 24m step 6m left_histograms == 0
 eval_info range from 0 to 24m step 6m left_histograms != 3
   # No results.
 
-eval range from 0 to 24m step 6m left_histograms != 0
+eval_info range from 0 to 24m step 6m left_histograms != 0
   # No results.
 
 eval_info range from 0 to 24m step 6m left_histograms > 3
@@ -561,7 +561,7 @@ eval_info range from 0 to 24m step 6m left_histograms > 3
 eval_info range from 0 to 24m step 6m left_histograms > 0
   # No results.
 
-eval range from 0 to 24m step 6m left_histograms >= 3
+eval_info range from 0 to 24m step 6m left_histograms >= 3
   # No results.
 
 eval_info range from 0 to 24m step 6m left_histograms >= 0
@@ -576,7 +576,7 @@ eval_info range from 0 to 24m step 6m left_histograms < 0
 eval_info range from 0 to 24m step 6m left_histograms <= 3
   # No results.
 
-eval range from 0 to 24m step 6m left_histograms <= 0
+eval_info range from 0 to 24m step 6m left_histograms <= 0
   # No results.
 
 eval_info range from 0 to 24m step 6m left_histograms == bool 3
@@ -649,40 +649,40 @@ eval range from 0 to 60m step 6m NaN == left_floats
 eval range from 0 to 60m step 6m NaN == bool left_floats
   {} 0 0 _ _ 0 _ 0 0 0 0 0
 
-eval range from 0 to 24m step 6m 3 == left_histograms
+eval_info range from 0 to 24m step 6m 3 == left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 == left_histograms
+eval_info range from 0 to 24m step 6m 0 == left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 3 != left_histograms
+eval_info range from 0 to 24m step 6m 3 != left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 != left_histograms
+eval_info range from 0 to 24m step 6m 0 != left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 3 < left_histograms
+eval_info range from 0 to 24m step 6m 3 < left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 < left_histograms
+eval_info range from 0 to 24m step 6m 0 < left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 3 < left_histograms
+eval_info range from 0 to 24m step 6m 3 < left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 < left_histograms
+eval_info range from 0 to 24m step 6m 0 < left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 3 > left_histograms
+eval_info range from 0 to 24m step 6m 3 > left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 > left_histograms
+eval_info range from 0 to 24m step 6m 0 > left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 3 >= left_histograms
+eval_info range from 0 to 24m step 6m 3 >= left_histograms
   # No results.
 
-eval range from 0 to 24m step 6m 0 >= left_histograms
+eval_info range from 0 to 24m step 6m 0 >= left_histograms
   # No results.
 
 # Scalar / scalar combinations
diff --git a/pkg/streamingpromql/testdata/ours/functions.test b/pkg/streamingpromql/testdata/ours/functions.test
index 95dc4f0a373..a1b669f221c 100644
--- a/pkg/streamingpromql/testdata/ours/functions.test
+++ b/pkg/streamingpromql/testdata/ours/functions.test
@@ -4,22 +4,22 @@
 # These test cases cover scenarios not covered by the upstream test cases, such as range queries, or edge cases that are uniquely likely to cause issues in the streaming engine.
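# (Annotation, not part of the patch: in this test format a plain `eval` fails if
# the expression unexpectedly emits annotations, `eval_warn` expects a warning,
# and `eval_info` expects an info-level annotation. The directive changes in this
# diff track Prometheus 3, which reports ignored native histograms in comparisons
# and arithmetic via info annotations instead of dropping them silently.)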
 load 1m
-  some_metric{env="prod", cluster="eu"} 0+60x4
-  some_metric{env="prod", cluster="us"} 0+120x4
-  some_metric{env="test", cluster="eu"} 0+180x4
-  some_metric{env="test", cluster="us"} 0+240x4
-  some_metric_with_gaps 0 60 120 180 240 _ 2000 2120 2240
-  some_metric_with_stale_marker 0 60 120 stale 240 300
+  some_metric_count{env="prod", cluster="eu"} 0+60x4
+  some_metric_count{env="prod", cluster="us"} 0+120x4
+  some_metric_count{env="test", cluster="eu"} 0+180x4
+  some_metric_count{env="test", cluster="us"} 0+240x4
+  some_metric_with_gaps_total 0 60 120 180 240 _ 2000 2120 2240
+  some_metric_with_stale_marker_sum 0 60 120 stale 240 300
 
 # Range query with rate.
-eval range from 0 to 4m step 1m rate(some_metric[1m1s])
+eval range from 0 to 4m step 1m rate(some_metric_count[1m1s])
   {env="prod", cluster="eu"} _ 0.9836065573770493 1 1 1
   {env="prod", cluster="us"} _ 1.9672131147540985 2 2 2
   {env="test", cluster="eu"} _ 2.9508196721311477 3 3 3
   {env="test", cluster="us"} _ 3.934426229508197 4 4 4
 
 # Range query with increase.
-eval range from 0 to 4m step 1m increase(some_metric[1m1s])
+eval range from 0 to 4m step 1m increase(some_metric_count[1m1s])
   {env="prod", cluster="eu"} _ 60 61 61 61
   {env="prod", cluster="us"} _ 120 122 122 122
   {env="test", cluster="eu"} _ 180 183 183 183
@@ -40,102 +40,102 @@ eval range from 0 to 4m step 1m increase(some_nonexistent_metric[1m])
 #
 # The first query below (with 1m) tests that we correctly skip evaluating rate() when there aren't enough points in the range.
 # The second query below (with 2m) tests that we correctly pick the last point from the buffer if the last point in the buffer is outside the range.
-eval range from 0 to 8m step 1m rate(some_metric_with_gaps[1m1s])
+eval range from 0 to 8m step 1m rate(some_metric_with_gaps_total[1m1s])
   {} _ 0.9836065573770493 1 1 1 _ _ 2 2
 
-eval range from 0 to 8m step 1m increase(some_metric_with_gaps[1m1s])
+eval range from 0 to 8m step 1m increase(some_metric_with_gaps_total[1m1s])
   {} _ 60 61 61 61 _ _ 122 122
 
-eval range from 0 to 8m step 1m rate(some_metric_with_gaps[2m1s])
+eval range from 0 to 8m step 1m rate(some_metric_with_gaps_total[2m1s])
   {} _ 0.49586776859504134 0.9917355371900827 1 1 1 14.666666666666666 2 2
 
-eval range from 0 to 8m step 1m increase(some_metric_with_gaps[2m1s])
+eval range from 0 to 8m step 1m increase(some_metric_with_gaps_total[2m1s])
   {} _ 60 120 121 121 121 1774.6666666666665 242 242
 
 # Test that we handle staleness markers correctly.
-eval range from 0 to 5m step 1m rate(some_metric_with_stale_marker[2m1s])
+eval range from 0 to 5m step 1m rate(some_metric_with_stale_marker_sum[2m1s])
   {} _ 0.49586776859504134 0.9917355371900827 1 1 1
 
-eval range from 0 to 5m step 1m increase(some_metric_with_stale_marker[2m1s])
+eval range from 0 to 5m step 1m increase(some_metric_with_stale_marker_sum[2m1s])
   {} _ 60 120 121 121 121
 
 clear
 
 # Test simple functions not covered by the upstream tests
 load 1m
-  some_metric{env="prod"} 0 0.5 -0.5 NaN -NaN 2.1 -2.1
+  some_metric_count{env="prod"} 0 0.5 -0.5 NaN -NaN 2.1 -2.1
 
-eval range from 0 to 4m step 1m abs(some_metric)
+eval range from 0 to 4m step 1m abs(some_metric_count)
   {env="prod"} 0 0.5 0.5 NaN NaN
 
-eval range from 0 to 4m step 1m acos(some_metric)
+eval range from 0 to 4m step 1m acos(some_metric_count)
   {env="prod"} 1.5707963267948966 1.0471975511965976 2.0943951023931957 NaN NaN
 
-eval range from 0 to 4m step 1m asin(some_metric)
+eval range from 0 to 4m step 1m asin(some_metric_count)
   {env="prod"} 0 0.5235987755982989 -0.5235987755982989 NaN NaN
 
-eval range from 0 to 4m step 1m atanh(some_metric)
+eval range from 0 to 4m step 1m atanh(some_metric_count)
   {env="prod"} 0 0.5493061443340548 -0.5493061443340548 NaN NaN
 
-eval range from 0 to 6m step 1m ceil(some_metric)
+eval range from 0 to 6m step 1m ceil(some_metric_count)
   {env="prod"} 0 1 -0 NaN -NaN 3 -2
 
-eval range from 0 to 6m step 1m floor(some_metric)
+eval range from 0 to 6m step 1m floor(some_metric_count)
   {env="prod"} 0 0 -1 NaN -NaN 2 -3
 
 clear
 
 load 1m
-  some_metric{foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
+  some_metric_count{foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
   some_nhcb_metric{baz="bar"} {{schema:-53 sum:1 count:5 custom_values:[5 10] buckets:[1 4]}} {{schema:-53 sum:15 count:2 custom_values:[5 10] buckets:[0 2]}} {{schema:-53 sum:3 count:15 custom_values:[5 10] buckets:[7 8]}}
   some_inf_and_nan_metric{foo="baz"} 0 1 2 3 Inf Inf Inf NaN NaN NaN NaN 8 7 6
 
-eval range from 0 to 7m step 1m count_over_time(some_metric[3m1s])
+eval range from 0 to 7m step 1m count_over_time(some_metric_count[3m1s])
   {foo="bar"} 1 2 3 4 3 2 2 2
 
-eval range from 0 to 7m step 1m count_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m count_over_time(some_metric_count[6s])
   {foo="bar"} 1 1 1 1 _ _ 1 1
 
-eval range from 0 to 7m step 1m last_over_time(some_metric[3m1s])
-  some_metric{foo="bar"} 0 1 2 3 3 3 {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
+eval range from 0 to 7m step 1m last_over_time(some_metric_count[3m1s])
+  some_metric_count{foo="bar"} 0 1 2 3 3 3 {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 7m step 1m last_over_time(some_metric[6s])
-  some_metric{foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
+eval range from 0 to 7m step 1m last_over_time(some_metric_count[6s])
+  some_metric_count{foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 7m step 1m present_over_time(some_metric[3m1s])
+eval range from 0 to 7m step 1m present_over_time(some_metric_count[3m1s])
   {foo="bar"} 1 1 1 1 1 1 1 1
 
-eval range from 0 to 7m step 1m present_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m present_over_time(some_metric_count[6s])
   {foo="bar"} 1 1 1 1 _ _ 1 1
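# (Annotation, not part of the patch: the fixtures in this file gained
# counter-style suffixes such as _count, _total and _sum because Prometheus 3
# emits an info annotation when rate() or increase() is applied to a series whose
# name does not look like a counter, and plain `eval` directives fail on
# unexpected annotations. Compare:
#
#   rate(some_metric[1m1s])         # would now carry an info annotation
#   rate(some_metric_count[1m1s])   # counter-style name, no annotation
# )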
-eval range from 0 to 7m step 1m min_over_time(some_metric[3m1s])
+eval range from 0 to 7m step 1m min_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 0 0 0 1 2 3 _
 
-eval range from 0 to 7m step 1m min_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m min_over_time(some_metric_count[6s])
   {foo="bar"} 0 1 2 3 _ _ _ _
 
 eval range from 0 to 16m step 1m min_over_time(some_inf_and_nan_metric[3m1s])
   {foo="baz"} 0 0 0 0 1 2 3 Inf Inf Inf NaN 8 7 6 6 6 6
 
-eval range from 0 to 7m step 1m max_over_time(some_metric[3m1s])
+eval range from 0 to 7m step 1m max_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 1 2 3 3 3 3 _
 
-eval range from 0 to 7m step 1m max_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m max_over_time(some_metric_count[6s])
   {foo="bar"} 0 1 2 3 _ _ _ _
 
 eval range from 0 to 16m step 1m max_over_time(some_inf_and_nan_metric[3m1s])
   {foo="baz"} 0 1 2 3 Inf Inf Inf Inf Inf Inf NaN 8 8 8 8 7 6
 
-eval_warn range from 0 to 10m step 1m sum_over_time(some_metric[3m1s])
+eval_warn range from 0 to 10m step 1m sum_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 1 3 6 6 5 _ {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 5m step 1m sum_over_time(some_metric[3m1s])
+eval range from 0 to 5m step 1m sum_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 1 3 6 6 5
 
-eval range from 7m to 10m step 1m sum_over_time(some_metric[3m1s])
+eval range from 7m to 10m step 1m sum_over_time(some_metric_count[3m1s])
   {foo="bar"} {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:9 count:7 buckets:[3 7 5]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 7m step 1m sum_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m sum_over_time(some_metric_count[6s])
   {foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
 eval range from 0 to 2m step 1m sum_over_time(some_nhcb_metric[3m1s])
@@ -144,16 +144,16 @@ eval range from 0 to 2m step 1m sum_over_time(some_nhcb_metric[3m1s])
 
 eval range from 0 to 16m step 1m sum_over_time(some_inf_and_nan_metric[3m1s])
   {foo="baz"} 0 1 3 6 Inf Inf Inf NaN NaN NaN NaN NaN NaN NaN 21 13 6
 
-eval_warn range from 0 to 10m step 1m avg_over_time(some_metric[3m1s])
+eval_warn range from 0 to 10m step 1m avg_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 0.5 1 1.5 2 2.5 _ {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 5m step 1m avg_over_time(some_metric[3m1s])
+eval range from 0 to 5m step 1m avg_over_time(some_metric_count[3m1s])
   {foo="bar"} 0 0.5 1 1.5 2 2.5
 
-eval range from 7m to 10m step 1m avg_over_time(some_metric[3m1s])
+eval range from 7m to 10m step 1m avg_over_time(some_metric_count[3m1s])
   {foo="bar"} {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:4.5 count:3.5 buckets:[1.5 3.5 2.5]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
-eval range from 0 to 7m step 1m avg_over_time(some_metric[6s])
+eval range from 0 to 7m step 1m avg_over_time(some_metric_count[6s])
   {foo="bar"} 0 1 2 3 _ _ {{schema:3 sum:4 count:4 buckets:[1 2 1]}} {{schema:3 sum:5 count:3 buckets:[2 5 4]}}
 
 eval range from 0 to 2m step 1m avg_over_time(some_nhcb_metric[3m1s])
@@ -511,8 +511,8 @@ eval range from 0 to 20m step 1m resets(metric[3m1s])
   {case="all NaN, no resets"} 0 0 0 0 0 0 0 0 0 0 0
   {case="all Inf, no resets"} 0 0 0 0 0 0 0 0 0 0 0
   {case="all same histogram, no resets"} 0 0 0 0 0
-  {case="floats and nh, some resets"} 0 1 1 1 0 0 0 1 1 1 0
-  {case="floats, nh and nhcb, some resets"} 0 0 0 0 1 2 3 2 1 1 2 3 3 2 2 1 1 0
+  {case="floats and nh, some resets"} 0 1 1 1 0 0 1 1 1 1 0
+  {case="floats, nh and nhcb, some resets"} 0 0 0 0 1 2 3 2 2 2 2 3 3 2 2 1 1 0
   {case="mixed float, NaN and Inf, some resets"} 0 1 1 1 0 0 0 0 0 1 2 3 2 2 1 1 0
   {case="nh, bucket decreased, some resets"} 0 1 2 2 1 0
   {case="nh, count decreased, some resets"} 0 1 1 1 0
@@ -558,12 +558,12 @@ eval range from 0 to 20m step 1m deriv(metric[3m1s])
 
 clear
 
 load 1m
-  some_metric{env="prod", cluster="eu"} _ _ _ 0+1x4
-  some_metric{env="prod", cluster="us"} _ _ _ 0+2x4
-  some_metric{env="prod", cluster="au"} _ _ _ {{count:5}}+{{count:5}}x4
+  some_metric_count{env="prod", cluster="eu"} _ _ _ 0+1x4
+  some_metric_count{env="prod", cluster="us"} _ _ _ 0+2x4
+  some_metric_count{env="prod", cluster="au"} _ _ _ {{count:5}}+{{count:5}}x4
 
 # Function over range vector with many steps at beginning of range with no samples.
-eval range from 0 to 7m step 1m last_over_time(some_metric[3m])
-  some_metric{env="prod", cluster="eu"} _ _ _ 0 1 2 3 4
-  some_metric{env="prod", cluster="us"} _ _ _ 0 2 4 6 8
-  some_metric{env="prod", cluster="au"} _ _ _ {{count:5}} {{count:10}} {{count:15}} {{count:20}} {{count:25}}
+eval range from 0 to 7m step 1m last_over_time(some_metric_count[3m])
+  some_metric_count{env="prod", cluster="eu"} _ _ _ 0 1 2 3 4
+  some_metric_count{env="prod", cluster="us"} _ _ _ 0 2 4 6 8
+  some_metric_count{env="prod", cluster="au"} _ _ _ {{count:5}} {{count:10}} {{count:15}} {{count:20}} {{count:25}}
diff --git a/pkg/streamingpromql/testdata/ours/histograms.test b/pkg/streamingpromql/testdata/ours/histograms.test
index 33d89bc8bc9..788f8245e28 100644
--- a/pkg/streamingpromql/testdata/ours/histograms.test
+++ b/pkg/streamingpromql/testdata/ours/histograms.test
@@ -8,7 +8,7 @@ load 6m
   series{le="+Inf"} 8
   series{le="2000"} {{schema:0 sum:5 count:4 buckets:[1 2 1]}}
 
-eval instant at 0m histogram_quantile(0.8, series)
+eval_info instant at 0m histogram_quantile(0.8, series)
   {} 595
   {le="2000"} 2.29739670999407
 
@@ -80,16 +80,16 @@ clear
 
 # Test various mixed metric scenarios
 load 6m
   series{host="a", le="0.1"} 2 _ 1 {{schema:0 sum:5 count:4 buckets:[2 2 2]}}
-  series{host="a", le="1"} 1 _ 2 {{schema:0 sum:5 count:4 buckets:[1 5 1]}}
+  series{host="a", le="1"} 3 _ 2 {{schema:0 sum:5 count:4 buckets:[1 5 1]}}
   series{host="a", le="10"} 5 _ 3 _ {{schema:0 sum:5 count:4 buckets:[5 2 5]}}
-  series{host="a", le="100"} 4 _ 9 _ {{schema:0 sum:1 count:3 buckets:[6 6 2]}}
-  series{host="a", le="1000"} 9 _ 5
-  series{host="a", le="+Inf"} 8 _ 6
+  series{host="a", le="100"} 6 _ 4 _ {{schema:0 sum:1 count:3 buckets:[6 6 2]}}
+  series{host="a", le="1000"} 8 _ 5
+  series{host="a", le="+Inf"} 9 _ 6
   series{host="a"} {{schema:0 sum:5 count:4 buckets:[9 2 1]}} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} _ _ _
   series{host="b"} 1 {{schema:0 sum:5 count:4 buckets:[0 3 1]}} {{schema:0 sum:5 count:4 buckets:[3 3 1]}} _
 
 eval_warn range from 0m to 24m step 6m histogram_quantile(0.8, series)
-  {host="a"} _ 2.29739670999407 73
+  {host="a"} _ 2.29739670999407 820.0000000000007
   {host="a", le="0.1"} _ _ _ 3.0314331330207964
   {host="a", le="1"} _ _ _ 2.29739670999407
   {host="a", le="10"} _ _ _ _ 3.1166583186419996
@@ -97,7 +97,7 @@ eval_warn range from 0m to 24m step 6m histogram_quantile(0.8, series)
   {host="b"} _ 2.29739670999407 2.29739670999407
 
 eval_warn range from 0m to 12m step 6m histogram_quantile(0.8, series{host="a"})
-  {host="a"} _ 2.29739670999407 73
+  {host="a"} _ 2.29739670999407 820.0000000000007
 
 eval_warn range from 0m to 24m step 6m histogram_quantile(0.8, series{host="b"})
   {host="b"} _ 2.29739670999407 2.29739670999407
@@ -118,7 +118,7 @@ load 6m
   notEnoughObservations{le="1"} 0 1
   notEnoughObservations{le="+Inf"} 0 2
 
-eval range from 0m to 6m step 6m histogram_quantile(0.8, series)
+eval_info range from 0m to 6m step 6m histogram_quantile(0.8, series)
   {} 4.800000000000001 NaN
 
 eval range from 0m to 6m step 6m histogram_quantile(0.8, noInfinity)
diff --git a/pkg/streamingpromql/testdata/ours/native_histograms.test b/pkg/streamingpromql/testdata/ours/native_histograms.test
index 8282fbc2e04..cd80e98449c 100644
--- a/pkg/streamingpromql/testdata/ours/native_histograms.test
+++ b/pkg/streamingpromql/testdata/ours/native_histograms.test
@@ -14,18 +14,24 @@ eval range from 0 to 5m step 1m single_histogram
   {__name__="single_histogram"} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:20 count:7 buckets:[9 10 1]}}
 
 # histogram_avg is sum / count.
-eval range from 0 to 5m step 1m histogram_avg(single_histogram) 
+eval range from 0 to 5m step 1m histogram_avg(single_histogram)
   {} 1.25 1.25 1.25 1.25 1.25 2.857142857142857
 
 # histogram_count extracts the count property from the histogram.
-eval range from 0 to 5m step 1m histogram_count(single_histogram) 
+eval range from 0 to 5m step 1m histogram_count(single_histogram)
   {} 4 4 4 4 4 7
 
-eval range from 0 to 5m step 1m histogram_fraction(0, 2, single_histogram) 
+eval range from 0 to 5m step 1m histogram_fraction(0, 2, single_histogram)
   {} 0.75 0.75 0.75 0.75 0.75 1
 
+eval range from 0 to 5m step 1m histogram_stddev(single_histogram)
+  {} 0.842629429717281 0.842629429717281 0.842629429717281 0.842629429717281 0.842629429717281 2.986282214238901
+
+eval range from 0 to 5m step 1m histogram_stdvar(single_histogram)
+  {} 0.7100243558256704 0.7100243558256704 0.7100243558256704 0.7100243558256704 0.7100243558256704 8.917881463079594
+
 # histogram_sum extracts the sum property from the histogram.
-eval range from 0 to 5m step 1m histogram_sum(single_histogram) 
+eval range from 0 to 5m step 1m histogram_sum(single_histogram)
   {} 5 5 5 5 5 20
 
 clear
@@ -46,6 +52,12 @@ eval instant at 3m histogram_count(mixed_metric)
 eval instant at 3m histogram_fraction(0, 1, mixed_metric)
   {} 0.25
 
+eval instant at 4m histogram_stddev(mixed_metric)
+  {} 0.6650352854715079
+
+eval instant at 4m histogram_stdvar(mixed_metric)
+  {} 0.44227193092217004
+
 eval instant at 4m histogram_sum(mixed_metric)
   {} 8
 
@@ -64,6 +76,12 @@ eval instant at 2m histogram_count(mixed_metric)
 # histogram_fraction ignores any float values
 eval instant at 2m histogram_fraction(0, 1, mixed_metric)
 
+# histogram_stddev ignores any float values
+eval instant at 2m histogram_stddev(mixed_metric)
+
+# histogram_stdvar ignores any float values
+eval instant at 2m histogram_stdvar(mixed_metric)
+
 # histogram_sum ignores any float values
 eval instant at 2m histogram_sum(mixed_metric)
 
@@ -95,6 +113,16 @@ eval instant at 0 histogram_fraction(-10, 20, route)
   {path="two"} 1
   {path="three"} 1
 
+eval instant at 0 histogram_stddev(route)
+  {path="one"} 0.842629429717281
+  {path="two"} 0.8415900492770793
+  {path="three"} 1.1865698706402301
+
+eval instant at 0 histogram_stdvar(route)
+  {path="one"} 0.7100243558256704
+  {path="two"} 0.7082738110421968
+  {path="three"} 1.4079480579111723
+
 eval instant at 0 histogram_sum(route)
   {path="one"} 5
   {path="two"} 10
@@ -132,6 +160,12 @@ eval range from 0 to 8m step 1m histogram_count(mixed_metric)
 eval range from 0 to 8m step 1m histogram_fraction(0, 1, mixed_metric)
   {} _ _ _ _ _ _ _ 0.3 0.3
 
+eval range from 0 to 8m step 1m histogram_stddev(mixed_metric)
+{} _ _ _ _ _ _ _ 0.8574122997574659 0.8574122997574659
+
+eval range from 0 to 8m step 1m histogram_stdvar(mixed_metric)
+{} _ _ _ _ _ _ _ 0.7351558517753866 0.7351558517753866
+
 eval range from 0 to 8m step 1m histogram_sum(mixed_metric)
   {} _ _ _ _ _ _ _ 18 18
 
@@ -164,16 +198,16 @@ clear
 
 # Test mixed metrics and range query
 load 1m
-  incr_histogram 1 2 {{schema:3 sum:4 count:4 buckets:[1 2 1]}}+{{schema:5 sum:2 count:1 buckets:[1] offset:1}}x3
+  incr_histogram_sum 1 2 {{schema:3 sum:4 count:4 buckets:[1 2 1]}}+{{schema:5 sum:2 count:1 buckets:[1] offset:1}}x3
 
 # - The first value isn't enough to calculate a rate/increase
 # - The second value is a rate/increase from two floats
 # - The third value is a rate/increase across a float and histogram (so no value returned)
 # - The remaining values contain the rate/increase across two histograms in the vector
-eval_warn range from 0 to 4m step 1m rate(incr_histogram[1m1s])
+eval_warn range from 0 to 4m step 1m rate(incr_histogram_sum[1m1s])
   {} _ 0.016666666666666666 _ {{schema:3 count:0.016666666666666666 sum:0.03333333333333333 offset:1 buckets:[0.016666666666666666]}} {{schema:3 count:0.016666666666666666 sum:0.03333333333333333 offset:1 buckets:[0.016666666666666666]}}
 
-eval_warn range from 0 to 4m step 1m increase(incr_histogram[1m1s])
+eval_warn range from 0 to 4m step 1m increase(incr_histogram_sum[1m1s])
   {} _ 1.0166666666666666 _ {{schema:3 count:1.0166666666666666 sum:2.033333333333333 offset:1 buckets:[1.0166666666666666]}} {{schema:3 count:1.0166666666666666 sum:2.033333333333333 offset:1 buckets:[1.0166666666666666]}}
 
 clear
diff --git a/pkg/streamingpromql/testdata/upstream/aggregators.test b/pkg/streamingpromql/testdata/upstream/aggregators.test
index 39434a9797e..6f23ab6ab13 100644
--- a/pkg/streamingpromql/testdata/upstream/aggregators.test
+++ b/pkg/streamingpromql/testdata/upstream/aggregators.test
@@ -277,6 +277,8 @@ load 5m
   http_requests{job="app-server", instance="1", group="production"} 0+60x10
   http_requests{job="app-server", instance="0", group="canary"} 0+70x10
   http_requests{job="app-server", instance="1", group="canary"} 0+80x10
+  http_requests_histogram{job="app-server", instance="2", group="canary"} {{schema:0 sum:10 count:10}}x11
+  http_requests_histogram{job="api-server", instance="3", group="production"} {{schema:0 sum:20 count:20}}x11
   foo 3+0x10
 
 # Unsupported by streaming engine.
@@ -356,6 +358,59 @@ load 5m
#   http_requests{group="canary", instance="0", job="app-server"} 700
#   http_requests{group="production", instance="1", job="app-server"} 600
 
+# Tests for histogram: should ignore histograms.
+# Unsupported by streaming engine.
+# eval_info instant at 50m topk(100, http_requests_histogram)
+#   #empty
+
+# Unsupported by streaming engine.
+# eval_info range from 0 to 50m step 5m topk(100, http_requests_histogram)
+#   #empty
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m topk(1, {__name__=~"http_requests(_histogram)?"})
+#   {__name__="http_requests", group="canary", instance="1", job="app-server"} 800
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m count(topk(1000, {__name__=~"http_requests(_histogram)?"}))
+#   {} 9
+
+# Unsupported by streaming engine.
+# eval_info range from 0 to 50m step 5m count(topk(1000, {__name__=~"http_requests(_histogram)?"}))
+#   {} 9x10
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m topk by (instance) (1, {__name__=~"http_requests(_histogram)?"})
+#   {__name__="http_requests", group="canary", instance="0", job="app-server"} 700
+#   {__name__="http_requests", group="canary", instance="1", job="app-server"} 800
+#   {__name__="http_requests", group="production", instance="2", job="api-server"} NaN
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m bottomk(100, http_requests_histogram)
+#   #empty
+
+# Unsupported by streaming engine.
+# eval_info range from 0 to 50m step 5m bottomk(100, http_requests_histogram)
+#   #empty
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m bottomk(1, {__name__=~"http_requests(_histogram)?"})
+#   {__name__="http_requests", group="production", instance="0", job="api-server"} 100
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m count(bottomk(1000, {__name__=~"http_requests(_histogram)?"}))
+#   {} 9
+
+# Unsupported by streaming engine.
+# eval_info range from 0 to 50m step 5m count(bottomk(1000, {__name__=~"http_requests(_histogram)?"}))
+#   {} 9x10
+
+# Unsupported by streaming engine.
+# eval_info instant at 50m bottomk by (instance) (1, {__name__=~"http_requests(_histogram)?"})
+#   {__name__="http_requests", group="production", instance="0", job="api-server"} 100
+#   {__name__="http_requests", group="production", instance="1", job="api-server"} 200
+#   {__name__="http_requests", group="production", instance="2", job="api-server"} NaN
+
 clear
 
 # Tests for count_values.
@@ -369,20 +424,22 @@ load 5m
   version{job="app-server", instance="1", group="production"} 6
   version{job="app-server", instance="0", group="canary"} 7
   version{job="app-server", instance="1", group="canary"} 7
+  version{job="app-server", instance="2", group="canary"} {{schema:0 sum:10 count:20 z_bucket_w:0.001 z_bucket:2 buckets:[1 2] n_buckets:[1 2]}}
+  version{job="app-server", instance="3", group="canary"} {{schema:0 sum:10 count:20 z_bucket_w:0.001 z_bucket:2 buckets:[1 2] n_buckets:[1 2]}}
 
 # Unsupported by streaming engine.
# eval instant at 1m count_values("version", version)
#   {version="6"} 5
#   {version="7"} 2
#   {version="8"} 2
-
+#   {version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2
 
 # Unsupported by streaming engine.
 # eval instant at 1m count_values(((("version"))), version)
-#   {version="6"} 5
-#   {version="7"} 2
-#   {version="8"} 2
-
+#   {version="6"} 5
+#   {version="7"} 2
+#   {version="8"} 2
+#   {version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2
 
 # Unsupported by streaming engine.
 # eval instant at 1m count_values without (instance)("version", version)
@@ -390,6 +447,7 @@ load 5m
#   {job="api-server", group="canary", version="8"} 2
#   {job="app-server", group="production", version="6"} 2
#   {job="app-server", group="canary", version="7"} 2
+#   {job="app-server", group="canary", version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2
 
 # Overwrite label with output. Don't do this.
 # Unsupported by streaming engine.
@@ -397,6 +455,7 @@ load 5m
#   {job="6", group="production"} 5
#   {job="8", group="canary"} 2
#   {job="7", group="canary"} 2
+#   {job="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}", group="canary"} 2
 
 # Overwrite label with output. Don't do this.
 # Unsupported by streaming engine.
@@ -404,7 +463,7 @@ load 5m
#   {job="6", group="production"} 5
#   {job="8", group="canary"} 2
#   {job="7", group="canary"} 2
-
+#   {job="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}", group="canary"} 2
 
 # Tests for quantile.
 clear
@@ -470,12 +529,14 @@ load 10s
   data{test="uneven samples",point="a"} 0
   data{test="uneven samples",point="b"} 1
   data{test="uneven samples",point="c"} 4
+  data{test="histogram sample",point="c"} {{schema:0 sum:0 count:0}}
   foo .8
 
 eval instant at 1m group without(point)(data)
   {test="two samples"} 1
   {test="three samples"} 1
   {test="uneven samples"} 1
+  {test="histogram sample"} 1
 
 eval instant at 1m group(foo)
   {} 1
@@ -523,6 +584,7 @@ load 10s
   data{test="bigzero",point="b"} -9.988465674311579e+307
   data{test="bigzero",point="c"} 9.988465674311579e+307
   data{test="bigzero",point="d"} 9.988465674311579e+307
+  data{test="value is nan"} NaN
 
 eval instant at 1m avg(data{test="ten"})
   {} 10
@@ -557,6 +619,10 @@ eval instant at 1m avg(data{test="-big"})
 eval instant at 1m avg(data{test="bigzero"})
   {} 0
 
+# If NaN is in the mix, the result is NaN.
+eval instant at 1m avg(data)
+  {} NaN
+
 # Test summing and averaging extreme values.
 clear
@@ -647,11 +713,11 @@ eval_info instant at 0m stddev({label="c"})
 
 eval_info instant at 0m stdvar({label="c"})
 
-eval instant at 0m stddev by (label) (series)
+eval_info instant at 0m stddev by (label) (series)
   {label="a"} 0
   {label="b"} 0
 
-eval instant at 0m stdvar by (label) (series)
+eval_info instant at 0m stdvar by (label) (series)
   {label="a"} 0
   {label="b"} 0
 
@@ -662,17 +728,17 @@ load 5m
   series{label="b"} 1
   series{label="c"} 2
 
-eval instant at 0m stddev(series)
+eval_info instant at 0m stddev(series)
   {} 0.5
 
-eval instant at 0m stdvar(series)
+eval_info instant at 0m stdvar(series)
   {} 0.25
 
-eval instant at 0m stddev by (label) (series)
+eval_info instant at 0m stddev by (label) (series)
   {label="b"} 0
   {label="c"} 0
 
-eval instant at 0m stdvar by (label) (series)
+eval_info instant at 0m stdvar by (label) (series)
   {label="b"} 0
   {label="c"} 0
diff --git a/pkg/streamingpromql/testdata/upstream/at_modifier.test b/pkg/streamingpromql/testdata/upstream/at_modifier.test
index da939526531..e2d24bb1fbe 100644
--- a/pkg/streamingpromql/testdata/upstream/at_modifier.test
+++ b/pkg/streamingpromql/testdata/upstream/at_modifier.test
@@ -95,7 +95,8 @@ eval instant at 25s sum_over_time(metric{job="1"}[100] offset 50s @ 100)
 eval instant at 25s metric{job="1"} @ 50 + metric{job="1"} @ 100
   {job="1"} 15
 
-eval instant at 25s rate(metric{job="1"}[100s] @ 100) + label_replace(rate(metric{job="2"}[123s] @ 200), "job", "1", "", "")
+# Note that this triggers an info annotation because we are rate'ing a metric that does not end in `_total`.
+eval_info instant at 25s rate(metric{job="1"}[100s] @ 100) + label_replace(rate(metric{job="2"}[123s] @ 200), "job", "1", "", "")
   {job="1"} 0.3
 
 eval instant at 25s sum_over_time(metric{job="1"}[100s] @ 100) + label_replace(sum_over_time(metric{job="2"}[100s] @ 100), "job", "1", "", "")
diff --git a/pkg/streamingpromql/testdata/upstream/functions.test b/pkg/streamingpromql/testdata/upstream/functions.test
index 38cff561a73..bd070c6fb01 100644
--- a/pkg/streamingpromql/testdata/upstream/functions.test
+++ b/pkg/streamingpromql/testdata/upstream/functions.test
@@ -7,7 +7,10 @@ load 5m
   http_requests{path="/foo"} 1 2 3 0 1 0 0 1 2 0
   http_requests{path="/bar"} 1 2 3 4 5 1 2 3 4 5
-  http_requests{path="/biz"} 0 0 0 0 0 1 1 1 1 1 
+  http_requests{path="/biz"} 0 0 0 0 0 1 1 1 1 1
+  http_requests_histogram{path="/foo"} {{schema:0 sum:1 count:1}}x9
+  http_requests_histogram{path="/bar"} 0 0 0 0 0 0 0 0 {{schema:0 sum:1 count:1}} {{schema:0 sum:1 count:1}}
+  http_requests_histogram{path="/biz"} 0 1 0 2 0 3 0 {{schema:0 sum:1 count:1 z_bucket_w:0.001 z_bucket:2}} {{schema:0 sum:2 count:2 z_bucket_w:0.001 z_bucket:1}} {{schema:0 sum:1 count:1 z_bucket_w:0.001 z_bucket:2}}
 
 # Tests for resets().
 eval instant at 50m resets(http_requests[5m])
@@ -44,6 +47,18 @@ eval instant at 50m resets(http_requests[50m])
 
 eval instant at 50m resets(nonexistent_metric[50m])
 
+# Test for mix of floats and histograms.
+
+eval instant at 50m resets(http_requests_histogram[6m])
+  {path="/foo"} 0
+  {path="/bar"} 0
+  {path="/biz"} 0
+
+eval instant at 50m resets(http_requests_histogram[60m])
+  {path="/foo"} 0
+  {path="/bar"} 1
+  {path="/biz"} 6
+
 # Tests for changes().
 eval instant at 50m changes(http_requests[5m])
@@ -74,6 +89,21 @@ eval instant at 50m changes((http_requests[50m]))
 
 eval instant at 50m changes(nonexistent_metric[50m])
 
+# Test for mix of floats and histograms.
+# Because of bug #14172 we are not able to test more complex cases like below:
+# 0 1 2 {{schema:0 sum:1 count:1}} 3 {{schema:0 sum:2 count:2}} 4 {{schema:0 sum:3 count:3}}.
+eval instant at 50m changes(http_requests_histogram[5m])
+
+eval instant at 50m changes(http_requests_histogram[6m])
+  {path="/foo"} 0
+  {path="/bar"} 0
+  {path="/biz"} 0
+
+eval instant at 50m changes(http_requests_histogram[60m])
+  {path="/foo"} 0
+  {path="/bar"} 1
+  {path="/biz"} 9
+
 clear
 
 load 5m
@@ -88,13 +118,13 @@ clear
 
 # Tests for increase().
 load 5m
-  http_requests{path="/foo"} 0+10x10
-  http_requests{path="/bar"} 0+18x5 0+18x5
-  http_requests{path="/dings"} 10+10x10
-  http_requests{path="/bumms"} 1+10x10
+  http_requests_total{path="/foo"} 0+10x10
+  http_requests_total{path="/bar"} 0+18x5 0+18x5
+  http_requests_total{path="/dings"} 10+10x10
+  http_requests_total{path="/bumms"} 1+10x10
 
 # Tests for increase().
-eval instant at 50m increase(http_requests[50m])
+eval instant at 50m increase(http_requests_total[50m])
   {path="/foo"} 100
   {path="/bar"} 160
   {path="/dings"} 100
@@ -107,7 +137,7 @@ eval instant at 50m increase(http_requests[50m])
 # chosen. However, "bumms" has value 1 at t=0 and would reach 0 at
 # t=-30s. Here the extrapolation to t=-2m30s would reach a negative
 # value, and therefore the extrapolation happens only by 30s.
-eval instant at 50m increase(http_requests[100m])
+eval instant at 50m increase(http_requests_total[100m])
   {path="/foo"} 100
   {path="/bar"} 162
   {path="/dings"} 105
@@ -120,57 +150,57 @@ clear
 # So the sequence 3 2 (decreasing counter = reset) is interpreted the same as 3 0 1 2.
 # Prometheus assumes it missed the intermediate values 0 and 1.
 load 5m
-  http_requests{path="/foo"} 0 1 2 3 2 3 4
+  http_requests_total{path="/foo"} 0 1 2 3 2 3 4
 
-eval instant at 30m increase(http_requests[30m])
+eval instant at 30m increase(http_requests_total[30m])
   {path="/foo"} 7
 
 clear
 
 # Tests for rate().
 load 5m
-  testcounter_reset_middle 0+27x4 0+27x5
-  testcounter_reset_end 0+10x9 0 10
+  testcounter_reset_middle_total 0+27x4 0+27x5
+  testcounter_reset_end_total 0+10x9 0 10
 
 # Counter resets in the middle of the range are handled correctly by rate().
-eval instant at 50m rate(testcounter_reset_middle[50m])
+eval instant at 50m rate(testcounter_reset_middle_total[50m])
   {} 0.08
 
 # Counter resets at end of range are ignored by rate().
-eval instant at 50m rate(testcounter_reset_end[5m])
+eval instant at 50m rate(testcounter_reset_end_total[5m])
 
-eval instant at 50m rate(testcounter_reset_end[6m])
+eval instant at 50m rate(testcounter_reset_end_total[6m])
   {} 0
 
 clear
 
 load 5m
-  calculate_rate_offset{x="a"} 0+10x10
-  calculate_rate_offset{x="b"} 0+20x10
-  calculate_rate_window 0+80x10
+  calculate_rate_offset_total{x="a"} 0+10x10
+  calculate_rate_offset_total{x="b"} 0+20x10
+  calculate_rate_window_total 0+80x10
 
 # Rates should calculate per-second rates.
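# (Annotation, not part of the patch: calculate_rate_window_total rises by 80 per
# 5m sample interval, so the per-second rate over the 50m window below is
# 80 / 300s = 0.26666666666666666, matching the expected value.)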
-eval instant at 50m rate(calculate_rate_window[50m])
+eval instant at 50m rate(calculate_rate_window_total[50m])
   {} 0.26666666666666666
 
-eval instant at 50m rate(calculate_rate_offset[10m] offset 5m)
+eval instant at 50m rate(calculate_rate_offset_total[10m] offset 5m)
   {x="a"} 0.03333333333333333
   {x="b"} 0.06666666666666667
 
 clear
 
 load 4m
-  testcounter_zero_cutoff{start="0m"} 0+240x10
-  testcounter_zero_cutoff{start="1m"} 60+240x10
-  testcounter_zero_cutoff{start="2m"} 120+240x10
-  testcounter_zero_cutoff{start="3m"} 180+240x10
-  testcounter_zero_cutoff{start="4m"} 240+240x10
-  testcounter_zero_cutoff{start="5m"} 300+240x10
+  testcounter_zero_cutoff_total{start="0m"} 0+240x10
+  testcounter_zero_cutoff_total{start="1m"} 60+240x10
+  testcounter_zero_cutoff_total{start="2m"} 120+240x10
+  testcounter_zero_cutoff_total{start="3m"} 180+240x10
+  testcounter_zero_cutoff_total{start="4m"} 240+240x10
+  testcounter_zero_cutoff_total{start="5m"} 300+240x10
 
 # Zero cutoff for left-side extrapolation happens until we
 # reach half a sampling interval (2m). Beyond that, we only
 # extrapolate by half a sampling interval.
-eval instant at 10m rate(testcounter_zero_cutoff[20m])
+eval instant at 10m rate(testcounter_zero_cutoff_total[20m])
   {start="0m"} 0.5
   {start="1m"} 0.55
   {start="2m"} 0.6
@@ -179,7 +209,7 @@ eval instant at 10m rate(testcounter_zero_cutoff[20m])
   {start="5m"} 0.6
 
 # Normal half-interval cutoff for left-side extrapolation.
-eval instant at 50m rate(testcounter_zero_cutoff[20m])
+eval instant at 50m rate(testcounter_zero_cutoff_total[20m])
   {start="0m"} 0.6
   {start="1m"} 0.6
   {start="2m"} 0.6
@@ -191,17 +221,17 @@ clear
 
 # Tests for irate().
 load 5m
-  http_requests{path="/foo"} 0+10x10
-  http_requests{path="/bar"} 0+10x5 0+10x5
+  http_requests_total{path="/foo"} 0+10x10
+  http_requests_total{path="/bar"} 0+10x5 0+10x5
 
 # Unsupported by streaming engine.
-# eval instant at 50m irate(http_requests[50m])
+# eval instant at 50m irate(http_requests_total[50m])
#   {path="/foo"} .03333333333333333333
#   {path="/bar"} .03333333333333333333
 
 # Counter reset.
 # Unsupported by streaming engine.
-# eval instant at 30m irate(http_requests[50m])
+# eval instant at 30m irate(http_requests_total[50m])
#   {path="/foo"} .03333333333333333333
#   {path="/bar"} 0
 
@@ -233,18 +263,18 @@ clear
 
 # Tests for deriv() and predict_linear().
 load 5m
-  testcounter_reset_middle 0+10x4 0+10x5
-  http_requests{job="app-server", instance="1", group="canary"} 0+80x10
+  testcounter_reset_middle_total 0+10x4 0+10x5
+  http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10
 
 # deriv should return the same as rate in simple cases.
-eval instant at 50m rate(http_requests{group="canary", instance="1", job="app-server"}[50m])
+eval instant at 50m rate(http_requests_total{group="canary", instance="1", job="app-server"}[50m])
   {group="canary", instance="1", job="app-server"} 0.26666666666666666
 
-eval instant at 50m deriv(http_requests{group="canary", instance="1", job="app-server"}[50m])
+eval instant at 50m deriv(http_requests_total{group="canary", instance="1", job="app-server"}[50m])
   {group="canary", instance="1", job="app-server"} 0.26666666666666666
 
 # deriv should return correct result.
-eval instant at 50m deriv(testcounter_reset_middle[100m])
+eval instant at 50m deriv(testcounter_reset_middle_total[100m])
   {} 0.010606060606060607
 
 # predict_linear should return correct result.
@@ -262,37 +292,37 @@ eval instant at 50m deriv(testcounter_reset_middle[100m]) # intercept at t=3000: 38.63636363636364 # intercept at t=3000+3600: 76.81818181818181 # Unsupported by streaming engine. -# eval instant at 50m predict_linear(testcounter_reset_middle[50m], 3600) +# eval instant at 50m predict_linear(testcounter_reset_middle_total[50m], 3600) # {} 70 # Unsupported by streaming engine. -# eval instant at 50m predict_linear(testcounter_reset_middle[50m], 1h) +# eval instant at 50m predict_linear(testcounter_reset_middle_total[50m], 1h) # {} 70 # intercept at t = 3000+3600 = 6600 # Unsupported by streaming engine. -# eval instant at 50m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +# eval instant at 50m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) # {} 76.81818181818181 # Unsupported by streaming engine. -# eval instant at 50m predict_linear(testcounter_reset_middle[55m] @ 3000, 1h) +# eval instant at 50m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 1h) # {} 76.81818181818181 # intercept at t = 600+3600 = 4200 # Unsupported by streaming engine. -# eval instant at 10m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +# eval instant at 10m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) # {} 51.36363636363637 # intercept at t = 4200+3600 = 7800 # Unsupported by streaming engine. -# eval instant at 70m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +# eval instant at 70m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) # {} 89.54545454545455 -# With http_requests, there is a sample value exactly at the end of +# With http_requests_total, there is a sample value exactly at the end of # the range, and it has exactly the predicted value, so predict_linear # can be emulated with deriv. # Unsupported by streaming engine. -# eval instant at 50m predict_linear(http_requests[50m], 3600) - (http_requests + deriv(http_requests[50m]) * 3600) +# eval instant at 50m predict_linear(http_requests_total[50m], 3600) - (http_requests_total + deriv(http_requests_total[50m]) * 3600) # {group="canary", instance="1", job="app-server"} 0 clear @@ -481,6 +511,21 @@ eval instant at 0m clamp(test_clamp, NaN, 0) eval instant at 0m clamp(test_clamp, 5, -5) +clear + +load 1m + mixed_metric {{schema:0 sum:5 count:4 buckets:[1 2 1]}} 1 2 3 {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:8 count:6 buckets:[1 4 1]}} + +# clamp ignores any histograms +eval range from 0 to 5m step 1m clamp(mixed_metric, 2, 5) + {} _ 2 2 3 + +eval range from 0 to 5m step 1m clamp_min(mixed_metric, 2) + {} _ 2 2 3 + +eval range from 0 to 5m step 1m clamp_max(mixed_metric, 2) + {} _ 1 2 2 + # Test cases for sgn. clear load 5m @@ -490,6 +535,7 @@ load 5m test_sgn{src="sgn-d"} -50 test_sgn{src="sgn-e"} 0 test_sgn{src="sgn-f"} 100 + test_sgn{src="sgn-histogram"} {{schema:1 sum:1 count:1}} eval instant at 0m sgn(test_sgn) {src="sgn-a"} -1 @@ -742,6 +788,7 @@ load 10s metric8 9.988465674311579e+307 9.988465674311579e+307 metric9 -9.988465674311579e+307 -9.988465674311579e+307 -9.988465674311579e+307 metric10 -9.988465674311579e+307 9.988465674311579e+307 + metric11 1 2 3 NaN NaN eval instant at 55s avg_over_time(metric[1m]) {} 3 @@ -835,6 +882,19 @@ eval instant at 45s sum_over_time(metric10[1m])/count_over_time(metric10[1m]) eval instant at 1m sum_over_time(metric10[2m])/count_over_time(metric10[2m]) {} 0 +# NaN behavior. 
+eval instant at 20s avg_over_time(metric11[1m]) + {} 2 + +eval instant at 30s avg_over_time(metric11[1m]) + {} NaN + +eval instant at 1m avg_over_time(metric11[1m]) + {} NaN + +eval instant at 1m sum_over_time(metric11[1m])/count_over_time(metric11[1m]) + {} NaN + # Test if very big intermediate values cause loss of detail. clear load 10s @@ -944,6 +1004,9 @@ load 10s clear # Test time-related functions. +load 5m + histogram_sample {{schema:0 sum:1 count:1}} + # Unsupported by streaming engine. # eval instant at 0m year() # {} 1970 @@ -1049,6 +1112,31 @@ clear # eval instant at 0m days_in_month(vector(1485907200)) # {} 28 +# Test for histograms. +# Unsupported by streaming engine. +# eval instant at 0m day_of_month(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m day_of_week(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m day_of_year(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m days_in_month(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m hour(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m minute(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m month(histogram_sample) + +# Unsupported by streaming engine. +# eval instant at 0m year(histogram_sample) + clear # Test duplicate labelset in promql output. @@ -1154,19 +1242,19 @@ clear # Testdata for absent_over_time() # Unsupported by streaming engine. -# eval instant at 1m absent_over_time(http_requests[5m]) +# eval instant at 1m absent_over_time(http_requests_total[5m]) # {} 1 # Unsupported by streaming engine. -# eval instant at 1m absent_over_time(http_requests{handler="/foo"}[5m]) +# eval instant at 1m absent_over_time(http_requests_total{handler="/foo"}[5m]) # {handler="/foo"} 1 # Unsupported by streaming engine. -# eval instant at 1m absent_over_time(http_requests{handler!="/foo"}[5m]) +# eval instant at 1m absent_over_time(http_requests_total{handler!="/foo"}[5m]) # {} 1 # Unsupported by streaming engine. -# eval instant at 1m absent_over_time(http_requests{handler="/foo", handler="/bar", handler="/foobar"}[5m]) +# eval instant at 1m absent_over_time(http_requests_total{handler="/foo", handler="/bar", handler="/foobar"}[5m]) # {} 1 # Unsupported by streaming engine. @@ -1174,21 +1262,21 @@ clear # {} 1 # Unsupported by streaming engine. -# eval instant at 1m absent_over_time(http_requests{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) +# eval instant at 1m absent_over_time(http_requests_total{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) # {instance="127.0.0.1"} 1 load 1m - http_requests{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 - http_requests{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 httpd_handshake_failures_total{instance="127.0.0.1",job="node"} 1+1x15 httpd_log_lines_total{instance="127.0.0.1",job="node"} 1 ssl_certificate_expiry_seconds{job="ingress"} NaN NaN NaN NaN NaN # Unsupported by streaming engine. -# eval instant at 5m absent_over_time(http_requests[5m]) +# eval instant at 5m absent_over_time(http_requests_total[5m]) # Unsupported by streaming engine. -# eval instant at 5m absent_over_time(rate(http_requests[5m])[5m:1m]) +# eval instant at 5m absent_over_time(rate(http_requests_total[5m])[5m:1m]) # Unsupported by streaming engine. 
# eval instant at 0m absent_over_time(httpd_log_lines_total[30s]) @@ -1198,18 +1286,18 @@ load 1m # {} 1 # Unsupported by streaming engine. -# eval instant at 15m absent_over_time(http_requests[5m]) +# eval instant at 15m absent_over_time(http_requests_total[5m]) # {} 1 # Unsupported by streaming engine. -# eval instant at 15m absent_over_time(http_requests[10m]) +# eval instant at 15m absent_over_time(http_requests_total[10m]) # Unsupported by streaming engine. -# eval instant at 16m absent_over_time(http_requests[6m]) +# eval instant at 16m absent_over_time(http_requests_total[6m]) # {} 1 # Unsupported by streaming engine. -# eval instant at 16m absent_over_time(http_requests[16m]) +# eval instant at 16m absent_over_time(http_requests_total[16m]) # Unsupported by streaming engine. # eval instant at 16m absent_over_time(httpd_handshake_failures_total[1m]) @@ -1246,30 +1334,30 @@ load 1m clear # Testdata for present_over_time() -eval instant at 1m present_over_time(http_requests[5m]) +eval instant at 1m present_over_time(http_requests_total[5m]) -eval instant at 1m present_over_time(http_requests{handler="/foo"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo"}[5m]) -eval instant at 1m present_over_time(http_requests{handler!="/foo"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler!="/foo"}[5m]) -eval instant at 1m present_over_time(http_requests{handler="/foo", handler="/bar", handler="/foobar"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo", handler="/bar", handler="/foobar"}[5m]) eval instant at 1m present_over_time(rate(nonexistant[5m])[5m:]) -eval instant at 1m present_over_time(http_requests{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) load 1m - http_requests{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 - http_requests{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 httpd_handshake_failures_total{instance="127.0.0.1",job="node"} 1+1x15 httpd_log_lines_total{instance="127.0.0.1",job="node"} 1 ssl_certificate_expiry_seconds{job="ingress"} NaN NaN NaN NaN NaN -eval instant at 5m present_over_time(http_requests[5m]) +eval instant at 5m present_over_time(http_requests_total[5m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 -eval instant at 5m present_over_time(rate(http_requests[5m])[5m:1m]) +eval instant at 5m present_over_time(rate(http_requests_total[5m])[5m:1m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 @@ -1278,15 +1366,15 @@ eval instant at 0m present_over_time(httpd_log_lines_total[30s]) eval instant at 1m present_over_time(httpd_log_lines_total[30s]) -eval instant at 15m present_over_time(http_requests[5m]) +eval instant at 15m present_over_time(http_requests_total[5m]) -eval instant at 15m present_over_time(http_requests[10m]) +eval instant at 15m present_over_time(http_requests_total[10m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 -eval instant at 16m present_over_time(http_requests[6m]) +eval instant at 16m present_over_time(http_requests_total[6m]) -eval instant at 16m present_over_time(http_requests[16m]) +eval instant at 16m 
present_over_time(http_requests_total[16m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 @@ -1310,11 +1398,16 @@ clear load 5m exp_root_log{l="x"} 10 exp_root_log{l="y"} 20 + exp_root_log_h{l="z"} {{schema:1 sum:1 count:1}} eval instant at 1m exp(exp_root_log) {l="x"} 22026.465794806718 {l="y"} 485165195.4097903 +eval instant at 1m exp({__name__=~"exp_root_log(_h)?"}) + {l="x"} 22026.465794806718 + {l="y"} 485165195.4097903 + eval instant at 1m exp(exp_root_log - 10) {l="y"} 22026.465794806718 {l="x"} 1 @@ -1327,6 +1420,10 @@ eval instant at 1m ln(exp_root_log) {l="x"} 2.302585092994046 {l="y"} 2.995732273553991 +eval instant at 1m ln({__name__=~"exp_root_log(_h)?"}) + {l="x"} 2.302585092994046 + {l="y"} 2.995732273553991 + eval instant at 1m ln(exp_root_log - 10) {l="y"} 2.302585092994046 {l="x"} -Inf @@ -1339,14 +1436,26 @@ eval instant at 1m exp(ln(exp_root_log)) {l="y"} 20 {l="x"} 10 +eval instant at 1m exp(ln({__name__=~"exp_root_log(_h)?"})) + {l="y"} 20 + {l="x"} 10 + eval instant at 1m sqrt(exp_root_log) {l="x"} 3.1622776601683795 {l="y"} 4.47213595499958 +eval instant at 1m sqrt({__name__=~"exp_root_log(_h)?"}) + {l="x"} 3.1622776601683795 + {l="y"} 4.47213595499958 + eval instant at 1m log2(exp_root_log) {l="x"} 3.3219280948873626 {l="y"} 4.321928094887363 +eval instant at 1m log2({__name__=~"exp_root_log(_h)?"}) + {l="x"} 3.3219280948873626 + {l="y"} 4.321928094887363 + eval instant at 1m log2(exp_root_log - 10) {l="y"} 3.3219280948873626 {l="x"} -Inf @@ -1359,6 +1468,10 @@ eval instant at 1m log10(exp_root_log) {l="x"} 1 {l="y"} 1.301029995663981 +eval instant at 1m log10({__name__=~"exp_root_log(_h)?"}) + {l="x"} 1 + {l="y"} 1.301029995663981 + eval instant at 1m log10(exp_root_log - 10) {l="y"} 1 {l="x"} -Inf diff --git a/pkg/streamingpromql/testdata/upstream/histograms.test b/pkg/streamingpromql/testdata/upstream/histograms.test index d1530926347..88ff72053eb 100644 --- a/pkg/streamingpromql/testdata/upstream/histograms.test +++ b/pkg/streamingpromql/testdata/upstream/histograms.test @@ -99,16 +99,14 @@ eval instant at 50m histogram_avg(testhistogram3) {start="negative"} 4 # Test histogram_stddev. This has no classic equivalent. -# Unsupported by streaming engine. -# eval instant at 50m histogram_stddev(testhistogram3) -# {start="positive"} 2.8189265757336734 -# {start="negative"} 4.182715937754936 +eval instant at 50m histogram_stddev(testhistogram3) + {start="positive"} 2.8189265757336734 + {start="negative"} 4.182715937754936 # Test histogram_stdvar. This has no classic equivalent. -# Unsupported by streaming engine. -# eval instant at 50m histogram_stdvar(testhistogram3) -# {start="positive"} 7.946347039377573 -# {start="negative"} 17.495112615949154 +eval instant at 50m histogram_stdvar(testhistogram3) + {start="positive"} 7.946347039377573 + {start="negative"} 17.495112615949154 # Test histogram_fraction. @@ -459,14 +457,14 @@ load 5m nonmonotonic_bucket{le="1000"} 0+9x10 nonmonotonic_bucket{le="+Inf"} 0+8x10 -# Nonmonotonic buckets -eval instant at 50m histogram_quantile(0.01, nonmonotonic_bucket) +# Nonmonotonic buckets, triggering an info annotation. 
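+# (eval_info additionally asserts that the query emits an info annotation; the
+# quantile itself is still computed on a best-effort basis from the
+# non-monotonic buckets, as the values below show.)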
+eval_info instant at 50m histogram_quantile(0.01, nonmonotonic_bucket) {} 0.0045 -eval instant at 50m histogram_quantile(0.5, nonmonotonic_bucket) +eval_info instant at 50m histogram_quantile(0.5, nonmonotonic_bucket) {} 8.5 -eval instant at 50m histogram_quantile(0.99, nonmonotonic_bucket) +eval_info instant at 50m histogram_quantile(0.99, nonmonotonic_bucket) {} 979.75 # Buckets with different representations of the same upper bound. diff --git a/pkg/streamingpromql/testdata/upstream/limit.test.disabled b/pkg/streamingpromql/testdata/upstream/limit.test.disabled index 2a3dd6b03f3..c1a4fadf4fe 100644 --- a/pkg/streamingpromql/testdata/upstream/limit.test.disabled +++ b/pkg/streamingpromql/testdata/upstream/limit.test.disabled @@ -14,6 +14,8 @@ load 5m http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="api-server", instance="2", group="canary"} 0+50x10 http_requests{job="api-server", instance="3", group="canary"} 0+60x10 + http_requests{job="api-server", instance="histogram_1", group="canary"} {{schema:0 sum:10 count:10}}x11 + http_requests{job="api-server", instance="histogram_2", group="canary"} {{schema:0 sum:20 count:20}}x11 eval instant at 50m count(limitk by (group) (0, http_requests)) # empty @@ -21,25 +23,56 @@ eval instant at 50m count(limitk by (group) (0, http_requests)) eval instant at 50m count(limitk by (group) (-1, http_requests)) # empty -# Exercise k==1 special case (as sample is added before the main series loop +# Exercise k==1 special case (as sample is added before the main series loop). eval instant at 50m count(limitk by (group) (1, http_requests) and http_requests) - {} 2 + {} 2 eval instant at 50m count(limitk by (group) (2, http_requests) and http_requests) - {} 4 + {} 4 eval instant at 50m count(limitk(100, http_requests) and http_requests) - {} 6 + {} 8 -# Exercise k==1 special case (as sample is added before the main series loop +# Exercise k==1 special case (as sample is added before the main series loop). eval instant at 50m count(limitk by (group) (1, http_requests) and http_requests) - {} 2 + {} 2 eval instant at 50m count(limitk by (group) (2, http_requests) and http_requests) - {} 4 + {} 4 eval instant at 50m count(limitk(100, http_requests) and http_requests) - {} 6 + {} 8 + +# Test for histograms. +# k==1: verify that histogram is included in the result. +eval instant at 50m limitk(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}} + +eval range from 0 to 50m step 5m limitk(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}}x10 + +# Histogram is included with mix of floats as well. 
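+# (limitk selects whole series rather than individual sample types, so the
+# native histogram series below competes for the k slots on the same footing
+# as the float series.)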
+eval instant at 50m limitk(8, http_requests{instance=~"(histogram_2|0)"}) + {__name__="http_requests", group="canary", instance="histogram_2", job="api-server"} {{count:20 sum:20}} + {__name__="http_requests", group="production", instance="0", job="api-server"} 100 + {__name__="http_requests", group="canary", instance="0", job="api-server"} 300 + +eval range from 0 to 50m step 5m limitk(8, http_requests{instance=~"(histogram_2|0)"}) + {__name__="http_requests", group="canary", instance="histogram_2", job="api-server"} {{count:20 sum:20}}x10 + {__name__="http_requests", group="production", instance="0", job="api-server"} 0+10x10 + {__name__="http_requests", group="canary", instance="0", job="api-server"} 0+30x10 + +eval instant at 50m count(limitk(2, http_requests{instance=~"histogram_[0-9]"})) + {} 2 + +eval range from 0 to 50m step 5m count(limitk(2, http_requests{instance=~"histogram_[0-9]"})) + {} 2+0x10 + +eval instant at 50m count(limitk(1000, http_requests{instance=~"histogram_[0-9]"})) + {} 2 + +eval range from 0 to 50m step 5m count(limitk(1000, http_requests{instance=~"histogram_[0-9]"})) + {} 2+0x10 # limit_ratio eval range from 0 to 50m step 5m count(limit_ratio(0.0, http_requests)) @@ -47,64 +80,64 @@ eval range from 0 to 50m step 5m count(limit_ratio(0.0, http_requests)) # limitk(2, ...) should always return a 2-count subset of the timeseries (hence the AND'ing) eval range from 0 to 50m step 5m count(limitk(2, http_requests) and http_requests) - {} 2+0x10 + {} 2+0x10 -# Tests for limit_ratio +# Tests for limit_ratio. # # NB: below 0.5 ratio will depend on some hashing "luck" (also there's no guarantee that # an integer comes from: total number of series * ratio), as it depends on: # -# * ratioLimit = [0.0, 1.0]: +# * ratioLimit = [0.0, 1.0]: # float64(sample.Metric.Hash()) / float64MaxUint64 < Ratio ? # * ratioLimit = [-1.0, 1.0): # float64(sample.Metric.Hash()) / float64MaxUint64 >= (1.0 + Ratio) ? # # See `AddRatioSample()` in promql/engine.go for more details. -# Half~ish samples: verify we get "near" 3 (of 0.5 * 6) -eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) <= bool (3+1) - {} 1+0x10 +# Half~ish samples: verify we get "near" 3 (of 0.5 * 6). +eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) <= bool (4+1) + {} 1+0x10 -eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) >= bool (3-1) - {} 1+0x10 +eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) >= bool (4-1) + {} 1+0x10 -# All samples +# All samples. eval range from 0 to 50m step 5m count(limit_ratio(1.0, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# All samples +# All samples. eval range from 0 to 50m step 5m count(limit_ratio(-1.0, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Capped to 1.0 -> all samples +# Capped to 1.0 -> all samples. eval_warn range from 0 to 50m step 5m count(limit_ratio(1.1, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Capped to -1.0 -> all samples +# Capped to -1.0 -> all samples. eval_warn range from 0 to 50m step 5m count(limit_ratio(-1.1, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Verify that limit_ratio(value) and limit_ratio(1.0-value) return the "complement" of each other -# Complement below for [0.2, -0.8] +# Verify that limit_ratio(value) and limit_ratio(1.0-value) return the "complement" of each other. +# Complement below for [0.2, -0.8]. 
# -# Complement 1of2: `or` should return all samples +# Complement 1of2: `or` should return all samples. eval range from 0 to 50m step 5m count(limit_ratio(0.2, http_requests) or limit_ratio(-0.8, http_requests)) - {} 6+0x10 + {} 8+0x10 -# Complement 2of2: `and` should return no samples +# Complement 2of2: `and` should return no samples. eval range from 0 to 50m step 5m count(limit_ratio(0.2, http_requests) and limit_ratio(-0.8, http_requests)) # empty -# Complement below for [0.5, -0.5] +# Complement below for [0.5, -0.5]. eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) or limit_ratio(-0.5, http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and limit_ratio(-0.5, http_requests)) # empty -# Complement below for [0.8, -0.2] +# Complement below for [0.8, -0.2]. eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) or limit_ratio(-0.2, http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) and limit_ratio(-0.2, http_requests)) # empty @@ -112,13 +145,19 @@ eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) and limit # Complement below for [some_ratio, 1.0 - some_ratio], some_ratio derived from time(), # using a small prime number to avoid rounded ratio values, and a small set of them. eval range from 0 to 50m step 5m count(limit_ratio(time() % 17/17, http_requests) or limit_ratio(1.0 - (time() % 17/17), http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(time() % 17/17, http_requests) and limit_ratio(1.0 - (time() % 17/17), http_requests)) # empty -# Poor man's normality check: ok (loaded samples follow a nice linearity over labels and time) -# The check giving: 1 (i.e. true) -eval range from 0 to 50m step 5m abs(avg(limit_ratio(0.5, http_requests)) - avg(limit_ratio(-0.5, http_requests))) <= bool stddev(http_requests) +# Poor man's normality check: ok (loaded samples follow a nice linearity over labels and time). +# The check giving: 1 (i.e. true). +eval range from 0 to 50m step 5m abs(avg(limit_ratio(0.5, http_requests{instance!~"histogram_[0-9]"})) - avg(limit_ratio(-0.5, http_requests{instance!~"histogram_[0-9]"}))) <= bool stddev(http_requests{instance!~"histogram_[0-9]"}) {} 1+0x10 +# All specified histograms are included for r=1. +eval instant at 50m limit_ratio(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}} + +eval range from 0 to 50m step 5m limit_ratio(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}}x10 diff --git a/pkg/streamingpromql/testdata/upstream/name_label_dropping.test b/pkg/streamingpromql/testdata/upstream/name_label_dropping.test index 4e58d2644fe..abddee6ec27 100644 --- a/pkg/streamingpromql/testdata/upstream/name_label_dropping.test +++ b/pkg/streamingpromql/testdata/upstream/name_label_dropping.test @@ -5,97 +5,97 @@ # Test for __name__ label drop. load 5m - metric{env="1"} 0 60 120 - another_metric{env="1"} 60 120 180 + metric_total{env="1"} 0 60 120 + another_metric_total{env="1"} 60 120 180 -# Does not drop __name__ for vector selector -eval instant at 10m metric{env="1"} - metric{env="1"} 120 +# Does not drop __name__ for vector selector. 
+eval instant at 10m metric_total{env="1"} + metric_total{env="1"} 120 -# Drops __name__ for unary operators -eval instant at 10m -metric +# Drops __name__ for unary operators. +eval instant at 10m -metric_total {env="1"} -120 -# Drops __name__ for binary operators -eval instant at 10m metric + another_metric +# Drops __name__ for binary operators. +eval instant at 10m metric_total + another_metric_total {env="1"} 300 -# Does not drop __name__ for binary comparison operators -eval instant at 10m metric <= another_metric - metric{env="1"} 120 +# Does not drop __name__ for binary comparison operators. +eval instant at 10m metric_total <= another_metric_total + metric_total{env="1"} 120 -# Drops __name__ for binary comparison operators with "bool" modifier -eval instant at 10m metric <= bool another_metric +# Drops __name__ for binary comparison operators with "bool" modifier. +eval instant at 10m metric_total <= bool another_metric_total {env="1"} 1 -# Drops __name__ for vector-scalar operations -eval instant at 10m metric * 2 +# Drops __name__ for vector-scalar operations. +eval instant at 10m metric_total * 2 {env="1"} 240 -# Drops __name__ for instant-vector functions -eval instant at 10m clamp(metric, 0, 100) +# Drops __name__ for instant-vector functions. +eval instant at 10m clamp(metric_total, 0, 100) {env="1"} 100 -# Drops __name__ for round function -eval instant at 10m round(metric) +# Drops __name__ for round function. +eval instant at 10m round(metric_total) {env="1"} 120 -# Drops __name__ for range-vector functions -eval instant at 10m rate(metric{env="1"}[10m]) +# Drops __name__ for range-vector functions. +eval instant at 10m rate(metric_total{env="1"}[10m]) {env="1"} 0.2 -# Does not drop __name__ for last_over_time function -eval instant at 10m last_over_time(metric{env="1"}[10m]) - metric{env="1"} 120 +# Does not drop __name__ for last_over_time function. +eval instant at 10m last_over_time(metric_total{env="1"}[10m]) + metric_total{env="1"} 120 -# Drops name for other _over_time functions -eval instant at 10m max_over_time(metric{env="1"}[10m]) +# Drops name for other _over_time functions. +eval instant at 10m max_over_time(metric_total{env="1"}[10m]) {env="1"} 120 -# Allows relabeling (to-be-dropped) __name__ via label_replace +# Allows relabeling (to-be-dropped) __name__ via label_replace. # Unsupported by streaming engine. # eval instant at 10m label_replace(rate({env="1"}[10m]), "my_name", "rate_$1", "__name__", "(.+)") -# {my_name="rate_metric", env="1"} 0.2 -# {my_name="rate_another_metric", env="1"} 0.2 +# {my_name="rate_metric_total", env="1"} 0.2 +# {my_name="rate_another_metric_total", env="1"} 0.2 -# Allows preserving __name__ via label_replace +# Allows preserving __name__ via label_replace. # Unsupported by streaming engine. # eval instant at 10m label_replace(rate({env="1"}[10m]), "__name__", "rate_$1", "__name__", "(.+)") -# rate_metric{env="1"} 0.2 -# rate_another_metric{env="1"} 0.2 +# rate_metric_total{env="1"} 0.2 +# rate_another_metric_total{env="1"} 0.2 -# Allows relabeling (to-be-dropped) __name__ via label_join +# Allows relabeling (to-be-dropped) __name__ via label_join. # Unsupported by streaming engine. # eval instant at 10m label_join(rate({env="1"}[10m]), "my_name", "_", "__name__") -# {my_name="metric", env="1"} 0.2 -# {my_name="another_metric", env="1"} 0.2 +# {my_name="metric_total", env="1"} 0.2 +# {my_name="another_metric_total", env="1"} 0.2 -# Allows preserving __name__ via label_join +# Allows preserving __name__ via label_join. 
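+# (Because the __name__ drop happens lazily, label_join can still read the old
+# metric name and write the joined value back into __name__, so the result
+# keeps a metric name such as metric_total_1 rather than losing it.)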
# Unsupported by streaming engine.
# eval instant at 10m label_join(rate({env="1"}[10m]), "__name__", "_", "__name__", "env")
-# metric_1{env="1"} 0.2
-# another_metric_1{env="1"} 0.2
+# metric_total_1{env="1"} 0.2
+# another_metric_total_1{env="1"} 0.2
-# Does not drop metric names fro aggregation operators
-eval instant at 10m sum by (__name__, env) (metric{env="1"})
- metric{env="1"} 120
+# Does not drop metric names from aggregation operators.
+eval instant at 10m sum by (__name__, env) (metric_total{env="1"})
+ metric_total{env="1"} 120
-# Aggregation operators by __name__ lead to duplicate labelset errors (aggregation is partitioned by not yet removed __name__ label)
+# Aggregation operators by __name__ lead to duplicate labelset errors (aggregation is partitioned by not yet removed __name__ label).
# This is an accidental side effect of delayed __name__ label dropping
# Unsupported by streaming engine.
# eval_fail instant at 10m sum by (__name__) (rate({env="1"}[10m]))
-# Aggregation operators aggregate metrics with same labelset and to-be-dropped names
+# Aggregation operators aggregate metrics with same labelset and to-be-dropped names.
# This is an accidental side effect of delayed __name__ label dropping
# Unsupported by streaming engine.
# eval instant at 10m sum(rate({env="1"}[10m])) by (env)
# {env="1"} 0.4
-# Aggregationk operators propagate __name__ label dropping information
+# Aggregation operators propagate __name__ label dropping information.
# Unsupported by streaming engine.
-# eval instant at 10m topk(10, sum by (__name__, env) (metric{env="1"}))
-# metric{env="1"} 120
+# eval instant at 10m topk(10, sum by (__name__, env) (metric_total{env="1"}))
+# metric_total{env="1"} 120
# Unsupported by streaming engine.
-# eval instant at 10m topk(10, sum by (__name__, env) (rate(metric{env="1"}[10m])))
+# eval instant at 10m topk(10, sum by (__name__, env) (rate(metric_total{env="1"}[10m])))
# {env="1"} 0.2
diff --git a/pkg/streamingpromql/testdata/upstream/native_histograms.test b/pkg/streamingpromql/testdata/upstream/native_histograms.test
index 93b946d863a..3616d455911 100644
--- a/pkg/streamingpromql/testdata/upstream/native_histograms.test
+++ b/pkg/streamingpromql/testdata/upstream/native_histograms.test
@@ -317,13 +317,11 @@ clear
load 10m
histogram_stddev_stdvar_1 {{schema:2 count:4 sum:10 buckets:[1 0 0 0 1 0 0 1 1]}}x1
-# Unsupported by streaming engine.
-# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_1)
-# {} 1.0787993180043811
+eval instant at 10m histogram_stddev(histogram_stddev_stdvar_1)
+ {} 1.0787993180043811
-# Unsupported by streaming engine.
-# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_1)
-# {} 1.163807968526718
+eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_1)
+ {} 1.163807968526718
clear
@@ -331,13 +329,11 @@ clear
load 10m
histogram_stddev_stdvar_2 {{schema:8 count:10 sum:10 buckets:[1 2 3 4]}}x1
-# Unsupported by streaming engine.
-# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_2)
-# {} 0.0048960313898237465
+eval instant at 10m histogram_stddev(histogram_stddev_stdvar_2)
+ {} 0.0048960313898237465
-# Unsupported by streaming engine.
-# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_2) -# {} 2.3971123370139447e-05 +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_2) + {} 2.3971123370139447e-05 clear @@ -345,13 +341,11 @@ clear load 10m histogram_stddev_stdvar_3 {{schema:3 count:7 sum:62 z_bucket:1 buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ] n_buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ]}}x1 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_3) -# {} 42.947236400258 +eval instant at 10m histogram_stddev(histogram_stddev_stdvar_3) + {} 42.947236400258 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_3) -# {} 1844.4651144196398 +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_3) + {} 1844.4651144196398 clear @@ -359,13 +353,11 @@ clear load 10m histogram_stddev_stdvar_4 {{schema:0 count:10 sum:-112946 z_bucket:0 n_buckets:[0 0 1 1 1 0 1 1 0 0 3 0 0 0 1 0 0 1]}}x1 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_4) -# {} 27556.344499842 +eval instant at 10m histogram_stddev(histogram_stddev_stdvar_4) + {} 27556.344499842 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_4) -# {} 759352122.1939945 +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_4) + {} 759352122.1939945 clear @@ -373,13 +365,11 @@ clear load 10m histogram_stddev_stdvar_5 {{schema:0 count:10 sum:-100 z_bucket:0 n_buckets:[0 0 0 0 10]}}x1 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_5) -# {} 1.3137084989848 +eval instant at 10m histogram_stddev(histogram_stddev_stdvar_5) + {} 1.3137084989848 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_5) -# {} 1.725830020304794 +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_5) + {} 1.725830020304794 clear @@ -387,13 +377,11 @@ clear load 10m histogram_stddev_stdvar_6 {{schema:3 count:7 sum:NaN z_bucket:1 buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ] n_buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ]}}x1 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_6) -# {} NaN +eval instant at 10m histogram_stddev(histogram_stddev_stdvar_6) + {} NaN -# Unsupported by streaming engine. -# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_6) -# {} NaN +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_6) + {} NaN clear @@ -401,13 +389,11 @@ clear load 10m histogram_stddev_stdvar_7 {{schema:3 count:7 sum:Inf z_bucket:1 buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ] n_buckets:[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ]}}x1 -# Unsupported by streaming engine. -# eval instant at 10m histogram_stddev(histogram_stddev_stdvar_7) -# {} Inf +eval instant at 10m histogram_stddev(histogram_stddev_stdvar_7) + {} Inf -# Unsupported by streaming engine. 
-# eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_7) -# {} Inf +eval instant at 10m histogram_stdvar(histogram_stddev_stdvar_7) + {} Inf clear @@ -1105,14 +1091,12 @@ eval instant at 5m histogram_quantile(1.0, rate(const_histogram[5m])) {} NaN # Zero buckets mean no observations, so there is no standard deviation. -# Unsupported by streaming engine. -# eval instant at 5m histogram_stddev(rate(const_histogram[5m])) -# {} NaN +eval instant at 5m histogram_stddev(rate(const_histogram[5m])) + {} NaN # Zero buckets mean no observations, so there is no standard variance. -# Unsupported by streaming engine. -# eval instant at 5m histogram_stdvar(rate(const_histogram[5m])) -# {} NaN +eval instant at 5m histogram_stdvar(rate(const_histogram[5m])) + {} NaN clear @@ -1207,3 +1191,23 @@ eval instant at 3m avg_over_time(histogram_sum_over_time[4m:1m]) {} {{schema:0 count:26.75 sum:1172.8 z_bucket:3.5 z_bucket_w:0.001 buckets:[0.75 2 0.5 1.25 0.75 0.5 0.5] n_buckets:[0.5 1.5 2 1 3.75 2.25 0 0 0 2.5 2.5 1]}} clear + +# Test native histograms with sub operator. +load 10m + histogram_sub_1{idx="0"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + histogram_sub_1{idx="1"} {{schema:0 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_2{idx="0"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + histogram_sub_2{idx="1"} {{schema:1 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_3{idx="0"} {{schema:1 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_3{idx="1"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + +eval instant at 10m histogram_sub_1{idx="0"} - ignoring(idx) histogram_sub_1{idx="1"} + {} {{schema:0 count:30 sum:1111.1 z_bucket:2 z_bucket_w:0.001 buckets:[1 1 0 2 1 1 1] n_buckets:[0 1 1 0 7 0 0 0 0 5 5 2]}} + +eval instant at 10m histogram_sub_2{idx="0"} - ignoring(idx) histogram_sub_2{idx="1"} + {} {{schema:0 count:30 sum:1111.1 z_bucket:2 z_bucket_w:0.001 buckets:[1 0 1 2 1 1 1] n_buckets:[0 -2 2 2 7 0 0 0 0 5 5 2]}} + +eval instant at 10m histogram_sub_3{idx="0"} - ignoring(idx) histogram_sub_3{idx="1"} + {} {{schema:0 count:-30 sum:-1111.1 z_bucket:-2 z_bucket_w:0.001 buckets:[-1 0 -1 -2 -1 -1 -1] n_buckets:[0 2 -2 -2 -7 0 0 0 0 -5 -5 -2]}} + +clear diff --git a/pkg/streamingpromql/testdata/upstream/operators.test b/pkg/streamingpromql/testdata/upstream/operators.test index 86c998a0da8..97aac16c642 100644 --- a/pkg/streamingpromql/testdata/upstream/operators.test +++ b/pkg/streamingpromql/testdata/upstream/operators.test @@ -4,14 +4,14 @@ # Provenance-includes-copyright: The Prometheus Authors load 5m - http_requests{job="api-server", instance="0", group="production"} 0+10x10 - http_requests{job="api-server", instance="1", group="production"} 0+20x10 - http_requests{job="api-server", instance="0", group="canary"} 0+30x10 - http_requests{job="api-server", instance="1", group="canary"} 0+40x10 - http_requests{job="app-server", instance="0", group="production"} 0+50x10 - http_requests{job="app-server", instance="1", group="production"} 0+60x10 - http_requests{job="app-server", instance="0", group="canary"} 0+70x10 - http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + 
http_requests_total{job="api-server", instance="0", group="production"} 0+10x10 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x10 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x10 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x10 + http_requests_total{job="app-server", instance="0", group="production"} 0+50x10 + http_requests_total{job="app-server", instance="1", group="production"} 0+60x10 + http_requests_total{job="app-server", instance="0", group="canary"} 0+70x10 + http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10 http_requests_histogram{job="app-server", instance="1", group="production"} {{schema:1 sum:15 count:10 buckets:[3 2 5 7 9]}}x11 load 5m @@ -20,21 +20,21 @@ load 5m vector_matching_b{l="x"} 0+4x25 -eval instant at 50m SUM(http_requests) BY (job) - COUNT(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) - COUNT(http_requests_total) BY (job) {job="api-server"} 996 {job="app-server"} 2596 -eval instant at 50m 2 - SUM(http_requests) BY (job) +eval instant at 50m 2 - SUM(http_requests_total) BY (job) {job="api-server"} -998 {job="app-server"} -2598 -eval instant at 50m -http_requests{job="api-server",instance="0",group="production"} +eval instant at 50m -http_requests_total{job="api-server",instance="0",group="production"} {job="api-server",instance="0",group="production"} -100 -eval instant at 50m +http_requests{job="api-server",instance="0",group="production"} - http_requests{job="api-server",instance="0",group="production"} 100 +eval instant at 50m +http_requests_total{job="api-server",instance="0",group="production"} + http_requests_total{job="api-server",instance="0",group="production"} 100 -eval instant at 50m - - - SUM(http_requests) BY (job) +eval instant at 50m - - - SUM(http_requests_total) BY (job) {job="api-server"} -1000 {job="app-server"} -2600 @@ -47,83 +47,83 @@ eval instant at 50m -2^---1*3 eval instant at 50m 2/-2^---1*3+2 -10 -eval instant at 50m -10^3 * - SUM(http_requests) BY (job) ^ -1 +eval instant at 50m -10^3 * - SUM(http_requests_total) BY (job) ^ -1 {job="api-server"} 1 {job="app-server"} 0.38461538461538464 -eval instant at 50m 1000 / SUM(http_requests) BY (job) +eval instant at 50m 1000 / SUM(http_requests_total) BY (job) {job="api-server"} 1 {job="app-server"} 0.38461538461538464 -eval instant at 50m SUM(http_requests) BY (job) - 2 +eval instant at 50m SUM(http_requests_total) BY (job) - 2 {job="api-server"} 998 {job="app-server"} 2598 -eval instant at 50m SUM(http_requests) BY (job) % 3 +eval instant at 50m SUM(http_requests_total) BY (job) % 3 {job="api-server"} 1 {job="app-server"} 2 -eval instant at 50m SUM(http_requests) BY (job) % 0.3 +eval instant at 50m SUM(http_requests_total) BY (job) % 0.3 {job="api-server"} 0.1 {job="app-server"} 0.2 -eval instant at 50m SUM(http_requests) BY (job) ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) ^ 2 {job="api-server"} 1000000 {job="app-server"} 6760000 -eval instant at 50m SUM(http_requests) BY (job) % 3 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 3 ^ 2 {job="api-server"} 1 {job="app-server"} 8 -eval instant at 50m SUM(http_requests) BY (job) % 2 ^ (3 ^ 2) +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ (3 ^ 2) {job="api-server"} 488 {job="app-server"} 40 -eval instant at 50m SUM(http_requests) BY (job) % 2 ^ 3 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ 3 ^ 2 {job="api-server"} 488 {job="app-server"} 40 
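# (Worked arithmetic for the two precedence tests above: ^ is right-associative,
# so 2 ^ 3 ^ 2 parses as 2 ^ (3 ^ 2) = 2^9 = 512, giving 1000 % 512 = 488 and
# 2600 % 512 = 40.)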
-eval instant at 50m SUM(http_requests) BY (job) % 2 ^ 3 ^ 2 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ 3 ^ 2 ^ 2 {job="api-server"} 1000 {job="app-server"} 2600 -eval instant at 50m COUNT(http_requests) BY (job) ^ COUNT(http_requests) BY (job) +eval instant at 50m COUNT(http_requests_total) BY (job) ^ COUNT(http_requests_total) BY (job) {job="api-server"} 256 {job="app-server"} 256 -eval instant at 50m SUM(http_requests) BY (job) / 0 +eval instant at 50m SUM(http_requests_total) BY (job) / 0 {job="api-server"} +Inf {job="app-server"} +Inf -eval instant at 50m http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} +Inf -eval instant at 50m -1 * http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m -1 * http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} -Inf -eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m 0 * http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} NaN -eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} % 0 +eval instant at 50m 0 * http_requests_total{group="canary", instance="0", job="api-server"} % 0 {group="canary", instance="0", job="api-server"} NaN -eval instant at 50m SUM(http_requests) BY (job) + SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) + SUM(http_requests_total) BY (job) {job="api-server"} 2000 {job="app-server"} 5200 -eval instant at 50m (SUM((http_requests)) BY (job)) + SUM(http_requests) BY (job) +eval instant at 50m (SUM((http_requests_total)) BY (job)) + SUM(http_requests_total) BY (job) {job="api-server"} 2000 {job="app-server"} 5200 -eval instant at 50m http_requests{job="api-server", group="canary"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="1", job="api-server"} 400 +eval instant at 50m http_requests_total{job="api-server", group="canary"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="1", job="api-server"} 400 -eval instant at 50m http_requests{job="api-server", group="canary"} + rate(http_requests{job="api-server"}[10m]) * 5 * 60 +eval instant at 50m http_requests_total{job="api-server", group="canary"} + rate(http_requests_total{job="api-server"}[10m]) * 5 * 60 {group="canary", instance="0", job="api-server"} 330 {group="canary", instance="1", job="api-server"} 440 -eval instant at 50m rate(http_requests[25m]) * 25 * 60 +eval instant at 50m rate(http_requests_total[25m]) * 25 * 60 {group="canary", instance="0", job="api-server"} 150 {group="canary", instance="0", job="app-server"} 350 {group="canary", instance="1", job="api-server"} 200 @@ -133,7 +133,7 @@ eval instant at 50m rate(http_requests[25m]) * 25 * 60 {group="production", instance="1", job="api-server"} 100 {group="production", instance="1", job="app-server"} 300 -eval instant at 50m (rate((http_requests[25m])) * 25) * 60 +eval instant at 50m (rate((http_requests_total[25m])) * 25) * 60 {group="canary", instance="0", job="api-server"} 150 {group="canary", instance="0", job="app-server"} 350 {group="canary", instance="1", job="api-server"} 200 @@ -144,53 +144,53 @@ eval instant at 
50m (rate((http_requests[25m])) * 25) * 60 {group="production", instance="1", job="app-server"} 300 -eval instant at 50m http_requests{group="canary"} and http_requests{instance="0"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 +eval instant at 50m http_requests_total{group="canary"} and http_requests_total{instance="0"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 -eval instant at 50m (http_requests{group="canary"} + 1) and http_requests{instance="0"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and http_requests_total{instance="0"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and on(instance, job) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and on(instance, job) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and on(instance) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and on(instance) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and ignoring(group) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and ignoring(group) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and ignoring(group, job) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and ignoring(group, job) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m http_requests{group="canary"} or http_requests{group="production"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 +eval instant at 50m http_requests_total{group="canary"} or http_requests_total{group="production"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + 
http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # On overlap the rhs samples must be dropped. -eval instant at 50m (http_requests{group="canary"} + 1) or http_requests{instance="1"} +eval instant at 50m (http_requests_total{group="canary"} + 1) or http_requests_total{instance="1"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 {group="canary", instance="1", job="app-server"} 801 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # Matching only on instance excludes everything that has instance=0/1 but includes # entries without the instance label. -eval instant at 50m (http_requests{group="canary"} + 1) or on(instance) (http_requests or cpu_count or vector_matching_a) +eval instant at 50m (http_requests_total{group="canary"} + 1) or on(instance) (http_requests_total or cpu_count or vector_matching_a) {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 @@ -198,7 +198,7 @@ eval instant at 50m (http_requests{group="canary"} + 1) or on(instance) (http_re vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 -eval instant at 50m (http_requests{group="canary"} + 1) or ignoring(l, group, job) (http_requests or cpu_count or vector_matching_a) +eval instant at 50m (http_requests_total{group="canary"} + 1) or ignoring(l, group, job) (http_requests_total or cpu_count or vector_matching_a) {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 @@ -206,81 +206,81 @@ eval instant at 50m (http_requests{group="canary"} + 1) or ignoring(l, group, jo vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 -eval instant at 50m http_requests{group="canary"} unless http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} unless on(job) http_requests{instance="0"} +eval instant at 50m http_requests_total{group="canary"} unless on(job) http_requests_total{instance="0"} -eval instant at 50m http_requests{group="canary"} unless on(job, instance) http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless on(job, instance) http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} / on(instance,job) http_requests{group="production"} +eval instant at 50m http_requests_total{group="canary"} / on(instance,job) 
http_requests_total{group="production"} {instance="0", job="api-server"} 3 {instance="0", job="app-server"} 1.4 {instance="1", job="api-server"} 2 {instance="1", job="app-server"} 1.3333333333333333 -eval instant at 50m http_requests{group="canary"} unless ignoring(group, instance) http_requests{instance="0"} +eval instant at 50m http_requests_total{group="canary"} unless ignoring(group, instance) http_requests_total{instance="0"} -eval instant at 50m http_requests{group="canary"} unless ignoring(group) http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless ignoring(group) http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} / ignoring(group) http_requests{group="production"} +eval instant at 50m http_requests_total{group="canary"} / ignoring(group) http_requests_total{group="production"} {instance="0", job="api-server"} 3 {instance="0", job="app-server"} 1.4 {instance="1", job="api-server"} 2 {instance="1", job="app-server"} 1.3333333333333333 # https://github.com/prometheus/prometheus/issues/1489 -eval instant at 50m http_requests AND ON (dummy) vector(1) - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 - -eval instant at 50m http_requests AND IGNORING (group, instance, job) vector(1) - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 +eval instant at 50m http_requests_total AND ON (dummy) vector(1) + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 + +eval instant at 50m http_requests_total AND IGNORING (group, instance, job) vector(1) + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + 
http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # Comparisons. -eval instant at 50m SUM(http_requests) BY (job) > 1000 +eval instant at 50m SUM(http_requests_total) BY (job) > 1000 {job="app-server"} 2600 -eval instant at 50m 1000 < SUM(http_requests) BY (job) +eval instant at 50m 1000 < SUM(http_requests_total) BY (job) {job="app-server"} 2600 -eval instant at 50m SUM(http_requests) BY (job) <= 1000 +eval instant at 50m SUM(http_requests_total) BY (job) <= 1000 {job="api-server"} 1000 -eval instant at 50m SUM(http_requests) BY (job) != 1000 +eval instant at 50m SUM(http_requests_total) BY (job) != 1000 {job="app-server"} 2600 -eval instant at 50m SUM(http_requests) BY (job) == 1000 +eval instant at 50m SUM(http_requests_total) BY (job) == 1000 {job="api-server"} 1000 -eval instant at 50m SUM(http_requests) BY (job) == bool 1000 +eval instant at 50m SUM(http_requests_total) BY (job) == bool 1000 {job="api-server"} 1 {job="app-server"} 0 -eval instant at 50m SUM(http_requests) BY (job) == bool SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) == bool SUM(http_requests_total) BY (job) {job="api-server"} 1 {job="app-server"} 1 -eval instant at 50m SUM(http_requests) BY (job) != bool SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) != bool SUM(http_requests_total) BY (job) {job="api-server"} 0 {job="app-server"} 0 @@ -290,12 +290,12 @@ eval instant at 50m 0 == bool 1 eval instant at 50m 1 == bool 1 1 -eval instant at 50m http_requests{job="api-server", instance="0", group="production"} == bool 100 +eval instant at 50m http_requests_total{job="api-server", instance="0", group="production"} == bool 100 {job="api-server", instance="0", group="production"} 1 # The histogram is ignored here so the result doesn't change but it has an info annotation now. eval_info instant at 5m {job="app-server"} == 80 - http_requests{group="canary", instance="1", job="app-server"} 80 + http_requests_total{group="canary", instance="1", job="app-server"} 80 eval_info instant at 5m http_requests_histogram != 80 @@ -694,7 +694,7 @@ eval_info range from 0 to 24m step 6m left_histograms == 0 eval_info range from 0 to 24m step 6m left_histograms != 3 # No results. -eval range from 0 to 24m step 6m left_histograms != 0 +eval_info range from 0 to 24m step 6m left_histograms != 0 # No results. eval_info range from 0 to 24m step 6m left_histograms > 3 @@ -703,7 +703,7 @@ eval_info range from 0 to 24m step 6m left_histograms > 3 eval_info range from 0 to 24m step 6m left_histograms > 0 # No results. -eval range from 0 to 24m step 6m left_histograms >= 3 +eval_info range from 0 to 24m step 6m left_histograms >= 3 # No results. eval_info range from 0 to 24m step 6m left_histograms >= 0 @@ -718,7 +718,7 @@ eval_info range from 0 to 24m step 6m left_histograms < 0 eval_info range from 0 to 24m step 6m left_histograms <= 3 # No results. -eval range from 0 to 24m step 6m left_histograms <= 0 +eval_info range from 0 to 24m step 6m left_histograms <= 0 # No results. 
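# (The eval -> eval_info flips above reflect that these comparisons between
# native histograms and floats now return no samples while emitting an info
# annotation; eval_info asserts both the empty result and the annotation.)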
eval_info range from 0 to 24m step 6m left_histograms == bool 3 @@ -791,40 +791,87 @@ eval range from 0 to 60m step 6m NaN == left_floats eval range from 0 to 60m step 6m NaN == bool left_floats {} 0 0 _ _ 0 _ 0 0 0 0 0 -eval range from 0 to 24m step 6m 3 == left_histograms +eval_info range from 0 to 24m step 6m 3 == left_histograms # No results. -eval range from 0 to 24m step 6m 0 == left_histograms +eval_info range from 0 to 24m step 6m 0 == left_histograms # No results. -eval range from 0 to 24m step 6m 3 != left_histograms +eval_info range from 0 to 24m step 6m 3 != left_histograms # No results. -eval range from 0 to 24m step 6m 0 != left_histograms +eval_info range from 0 to 24m step 6m 0 != left_histograms # No results. -eval range from 0 to 24m step 6m 3 < left_histograms +eval_info range from 0 to 24m step 6m 3 < left_histograms # No results. -eval range from 0 to 24m step 6m 0 < left_histograms +eval_info range from 0 to 24m step 6m 0 < left_histograms # No results. -eval range from 0 to 24m step 6m 3 < left_histograms +eval_info range from 0 to 24m step 6m 3 < left_histograms # No results. -eval range from 0 to 24m step 6m 0 < left_histograms +eval_info range from 0 to 24m step 6m 0 < left_histograms # No results. -eval range from 0 to 24m step 6m 3 > left_histograms +eval_info range from 0 to 24m step 6m 3 > left_histograms # No results. -eval range from 0 to 24m step 6m 0 > left_histograms +eval_info range from 0 to 24m step 6m 0 > left_histograms # No results. -eval range from 0 to 24m step 6m 3 >= left_histograms +eval_info range from 0 to 24m step 6m 3 >= left_histograms # No results. -eval range from 0 to 24m step 6m 0 >= left_histograms +eval_info range from 0 to 24m step 6m 0 >= left_histograms # No results. clear + +# Test completely discarding or completely including series in results with "and on" +load_with_nhcb 5m + testhistogram_bucket{le="0.1", id="1"} 0+5x10 + testhistogram_bucket{le="0.2", id="1"} 0+7x10 + testhistogram_bucket{le="+Inf", id="1"} 0+12x10 + testhistogram_bucket{le="0.1", id="2"} 0+4x10 + testhistogram_bucket{le="0.2", id="2"} 0+6x10 + testhistogram_bucket{le="+Inf", id="2"} 0+11x10 + +# Include all series when "and on" with non-empty vector. +eval instant at 10m (testhistogram_bucket) and on() (vector(1) == 1) + {__name__="testhistogram_bucket", le="0.1", id="1"} 10.0 + {__name__="testhistogram_bucket", le="0.2", id="1"} 14.0 + {__name__="testhistogram_bucket", le="+Inf", id="1"} 24.0 + {__name__="testhistogram_bucket", le="0.1", id="2"} 8.0 + {__name__="testhistogram_bucket", le="0.2", id="2"} 12.0 + {__name__="testhistogram_bucket", le="+Inf", id="2"} 22.0 + +eval range from 0 to 10m step 5m (testhistogram_bucket) and on() (vector(1) == 1) + {__name__="testhistogram_bucket", le="0.1", id="1"} 0.0 5.0 10.0 + {__name__="testhistogram_bucket", le="0.2", id="1"} 0.0 7.0 14.0 + {__name__="testhistogram_bucket", le="+Inf", id="1"} 0.0 12.0 24.0 + {__name__="testhistogram_bucket", le="0.1", id="2"} 0.0 4.0 8.0 + {__name__="testhistogram_bucket", le="0.2", id="2"} 0.0 6.0 12.0 + {__name__="testhistogram_bucket", le="+Inf", id="2"} 0.0 11.0 22.0 + +# Exclude all series when "and on" with empty vector. +eval instant at 10m (testhistogram_bucket) and on() (vector(-1) == 1) + +eval range from 0 to 10m step 5m (testhistogram_bucket) and on() (vector(-1) == 1) + +# Include all native histogram series when "and on" with non-empty vector. 
+eval instant at 10m (testhistogram) and on() (vector(1) == 1) + {__name__="testhistogram", id="1"} {{schema:-53 sum:0 count:24 buckets:[10 4 10] custom_values:[0.1 0.2]}} + {__name__="testhistogram", id="2"} {{schema:-53 sum:0 count:22 buckets:[8 4 10] custom_values:[0.1 0.2]}} + +eval range from 0 to 10m step 5m (testhistogram) and on() (vector(1) == 1) + {__name__="testhistogram", id="1"} {{schema:-53 sum:0 count:0 custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:12 buckets:[5 2 5] custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:24 buckets:[10 4 10] custom_values:[0.1 0.2]}} + {__name__="testhistogram", id="2"} {{schema:-53 sum:0 count:0 custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:11 buckets:[4 2 5] custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:22 buckets:[8 4 10] custom_values:[0.1 0.2]}} + +# Exclude all native histogram series when "and on" with empty vector. +eval instant at 10m (testhistogram) and on() (vector(-1) == 1) + +eval range from 0 to 10m step 5m (testhistogram) and on() (vector(-1) == 1) + +clear diff --git a/pkg/streamingpromql/testdata/upstream/selectors.test b/pkg/streamingpromql/testdata/upstream/selectors.test index 9d08a7f2590..f9aa1e14e40 100644 --- a/pkg/streamingpromql/testdata/upstream/selectors.test +++ b/pkg/streamingpromql/testdata/upstream/selectors.test @@ -4,111 +4,111 @@ # Provenance-includes-copyright: The Prometheus Authors load 10s - http_requests{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 - http_requests{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 - http_requests{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 - http_requests{job="api-server", instance="1", group="canary"} 0+40x2000 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x2000 -eval instant at 8000s rate(http_requests[1m]) +eval instant at 8000s rate(http_requests_total[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests[1m]) +eval instant at 18000s rate(http_requests_total[1m]) {job="api-server", instance="0", group="production"} 3 {job="api-server", instance="1", group="production"} 3 {job="api-server", instance="0", group="canary"} 8 {job="api-server", instance="1", group="canary"} 4 -eval instant at 8000s rate(http_requests{group=~"pro.*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"pro.*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 18000s rate(http_requests{group=~".*ry", instance="1"}[1m]) +eval instant at 18000s rate(http_requests_total{group=~".*ry", instance="1"}[1m]) {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests{instance!="3"}[1m] offset 10000s) +eval instant at 18000s rate(http_requests_total{instance!="3"}[1m] offset 10000s) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 {job="api-server", instance="1", 
group="canary"} 4 -eval instant at 4000s rate(http_requests{instance!="3"}[1m] offset -4000s) +eval instant at 4000s rate(http_requests_total{instance!="3"}[1m] offset -4000s) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests[40s]) - rate(http_requests[1m] offset 10000s) +eval instant at 18000s rate(http_requests_total[40s]) - rate(http_requests_total[1m] offset 10000s) {job="api-server", instance="0", group="production"} 2 {job="api-server", instance="1", group="production"} 1 {job="api-server", instance="0", group="canary"} 5 {job="api-server", instance="1", group="canary"} 0 # https://github.com/prometheus/prometheus/issues/3575 -eval instant at 0s http_requests{foo!="bar"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!="bar", job="api-server"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!~"bar", job="api-server"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!~"bar", job="api-server", instance="1", x!="y", z="", group!=""} - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 +eval instant at 0s http_requests_total{foo!="bar"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!="bar", job="api-server"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!~"bar", job="api-server"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!~"bar", job="api-server", instance="1", x!="y", z="", group!=""} + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 # https://github.com/prometheus/prometheus/issues/7994 -eval instant at 8000s rate(http_requests{group=~"(?i:PRO).*"}[1m]) +eval 
instant at 8000s rate(http_requests_total{group=~"(?i:PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*?(?i:PRO).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*?(?i:PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:DUC).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:DUC).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:TION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:TION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:TION).*?"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:TION).*?"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~"((?i)PRO).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"((?i)PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*((?i)DUC).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*((?i)DUC).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*((?i)TION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*((?i)TION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~"(?i:PRODUCTION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"(?i:PRODUCTION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:C).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:C).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 @@ -138,14 +138,14 @@ load 5m label_grouping_test{a="a", b="abb"} 0+20x10 load 5m - http_requests{job="api-server", instance="0", group="production"} 0+10x10 - http_requests{job="api-server", instance="1", group="production"} 0+20x10 - http_requests{job="api-server", instance="0", group="canary"} 0+30x10 - http_requests{job="api-server", instance="1", group="canary"} 0+40x10 - http_requests{job="app-server", instance="0", group="production"} 0+50x10 - http_requests{job="app-server", instance="1", group="production"} 0+60x10 - http_requests{job="app-server", instance="0", group="canary"} 0+70x10 - http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x10 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x10 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x10 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x10 + 
http_requests_total{job="app-server", instance="0", group="production"} 0+50x10 + http_requests_total{job="app-server", instance="1", group="production"} 0+60x10 + http_requests_total{job="app-server", instance="0", group="canary"} 0+70x10 + http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10 # Single-letter label names and values. eval instant at 50m x{y="testvalue"} @@ -153,14 +153,14 @@ eval instant at 50m x{y="testvalue"} # Basic Regex eval instant at 50m {__name__=~".+"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 x{y="testvalue"} 100 label_grouping_test{a="a", b="abb"} 200 label_grouping_test{a="aa", b="bb"} 100 @@ -169,34 +169,34 @@ eval instant at 50m {__name__=~".+"} cpu_count{instance="0", type="numa"} 300 eval instant at 50m {job=~".+-server", job!~"api-.+"} - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="app-server"} 600 - -eval instant at 50m http_requests{group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="0", job="api-server"} 100 - -eval instant at 50m http_requests{job=~".+-server",group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="0", job="api-server"} 100 - -eval instant at 50m http_requests{job!~"api-.+",group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - -eval instant at 50m http_requests{group="production",job=~"api-.+"} - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="1", job="api-server"} 200 - -eval instant at 50m http_requests{group="production",job="api-server"} offset 5m - http_requests{group="production", instance="0", job="api-server"} 90 - http_requests{group="production", instance="1", 
job="api-server"} 180 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="app-server"} 600 + +eval instant at 50m http_requests_total{group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="0", job="api-server"} 100 + +eval instant at 50m http_requests_total{job=~".+-server",group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="0", job="api-server"} 100 + +eval instant at 50m http_requests_total{job!~"api-.+",group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + +eval instant at 50m http_requests_total{group="production",job=~"api-.+"} + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="1", job="api-server"} 200 + +eval instant at 50m http_requests_total{group="production",job="api-server"} offset 5m + http_requests_total{group="production", instance="0", job="api-server"} 90 + http_requests_total{group="production", instance="1", job="api-server"} 180 clear diff --git a/pkg/streamingpromql/testdata/upstream/subquery.test b/pkg/streamingpromql/testdata/upstream/subquery.test index 1849a4576bb..bc543fd94a4 100644 --- a/pkg/streamingpromql/testdata/upstream/subquery.test +++ b/pkg/streamingpromql/testdata/upstream/subquery.test @@ -4,43 +4,43 @@ # Provenance-includes-copyright: The Prometheus Authors load 10s - metric 1 2 + metric_total 1 2 # Evaluation before 0s gets no sample. -eval instant at 10s sum_over_time(metric[50s:10s]) +eval instant at 10s sum_over_time(metric_total[50s:10s]) {} 3 -eval instant at 10s sum_over_time(metric[50s:5s]) +eval instant at 10s sum_over_time(metric_total[50s:5s]) {} 4 # Every evaluation yields the last value, i.e. 2 -eval instant at 5m sum_over_time(metric[50s:10s]) +eval instant at 5m sum_over_time(metric_total[50s:10s]) {} 10 -# Series becomes stale at 5m10s (5m after last sample) +# Series becomes stale at 5m10s (5m after last sample). # Hence subquery gets a single sample at 5m10s. 
-eval instant at 5m59s sum_over_time(metric[60s:10s]) +eval instant at 5m59s sum_over_time(metric_total[60s:10s]) {} 2 -eval instant at 10s rate(metric[20s:10s]) +eval instant at 10s rate(metric_total[20s:10s]) {} 0.1 -eval instant at 20s rate(metric[20s:5s]) +eval instant at 20s rate(metric_total[20s:5s]) {} 0.06666666666666667 clear load 10s - http_requests{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 - http_requests{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 - http_requests{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 - http_requests{job="api-server", instance="1", group="canary"} 0+40x2000 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x2000 -eval instant at 8000s rate(http_requests{group=~"pro.*"}[1m:10s]) +eval instant at 8000s rate(http_requests_total{group=~"pro.*"}[1m:10s]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 20000s avg_over_time(rate(http_requests[1m])[1m:1s]) +eval instant at 20000s avg_over_time(rate(http_requests_total[1m])[1m:1s]) {job="api-server", instance="0", group="canary"} 8 {job="api-server", instance="1", group="canary"} 4 {job="api-server", instance="1", group="production"} 3 @@ -49,64 +49,64 @@ eval instant at 20000s avg_over_time(rate(http_requests[1m])[1m:1s]) clear load 10s - metric1 0+1x1000 - metric2 0+2x1000 - metric3 0+3x1000 + metric1_total 0+1x1000 + metric2_total 0+2x1000 + metric3_total 0+3x1000 -eval instant at 1000s sum_over_time(metric1[30s:10s]) +eval instant at 1000s sum_over_time(metric1_total[30s:10s]) {} 297 # This is (97 + 98*2 + 99*2 + 100), because other than 97@975s and 100@1000s, # everything else is repeated with the 5s step. -eval instant at 1000s sum_over_time(metric1[30s:5s]) +eval instant at 1000s sum_over_time(metric1_total[30s:5s]) {} 591 # Offset is aligned with the step, so this is from [98@980s, 99@990s, 100@1000s]. -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 10s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 10s) {} 297 # Same result for different offsets due to step alignment. 
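The next few evaluations all return 297 because subquery steps are aligned to absolute multiples of the step rather than to the query's evaluation time, so offsets of 3s through 10s shift the window without moving it across a 10s boundary, and the same three points 98@980s, 99@990s and 100@1000s are selected every time. A rough sketch of that alignment arithmetic, assuming epoch-aligned steps and left-open ranges (Prometheus 3.0 semantics); this is an illustration, not the engine's actual code:

package main

import "fmt"

// subqueryTimestamps lists the evaluation timestamps (in seconds) a subquery
// of the given range and step would use at evalTs with an offset.
func subqueryTimestamps(evalTs, rng, step, offset int64) []int64 {
	end := evalTs - offset
	start := end - rng
	first := start - start%step + step // first step boundary strictly after start
	var ts []int64
	for t := first; t <= end; t += step {
		ts = append(ts, t)
	}
	return ts
}

func main() {
	for _, offset := range []int64{10, 9, 7, 5, 3} {
		// Every offset yields [980 990 1000]: the samples 98, 99 and 100.
		fmt.Println(offset, subqueryTimestamps(1010, 30, 10, offset))
	}
}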
-eval instant at 1010s sum_over_time(metric1[30s:10s] offset 9s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 9s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 7s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 7s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 5s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 5s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 3s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30s:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30s:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time(metric1[30:10] offset 3) +eval instant at 1010s sum_over_time(metric1_total[30:10] offset 3) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10] offset 3) +eval instant at 1010s sum_over_time((metric1_total)[30:10] offset 3) {} 297 -# Nested subqueries -eval instant at 1000s rate(sum_over_time(metric1[30s:10s])[50s:10s]) +# Nested subqueries. +eval instant at 1000s rate(sum_over_time(metric1_total[30s:10s])[50s:10s]) {} 0.30000000000000004 -eval instant at 1000s rate(sum_over_time(metric2[30s:10s])[50s:10s]) +eval instant at 1000s rate(sum_over_time(metric2_total[30s:10s])[50s:10s]) {} 0.6000000000000001 - -eval instant at 1000s rate(sum_over_time(metric3[30s:10s])[50s:10s]) + +eval instant at 1000s rate(sum_over_time(metric3_total[30s:10s])[50s:10s]) {} 0.9 - -eval instant at 1000s rate(sum_over_time((metric1+metric2+metric3)[30s:10s])[30s:10s]) + +eval instant at 1000s rate(sum_over_time((metric1_total+metric2_total+metric3_total)[30s:10s])[30s:10s]) {} 1.8 clear @@ -114,28 +114,28 @@ clear # Fibonacci sequence, to ensure the rate is not constant. # Additional note: using subqueries unnecessarily is unwise. 
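That warning about unnecessary subqueries can be made concrete: the inner expression is re-evaluated once per inner step for every outer evaluation point, so costs multiply quickly. A back-of-the-envelope sketch, ignoring lookback and caching details that vary by engine:

package main

import "fmt"

// innerEvaluations estimates how many times a subquery's inner expression is
// evaluated for a range query: one evaluation per inner step per outer step.
// Purely illustrative arithmetic.
func innerEvaluations(outerRangeS, outerStepS, innerRangeS, innerStepS int) int {
	outerSteps := outerRangeS/outerStepS + 1
	innerSteps := innerRangeS / innerStepS
	return outerSteps * innerSteps
}

func main() {
	// A 1h range query at 1m resolution over a [5m:10s] subquery:
	// 61 outer steps x 30 inner steps = 1830 inner evaluations,
	// versus 61 for the equivalent plain [5m] selector.
	fmt.Println(innerEvaluations(3600, 60, 300, 10))
}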
load 7s - metric 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 4807526976 7778742049 12586269025 20365011074 32951280099 53316291173 86267571272 139583862445 225851433717 365435296162 591286729879 956722026041 1548008755920 2504730781961 4052739537881 6557470319842 10610209857723 17167680177565 27777890035288 44945570212853 72723460248141 117669030460994 190392490709135 308061521170129 498454011879264 806515533049393 1304969544928657 2111485077978050 3416454622906707 5527939700884757 8944394323791464 14472334024676221 23416728348467685 37889062373143906 61305790721611591 99194853094755497 160500643816367088 259695496911122585 420196140727489673 679891637638612258 1100087778366101931 1779979416004714189 2880067194370816120 4660046610375530309 7540113804746346429 12200160415121876738 19740274219868223167 31940434634990099905 51680708854858323072 83621143489848422977 135301852344706746049 218922995834555169026 354224848179261915075 573147844013817084101 927372692193078999176 1500520536206896083277 2427893228399975082453 3928413764606871165730 6356306993006846248183 10284720757613717413913 16641027750620563662096 26925748508234281076009 43566776258854844738105 70492524767089125814114 114059301025943970552219 184551825793033096366333 298611126818977066918552 483162952612010163284885 781774079430987230203437 1264937032042997393488322 2046711111473984623691759 3311648143516982017180081 5358359254990966640871840 8670007398507948658051921 14028366653498915298923761 22698374052006863956975682 36726740705505779255899443 59425114757512643212875125 96151855463018422468774568 155576970220531065681649693 251728825683549488150424261 407305795904080553832073954 659034621587630041982498215 1066340417491710595814572169 1725375039079340637797070384 2791715456571051233611642553 4517090495650391871408712937 7308805952221443105020355490 11825896447871834976429068427 19134702400093278081449423917 30960598847965113057878492344 50095301248058391139327916261 81055900096023504197206408605 131151201344081895336534324866 212207101440105399533740733471 343358302784187294870275058337 555565404224292694404015791808 898923707008479989274290850145 1454489111232772683678306641953 2353412818241252672952597492098 3807901929474025356630904134051 6161314747715278029583501626149 9969216677189303386214405760200 16130531424904581415797907386349 26099748102093884802012313146549 42230279526998466217810220532898 68330027629092351019822533679447 110560307156090817237632754212345 178890334785183168257455287891792 289450641941273985495088042104137 468340976726457153752543329995929 757791618667731139247631372100066 1226132595394188293000174702095995 1983924214061919432247806074196061 3210056809456107725247980776292056 5193981023518027157495786850488117 8404037832974134882743767626780173 13598018856492162040239554477268290 22002056689466296922983322104048463 35600075545958458963222876581316753 57602132235424755886206198685365216 93202207781383214849429075266681969 150804340016807970735635273952047185 244006547798191185585064349218729154 394810887814999156320699623170776339 638817435613190341905763972389505493 1033628323428189498226463595560281832 1672445759041379840132227567949787325 2706074082469569338358691163510069157 4378519841510949178490918731459856482 7084593923980518516849609894969925639 
11463113765491467695340528626429782121 18547707689471986212190138521399707760 + metric_total 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 4807526976 7778742049 12586269025 20365011074 32951280099 53316291173 86267571272 139583862445 225851433717 365435296162 591286729879 956722026041 1548008755920 2504730781961 4052739537881 6557470319842 10610209857723 17167680177565 27777890035288 44945570212853 72723460248141 117669030460994 190392490709135 308061521170129 498454011879264 806515533049393 1304969544928657 2111485077978050 3416454622906707 5527939700884757 8944394323791464 14472334024676221 23416728348467685 37889062373143906 61305790721611591 99194853094755497 160500643816367088 259695496911122585 420196140727489673 679891637638612258 1100087778366101931 1779979416004714189 2880067194370816120 4660046610375530309 7540113804746346429 12200160415121876738 19740274219868223167 31940434634990099905 51680708854858323072 83621143489848422977 135301852344706746049 218922995834555169026 354224848179261915075 573147844013817084101 927372692193078999176 1500520536206896083277 2427893228399975082453 3928413764606871165730 6356306993006846248183 10284720757613717413913 16641027750620563662096 26925748508234281076009 43566776258854844738105 70492524767089125814114 114059301025943970552219 184551825793033096366333 298611126818977066918552 483162952612010163284885 781774079430987230203437 1264937032042997393488322 2046711111473984623691759 3311648143516982017180081 5358359254990966640871840 8670007398507948658051921 14028366653498915298923761 22698374052006863956975682 36726740705505779255899443 59425114757512643212875125 96151855463018422468774568 155576970220531065681649693 251728825683549488150424261 407305795904080553832073954 659034621587630041982498215 1066340417491710595814572169 1725375039079340637797070384 2791715456571051233611642553 4517090495650391871408712937 7308805952221443105020355490 11825896447871834976429068427 19134702400093278081449423917 30960598847965113057878492344 50095301248058391139327916261 81055900096023504197206408605 131151201344081895336534324866 212207101440105399533740733471 343358302784187294870275058337 555565404224292694404015791808 898923707008479989274290850145 1454489111232772683678306641953 2353412818241252672952597492098 3807901929474025356630904134051 6161314747715278029583501626149 9969216677189303386214405760200 16130531424904581415797907386349 26099748102093884802012313146549 42230279526998466217810220532898 68330027629092351019822533679447 110560307156090817237632754212345 178890334785183168257455287891792 289450641941273985495088042104137 468340976726457153752543329995929 757791618667731139247631372100066 1226132595394188293000174702095995 1983924214061919432247806074196061 3210056809456107725247980776292056 5193981023518027157495786850488117 8404037832974134882743767626780173 13598018856492162040239554477268290 22002056689466296922983322104048463 35600075545958458963222876581316753 57602132235424755886206198685365216 93202207781383214849429075266681969 150804340016807970735635273952047185 244006547798191185585064349218729154 394810887814999156320699623170776339 638817435613190341905763972389505493 1033628323428189498226463595560281832 1672445759041379840132227567949787325 2706074082469569338358691163510069157 
4378519841510949178490918731459856482 7084593923980518516849609894969925639 11463113765491467695340528626429782121 18547707689471986212190138521399707760 # Extrapolated from [3@21, 144@77]: (144 - 3) / (77 - 21) -eval instant at 80s rate(metric[1m]) +eval instant at 80s rate(metric_total[1m]) {} 2.517857143 # Extrapolated to range start for counter, [2@20, 144@80]: (144 - 2) / (80 - 20) -eval instant at 80s rate(metric[1m500ms:10s]) +eval instant at 80s rate(metric_total[1m500ms:10s]) {} 2.3666666666666667 # Extrapolated to zero value for counter, [2@20, 144@80]: (144 - 0) / 61 -eval instant at 80s rate(metric[1m1s:10s]) +eval instant at 80s rate(metric_total[1m1s:10s]) {} 2.360655737704918 # Only one value between 10s and 20s, 2@14 -eval instant at 20s min_over_time(metric[10s]) +eval instant at 20s min_over_time(metric_total[10s]) {} 2 # min(2@20) -eval instant at 20s min_over_time(metric[15s:10s]) +eval instant at 20s min_over_time(metric_total[15s:10s]) {} 1 -eval instant at 20m min_over_time(rate(metric[5m])[20m:1m]) +eval instant at 20m min_over_time(rate(metric_total[5m])[20m:1m]) {} 0.12119047619047618 diff --git a/pkg/streamingpromql/types/fpoint_ring_buffer.go b/pkg/streamingpromql/types/fpoint_ring_buffer.go index e18964304e3..53dbd1d1387 100644 --- a/pkg/streamingpromql/types/fpoint_ring_buffer.go +++ b/pkg/streamingpromql/types/fpoint_ring_buffer.go @@ -64,7 +64,7 @@ func (b *FPointRingBuffer) Append(p promql.FPoint) error { if !isPowerOfTwo(cap(newSlice)) { // We rely on the capacity being a power of two for the pointsIndexMask optimisation below. // If we can guarantee that newSlice has a capacity that is a power of two in the future, then we can drop this check. - panic(fmt.Sprintf("pool returned slice of capacity %v (requested %v), but wanted a power of two", cap(newSlice), newSize)) + return fmt.Errorf("pool returned slice of capacity %v (requested %v), but wanted a power of two", cap(newSlice), newSize) } newSlice = newSlice[:cap(newSlice)] @@ -147,10 +147,10 @@ func (b *FPointRingBuffer) Release() { // s will be returned to the pool when Close is called, Use is called again, or the buffer needs to expand, so callers // should not return s to the pool themselves. // s must have a capacity that is a power of two. -func (b *FPointRingBuffer) Use(s []promql.FPoint) { +func (b *FPointRingBuffer) Use(s []promql.FPoint) error { if !isPowerOfTwo(cap(s)) { // We rely on the capacity being a power of two for the pointsIndexMask optimisation below. - panic(fmt.Sprintf("slice capacity must be a power of two, but is %v", cap(s))) + return fmt.Errorf("slice capacity must be a power of two, but is %v", cap(s)) } putFPointSliceForRingBuffer(b.points, b.memoryConsumptionTracker) @@ -159,6 +159,7 @@ func (b *FPointRingBuffer) Use(s []promql.FPoint) { b.firstIndex = 0 b.size = len(s) b.pointsIndexMask = cap(s) - 1 + return nil } // Close releases any resources associated with this buffer. diff --git a/pkg/streamingpromql/types/hpoint_ring_buffer.go b/pkg/streamingpromql/types/hpoint_ring_buffer.go index de95f6e2f97..e7c7293ff29 100644 --- a/pkg/streamingpromql/types/hpoint_ring_buffer.go +++ b/pkg/streamingpromql/types/hpoint_ring_buffer.go @@ -124,7 +124,7 @@ func (b *HPointRingBuffer) NextPoint() (*promql.HPoint, error) { if !isPowerOfTwo(cap(newSlice)) { // We rely on the capacity being a power of two for the pointsIndexMask optimisation below. // If we can guarantee that newSlice has a capacity that is a power of two in the future, then we can drop this check. 
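The power-of-two capacity these checks insist on is what makes the ring buffers' pointsIndexMask work: when the capacity c is a power of two, i & (c-1) equals i % c, so wrap-around indexing avoids a division. A tiny self-contained demonstration of the invariant the checks protect:

package main

import "fmt"

func main() {
	const capacity = 8 // must be a power of two for the mask to be valid
	const mask = capacity - 1

	for i := 0; i < 100; i++ {
		// The AND only matches the modulo because capacity is a power of
		// two; a pool returning, say, a 12-element slice would silently
		// corrupt indexing, which is why the buffers verify the capacity.
		if i&mask != i%capacity {
			panic("mask and modulo disagree")
		}
	}
	fmt.Println("i & (capacity-1) == i % capacity for every i")
}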
- panic(fmt.Sprintf("pool returned slice of capacity %v (requested %v), but wanted a power of two", cap(newSlice), newSize)) + return nil, fmt.Errorf("pool returned slice of capacity %v (requested %v), but wanted a power of two", cap(newSlice), newSize) } newSlice = newSlice[:cap(newSlice)] @@ -184,10 +184,10 @@ func (b *HPointRingBuffer) Release() { // s will be returned to the pool when Close is called, Use is called again, or the buffer needs to expand, so callers // should not return s to the pool themselves. // s must have a capacity that is a power of two. -func (b *HPointRingBuffer) Use(s []promql.HPoint) { +func (b *HPointRingBuffer) Use(s []promql.HPoint) error { if !isPowerOfTwo(cap(s)) { // We rely on the capacity being a power of two for the pointsIndexMask optimisation below. - panic(fmt.Sprintf("slice capacity must be a power of two, but is %v", cap(s))) + return fmt.Errorf("slice capacity must be a power of two, but is %v", cap(s)) } putHPointSliceForRingBuffer(b.points, b.memoryConsumptionTracker) @@ -196,6 +196,7 @@ func (b *HPointRingBuffer) Use(s []promql.HPoint) { b.firstIndex = 0 b.size = len(s) b.pointsIndexMask = cap(s) - 1 + return nil } // Close releases any resources associated with this buffer. diff --git a/pkg/streamingpromql/types/limiting_pool.go b/pkg/streamingpromql/types/limiting_pool.go index 5cf68ba8edc..4866e258441 100644 --- a/pkg/streamingpromql/types/limiting_pool.go +++ b/pkg/streamingpromql/types/limiting_pool.go @@ -13,8 +13,7 @@ import ( ) const ( - MaxExpectedPointsPerSeries = 100_000 // There's not too much science behind this number: 100000 points allows for a point per minute for just under 70 days. - PointsPerSeriesBucketFactor = 2 + MaxExpectedPointsPerSeries = 100_000 // There's not too much science behind this number: 100000 points allows for a point per minute for just under 70 days. // Treat a native histogram sample as equivalent to this many float samples when considering max in-memory bytes limit. // Keep in mind that float sample = timestamp + float value, so 5x this is equivalent to five timestamps and five floats. 
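With the growth factor hard-wired to 2, the removed PointsPerSeriesBucketFactor constant needs no replacement: bucket i implicitly holds slices of capacity 2^i, and the bucket for a request is the smallest power of two that fits, i.e. bits.Len(size-1). A standalone sketch of that arithmetic, mirroring (but not importing) the BucketedPool rewrite further below:

package main

import (
	"fmt"
	"math/bits"
)

// bucketIndexFor returns the index of the bucket whose capacity (2^index) is
// the smallest power of two >= size, the same trick the rewritten pool uses.
// (size 0 is handled separately before this point in the real code.)
func bucketIndexFor(size uint) int {
	return bits.Len(size - 1)
}

func main() {
	for _, size := range []uint{1, 3, 4, 5, 8, 100_000} {
		idx := bucketIndexFor(size)
		fmt.Printf("size %6d -> bucket %2d (capacity %d)\n", size, idx, 1<<idx)
	}
}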
@@ -36,7 +35,7 @@ var ( EnableManglingReturnedSlices = false FPointSlicePool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) []promql.FPoint { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) []promql.FPoint { return make([]promql.FPoint, 0, size) }), FPointSize, @@ -45,7 +44,7 @@ var ( ) HPointSlicePool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) []promql.HPoint { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) []promql.HPoint { return make([]promql.HPoint, 0, size) }), HPointSize, @@ -57,7 +56,7 @@ var ( ) VectorPool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) promql.Vector { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) promql.Vector { return make(promql.Vector, 0, size) }), VectorSampleSize, @@ -66,7 +65,7 @@ var ( ) Float64SlicePool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) []float64 { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) []float64 { return make([]float64, 0, size) }), Float64Size, @@ -75,7 +74,7 @@ var ( ) BoolSlicePool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) []bool { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) []bool { return make([]bool, 0, size) }), BoolSize, @@ -84,7 +83,7 @@ var ( ) HistogramSlicePool = NewLimitingBucketedPool( - pool.NewBucketedPool(1, MaxExpectedPointsPerSeries, PointsPerSeriesBucketFactor, func(size int) []*histogram.FloatHistogram { + pool.NewBucketedPool(MaxExpectedPointsPerSeries, func(size int) []*histogram.FloatHistogram { return make([]*histogram.FloatHistogram, 0, size) }), HistogramPointerSize, diff --git a/pkg/streamingpromql/types/limiting_pool_test.go b/pkg/streamingpromql/types/limiting_pool_test.go index 418af2efcda..2f9ff89fb08 100644 --- a/pkg/streamingpromql/types/limiting_pool_test.go +++ b/pkg/streamingpromql/types/limiting_pool_test.go @@ -25,7 +25,7 @@ func TestLimitingBucketedPool_Unlimited(t *testing.T) { tracker := limiting.NewMemoryConsumptionTracker(0, metric) p := NewLimitingBucketedPool( - pool.NewBucketedPool(1, 1000, 2, func(size int) []promql.FPoint { return make([]promql.FPoint, 0, size) }), + pool.NewBucketedPool(1000, func(size int) []promql.FPoint { return make([]promql.FPoint, 0, size) }), FPointSize, false, nil, @@ -78,7 +78,7 @@ func TestLimitingPool_Limited(t *testing.T) { tracker := limiting.NewMemoryConsumptionTracker(limit, metric) p := NewLimitingBucketedPool( - pool.NewBucketedPool(1, 1000, 2, func(size int) []promql.FPoint { return make([]promql.FPoint, 0, size) }), + pool.NewBucketedPool(1000, func(size int) []promql.FPoint { return make([]promql.FPoint, 0, size) }), FPointSize, false, nil, @@ -203,7 +203,7 @@ func TestLimitingPool_Mangling(t *testing.T) { tracker := limiting.NewMemoryConsumptionTracker(0, metric) p := NewLimitingBucketedPool( - pool.NewBucketedPool(1, 1000, 2, func(size int) []int { return make([]int, 0, size) }), + pool.NewBucketedPool(1000, func(size int) []int { return make([]int, 0, size) }), 1, false, func(_ int) int { return 123 }, diff --git a/pkg/streamingpromql/types/pool.go b/pkg/streamingpromql/types/pool.go index eebc9c3ea7e..4734524d81f 100644 --- 
a/pkg/streamingpromql/types/pool.go +++ b/pkg/streamingpromql/types/pool.go @@ -9,16 +9,15 @@ import ( ) const ( - maxExpectedSeriesPerResult = 10_000_000 // Likewise, there's not too much science behind this number: this is the based on examining the largest queries seen at Grafana Labs. - seriesPerResultBucketFactor = 2 + maxExpectedSeriesPerResult = 10_000_000 // There's not too much science behind this number: this is based on examining the largest queries seen at Grafana Labs. ) var ( - matrixPool = pool.NewBucketedPool(1, maxExpectedSeriesPerResult, seriesPerResultBucketFactor, func(size int) promql.Matrix { + matrixPool = pool.NewBucketedPool(maxExpectedSeriesPerResult, func(size int) promql.Matrix { return make(promql.Matrix, 0, size) }) - seriesMetadataSlicePool = pool.NewBucketedPool(1, maxExpectedSeriesPerResult, seriesPerResultBucketFactor, func(size int) []SeriesMetadata { + seriesMetadataSlicePool = pool.NewBucketedPool(maxExpectedSeriesPerResult, func(size int) []SeriesMetadata { return make([]SeriesMetadata, 0, size) }) ) diff --git a/pkg/streamingpromql/types/ring_buffer_test.go b/pkg/streamingpromql/types/ring_buffer_test.go index ed6b2d2b327..2af450806a7 100644 --- a/pkg/streamingpromql/types/ring_buffer_test.go +++ b/pkg/streamingpromql/types/ring_buffer_test.go @@ -19,7 +19,7 @@ type ringBuffer[T any] interface { DiscardPointsAtOrBefore(t int64) Append(p T) error Reset() - Use(s []T) + Use(s []T) error Release() ViewUntilSearchingForwardsForTesting(maxT int64) ringBufferView[T] ViewUntilSearchingBackwardsForTesting(maxT int64) ringBufferView[T] @@ -120,7 +120,8 @@ func testRingBuffer[T any](t *testing.T, buf ringBuffer[T], points []T) { pointsWithPowerOfTwoCapacity := make([]T, 0, 16) // Use must be passed a slice with a capacity that is equal to a power of 2. pointsWithPowerOfTwoCapacity = append(pointsWithPowerOfTwoCapacity, points...) - buf.Use(pointsWithPowerOfTwoCapacity) + err := buf.Use(pointsWithPowerOfTwoCapacity) + require.NoError(t, err) shouldHavePoints(t, buf, points...) buf.DiscardPointsAtOrBefore(4) @@ -131,7 +132,8 @@ func testRingBuffer[T any](t *testing.T, buf ringBuffer[T], points []T) { subsliceWithPowerOfTwoCapacity := make([]T, 0, 8) // Use must be passed a slice with a capacity that is equal to a power of 2. subsliceWithPowerOfTwoCapacity = append(subsliceWithPowerOfTwoCapacity, points[4:]...) - buf.Use(subsliceWithPowerOfTwoCapacity) + err = buf.Use(subsliceWithPowerOfTwoCapacity) + require.NoError(t, err) shouldHavePoints(t, buf, points[4:]...) } diff --git a/pkg/util/math/math.go b/pkg/util/math/math.go deleted file mode 100644 index 834416de9d2..00000000000 --- a/pkg/util/math/math.go +++ /dev/null @@ -1,26 +0,0 @@ -// SPDX-License-Identifier: AGPL-3.0-only -// Provenance-includes-location: https://github.com/cortexproject/cortex/blob/master/pkg/util/math/math.go -// Provenance-includes-license: Apache-2.0 -// Provenance-includes-copyright: The Cortex Authors. - -package math - -import ( - "golang.org/x/exp/constraints" -) - -// Max returns the maximum of two ordered arguments. -func Max[T constraints.Ordered](a, b T) T { - if a > b { - return a - } - return b -} - -// Min returns the minimum of two ordered arguments.
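Deleting pkg/util/math/math.go outright is possible because Go 1.21 added min and max as built-ins for all ordered types, making these generic Max/Min helpers redundant (assuming every call site was migrated, which this diff doesn't show). For illustration:

package main

import "fmt"

func main() {
	// Since Go 1.21, min and max are language built-ins covering the same
	// ordered types the removed helpers accepted.
	fmt.Println(max(3, 7))     // 7
	fmt.Println(min(2.5, 1.0)) // 1
	fmt.Println(max("a", "b")) // b
}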
-func Min[T constraints.Ordered](a, b T) T { - if a < b { - return a - } - return b -} diff --git a/pkg/util/pool/bucketed_pool.go b/pkg/util/pool/bucketed_pool.go index bbdfaa1e6f8..2fa8d23cac2 100644 --- a/pkg/util/pool/bucketed_pool.go +++ b/pkg/util/pool/bucketed_pool.go @@ -6,41 +6,35 @@ package pool import ( + "fmt" + "math/bits" + "github.com/prometheus/prometheus/util/zeropool" ) // BucketedPool is a bucketed pool for variably sized slices. -// It is similar to prometheus/prometheus' pool.Pool, but uses zeropool.Pool internally, and -// generics to avoid reflection. +// It is similar to prometheus/prometheus' pool.Pool, but: +// - uses zeropool.Pool internally +// - uses generics to avoid reflection +// - only supports a factor of 2 between bucket sizes type BucketedPool[T ~[]E, E any] struct { buckets []zeropool.Pool[T] - sizes []int + maxSize uint // make is the function used to create an empty slice when none exist yet. make func(int) T } -// NewBucketedPool returns a new BucketedPool with size buckets for minSize to maxSize -// increasing by the given factor. -func NewBucketedPool[T ~[]E, E any](minSize, maxSize int, factor int, makeFunc func(int) T) *BucketedPool[T, E] { - if minSize < 1 { - panic("invalid minimum pool size") - } - if maxSize < 1 { +// NewBucketedPool returns a new BucketedPool with buckets separated by a factor of 2 up to maxSize. +func NewBucketedPool[T ~[]E, E any](maxSize uint, makeFunc func(int) T) *BucketedPool[T, E] { + if maxSize <= 1 { panic("invalid maximum pool size") } - if factor < 1 { - panic("invalid factor") - } - var sizes []int - - for s := minSize; s <= maxSize; s = s * factor { - sizes = append(sizes, s) - } + bucketCount := bits.Len(maxSize) p := &BucketedPool[T, E]{ - buckets: make([]zeropool.Pool[T], len(sizes)), - sizes: sizes, + buckets: make([]zeropool.Pool[T], bucketCount), + maxSize: maxSize, make: makeFunc, } @@ -49,39 +43,42 @@ func NewBucketedPool[T ~[]E, E any](minSize, maxSize int, factor int, makeFunc f // Get returns a new slice with capacity greater than or equal to size. func (p *BucketedPool[T, E]) Get(size int) T { - for i, bktSize := range p.sizes { - if size > bktSize { - continue - } - b := p.buckets[i].Get() - if b == nil { - b = p.make(bktSize) - } - return b + if size < 0 { + panic(fmt.Sprintf("BucketedPool.Get with negative size %v", size)) + } + + if size == 0 { + return nil + } + + if uint(size) > p.maxSize { + return p.make(size) } - return p.make(size) + + bucketIndex := bits.Len(uint(size - 1)) + s := p.buckets[bucketIndex].Get() + + if s == nil { + nextPowerOfTwo := 1 << bucketIndex + s = p.make(nextPowerOfTwo) + } + + return s } // Put adds a slice to the right bucket in the pool. // If the slice does not belong to any bucket in the pool, it is ignored. func (p *BucketedPool[T, E]) Put(s T) { - if cap(s) < p.sizes[0] { + size := uint(cap(s)) + + if size == 0 || size > p.maxSize { return } - for i, size := range p.sizes { - if cap(s) > size { - continue - } - - if cap(s) == size { - // Slice is exactly the minimum size for this bucket. Add it to this bucket. - p.buckets[i].Put(s[0:0]) - } else { - // Slice belongs in previous bucket.
- p.buckets[i-1].Put(s[0:0]) - } - - return + bucketIndex := bits.Len(size - 1) + if size != (1 << bucketIndex) { + bucketIndex-- } + + p.buckets[bucketIndex].Put(s[0:0]) } diff --git a/pkg/util/pool/bucketed_pool_test.go b/pkg/util/pool/bucketed_pool_test.go index abafd1c089b..a7d2e754f50 100644 --- a/pkg/util/pool/bucketed_pool_test.go +++ b/pkg/util/pool/bucketed_pool_test.go @@ -16,48 +16,109 @@ func makeFunc(size int) []int { } func TestBucketedPool_HappyPath(t *testing.T) { - testPool := NewBucketedPool(1, 8, 2, makeFunc) + cases := []struct { size int expectedCap int }{ { - size: -1, + size: 0, + expectedCap: 0, + }, + { + size: 1, expectedCap: 1, }, { + // One less than bucket boundary size: 3, expectedCap: 4, }, { - size: 10, - expectedCap: 10, + // Same as bucket boundary + size: 4, + expectedCap: 4, + }, + { + // One more than bucket boundary + size: 5, + expectedCap: 8, + }, + { + // Two more than bucket boundary + size: 6, + expectedCap: 8, + }, + { + size: 8, + expectedCap: 8, + }, + { + size: 16, + expectedCap: 16, + }, + { + size: 20, + expectedCap: 20, // Max size is 19, so we expect to get a slice with the size requested (20), not 32 (the next power of two). }, } - for _, c := range cases { - ret := testPool.Get(c.size) - require.Equal(t, c.expectedCap, cap(ret)) - testPool.Put(ret) + + runTests := func(t *testing.T, returnToPool bool) { + testPool := NewBucketedPool(19, makeFunc) + for _, c := range cases { + ret := testPool.Get(c.size) + require.Equal(t, c.expectedCap, cap(ret)) + require.Len(t, ret, 0) + + if returnToPool { + // Add something to the slice, so we can test that the next consumer of the slice receives a slice of length 0. + if cap(ret) > 0 { + ret = append(ret, 123) + } + + testPool.Put(ret) + } + } } + + t.Run("populated pool", func(t *testing.T) { + runTests(t, true) + }) + + t.Run("empty pool", func(t *testing.T) { + runTests(t, false) + }) } func TestBucketedPool_SliceNotAlignedToBuckets(t *testing.T) { - pool := NewBucketedPool(1, 1000, 10, makeFunc) - pool.Put(make([]int, 0, 2)) - s := pool.Get(3) - require.GreaterOrEqual(t, cap(s), 3) + pool := NewBucketedPool(1000, makeFunc) + pool.Put(make([]int, 0, 5)) + s := pool.Get(6) + require.Equal(t, 8, cap(s)) + require.Len(t, s, 0) } func TestBucketedPool_PutEmptySlice(t *testing.T) { - pool := NewBucketedPool(1, 1000, 10, makeFunc) + pool := NewBucketedPool(1000, makeFunc) pool.Put([]int{}) s := pool.Get(1) - require.GreaterOrEqual(t, cap(s), 1) + require.Equal(t, 1, cap(s)) + require.Len(t, s, 0) +} + +func TestBucketedPool_PutNilSlice(t *testing.T) { + pool := NewBucketedPool(1000, makeFunc) + pool.Put(nil) + s := pool.Get(1) + require.Equal(t, 1, cap(s)) + require.Len(t, s, 0) } -func TestBucketedPool_PutSliceSmallerThanMinimum(t *testing.T) { - pool := NewBucketedPool(3, 1000, 10, makeFunc) - pool.Put([]int{1, 2}) - s := pool.Get(3) - require.GreaterOrEqual(t, cap(s), 3) +func TestBucketedPool_PutSliceLargerThanMaximum(t *testing.T) { + pool := NewBucketedPool(100, makeFunc) + s1 := make([]int, 101) + pool.Put(s1) + s2 := pool.Get(101)[:101] + require.NotSame(t, &s1[0], &s2[0]) + require.Equal(t, 101, cap(s2)) } diff --git a/pkg/util/time.go b/pkg/util/time.go index d897494b186..4b93313f521 100644 --- a/pkg/util/time.go +++ b/pkg/util/time.go @@ -6,15 +6,13 @@ package util import ( - "fmt" "math" "math/rand" - "net/http" "strconv" "sync" "time" - "github.com/grafana/dskit/httpgrpc" + "github.com/efficientgo/core/errors" "github.com/prometheus/common/model" ) @@ -27,30 +25,17 @@ func 
TimeFromMillis(ms int64) time.Time { return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond)).UTC() } -// FormatTimeMillis returns a human readable version of the input time (in milliseconds). +// FormatTimeMillis returns a human-readable version of the input time (in milliseconds). func FormatTimeMillis(ms int64) string { return TimeFromMillis(ms).String() } -// FormatTimeModel returns a human readable version of the input time. +// FormatTimeModel returns a human-readable version of the input time. func FormatTimeModel(t model.Time) string { return TimeFromMillis(int64(t)).String() } -// ParseTimeParam parses the desired time param from a Prometheus http request into an int64, milliseconds since epoch. -func ParseTimeParam(r *http.Request, paramName string, defaultValue int64) (int64, error) { - val := r.FormValue(paramName) - if val == "" { - return defaultValue, nil - } - result, err := ParseTime(val) - if err != nil { - return 0, fmt.Errorf("invalid time value for '%s': %w", paramName, err) - } - return result, nil -} - -// ParseTime parses the string into an int64, milliseconds since epoch. +// ParseTime parses the string into an int64 time, Unix milliseconds since epoch. func ParseTime(s string) (int64, error) { if t, err := strconv.ParseFloat(s, 64); err == nil { s, ns := math.Modf(t) @@ -61,7 +46,22 @@ func ParseTime(s string) (int64, error) { if t, err := time.Parse(time.RFC3339Nano, s); err == nil { return TimeToMillis(t), nil } - return 0, httpgrpc.Errorf(http.StatusBadRequest, "cannot parse %q to a valid timestamp", s) + return 0, errors.Newf("cannot parse %q to a valid timestamp", s) +} + +// ParseDurationMS parses the string into an int64 duration in milliseconds, the elapsed time between two instants. +func ParseDurationMS(s string) (int64, error) { + if d, err := strconv.ParseFloat(s, 64); err == nil { + ts := d * float64(time.Second/time.Millisecond) + if ts > float64(math.MaxInt64) || ts < float64(math.MinInt64) { + return 0, errors.Newf("cannot parse %q to a valid duration. It overflows int64", s) + } + return int64(ts), nil + } + if d, err := model.ParseDuration(s); err == nil { + return int64(d) / int64(time.Millisecond/time.Nanosecond), nil + } + return 0, errors.Newf("cannot parse %q to a valid duration", s) } // DurationWithJitter returns random duration from "input - input*variance" to "input + input*variance" interval. diff --git a/pkg/util/validation/limits.go b/pkg/util/validation/limits.go index 09b212ff2fa..039929f045a 100644 --- a/pkg/util/validation/limits.go +++ b/pkg/util/validation/limits.go @@ -126,6 +126,7 @@ type Limits struct { MetricRelabelConfigs []*relabel.Config `yaml:"metric_relabel_configs,omitempty" json:"metric_relabel_configs,omitempty" doc:"nocli|description=List of metric relabel configurations. Note that in most situations, it is more effective to use metrics relabeling directly in the Prometheus server, e.g. remote_write.write_relabel_configs. Labels available during the relabeling phase and cleaned afterwards: __meta_tenant_id" category:"experimental"` MetricRelabelingEnabled bool `yaml:"metric_relabeling_enabled" json:"metric_relabeling_enabled" category:"experimental"` ServiceOverloadStatusCodeOnRateLimitEnabled bool `yaml:"service_overload_status_code_on_rate_limit_enabled" json:"service_overload_status_code_on_rate_limit_enabled" category:"experimental"` + IngestionArtificialDelay model.Duration `yaml:"ingestion_artificial_delay" json:"ingestion_artificial_delay" category:"experimental" doc:"hidden"` // Ingester enforced limits.
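The new IngestionArtificialDelay limit above deserves a note: per its flag description further down, samples are ingested immediately and only the response to the client is held back until the configured target latency has elapsed. A hypothetical sketch of that behaviour; the function name and call site are illustrative, not the actual distributor code:

package main

import (
	"fmt"
	"time"
)

// sleepUntilTargetDelay pads the request's observed latency up to target:
// it sleeps for whatever remains after the real work took elapsed.
// Ingestion itself has already completed when this runs.
func sleepUntilTargetDelay(target, elapsed time.Duration) time.Duration {
	remaining := target - elapsed
	if remaining <= 0 {
		return 0 // already slower than the target: add nothing
	}
	time.Sleep(remaining)
	return remaining
}

func main() {
	start := time.Now()
	time.Sleep(30 * time.Millisecond) // stand-in for the real ingestion work
	added := sleepUntilTargetDelay(100*time.Millisecond, time.Since(start))
	fmt.Printf("padded response by %v\n", added.Round(time.Millisecond))
}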
// Series MaxGlobalSeriesPerUser int `yaml:"max_global_series_per_user" json:"max_global_series_per_user"` @@ -242,6 +243,7 @@ type Limits struct { OTelMetricSuffixesEnabled bool `yaml:"otel_metric_suffixes_enabled" json:"otel_metric_suffixes_enabled" category:"advanced"` OTelCreatedTimestampZeroIngestionEnabled bool `yaml:"otel_created_timestamp_zero_ingestion_enabled" json:"otel_created_timestamp_zero_ingestion_enabled" category:"experimental"` PromoteOTelResourceAttributes flagext.StringSliceCSV `yaml:"promote_otel_resource_attributes" json:"promote_otel_resource_attributes" category:"experimental"` + OTelKeepIdentifyingResourceAttributes bool `yaml:"otel_keep_identifying_resource_attributes" json:"otel_keep_identifying_resource_attributes" category:"experimental"` // Ingest storage. IngestStorageReadConsistency string `yaml:"ingest_storage_read_consistency" json:"ingest_storage_read_consistency" category:"experimental"` @@ -280,6 +282,8 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.BoolVar(&l.OTelMetricSuffixesEnabled, "distributor.otel-metric-suffixes-enabled", false, "Whether to enable automatic suffixes to names of metrics ingested through OTLP.") f.BoolVar(&l.OTelCreatedTimestampZeroIngestionEnabled, "distributor.otel-created-timestamp-zero-ingestion-enabled", false, "Whether to enable translation of OTel start timestamps to Prometheus zero samples in the OTLP endpoint.") f.Var(&l.PromoteOTelResourceAttributes, "distributor.otel-promote-resource-attributes", "Optionally specify OTel resource attributes to promote to labels.") + f.BoolVar(&l.OTelKeepIdentifyingResourceAttributes, "distributor.otel-keep-identifying-resource-attributes", false, "Whether to keep identifying OTel resource attributes in the target_info metric on top of converting to job and instance labels.") + f.Var(&l.IngestionArtificialDelay, "distributor.ingestion-artificial-delay", "Target ingestion delay. If set to a non-zero value, the distributor will artificially delay the ingestion time-frame by the specified duration by computing the difference between actual ingestion and the target. There is no delay on actual ingestion of samples; only the response back to the client is delayed.") f.IntVar(&l.MaxGlobalSeriesPerUser, MaxSeriesPerUserFlag, 150000, "The maximum number of in-memory series per tenant, across the cluster before replication. 0 to disable.") f.IntVar(&l.MaxGlobalSeriesPerMetric, MaxSeriesPerMetricFlag, 0, "The maximum number of in-memory series per metric name, across the cluster before replication. 0 to disable.") @@ -898,7 +902,7 @@ func (o *Overrides) RulerTenantShardSize(userID string) int { func (o *Overrides) RulerMaxRulesPerRuleGroup(userID, namespace string) int { u := o.getOverridesForUser(userID) - if namespaceLimit, ok := u.RulerMaxRulesPerRuleGroupByNamespace.data[namespace]; ok { + if namespaceLimit, ok := u.RulerMaxRulesPerRuleGroupByNamespace.Read()[namespace]; ok { return namespaceLimit } @@ -914,7 +918,7 @@ func (o *Overrides) RulerMaxRulesPerRuleGroup(userID, namespace string) int { func (o *Overrides) RulerMaxRuleGroupsPerTenant(userID, namespace string) int { u := o.getOverridesForUser(userID) - if namespaceLimit, ok := u.RulerMaxRuleGroupsPerTenantByNamespace.data[namespace]; ok { + if namespaceLimit, ok := u.RulerMaxRuleGroupsPerTenantByNamespace.Read()[namespace]; ok { return namespaceLimit } @@ -990,7 +994,7 @@ func (o *Overrides) AlertmanagerReceiversBlockPrivateAddresses(user string) bool // 4.
default limits func (o *Overrides) getNotificationLimitForUser(user, integration string) float64 { u := o.getOverridesForUser(user) - if n, ok := u.NotificationRateLimitPerIntegration.data[integration]; ok { + if n, ok := u.NotificationRateLimitPerIntegration.Read()[integration]; ok { return n } @@ -1111,6 +1115,15 @@ func (o *Overrides) PromoteOTelResourceAttributes(tenantID string) []string { return o.getOverridesForUser(tenantID).PromoteOTelResourceAttributes } +func (o *Overrides) OTelKeepIdentifyingResourceAttributes(tenantID string) bool { + return o.getOverridesForUser(tenantID).OTelKeepIdentifyingResourceAttributes +} + +// DistributorIngestionArtificialDelay returns the artificial ingestion latency for a given user. +func (o *Overrides) DistributorIngestionArtificialDelay(tenantID string) time.Duration { + return time.Duration(o.getOverridesForUser(tenantID).IngestionArtificialDelay) +} + func (o *Overrides) AlignQueriesWithStep(userID string) bool { return o.getOverridesForUser(userID).AlignQueriesWithStep } diff --git a/pkg/util/validation/limits_map.go b/pkg/util/validation/limits_map.go index 4d991699e0c..324d6ab2592 100644 --- a/pkg/util/validation/limits_map.go +++ b/pkg/util/validation/limits_map.go @@ -9,15 +9,19 @@ import ( "gopkg.in/yaml.v3" ) -// LimitsMap is a generic map that can hold either float64 or int as values. -type LimitsMap[T float64 | int] struct { +// LimitsMap is a flag.Value implementation that looks like a generic map, holding float64s, ints, or strings as values. +type LimitsMap[T float64 | int | string] struct { data map[string]T validator func(k string, v T) error } -func NewLimitsMap[T float64 | int](validator func(k string, v T) error) LimitsMap[T] { +func NewLimitsMap[T float64 | int | string](validator func(k string, v T) error) LimitsMap[T] { + return NewLimitsMapWithData(make(map[string]T), validator) +} + +func NewLimitsMapWithData[T float64 | int | string](data map[string]T, validator func(k string, v T) error) LimitsMap[T] { return LimitsMap[T]{ - data: make(map[string]T), + data: data, validator: validator, } } @@ -45,6 +49,10 @@ func (m LimitsMap[T]) Set(s string) error { return m.updateMap(newMap) } +func (m LimitsMap[T]) Read() map[string]T { + return m.data +} + // Clone returns a copy of the LimitsMap.
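A sketch of how a write path could consume the DistributorIngestionArtificialDelay override added above (editor's illustration; maybeDelayResponse and requestStart are hypothetical names, and the actual distributor wiring is not part of this diff):

package distributor

import (
	"time"

	"github.com/grafana/mimir/pkg/util/validation"
)

// maybeDelayResponse sleeps for whatever remains of the tenant's target delay:
// samples are ingested immediately, only the response to the client is held back.
func maybeDelayResponse(overrides *validation.Overrides, tenantID string, requestStart time.Time) {
	target := overrides.DistributorIngestionArtificialDelay(tenantID)
	if remaining := target - time.Since(requestStart); target > 0 && remaining > 0 {
		time.Sleep(remaining)
	}
}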
func (m LimitsMap[T]) Clone() LimitsMap[T] { newMap := make(map[string]T, len(m.data)) @@ -73,6 +81,7 @@ func (m LimitsMap[T]) updateMap(newMap map[string]T) error { } } + clear(m.data) for k, v := range newMap { m.data[k] = v } diff --git a/pkg/util/validation/limits_map_test.go b/pkg/util/validation/limits_map_test.go index b2a4820772f..d9bc2944a04 100644 --- a/pkg/util/validation/limits_map_test.go +++ b/pkg/util/validation/limits_map_test.go @@ -11,17 +11,45 @@ import ( "gopkg.in/yaml.v3" ) -var fakeValidator = func(_ string, v float64) error { +var fakeFloat64Validator = func(_ string, v float64) error { if v < 0 { return errors.New("value cannot be negative") } return nil } +var fakeIntValidator = func(_ string, v int) error { + if v < 0 { + return errors.New("value cannot be negative") + } + return nil +} + +var fakeStringValidator = func(_ string, v string) error { + if len(v) == 0 { + return errors.New("value cannot be empty") + } + return nil +} + func TestNewLimitsMap(t *testing.T) { - lm := NewLimitsMap(fakeValidator) - lm.data["key1"] = 10 - require.Len(t, lm.data, 1) + t.Run("float64", func(t *testing.T) { + lm := NewLimitsMap(fakeFloat64Validator) + lm.data["key1"] = 10.6 + require.Len(t, lm.Read(), 1) + }) + + t.Run("int", func(t *testing.T) { + lm := NewLimitsMap(fakeIntValidator) + lm.data["key1"] = 10 + require.Len(t, lm.Read(), 1) + }) + + t.Run("string", func(t *testing.T) { + lm := NewLimitsMap(fakeStringValidator) + lm.data["key1"] = "test" + require.Len(t, lm.Read(), 1) + }) } func TestLimitsMap_IsNil(t *testing.T) { @@ -48,170 +76,358 @@ func TestLimitsMap_IsNil(t *testing.T) { } func TestLimitsMap_SetAndString(t *testing.T) { - tc := map[string]struct { - input string - expected map[string]float64 - error string - }{ + t.Run("numeric", func(t *testing.T) { + tc := map[string]struct { + input string + expected map[string]float64 + error string + }{ - "set without error": { - input: `{"key1":10,"key2":20}`, - expected: map[string]float64{"key1": 10, "key2": 20}, - }, - "set with parsing error": { - input: `{"key1": 10, "key2": 20`, - error: "unexpected end of JSON input", - }, - "set with validation error": { - input: `{"key1": -10, "key2": 20}`, - error: "value cannot be negative", - }, - } + "set without error": { + input: `{"key1":10,"key2":20}`, + expected: map[string]float64{"key1": 10, "key2": 20}, + }, + "set with parsing error": { + input: `{"key1": 10, "key2": 20`, + error: "unexpected end of JSON input", + }, + "set with validation error": { + input: `{"key1": -10, "key2": 20}`, + error: "value cannot be negative", + }, + "set with incompatible value type": { + input: `{"key1": "abc", "key2": "def"}`, + error: "json: cannot unmarshal string into Go value of type float64", + }, + } - for name, tt := range tc { - t.Run(name, func(t *testing.T) { - lm := NewLimitsMap(fakeValidator) - err := lm.Set(tt.input) - if tt.error != "" { - require.Error(t, err) - require.Equal(t, tt.error, err.Error()) - } else { - require.NoError(t, err) - require.Equal(t, tt.expected, lm.data) - require.Equal(t, tt.input, lm.String()) - } - }) - } + for name, tt := range tc { + t.Run("numeric/"+name, func(t *testing.T) { + lm := NewLimitsMap(fakeFloat64Validator) + err := lm.Set(tt.input) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.Read()) + require.Equal(t, tt.input, lm.String()) + } + }) + } + }) + + t.Run("string", func(t *testing.T) { + tc := map[string]struct { + 
input string + expected map[string]string + error string + }{ + + "set without error": { + input: `{"key1":"abc","key2":"def"}`, + expected: map[string]string{"key1": "abc", "key2": "def"}, + }, + "set with parsing error": { + input: `{"key1": "abc", "key2": "def`, + error: "unexpected end of JSON input", + }, + "set with validation error": { + input: `{"key1": "", "key2": "def"}`, + error: "value cannot be empty", + }, + "set with incompatible value type": { + input: `{"key1": 10, "key2": 20}`, + error: "json: cannot unmarshal number into Go value of type string", + }, + } + + for name, tt := range tc { + t.Run("string/"+name, func(t *testing.T) { + lm := NewLimitsMap(fakeStringValidator) + err := lm.Set(tt.input) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.Read()) + require.Equal(t, tt.input, lm.String()) + } + }) + } + }) } func TestLimitsMap_UnmarshalYAML(t *testing.T) { - tc := []struct { - name string - input string - expected map[string]float64 - error string - }{ - { - name: "unmarshal without error", - input: ` + t.Run("numeric", func(t *testing.T) { + tc := []struct { + name string + input string + expected map[string]float64 + error string + }{ + { + name: "unmarshal without error", + input: ` key1: 10 key2: 20 `, - expected: map[string]float64{"key1": 10, "key2": 20}, - }, - { - name: "unmarshal with validation error", - input: ` + expected: map[string]float64{"key1": 10, "key2": 20}, + }, + { + name: "unmarshal with validation error", + input: ` key1: -10 key2: 20 `, - error: "value cannot be negative", - }, - { - name: "unmarshal with parsing error", - input: ` + error: "value cannot be negative", + }, + { + name: "unmarshal with parsing error", + input: ` key1: 10 key2: 20 - key3: 30 + key3: 30 `, - error: "yaml: line 3: found a tab character that violates indentation", - }, - } + error: "yaml: line 3: found a tab character that violates indentation", + }, + } + + for _, tt := range tc { + t.Run(tt.name, func(t *testing.T) { + lm := NewLimitsMap(fakeFloat64Validator) + err := yaml.Unmarshal([]byte(tt.input), &lm) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.data) + } + }) + } + }) + + t.Run("string", func(t *testing.T) { + tc := []struct { + name string + input string + expected map[string]string + error string + }{ + { + name: "unmarshal without error", + input: ` +key1: abc +key2: def +`, + expected: map[string]string{"key1": "abc", "key2": "def"}, + }, + { + name: "unmarshal with validation error", + input: ` +key1: abc +key2: "" +`, + error: "value cannot be empty", + }, + { + name: "unmarshal with parsing error", + input: ` +key1: abc +key2: def + key3: ghi +`, + error: "yaml: line 3: found a tab character that violates indentation", + }, + } + + for _, tt := range tc { + t.Run(tt.name, func(t *testing.T) { + lm := NewLimitsMap(fakeStringValidator) + err := yaml.Unmarshal([]byte(tt.input), &lm) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.data) + } + }) + } + }) - for _, tt := range tc { - t.Run(tt.name, func(t *testing.T) { - lm := NewLimitsMap(fakeValidator) - err := yaml.Unmarshal([]byte(tt.input), &lm) - if tt.error != "" { - require.Error(t, err) - require.Equal(t, tt.error, err.Error()) - } else { - require.NoError(t, err) - 
require.Equal(t, tt.expected, lm.data) - } - }) - } } func TestLimitsMap_MarshalYAML(t *testing.T) { - lm := NewLimitsMap(fakeValidator) - lm.data["key1"] = 10 - lm.data["key2"] = 20 + t.Run("numeric", func(t *testing.T) { + lm := NewLimitsMap(fakeFloat64Validator) + lm.data["key1"] = 10 + lm.data["key2"] = 20 + + out, err := yaml.Marshal(&lm) + require.NoError(t, err) + require.Equal(t, "key1: 10\nkey2: 20\n", string(out)) + }) - out, err := yaml.Marshal(&lm) - require.NoError(t, err) - require.Equal(t, "key1: 10\nkey2: 20\n", string(out)) + t.Run("string", func(t *testing.T) { + lm := NewLimitsMap(fakeStringValidator) + lm.data["key1"] = "abc" + lm.data["key2"] = "def" + + out, err := yaml.Marshal(&lm) + require.NoError(t, err) + require.Equal(t, "key1: abc\nkey2: def\n", string(out)) + }) } func TestLimitsMap_Equal(t *testing.T) { - tc := map[string]struct { - map1 LimitsMap[float64] - map2 LimitsMap[float64] - expected bool - }{ - "Equal maps with same key-value pairs": { - map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, - map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, - expected: true, - }, - "Different maps with different lengths": { - map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, - map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, - expected: false, - }, - "Different maps with same keys but different values": { - map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, - map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.2}}, - expected: false, - }, - "Equal empty maps": { - map1: LimitsMap[float64]{data: map[string]float64{}}, - map2: LimitsMap[float64]{data: map[string]float64{}}, - expected: true, - }, - } + t.Run("numeric", func(t *testing.T) { + tc := map[string]struct { + map1 LimitsMap[float64] + map2 LimitsMap[float64] + expected bool + }{ + "Equal maps with same key-value pairs": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + expected: true, + }, + "Different maps with different lengths": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + expected: false, + }, + "Different maps with same keys but different values": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.2}}, + expected: false, + }, + "Equal empty maps": { + map1: LimitsMap[float64]{data: map[string]float64{}}, + map2: LimitsMap[float64]{data: map[string]float64{}}, + expected: true, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + require.Equal(t, tt.expected, tt.map1.Equal(LimitsMap[float64]{data: tt.map2.data})) + require.Equal(t, tt.expected, cmp.Equal(tt.map1, tt.map2)) + }) + } + }) + + t.Run("string", func(t *testing.T) { + tc := map[string]struct { + map1 LimitsMap[string] + map2 LimitsMap[string] + expected bool + }{ + "Equal maps with same key-value pairs": { + map1: LimitsMap[string]{data: map[string]string{"key1": "abc", "key2": "def"}}, + map2: LimitsMap[string]{data: map[string]string{"key1": "abc", "key2": "def"}}, + expected: true, + }, + "Different maps with different lengths": { + map1: LimitsMap[string]{data: map[string]string{"key1": "abc"}}, + map2: LimitsMap[string]{data: map[string]string{"key1": "abc", "key2": "def"}}, + expected: false, + }, + 
"Different maps with same keys but different values": { + map1: LimitsMap[string]{data: map[string]string{"key1": "abc"}}, + map2: LimitsMap[string]{data: map[string]string{"key1": "def"}}, + expected: false, + }, + "Equal empty maps": { + map1: LimitsMap[string]{data: map[string]string{}}, + map2: LimitsMap[string]{data: map[string]string{}}, + expected: true, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + require.Equal(t, tt.expected, tt.map1.Equal(LimitsMap[string]{data: tt.map2.data})) + require.Equal(t, tt.expected, cmp.Equal(tt.map1, tt.map2)) + }) + } + }) - for name, tt := range tc { - t.Run(name, func(t *testing.T) { - require.Equal(t, tt.expected, tt.map1.Equal(LimitsMap[float64]{data: tt.map2.data})) - require.Equal(t, tt.expected, cmp.Equal(tt.map1, tt.map2)) - }) - } } func TestLimitsMap_Clone(t *testing.T) { - // Create an initial LimitsMap with some data. - original := NewLimitsMap[float64](fakeValidator) - original.data["limit1"] = 1.0 - original.data["limit2"] = 2.0 + t.Run("numeric", func(t *testing.T) { + // Create an initial LimitsMap with some data. + original := NewLimitsMap[float64](nil) + original.data["limit1"] = 1.0 + original.data["limit2"] = 2.0 - // Clone the original LimitsMap. - cloned := original.Clone() + // Clone the original LimitsMap. + cloned := original.Clone() - // Check that the cloned LimitsMap is equal to the original. - require.True(t, original.Equal(cloned), "expected cloned LimitsMap to be different from original") + // Check that the cloned LimitsMap is equal to the original. + require.True(t, original.Equal(cloned), "expected cloned LimitsMap to be different from original") - // Modify the original LimitsMap and ensure the cloned map is not affected. - original.data["limit1"] = 10.0 - require.False(t, cloned.data["limit1"] == 10.0, "expected cloned LimitsMap to be unaffected by changes to original") + // Modify the original LimitsMap and ensure the cloned map is not affected. + original.data["limit1"] = 10.0 + require.False(t, cloned.data["limit1"] == 10.0, "expected cloned LimitsMap to be unaffected by changes to original") - // Modify the cloned LimitsMap and ensure the original map is not affected. - cloned.data["limit3"] = 3.0 - _, exists := original.data["limit3"] - require.False(t, exists, "expected original LimitsMap to be unaffected by changes to cloned") + // Modify the cloned LimitsMap and ensure the original map is not affected. + cloned.data["limit3"] = 3.0 + _, exists := original.data["limit3"] + require.False(t, exists, "expected original LimitsMap to be unaffected by changes to cloned") + }) + + t.Run("string", func(t *testing.T) { + // Create an initial LimitsMap with some data. + original := NewLimitsMap[string](nil) + original.data["limit1"] = "abc" + original.data["limit2"] = "def" + + // Clone the original LimitsMap. + cloned := original.Clone() + + // Check that the cloned LimitsMap is equal to the original. + require.True(t, original.Equal(cloned), "expected cloned LimitsMap to be different from original") + + // Modify the original LimitsMap and ensure the cloned map is not affected. + original.data["limit1"] = "zxcv" + require.False(t, cloned.data["limit1"] == "zxcv", "expected cloned LimitsMap to be unaffected by changes to original") + + // Modify the cloned LimitsMap and ensure the original map is not affected. 
+ cloned.data["limit3"] = "test" + _, exists := original.data["limit3"] + require.False(t, exists, "expected original LimitsMap to be unaffected by changes to cloned") + }) } func TestLimitsMap_updateMap(t *testing.T) { - initialData := map[string]float64{"a": 1.0, "b": 2.0} - updateData := map[string]float64{"a": 3.0, "b": -3.0, "c": 5.0} + t.Run("does not apply partial updates", func(t *testing.T) { + initialData := map[string]float64{"a": 1.0, "b": 2.0} + updateData := map[string]float64{"a": 3.0, "b": -3.0, "c": 5.0} + + limitsMap := LimitsMap[float64]{data: initialData, validator: fakeFloat64Validator} + + err := limitsMap.updateMap(updateData) + require.Error(t, err) + + // Verify that no partial updates were applied. + // Because maps in Go are accessed in random order, there's a chance that the validation will fail on the first invalid element of the map thus not asserting partial updates. + expectedData := map[string]float64{"a": 1.0, "b": 2.0} + require.Equal(t, expectedData, limitsMap.Read()) + }) - limitsMap := LimitsMap[float64]{data: initialData, validator: fakeValidator} + t.Run("updates totally replace all values", func(t *testing.T) { + initialData := map[string]float64{"a": 1.0, "b": 2.0} + updateData := map[string]float64{"b": 5.0, "c": 6.0} + limitsMap := LimitsMap[float64]{data: initialData, validator: fakeFloat64Validator} - err := limitsMap.updateMap(updateData) - require.Error(t, err) + err := limitsMap.updateMap(updateData) + require.NoError(t, err) - // Verify that no partial updates were applied. - // Because maps in Go are accessed in random order, there's a chance that the validation will fail on the first invalid element of the map thus not asserting partial updates. - expectedData := map[string]float64{"a": 1.0, "b": 2.0} - require.Equal(t, expectedData, limitsMap.data) + expectedData := updateData + require.Equal(t, expectedData, limitsMap.Read()) + }) } diff --git a/renovate.json5 b/renovate.json5 index 608eb477747..0b6fb461b85 100644 --- a/renovate.json5 +++ b/renovate.json5 @@ -3,7 +3,7 @@ "extends": [ "config:base" ], - "baseBranches": ["main", "release-2.14", "release-2.13"], + "baseBranches": ["main", "release-2.15", "release-2.14", "release-2.13"], "postUpdateOptions": [ "gomodTidy", "gomodUpdateImportPaths" @@ -11,7 +11,7 @@ "schedule": ["before 9am on Monday"], "packageRules": [ { - "matchBaseBranches": ["release-2.14", "release-2.13"], + "matchBaseBranches": ["release-2.15", "release-2.14", "release-2.13"], "packagePatterns": ["*"], "enabled": false }, diff --git a/tools/copyblocks/Dockerfile b/tools/copyblocks/Dockerfile index 91f8761da44..2eb4721b704 100644 --- a/tools/copyblocks/Dockerfile +++ b/tools/copyblocks/Dockerfile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: AGPL-3.0-only -FROM alpine:3.20.3 +FROM alpine:3.21.0 ARG EXTRA_PACKAGES RUN apk add --no-cache ca-certificates tzdata $EXTRA_PACKAGES # Expose TARGETOS and TARGETARCH variables. These are supported by Docker when using BuildKit, but must be "enabled" using ARG. 
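Before the doc-generator diff below, which renders the new type as "map of string to string", here is a minimal sketch of what LimitsMap[string] enables (editor's example; the flag name example.map is hypothetical): the same inline-JSON flag syntax that already worked for float64 and int maps now works for string values.

package main

import (
	"flag"
	"fmt"

	"github.com/grafana/mimir/pkg/util/validation"
)

func main() {
	lm := validation.NewLimitsMap[string](func(k, v string) error { return nil }) // accept any value
	fs := flag.NewFlagSet("example", flag.PanicOnError)
	fs.Var(lm, "example.map", "hypothetical flag backed by LimitsMap[string]")
	_ = fs.Parse([]string{`-example.map={"tenant-a":"abc","tenant-b":"def"}`})
	fmt.Println(lm.Read()["tenant-a"]) // abc
}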
diff --git a/tools/doc-generator/parse/parser.go b/tools/doc-generator/parse/parser.go index 5be316fe8cf..8a49efb7fc2 100644 --- a/tools/doc-generator/parse/parser.go +++ b/tools/doc-generator/parse/parser.go @@ -353,6 +353,8 @@ func getFieldCustomType(t reflect.Type) (string, bool) { return "map of string to float64", true case reflect.TypeOf(validation.LimitsMap[int]{}).String(): return "map of string to int", true + case reflect.TypeOf(validation.LimitsMap[string]{}).String(): + return "map of string to string", true case reflect.TypeOf(&url.URL{}).String(): return "url", true case reflect.TypeOf(time.Duration(0)).String(): @@ -439,6 +441,8 @@ func getCustomFieldType(t reflect.Type) (string, bool) { return "map of string to float64", true case reflect.TypeOf(validation.LimitsMap[int]{}).String(): return "map of string to int", true + case reflect.TypeOf(validation.LimitsMap[string]{}).String(): + return "map of string to string", true case reflect.TypeOf(&url.URL{}).String(): return "url", true case reflect.TypeOf(time.Duration(0)).String(): diff --git a/tools/kafkatool/util.go b/tools/kafkatool/util.go index 757701dbe4d..eb4a2c0758e 100644 --- a/tools/kafkatool/util.go +++ b/tools/kafkatool/util.go @@ -16,7 +16,6 @@ func CreateKafkaClient(kafkaAddress, kafkaClientID string, auth sasl.Mechanism) kgo.ClientID(kafkaClientID), kgo.RecordPartitioner(kgo.ManualPartitioner()), kgo.DisableIdempotentWrite(), - kgo.AllowAutoTopicCreation(), kgo.BrokerMaxWriteBytes(268_435_456), kgo.MaxBufferedBytes(268_435_456), } diff --git a/tools/querytee/proxy.go b/tools/querytee/proxy.go index 3a4f52e21bb..6954835dab5 100644 --- a/tools/querytee/proxy.go +++ b/tools/querytee/proxy.go @@ -6,11 +6,13 @@ package querytee import ( + "encoding/json" "flag" "fmt" "net/http" "net/http/httputil" "net/url" + "os" "strconv" "strings" "sync" @@ -21,8 +23,10 @@ import ( "github.com/grafana/dskit/flagext" "github.com/grafana/dskit/server" "github.com/grafana/dskit/spanlogger" + "github.com/grafana/regexp" "github.com/pkg/errors" "github.com/prometheus/client_golang/prometheus" + "gopkg.in/yaml.v3" ) type ProxyConfig struct { @@ -34,6 +38,8 @@ type ProxyConfig struct { BackendEndpoints string PreferredBackend string BackendReadTimeout time.Duration + BackendConfigFile string + parsedBackendConfig map[string]*BackendConfig CompareResponses bool LogSlowQueryResponseThreshold time.Duration ValueComparisonTolerance float64 @@ -47,6 +53,23 @@ type ProxyConfig struct { SecondaryBackendsRequestProportion float64 } +type BackendConfig struct { + RequestHeaders http.Header `json:"request_headers" yaml:"request_headers"` +} + +func exampleJSONBackendConfig() string { + cfg := BackendConfig{ + RequestHeaders: http.Header{ + "Cache-Control": {"no-store"}, + }, + } + jsonBytes, err := json.Marshal(cfg) + if err != nil { + panic("invalid example backend config: " + err.Error()) + } + return string(jsonBytes) +} + func (cfg *ProxyConfig) RegisterFlags(f *flag.FlagSet) { f.StringVar(&cfg.ServerHTTPServiceAddress, "server.http-service-address", "", "Bind address for server where query-tee service listens for HTTP requests.") f.IntVar(&cfg.ServerHTTPServicePort, "server.http-service-port", 80, "The HTTP port where the query-tee service listens for HTTP requests.") @@ -62,6 +85,7 @@ func (cfg *ProxyConfig) RegisterFlags(f *flag.FlagSet) { f.BoolVar(&cfg.BackendSkipTLSVerify, "backend.skip-tls-verify", false, "Skip TLS verification on backend targets.") f.StringVar(&cfg.PreferredBackend, "backend.preferred", "", "The hostname of the preferred
backend when selecting the response to send back to the client. If no preferred backend is configured then the query-tee will send back to the client the first successful response received without waiting for other backends.") f.DurationVar(&cfg.BackendReadTimeout, "backend.read-timeout", 150*time.Second, "The timeout when reading the response from a backend.") + f.StringVar(&cfg.BackendConfigFile, "backend.config-file", "", "Path to a file with YAML or JSON configuration for each backend. Each key in the YAML/JSON document is a backend hostname. This is an example configuration value for a backend in JSON: "+exampleJSONBackendConfig()) f.BoolVar(&cfg.CompareResponses, "proxy.compare-responses", false, "Compare responses between preferred and secondary endpoints for supported routes.") f.DurationVar(&cfg.LogSlowQueryResponseThreshold, "proxy.log-slow-query-response-threshold", 10*time.Second, "The minimum difference in response time between slowest and fastest back-end over which to log the query. 0 to disable.") f.Float64Var(&cfg.ValueComparisonTolerance, "proxy.value-comparison-tolerance", 0.000001, "The tolerance to apply when comparing floating point values in the responses. 0 to disable tolerance and require exact match (not recommended).") @@ -119,6 +143,17 @@ func NewProxy(cfg ProxyConfig, logger log.Logger, routes []Route, registerer pro return nil, errors.New("preferred backend must be set when secondary backends request proportion is not 1") } + if len(cfg.BackendConfigFile) > 0 { + configBytes, err := os.ReadFile(cfg.BackendConfigFile) + if err != nil { + return nil, fmt.Errorf("failed to read backend config file (%s): %w", cfg.BackendConfigFile, err) + } + err = yaml.Unmarshal(configBytes, &cfg.parsedBackendConfig) + if err != nil { + return nil, fmt.Errorf("failed to parse backend YAML config: %w", err) + } + } + p := &Proxy{ cfg: cfg, logger: logger, @@ -153,7 +188,18 @@ func NewProxy(cfg ProxyConfig, logger log.Logger, routes []Route, registerer pro preferred = preferredIdx == idx } - p.backends = append(p.backends, NewProxyBackend(name, u, cfg.BackendReadTimeout, preferred, cfg.BackendSkipTLSVerify)) + backendCfg := cfg.parsedBackendConfig[name] + if backendCfg == nil { + // In tests, we have the same hostname for all backends, so we also + // support a numeric preferred backend which is the index in the list + // of backends. + backendCfg = cfg.parsedBackendConfig[strconv.Itoa(idx)] + if backendCfg == nil { + backendCfg = &BackendConfig{} + } + } + + p.backends = append(p.backends, NewProxyBackend(name, u, cfg.BackendReadTimeout, preferred, cfg.BackendSkipTLSVerify, *backendCfg)) } // At least 1 backend is required @@ -161,6 +207,11 @@ func NewProxy(cfg ProxyConfig, logger log.Logger, routes []Route, registerer pro return nil, errors.New("at least 1 backend is required") } + err := validateBackendConfig(p.backends, cfg.parsedBackendConfig) + if err != nil { + return nil, fmt.Errorf("validating external backend configs: %w", err) + } + // If the preferred backend is configured, then it must exist among the actual backends. if cfg.PreferredBackend != "" { exists := false @@ -188,6 +239,25 @@ func NewProxy(cfg ProxyConfig, logger log.Logger, routes []Route, registerer pro return p, nil } +func validateBackendConfig(backends []ProxyBackendInterface, config map[string]*BackendConfig) error { + // Tests need to pass the same hostname for all backends, so we also + // support a numeric preferred backend which is the index in the list of backends.
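An aside before the validation helper continues below: the file consumed by the new -backend.config-file flag maps backend hostnames (or, in tests, numeric indexes) to per-backend settings. A minimal decoding sketch with the same BackendConfig shape as above (editor's example; the hostnames are hypothetical):

package main

import (
	"fmt"
	"net/http"

	"gopkg.in/yaml.v3"
)

// BackendConfig mirrors the struct introduced in this diff.
type BackendConfig struct {
	RequestHeaders http.Header `yaml:"request_headers"`
}

func main() {
	// Hypothetical contents of the file passed via -backend.config-file.
	doc := `
mimir-blue:
  request_headers:
    Cache-Control: ["no-store"]
"1": # numeric key, the index-based lookup used by tests
  request_headers:
    X-Test-Header: ["test-value"]
`
	var cfg map[string]*BackendConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg["mimir-blue"].RequestHeaders.Get("Cache-Control")) // no-store
}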
+ numericBackendNameRegex := regexp.MustCompile("^[0-9]+$") + for configuredBackend := range config { + backendExists := false + for _, actualBackend := range backends { + if actualBackend.Name() == configuredBackend { + backendExists = true + break + } + } + if !backendExists && !numericBackendNameRegex.MatchString(configuredBackend) { + return fmt.Errorf("configured backend %s does not exist in the list of actual backends", configuredBackend) + } + } + return nil +} + func (p *Proxy) Start() error { // Setup server first, so we can fail early if the ports are in use. serv, err := server.New(server.Config{ diff --git a/tools/querytee/proxy_backend.go b/tools/querytee/proxy_backend.go index 301c6b4cdc6..394a8828bd5 100644 --- a/tools/querytee/proxy_backend.go +++ b/tools/querytee/proxy_backend.go @@ -37,10 +37,11 @@ type ProxyBackend struct { // Whether this is the preferred backend from which picking up // the response and sending it back to the client. preferred bool + cfg BackendConfig } // NewProxyBackend makes a new ProxyBackend -func NewProxyBackend(name string, endpoint *url.URL, timeout time.Duration, preferred bool, skipTLSVerify bool) ProxyBackendInterface { +func NewProxyBackend(name string, endpoint *url.URL, timeout time.Duration, preferred bool, skipTLSVerify bool, cfg BackendConfig) ProxyBackendInterface { innerTransport := &http.Transport{ Proxy: http.ProxyFromEnvironment, TLSClientConfig: &tls.Config{ @@ -63,6 +64,7 @@ func NewProxyBackend(name string, endpoint *url.URL, timeout time.Duration, pref endpoint: endpoint, timeout: timeout, preferred: preferred, + cfg: cfg, client: &http.Client{ CheckRedirect: func(_ *http.Request, _ []*http.Request) error { return errors.New("the query-tee proxy does not follow redirects") @@ -139,6 +141,12 @@ func (b *ProxyBackend) createBackendRequest(ctx context.Context, orig *http.Requ // Remove Accept-Encoding header to avoid sending compressed responses req.Header.Del("Accept-Encoding") + for headerName, headerValues := range b.cfg.RequestHeaders { + for _, headerValue := range headerValues { + req.Header.Add(headerName, headerValue) + } + } + return req, nil } diff --git a/tools/querytee/proxy_backend_test.go b/tools/querytee/proxy_backend_test.go index 4a892286f8f..9d59a032d30 100644 --- a/tools/querytee/proxy_backend_test.go +++ b/tools/querytee/proxy_backend_test.go @@ -84,7 +84,7 @@ func Test_ProxyBackend_createBackendRequest_HTTPBasicAuthentication(t *testing.T orig.Header.Set("X-Scope-OrgID", testData.clientTenant) } - b := NewProxyBackend("test", u, time.Second, false, false) + b := NewProxyBackend("test", u, time.Second, false, false, defaultBackendConfig()) bp, ok := b.(*ProxyBackend) if !ok { t.Fatalf("Type assertion to *ProxyBackend failed") @@ -98,3 +98,7 @@ func Test_ProxyBackend_createBackendRequest_HTTPBasicAuthentication(t *testing.T }) } } + +func defaultBackendConfig() BackendConfig { + return BackendConfig{} +} diff --git a/tools/querytee/proxy_endpoint_test.go b/tools/querytee/proxy_endpoint_test.go index 2658f171d51..24108364f2a 100644 --- a/tools/querytee/proxy_endpoint_test.go +++ b/tools/querytee/proxy_endpoint_test.go @@ -37,9 +37,9 @@ func Test_ProxyEndpoint_waitBackendResponseForDownstream(t *testing.T) { backendURL3, err := url.Parse("http://backend-3/") require.NoError(t, err) - backendPref := NewProxyBackend("backend-1", backendURL1, time.Second, true, false) - backendOther1 := NewProxyBackend("backend-2", backendURL2, time.Second, false, false) - backendOther2 := NewProxyBackend("backend-3", backendURL3,
time.Second, false, false) + backendPref := NewProxyBackend("backend-1", backendURL1, time.Second, true, false, defaultBackendConfig()) + backendOther1 := NewProxyBackend("backend-2", backendURL2, time.Second, false, false, defaultBackendConfig()) + backendOther2 := NewProxyBackend("backend-3", backendURL3, time.Second, false, false, defaultBackendConfig()) tests := map[string]struct { backends []ProxyBackendInterface @@ -157,8 +157,8 @@ func Test_ProxyEndpoint_Requests(t *testing.T) { require.NoError(t, err) backends := []ProxyBackendInterface{ - NewProxyBackend("backend-1", backendURL1, time.Second, true, false), - NewProxyBackend("backend-2", backendURL2, time.Second, false, false), + NewProxyBackend("backend-1", backendURL1, time.Second, true, false, defaultBackendConfig()), + NewProxyBackend("backend-2", backendURL2, time.Second, false, false, defaultBackendConfig()), } endpoint := NewProxyEndpoint(backends, testRoute, NewProxyMetrics(nil), log.NewNopLogger(), nil, 0, 1.0) @@ -325,8 +325,8 @@ func Test_ProxyEndpoint_Comparison(t *testing.T) { require.NoError(t, err) backends := []ProxyBackendInterface{ - NewProxyBackend("preferred-backend", preferredBackendURL, time.Second, true, false), - NewProxyBackend("secondary-backend", secondaryBackendURL, time.Second, false, false), + NewProxyBackend("preferred-backend", preferredBackendURL, time.Second, true, false, defaultBackendConfig()), + NewProxyBackend("secondary-backend", secondaryBackendURL, time.Second, false, false, defaultBackendConfig()), } logger := newMockLogger() diff --git a/tools/querytee/proxy_test.go b/tools/querytee/proxy_test.go index 85b04e121c8..948ebdf569b 100644 --- a/tools/querytee/proxy_test.go +++ b/tools/querytee/proxy_test.go @@ -11,6 +11,7 @@ import ( "io" "net/http" "net/http/httptest" + "os" "strconv" "strings" "testing" @@ -203,6 +204,7 @@ func Test_Proxy_RequestsForwarding(t *testing.T) { requestPath string requestMethod string backends []mockedBackend + backendConfig map[string]*BackendConfig preferredBackendIdx int expectedStatus int expectedRes string @@ -323,6 +325,29 @@ func Test_Proxy_RequestsForwarding(t *testing.T) { expectedStatus: 200, expectedRes: querySingleMetric1, }, + "adds request headers to specific backend": { + requestPath: "/api/v1/query", + requestMethod: http.MethodGet, + preferredBackendIdx: 0, + backendConfig: map[string]*BackendConfig{ + "0": { + RequestHeaders: map[string][]string{ + "X-Test-Header": {"test-value"}, + }, + }, + }, + backends: []mockedBackend{ + {handler: func(rw http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Header.Get("X-Test-Header"), "test-value") + rw.WriteHeader(http.StatusOK) + }}, + {handler: func(rw http.ResponseWriter, r *http.Request) { + assert.Empty(t, r.Header.Get("X-Test-Header")) + rw.WriteHeader(http.StatusOK) + }}, + }, + expectedStatus: http.StatusOK, + }, } for testName, testData := range tests { @@ -341,6 +366,7 @@ func Test_Proxy_RequestsForwarding(t *testing.T) { cfg := ProxyConfig{ BackendEndpoints: strings.Join(backendURLs, ","), PreferredBackend: strconv.Itoa(testData.preferredBackendIdx), + parsedBackendConfig: testData.backendConfig, ServerHTTPServiceAddress: "localhost", ServerHTTPServicePort: 0, ServerGRPCServiceAddress: "localhost", @@ -671,6 +697,130 @@ func TestProxyHTTPGRPC(t *testing.T) { }) } +func Test_NewProxy_BackendConfigPath(t *testing.T) { + // Helper to create a temporary file with content + createTempFile := func(t *testing.T, content string) string { + tmpfile, err := os.CreateTemp("", "backend-config-*.yaml") 
+ require.NoError(t, err) + + defer tmpfile.Close() + + _, err = tmpfile.Write([]byte(content)) + require.NoError(t, err) + + return tmpfile.Name() + } + + tests := map[string]struct { + configContent string + createFile bool + expectedError string + expectedConfig map[string]*BackendConfig + }{ + "missing file": { + createFile: false, + expectedError: "failed to read backend config file (/nonexistent/path): open /nonexistent/path: no such file or directory", + }, + "empty file": { + createFile: true, + configContent: "", + expectedConfig: map[string]*BackendConfig(nil), + }, + "invalid YAML structure (not a map)": { + createFile: true, + configContent: "- item1\n- item2", + expectedError: "failed to parse backend YAML config:", + }, + "valid configuration": { + createFile: true, + configContent: ` + backend1: + request_headers: + X-Custom-Header: ["value1", "value2"] + Cache-Control: ["no-store"] + backend2: + request_headers: + Authorization: ["Bearer token123"] + `, + expectedConfig: map[string]*BackendConfig{ + "backend1": { + RequestHeaders: http.Header{ + "X-Custom-Header": {"value1", "value2"}, + "Cache-Control": {"no-store"}, + }, + }, + "backend2": { + RequestHeaders: http.Header{ + "Authorization": {"Bearer token123"}, + }, + }, + }, + }, + "configured backend which doesn't exist": { + createFile: true, + configContent: ` + backend1: + request_headers: + X-Custom-Header: ["value1", "value2"] + Cache-Control: ["no-store"] + backend2: + request_headers: + Authorization: ["Bearer token123"] + backend3: + request_headers: + Authorization: ["Bearer token123"] + `, + expectedError: "backend3 does not exist in the list of actual backends", + expectedConfig: map[string]*BackendConfig{ + "backend1": { + RequestHeaders: http.Header{ + "X-Custom-Header": {"value1", "value2"}, + "Cache-Control": {"no-store"}, + }, + }, + "backend2": { + RequestHeaders: http.Header{ + "Authorization": {"Bearer token123"}, + }, + }, + }, + }, + } + + for testName, testCase := range tests { + t.Run(testName, func(t *testing.T) { + // Base config that's valid except for the backend config path + cfg := ProxyConfig{ + BackendEndpoints: "http://backend1:9090,http://backend2:9090", + ServerHTTPServiceAddress: "localhost", + ServerHTTPServicePort: 0, + ServerGRPCServiceAddress: "localhost", + ServerGRPCServicePort: 0, + SecondaryBackendsRequestProportion: 1.0, + } + + if !testCase.createFile { + cfg.BackendConfigFile = "/nonexistent/path" + } else { + tmpPath := createTempFile(t, testCase.configContent) + cfg.BackendConfigFile = tmpPath + defer os.Remove(tmpPath) + } + + p, err := NewProxy(cfg, log.NewNopLogger(), testRoutes, nil) + + if testCase.expectedError != "" { + assert.ErrorContains(t, err, testCase.expectedError) + assert.Nil(t, p) + } else { + assert.NoError(t, err) + assert.NotNil(t, p) + assert.Equal(t, testCase.expectedConfig, p.cfg.parsedBackendConfig) + } + }) + } +} + func mockQueryResponse(path string, status int, res string) http.HandlerFunc { return mockQueryResponseWithExpectedBody(path, "", status, res) } diff --git a/tools/wal-reader/main.go b/tools/wal-reader/main.go index 8399e29f70e..16c95ab2948 100644 --- a/tools/wal-reader/main.go +++ b/tools/wal-reader/main.go @@ -15,8 +15,6 @@ import ( "github.com/prometheus/prometheus/tsdb/chunks" "github.com/prometheus/prometheus/tsdb/record" "github.com/prometheus/prometheus/tsdb/wlog" - - util_math "github.com/grafana/mimir/pkg/util/math" ) func main() { @@ -172,8 +170,8 @@ func printWalEntries(r *wlog.Reader, seriesMap 
map[chunks.HeadSeriesRef]string, log.Println("seg:", seg, "off:", off, "samples record:", s.Ref, s.T, formatTimestamp(s.T), s.V, si) } - *minSampleTime = util_math.Min(s.T, *minSampleTime) - *maxSampleTime = util_math.Max(s.T, *maxSampleTime) + *minSampleTime = min(s.T, *minSampleTime) + *maxSampleTime = max(s.T, *maxSampleTime) } case record.Tombstones: @@ -213,8 +211,8 @@ func printWalEntries(r *wlog.Reader, seriesMap map[chunks.HeadSeriesRef]string, log.Println("seg:", seg, "off:", off, "histograms record:", s.Ref, s.T, formatTimestamp(s.T), si) } - *minSampleTime = util_math.Min(s.T, *minSampleTime) - *maxSampleTime = util_math.Max(s.T, *maxSampleTime) + *minSampleTime = min(s.T, *minSampleTime) + *maxSampleTime = max(s.T, *maxSampleTime) } case record.FloatHistogramSamples: @@ -229,8 +227,8 @@ func printWalEntries(r *wlog.Reader, seriesMap map[chunks.HeadSeriesRef]string, log.Println("seg:", seg, "off:", off, "float histograms record:", s.Ref, s.T, formatTimestamp(s.T), si) } - *minSampleTime = util_math.Min(s.T, *minSampleTime) - *maxSampleTime = util_math.Max(s.T, *maxSampleTime) + *minSampleTime = min(s.T, *minSampleTime) + *maxSampleTime = max(s.T, *maxSampleTime) } case record.Metadata: diff --git a/vendor/github.com/efficientgo/core/testutil/testorbench.go b/vendor/github.com/efficientgo/core/testutil/testorbench.go index c36b8877a12..69f3fe60f8c 100644 --- a/vendor/github.com/efficientgo/core/testutil/testorbench.go +++ b/vendor/github.com/efficientgo/core/testutil/testorbench.go @@ -30,7 +30,13 @@ type TB interface { SetBytes(n int64) N() int + ResetTimer() + StartTimer() + StopTimer() + + ReportAllocs() + ReportMetric(n float64, unit string) } // tb implements TB as well as testing.TB interfaces. @@ -78,8 +84,36 @@ func (t *tb) ResetTimer() { } } +// StartTimer starts a timer, if it's a benchmark, noop otherwise. +func (t *tb) StartTimer() { + if b, ok := t.TB.(*testing.B); ok { + b.StartTimer() + } +} + +// StopTimer stops a timer, if it's a benchmark, noop otherwise. +func (t *tb) StopTimer() { + if b, ok := t.TB.(*testing.B); ok { + b.StopTimer() + } +} + // IsBenchmark returns true if it's a benchmark. func (t *tb) IsBenchmark() bool { _, ok := t.TB.(*testing.B) return ok } + +// ReportAllocs reports allocs if it's a benchmark, noop otherwise. +func (t *tb) ReportAllocs() { + if b, ok := t.TB.(*testing.B); ok { + b.ReportAllocs() + } +} + +// ReportMetric reports metrics if it's a benchmark, noop otherwise. +func (t *tb) ReportMetric(n float64, unit string) { + if b, ok := t.TB.(*testing.B); ok { + b.ReportMetric(n, unit) + } +} diff --git a/vendor/github.com/efficientgo/core/testutil/testutil.go b/vendor/github.com/efficientgo/core/testutil/testutil.go index c74f368507b..c88191bc10d 100644 --- a/vendor/github.com/efficientgo/core/testutil/testutil.go +++ b/vendor/github.com/efficientgo/core/testutil/testutil.go @@ -14,8 +14,19 @@ import ( "github.com/davecgh/go-spew/spew" "github.com/efficientgo/core/errors" "github.com/efficientgo/core/testutil/internal" + "github.com/google/go-cmp/cmp" ) +const limitOfElemChars = 1e3 + +func withLimitf(f string, v ...interface{}) string { + s := fmt.Sprintf(f, v...) + if len(s) > limitOfElemChars { + return s[:limitOfElemChars] + "...(output trimmed)" + } + return s +} + // Assert fails the test if the condition is false. func Assert(tb testing.TB, condition bool, v ...interface{}) { tb.Helper() @@ -28,7 +39,7 @@ func Assert(tb testing.TB, condition bool, v ...interface{}) { if len(v) > 0 { msg = fmt.Sprintf(v[0].(string), v[1:]...) 
} - tb.Fatalf("\033[31m%s:%d: "+msg+"\033[39m\n\n", filepath.Base(file), line) + tb.Fatalf("\033[31m%s:%d: \"%s\"\033[39m\n\n", filepath.Base(file), line, withLimitf(msg)) } // Ok fails the test if an err is not nil. @@ -43,7 +54,7 @@ func Ok(tb testing.TB, err error, v ...interface{}) { if len(v) > 0 { msg = fmt.Sprintf(v[0].(string), v[1:]...) } - tb.Fatalf("\033[31m%s:%d:"+msg+"\n\n unexpected error: %s\033[39m\n\n", filepath.Base(file), line, err.Error()) + tb.Fatalf("\033[31m%s:%d: \"%s\"\n\n unexpected error: %s\033[39m\n\n", filepath.Base(file), line, withLimitf(msg), withLimitf(err.Error())) } // NotOk fails the test if an err is nil. @@ -58,7 +69,7 @@ func NotOk(tb testing.TB, err error, v ...interface{}) { if len(v) > 0 { msg = fmt.Sprintf(v[0].(string), v[1:]...) } - tb.Fatalf("\033[31m%s:%d:"+msg+"\n\n expected error, got nothing \033[39m\n\n", filepath.Base(file), line) + tb.Fatalf("\033[31m%s:%d: \"%s\"\n\n expected error, got nothing \033[39m\n\n", filepath.Base(file), line, withLimitf(msg)) } // Equals fails the test if exp is not equal to act. @@ -67,13 +78,41 @@ func Equals(tb testing.TB, exp, act interface{}, v ...interface{}) { if reflect.DeepEqual(exp, act) { return } - _, file, line, _ := runtime.Caller(1) + fatalNotEqual(tb, exp, act, v...) +} + +func fatalNotEqual(tb testing.TB, exp, act interface{}, v ...interface{}) { + _, file, line, _ := runtime.Caller(2) var msg string if len(v) > 0 { msg = fmt.Sprintf(v[0].(string), v[1:]...) } - tb.Fatal(sprintfWithLimit("\033[31m%s:%d:"+msg+"\n\n\texp: %#v\n\n\tgot: %#v%s\033[39m\n\n", filepath.Base(file), line, exp, act, diff(exp, act))) + tb.Fatalf( + "\033[31m%s:%d: \"%s\"\n\n\texp: %s\n\n\tgot: %s%s\033[39m\n\n", + filepath.Base(file), line, withLimitf(msg), withLimitf("%#v", exp), withLimitf("%#v", act), withLimitf(diff(exp, act)), + ) +} + +type goCmp struct { + opts cmp.Options +} + +// WithGoCmp allows specifying options and using https://github.com/google/go-cmp +// for equality comparisons. The compatibility guarantee of this function's arguments +// is the same as go-cmp (no guarantee due to v0.x). +func WithGoCmp(opts ...cmp.Option) goCmp { + return goCmp{opts: opts} +} + +// Equals uses go-cmp for comparing equality between two structs, and can be used with +// various options defined in go-cmp/cmp and go-cmp/cmp/cmpopts. +func (o goCmp) Equals(tb testing.TB, exp, act interface{}, v ...interface{}) { + tb.Helper() + if cmp.Equal(exp, act, o.opts) { + return + } + fatalNotEqual(tb, exp, act, v...) } // FaultOrPanicToErr returns error if panic of fault was triggered during execution of function. @@ -98,7 +137,7 @@ func ContainsStringSlice(tb testing.TB, haystack, needle []string) { _, file, line, _ := runtime.Caller(1) if !contains(haystack, needle) { - tb.Fatalf(sprintfWithLimit("\033[31m%s:%d: %#v does not contain %#v\033[39m\n\n", filepath.Base(file), line, haystack, needle)) + tb.Fatalf("\033[31m%s:%d: %s does not contain %s\033[39m\n\n", filepath.Base(file), line, withLimitf("%#v", haystack), withLimitf("%#v", needle)) } } @@ -138,14 +177,6 @@ func contains(haystack, needle []string) bool { return false } -func sprintfWithLimit(act string, v ...interface{}) string { - s := fmt.Sprintf(act, v...)
- if len(s) > 10000 { - return s[:10000] + "...(output trimmed)" - } - return s -} - func typeAndKind(v interface{}) (reflect.Type, reflect.Kind) { t := reflect.TypeOf(v) k := t.Kind() diff --git a/vendor/github.com/grafana/dskit/kv/consul/mock.go b/vendor/github.com/grafana/dskit/kv/consul/mock.go index e2f434b613c..ec848e13564 100644 --- a/vendor/github.com/grafana/dskit/kv/consul/mock.go +++ b/vendor/github.com/grafana/dskit/kv/consul/mock.go @@ -15,6 +15,9 @@ import ( "github.com/grafana/dskit/kv/codec" ) +// The max wait time allowed for mockKV operations, in order to have faster tests. +const maxWaitTime = 100 * time.Millisecond + type mockKV struct { mtx sync.Mutex cond *sync.Cond @@ -87,7 +90,7 @@ func (m *mockKV) loop() { select { case <-m.close: return - case <-time.After(time.Second): + case <-time.After(maxWaitTime): m.mtx.Lock() m.cond.Broadcast() m.mtx.Unlock() @@ -258,7 +261,6 @@ func (m *mockKV) ResetIndexForKey(key string) { // mockedMaxWaitTime returns the minimum duration between the input duration // and the max wait time allowed in this mock, in order to have faster tests. func mockedMaxWaitTime(queryWaitTime time.Duration) time.Duration { - const maxWaitTime = time.Second if queryWaitTime > maxWaitTime { return maxWaitTime } diff --git a/vendor/github.com/grafana/dskit/spanprofiler/README.md b/vendor/github.com/grafana/dskit/spanprofiler/README.md index a415985f664..fbe2306edbe 100644 --- a/vendor/github.com/grafana/dskit/spanprofiler/README.md +++ b/vendor/github.com/grafana/dskit/spanprofiler/README.md @@ -6,7 +6,7 @@ The Span Profiler for OpenTracing-Go is a package that seamlessly integrates `op profiling through the use of pprof labels. Accessing trace span profiles is made convenient through the Grafana Explore view. You can find a complete example setup -with Grafana Tempo in the [Pyroscope repository](https://github.com/grafana/pyroscope/tree/main/examples/tracing/tempo): +with Grafana Tempo in the [Pyroscope repository](https://github.com/grafana/pyroscope/tree/main/examples/tracing/golang-push): ![image](https://github.com/grafana/otel-profiling-go/assets/12090599/31e33cd1-818b-4116-b952-c9ec7b1fb593) diff --git a/vendor/github.com/minio/minio-go/v7/api-datatypes.go b/vendor/github.com/minio/minio-go/v7/api-datatypes.go index 97a6f80b259..8a8fd889856 100644 --- a/vendor/github.com/minio/minio-go/v7/api-datatypes.go +++ b/vendor/github.com/minio/minio-go/v7/api-datatypes.go @@ -143,10 +143,11 @@ type UploadInfo struct { // Verified checksum values, if any. // Values are base64 (standard) encoded. // For multipart objects this is a checksum of the checksum of each part. 
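Editor's aside before the remaining checksum fields: the new ChecksumCRC64NVME plumbing in this vendored minio-go update can be exercised roughly like this (illustrative sketch; endpoint, credentials, and bucket are placeholders):

package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("s3.example.com", &minio.Options{ // placeholder endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		panic(err)
	}
	body := strings.NewReader("hello world")
	info, err := client.PutObject(context.Background(), "bucket", "object", body, int64(body.Len()),
		minio.PutObjectOptions{Checksum: minio.ChecksumCRC64NVME}) // request the new algorithm
	if err != nil {
		panic(err)
	}
	fmt.Println(info.ChecksumCRC64NVME) // base64-encoded, now surfaced on UploadInfo
}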
- ChecksumCRC32 string - ChecksumCRC32C string - ChecksumSHA1 string - ChecksumSHA256 string + ChecksumCRC32 string + ChecksumCRC32C string + ChecksumSHA1 string + ChecksumSHA256 string + ChecksumCRC64NVME string } // RestoreInfo contains information of the restore operation of an archived object @@ -215,10 +216,11 @@ type ObjectInfo struct { Restore *RestoreInfo // Checksum values - ChecksumCRC32 string - ChecksumCRC32C string - ChecksumSHA1 string - ChecksumSHA256 string + ChecksumCRC32 string + ChecksumCRC32C string + ChecksumSHA1 string + ChecksumSHA256 string + ChecksumCRC64NVME string Internal *struct { K int // Data blocks diff --git a/vendor/github.com/minio/minio-go/v7/api-get-object.go b/vendor/github.com/minio/minio-go/v7/api-get-object.go index d7fd27835ba..5cc85f61c23 100644 --- a/vendor/github.com/minio/minio-go/v7/api-get-object.go +++ b/vendor/github.com/minio/minio-go/v7/api-get-object.go @@ -318,7 +318,7 @@ func (o *Object) doGetRequest(request getRequest) (getResponse, error) { response := <-o.resCh // Return any error to the top level. - if response.Error != nil { + if response.Error != nil && response.Error != io.EOF { return response, response.Error } @@ -340,7 +340,7 @@ func (o *Object) doGetRequest(request getRequest) (getResponse, error) { // Data are ready on the wire, no need to reinitiate connection in lower level o.seekData = false - return response, nil + return response, response.Error } // setOffset - handles the setting of offsets for diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go b/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go index a70cbea9e57..03bd34f76ba 100644 --- a/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go +++ b/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go @@ -83,10 +83,7 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj // HTTPS connection. hashAlgos, hashSums := c.hashMaterials(opts.SendContentMd5, !opts.DisableContentSha256) if len(hashSums) == 0 { - if opts.UserMetadata == nil { - opts.UserMetadata = make(map[string]string, 1) - } - opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + addAutoChecksumHeaders(&opts) } // Initiate a new multipart upload. @@ -113,7 +110,6 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj // Create checksums // CRC32C is ~50% faster on AMD64 @ 30GB/s - var crcBytes []byte customHeader := make(http.Header) crc := opts.AutoChecksum.Hasher() for partNumber <= totalPartsCount { @@ -154,7 +150,6 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj crc.Write(buf[:length]) cSum := crc.Sum(nil) customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum)) - crcBytes = append(crcBytes, cSum...) } p := uploadPartParams{bucketName: bucketName, objectName: objectName, uploadID: uploadID, reader: rd, partNumber: partNumber, md5Base64: md5Base64, sha256Hex: sha256Hex, size: int64(length), sse: opts.ServerSideEncryption, streamSha256: !opts.DisableContentSha256, customHeader: customHeader} @@ -182,18 +177,21 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj // Loop over total uploaded parts to save them in // Parts array before completing the multipart request. 
+ allParts := make([]ObjectPart, 0, len(partsInfo)) for i := 1; i < partNumber; i++ { part, ok := partsInfo[i] if !ok { return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i)) } + allParts = append(allParts, part) complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{ - ETag: part.ETag, - PartNumber: part.PartNumber, - ChecksumCRC32: part.ChecksumCRC32, - ChecksumCRC32C: part.ChecksumCRC32C, - ChecksumSHA1: part.ChecksumSHA1, - ChecksumSHA256: part.ChecksumSHA256, + ETag: part.ETag, + PartNumber: part.PartNumber, + ChecksumCRC32: part.ChecksumCRC32, + ChecksumCRC32C: part.ChecksumCRC32C, + ChecksumSHA1: part.ChecksumSHA1, + ChecksumSHA256: part.ChecksumSHA256, + ChecksumCRC64NVME: part.ChecksumCRC64NVME, }) } @@ -203,12 +201,8 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj ServerSideEncryption: opts.ServerSideEncryption, AutoChecksum: opts.AutoChecksum, } - if len(crcBytes) > 0 { - // Add hash of hashes. - crc.Reset() - crc.Write(crcBytes) - opts.UserMetadata = map[string]string{opts.AutoChecksum.Key(): base64.StdEncoding.EncodeToString(crc.Sum(nil))} - } + applyAutoChecksum(&opts, allParts) + uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts) if err != nil { return UploadInfo{}, err @@ -354,10 +348,11 @@ func (c *Client) uploadPart(ctx context.Context, p uploadPartParams) (ObjectPart // Once successfully uploaded, return completed part. h := resp.Header objPart := ObjectPart{ - ChecksumCRC32: h.Get("x-amz-checksum-crc32"), - ChecksumCRC32C: h.Get("x-amz-checksum-crc32c"), - ChecksumSHA1: h.Get("x-amz-checksum-sha1"), - ChecksumSHA256: h.Get("x-amz-checksum-sha256"), + ChecksumCRC32: h.Get(ChecksumCRC32.Key()), + ChecksumCRC32C: h.Get(ChecksumCRC32C.Key()), + ChecksumSHA1: h.Get(ChecksumSHA1.Key()), + ChecksumSHA256: h.Get(ChecksumSHA256.Key()), + ChecksumCRC64NVME: h.Get(ChecksumCRC64NVME.Key()), } objPart.Size = p.size objPart.PartNumber = p.partNumber @@ -457,9 +452,10 @@ func (c *Client) completeMultipartUpload(ctx context.Context, bucketName, object Expiration: expTime, ExpirationRuleID: ruleID, - ChecksumSHA256: completeMultipartUploadResult.ChecksumSHA256, - ChecksumSHA1: completeMultipartUploadResult.ChecksumSHA1, - ChecksumCRC32: completeMultipartUploadResult.ChecksumCRC32, - ChecksumCRC32C: completeMultipartUploadResult.ChecksumCRC32C, + ChecksumSHA256: completeMultipartUploadResult.ChecksumSHA256, + ChecksumSHA1: completeMultipartUploadResult.ChecksumSHA1, + ChecksumCRC32: completeMultipartUploadResult.ChecksumCRC32, + ChecksumCRC32C: completeMultipartUploadResult.ChecksumCRC32C, + ChecksumCRC64NVME: completeMultipartUploadResult.ChecksumCRC64NVME, }, nil } diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go b/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go index dac4c0efefd..3ff3b69efd6 100644 --- a/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go +++ b/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go @@ -113,10 +113,7 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN } withChecksum := c.trailingHeaderSupport if withChecksum { - if opts.UserMetadata == nil { - opts.UserMetadata = make(map[string]string, 1) - } - opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + addAutoChecksumHeaders(&opts) } // Initiate a new multipart upload. 
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts) @@ -240,6 +237,7 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN // Gather the responses as they occur and update any // progress bar. + allParts := make([]ObjectPart, 0, totalPartsCount) for u := 1; u <= totalPartsCount; u++ { select { case <-ctx.Done(): @@ -248,16 +246,17 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN if uploadRes.Error != nil { return UploadInfo{}, uploadRes.Error } - + allParts = append(allParts, uploadRes.Part) // Update the totalUploadedSize. totalUploadedSize += uploadRes.Size complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{ - ETag: uploadRes.Part.ETag, - PartNumber: uploadRes.Part.PartNumber, - ChecksumCRC32: uploadRes.Part.ChecksumCRC32, - ChecksumCRC32C: uploadRes.Part.ChecksumCRC32C, - ChecksumSHA1: uploadRes.Part.ChecksumSHA1, - ChecksumSHA256: uploadRes.Part.ChecksumSHA256, + ETag: uploadRes.Part.ETag, + PartNumber: uploadRes.Part.PartNumber, + ChecksumCRC32: uploadRes.Part.ChecksumCRC32, + ChecksumCRC32C: uploadRes.Part.ChecksumCRC32C, + ChecksumSHA1: uploadRes.Part.ChecksumSHA1, + ChecksumSHA256: uploadRes.Part.ChecksumSHA256, + ChecksumCRC64NVME: uploadRes.Part.ChecksumCRC64NVME, }) } } @@ -275,15 +274,7 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN AutoChecksum: opts.AutoChecksum, } if withChecksum { - // Add hash of hashes. - crc := opts.AutoChecksum.Hasher() - for _, part := range complMultipartUpload.Parts { - cs, err := base64.StdEncoding.DecodeString(part.Checksum(opts.AutoChecksum)) - if err == nil { - crc.Write(cs) - } - } - opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))} + applyAutoChecksum(&opts, allParts) } uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts) @@ -312,10 +303,7 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b } if !opts.SendContentMd5 { - if opts.UserMetadata == nil { - opts.UserMetadata = make(map[string]string, 1) - } - opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + addAutoChecksumHeaders(&opts) } // Calculate the optimal parts info for a given size. @@ -342,7 +330,6 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b // Create checksums // CRC32C is ~50% faster on AMD64 @ 30GB/s - var crcBytes []byte customHeader := make(http.Header) crc := opts.AutoChecksum.Hasher() md5Hash := c.md5Hasher() @@ -389,7 +376,6 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b crc.Write(buf[:length]) cSum := crc.Sum(nil) customHeader.Set(opts.AutoChecksum.KeyCapitalized(), base64.StdEncoding.EncodeToString(cSum)) - crcBytes = append(crcBytes, cSum...) } // Update progress reader appropriately to the latest offset @@ -420,18 +406,21 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b // Loop over total uploaded parts to save them in // Parts array before completing the multipart request. 
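The recurring edit in these multipart hunks replaces the inlined checksum-of-checksums bookkeeping (the removed crcBytes accumulation) with collecting allParts and calling applyAutoChecksum(&opts, allParts). What the removed code computed, restated as a standalone sketch using the ObjectPart.Checksum accessor added later in this diff (editor's illustration; applyAutoChecksum itself may handle more cases, such as full-object checksum types):

package main

import (
	"encoding/base64"
	"fmt"

	"github.com/minio/minio-go/v7"
)

// compositeChecksum recomputes the hash-of-hashes over the uploaded parts,
// mirroring the per-call-site code this diff removes.
func compositeChecksum(t minio.ChecksumType, parts []minio.ObjectPart) string {
	h := t.Hasher()
	for _, p := range parts {
		if raw, err := base64.StdEncoding.DecodeString(p.Checksum(t)); err == nil {
			h.Write(raw)
		}
	}
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(compositeChecksum(minio.ChecksumCRC32C, nil)) // checksum over zero parts
}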
+ allParts := make([]ObjectPart, 0, len(partsInfo)) for i := 1; i < partNumber; i++ { part, ok := partsInfo[i] if !ok { return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i)) } + allParts = append(allParts, part) complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{ - ETag: part.ETag, - PartNumber: part.PartNumber, - ChecksumCRC32: part.ChecksumCRC32, - ChecksumCRC32C: part.ChecksumCRC32C, - ChecksumSHA1: part.ChecksumSHA1, - ChecksumSHA256: part.ChecksumSHA256, + ETag: part.ETag, + PartNumber: part.PartNumber, + ChecksumCRC32: part.ChecksumCRC32, + ChecksumCRC32C: part.ChecksumCRC32C, + ChecksumSHA1: part.ChecksumSHA1, + ChecksumSHA256: part.ChecksumSHA256, + ChecksumCRC64NVME: part.ChecksumCRC64NVME, }) } @@ -442,12 +431,7 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b ServerSideEncryption: opts.ServerSideEncryption, AutoChecksum: opts.AutoChecksum, } - if len(crcBytes) > 0 { - // Add hash of hashes. - crc.Reset() - crc.Write(crcBytes) - opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))} - } + applyAutoChecksum(&opts, allParts) uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts) if err != nil { return UploadInfo{}, err @@ -475,10 +459,7 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam opts.AutoChecksum = opts.Checksum } if !opts.SendContentMd5 { - if opts.UserMetadata == nil { - opts.UserMetadata = make(map[string]string, 1) - } - opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + addAutoChecksumHeaders(&opts) } // Cancel all when an error occurs. @@ -510,7 +491,6 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam // Create checksums // CRC32C is ~50% faster on AMD64 @ 30GB/s - var crcBytes []byte crc := opts.AutoChecksum.Hasher() // Total data read and written to server. should be equal to 'size' at the end of the call. @@ -570,7 +550,6 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam crc.Write(buf[:length]) cSum := crc.Sum(nil) customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum)) - crcBytes = append(crcBytes, cSum...) } wg.Add(1) @@ -630,18 +609,21 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam // Loop over total uploaded parts to save them in // Parts array before completing the multipart request. 
+ allParts := make([]ObjectPart, 0, len(partsInfo)) for i := 1; i < partNumber; i++ { part, ok := partsInfo[i] if !ok { return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i)) } + allParts = append(allParts, part) complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{ - ETag: part.ETag, - PartNumber: part.PartNumber, - ChecksumCRC32: part.ChecksumCRC32, - ChecksumCRC32C: part.ChecksumCRC32C, - ChecksumSHA1: part.ChecksumSHA1, - ChecksumSHA256: part.ChecksumSHA256, + ETag: part.ETag, + PartNumber: part.PartNumber, + ChecksumCRC32: part.ChecksumCRC32, + ChecksumCRC32C: part.ChecksumCRC32C, + ChecksumSHA1: part.ChecksumSHA1, + ChecksumSHA256: part.ChecksumSHA256, + ChecksumCRC64NVME: part.ChecksumCRC64NVME, }) } @@ -652,12 +634,8 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam ServerSideEncryption: opts.ServerSideEncryption, AutoChecksum: opts.AutoChecksum, } - if len(crcBytes) > 0 { - // Add hash of hashes. - crc.Reset() - crc.Write(crcBytes) - opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))} - } + applyAutoChecksum(&opts, allParts) + uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts) if err != nil { return UploadInfo{}, err @@ -823,9 +801,10 @@ func (c *Client) putObjectDo(ctx context.Context, bucketName, objectName string, ExpirationRuleID: ruleID, // Checksum values - ChecksumCRC32: h.Get("x-amz-checksum-crc32"), - ChecksumCRC32C: h.Get("x-amz-checksum-crc32c"), - ChecksumSHA1: h.Get("x-amz-checksum-sha1"), - ChecksumSHA256: h.Get("x-amz-checksum-sha256"), + ChecksumCRC32: h.Get(ChecksumCRC32.Key()), + ChecksumCRC32C: h.Get(ChecksumCRC32C.Key()), + ChecksumSHA1: h.Get(ChecksumSHA1.Key()), + ChecksumSHA256: h.Get(ChecksumSHA256.Key()), + ChecksumCRC64NVME: h.Get(ChecksumCRC64NVME.Key()), }, nil } diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object.go b/vendor/github.com/minio/minio-go/v7/api-put-object.go index 10131a5be63..09817578412 100644 --- a/vendor/github.com/minio/minio-go/v7/api-put-object.go +++ b/vendor/github.com/minio/minio-go/v7/api-put-object.go @@ -387,10 +387,7 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam opts.AutoChecksum = opts.Checksum } if !opts.SendContentMd5 { - if opts.UserMetadata == nil { - opts.UserMetadata = make(map[string]string, 1) - } - opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + addAutoChecksumHeaders(&opts) } // Initiate a new multipart upload. @@ -417,7 +414,6 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam // Create checksums // CRC32C is ~50% faster on AMD64 @ 30GB/s - var crcBytes []byte customHeader := make(http.Header) crc := opts.AutoChecksum.Hasher() @@ -443,7 +439,6 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam crc.Write(buf[:length]) cSum := crc.Sum(nil) customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum)) - crcBytes = append(crcBytes, cSum...) } // Update progress reader appropriately to the latest offset @@ -475,18 +470,21 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam // Loop over total uploaded parts to save them in // Parts array before completing the multipart request. 
+ allParts := make([]ObjectPart, 0, len(partsInfo)) for i := 1; i < partNumber; i++ { part, ok := partsInfo[i] if !ok { return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i)) } + allParts = append(allParts, part) complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{ - ETag: part.ETag, - PartNumber: part.PartNumber, - ChecksumCRC32: part.ChecksumCRC32, - ChecksumCRC32C: part.ChecksumCRC32C, - ChecksumSHA1: part.ChecksumSHA1, - ChecksumSHA256: part.ChecksumSHA256, + ETag: part.ETag, + PartNumber: part.PartNumber, + ChecksumCRC32: part.ChecksumCRC32, + ChecksumCRC32C: part.ChecksumCRC32C, + ChecksumSHA1: part.ChecksumSHA1, + ChecksumSHA256: part.ChecksumSHA256, + ChecksumCRC64NVME: part.ChecksumCRC64NVME, }) } @@ -497,12 +495,8 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam ServerSideEncryption: opts.ServerSideEncryption, AutoChecksum: opts.AutoChecksum, } - if len(crcBytes) > 0 { - // Add hash of hashes. - crc.Reset() - crc.Write(crcBytes) - opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))} - } + applyAutoChecksum(&opts, allParts) + uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts) if err != nil { return UploadInfo{}, err diff --git a/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go b/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go index 790606c509d..5e015fb8279 100644 --- a/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go +++ b/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go @@ -18,6 +18,7 @@ package minio import ( + "encoding/base64" "encoding/xml" "errors" "io" @@ -276,10 +277,45 @@ type ObjectPart struct { Size int64 // Checksum values of each part. - ChecksumCRC32 string - ChecksumCRC32C string - ChecksumSHA1 string - ChecksumSHA256 string + ChecksumCRC32 string + ChecksumCRC32C string + ChecksumSHA1 string + ChecksumSHA256 string + ChecksumCRC64NVME string +} + +// Checksum will return the checksum for the given type. +// Will return the empty string if not set. +func (c ObjectPart) Checksum(t ChecksumType) string { + switch { + case t.Is(ChecksumCRC32C): + return c.ChecksumCRC32C + case t.Is(ChecksumCRC32): + return c.ChecksumCRC32 + case t.Is(ChecksumSHA1): + return c.ChecksumSHA1 + case t.Is(ChecksumSHA256): + return c.ChecksumSHA256 + case t.Is(ChecksumCRC64NVME): + return c.ChecksumCRC64NVME + } + return "" +} + +// ChecksumRaw returns the decoded checksum from the part. +func (c ObjectPart) ChecksumRaw(t ChecksumType) ([]byte, error) { + b64 := c.Checksum(t) + if b64 == "" { + return nil, errors.New("no checksum set") + } + decoded, err := base64.StdEncoding.DecodeString(b64) + if err != nil { + return nil, err + } + if len(decoded) != t.RawByteLen() { + return nil, errors.New("checksum length mismatch") + } + return decoded, nil } // ListObjectPartsResult container for ListObjectParts response. @@ -296,6 +332,12 @@ type ListObjectPartsResult struct { NextPartNumberMarker int MaxParts int + // ChecksumAlgorithm will be CRC32, CRC32C, etc. + ChecksumAlgorithm string + + // ChecksumType is FULL_OBJECT or COMPOSITE (assume COMPOSITE when unset) + ChecksumType string + // Indicates whether the returned list of parts is truncated. IsTruncated bool ObjectParts []ObjectPart `xml:"Part"` @@ -320,10 +362,11 @@ type completeMultipartUploadResult struct { ETag string // Checksum values, hash of hashes of parts. 
- ChecksumCRC32 string - ChecksumCRC32C string - ChecksumSHA1 string - ChecksumSHA256 string + ChecksumCRC32 string + ChecksumCRC32C string + ChecksumSHA1 string + ChecksumSHA256 string + ChecksumCRC64NVME string } // CompletePart sub container lists individual part numbers and their @@ -334,10 +377,11 @@ type CompletePart struct { ETag string // Checksum values - ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"` - ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"` - ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"` - ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"` + ChecksumCRC32 string `xml:"ChecksumCRC32,omitempty"` + ChecksumCRC32C string `xml:"ChecksumCRC32C,omitempty"` + ChecksumSHA1 string `xml:"ChecksumSHA1,omitempty"` + ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"` + ChecksumCRC64NVME string `xml:",omitempty"` } // Checksum will return the checksum for the given type. @@ -352,6 +396,8 @@ func (c CompletePart) Checksum(t ChecksumType) string { return c.ChecksumSHA1 case t.Is(ChecksumSHA256): return c.ChecksumSHA256 + case t.Is(ChecksumCRC64NVME): + return c.ChecksumCRC64NVME } return "" } diff --git a/vendor/github.com/minio/minio-go/v7/api.go b/vendor/github.com/minio/minio-go/v7/api.go index 88e8d434777..83c003e499f 100644 --- a/vendor/github.com/minio/minio-go/v7/api.go +++ b/vendor/github.com/minio/minio-go/v7/api.go @@ -133,7 +133,7 @@ type Options struct { // Global constants. const ( libraryName = "minio-go" - libraryVersion = "v7.0.81" + libraryVersion = "v7.0.82" ) // User Agent should always following the below style. diff --git a/vendor/github.com/minio/minio-go/v7/checksum.go b/vendor/github.com/minio/minio-go/v7/checksum.go index 7eb1bf25abf..8e4c27ce42f 100644 --- a/vendor/github.com/minio/minio-go/v7/checksum.go +++ b/vendor/github.com/minio/minio-go/v7/checksum.go @@ -21,11 +21,15 @@ import ( "crypto/sha1" "crypto/sha256" "encoding/base64" + "encoding/binary" + "errors" "hash" "hash/crc32" + "hash/crc64" "io" "math/bits" "net/http" + "sort" ) // ChecksumType contains information about the checksum type. @@ -41,23 +45,41 @@ const ( ChecksumCRC32 // ChecksumCRC32C indicates a CRC32 checksum with Castagnoli table. ChecksumCRC32C + // ChecksumCRC64NVME indicates CRC64 with 0xad93d23594c93659 polynomial. + ChecksumCRC64NVME // Keep after all valid checksums checksumLast + // ChecksumFullObject is a modifier that can be used on CRC32 and CRC32C + // to indicate full object checksums. + ChecksumFullObject + // checksumMask is a mask for valid checksum types. checksumMask = checksumLast - 1 // ChecksumNone indicates no checksum. ChecksumNone ChecksumType = 0 - amzChecksumAlgo = "x-amz-checksum-algorithm" - amzChecksumCRC32 = "x-amz-checksum-crc32" - amzChecksumCRC32C = "x-amz-checksum-crc32c" - amzChecksumSHA1 = "x-amz-checksum-sha1" - amzChecksumSHA256 = "x-amz-checksum-sha256" + // ChecksumFullObjectCRC32 indicates full object CRC32 + ChecksumFullObjectCRC32 = ChecksumCRC32 | ChecksumFullObject + + // ChecksumFullObjectCRC32C indicates full object CRC32C + ChecksumFullObjectCRC32C = ChecksumCRC32C | ChecksumFullObject + + amzChecksumAlgo = "x-amz-checksum-algorithm" + amzChecksumCRC32 = "x-amz-checksum-crc32" + amzChecksumCRC32C = "x-amz-checksum-crc32c" + amzChecksumSHA1 = "x-amz-checksum-sha1" + amzChecksumSHA256 = "x-amz-checksum-sha256" + amzChecksumCRC64NVME = "x-amz-checksum-crc64nvme" ) +// Base returns the base type, without modifiers. 
+func (c ChecksumType) Base() ChecksumType { + return c & checksumMask +} + // Is returns if c is all of t. func (c ChecksumType) Is(t ChecksumType) bool { return c&t == t @@ -75,10 +97,39 @@ func (c ChecksumType) Key() string { return amzChecksumSHA1 case ChecksumSHA256: return amzChecksumSHA256 + case ChecksumCRC64NVME: + return amzChecksumCRC64NVME } return "" } +// CanComposite will return if the checksum type can be used for composite multipart upload on AWS. +func (c ChecksumType) CanComposite() bool { + switch c & checksumMask { + case ChecksumSHA256, ChecksumSHA1, ChecksumCRC32, ChecksumCRC32C: + return true + } + return false +} + +// CanMergeCRC will return if the checksum type can be used for multipart upload on AWS. +func (c ChecksumType) CanMergeCRC() bool { + switch c & checksumMask { + case ChecksumCRC32, ChecksumCRC32C, ChecksumCRC64NVME: + return true + } + return false +} + +// FullObjectRequested will return if the checksum type indicates full object checksum was requested. +func (c ChecksumType) FullObjectRequested() bool { + switch c & (ChecksumFullObject | checksumMask) { + case ChecksumFullObjectCRC32C, ChecksumFullObjectCRC32, ChecksumCRC64NVME: + return true + } + return false +} + // KeyCapitalized returns the capitalized key as used in HTTP headers. func (c ChecksumType) KeyCapitalized() string { return http.CanonicalHeaderKey(c.Key()) @@ -93,10 +144,17 @@ func (c ChecksumType) RawByteLen() int { return sha1.Size case ChecksumSHA256: return sha256.Size + case ChecksumCRC64NVME: + return crc64.Size } return 0 } +const crc64NVMEPolynomial = 0xad93d23594c93659 + +// crc64 uses reversed polynomials. +var crc64Table = crc64.MakeTable(bits.Reverse64(crc64NVMEPolynomial)) + // Hasher returns a hasher corresponding to the checksum type. // Returns nil if no checksum. func (c ChecksumType) Hasher() hash.Hash { @@ -109,13 +167,15 @@ return sha1.New() case ChecksumSHA256: return sha256.New() + case ChecksumCRC64NVME: + return crc64.New(crc64Table) } return nil } // IsSet returns whether the type is valid and known. func (c ChecksumType) IsSet() bool { - return bits.OnesCount32(uint32(c)) == 1 + return bits.OnesCount32(uint32(c&checksumMask)) == 1 } // SetDefault will set the checksum if not already set. @@ -125,6 +185,16 @@ func (c *ChecksumType) SetDefault(t ChecksumType) { } } +// EncodeToString returns the base64-encoded hash of the content provided in b. +func (c ChecksumType) EncodeToString(b []byte) string { + if !c.IsSet() { + return "" + } + h := c.Hasher() + h.Write(b) + return base64.StdEncoding.EncodeToString(h.Sum(nil)) +} + // String returns the type as a string. // CRC32, CRC32C, SHA1, and SHA256 for valid values. // Empty string for unset and "" if not valid. @@ -140,6 +210,8 @@ func (c ChecksumType) String() string { return "SHA256" case ChecksumNone: return "" + case ChecksumCRC64NVME: + return "CRC64NVME" } return "" } @@ -221,3 +293,129 @@ func (c Checksum) Raw() []byte { } return c.r } + +// CompositeChecksum returns the composite checksum of all provided parts. +func (c ChecksumType) CompositeChecksum(p []ObjectPart) (*Checksum, error) { + if !c.CanComposite() { + return nil, errors.New("cannot do composite checksum") + } + sort.Slice(p, func(i, j int) bool { + return p[i].PartNumber < p[j].PartNumber + }) + c = c.Base() + crcBytes := make([]byte, 0, len(p)*c.RawByteLen()) + for _, part := range p { + pCrc, err := part.ChecksumRaw(c) + if err != nil { + return nil, err + } + crcBytes = append(crcBytes, pCrc...)
+ } + h := c.Hasher() + h.Write(crcBytes) + return &Checksum{Type: c, r: h.Sum(nil)}, nil +} + +// FullObjectChecksum will return the full object checksum from provided parts. +func (c ChecksumType) FullObjectChecksum(p []ObjectPart) (*Checksum, error) { + if !c.CanMergeCRC() { + return nil, errors.New("cannot merge this checksum type") + } + c = c.Base() + sort.Slice(p, func(i, j int) bool { + return p[i].PartNumber < p[j].PartNumber + }) + + switch len(p) { + case 0: + return nil, errors.New("no parts given") + case 1: + check, err := p[0].ChecksumRaw(c) + if err != nil { + return nil, err + } + return &Checksum{ + Type: c, + r: check, + }, nil + } + var merged uint32 + var merged64 uint64 + first, err := p[0].ChecksumRaw(c) + if err != nil { + return nil, err + } + sz := p[0].Size + switch c { + case ChecksumCRC32, ChecksumCRC32C: + merged = binary.BigEndian.Uint32(first) + case ChecksumCRC64NVME: + merged64 = binary.BigEndian.Uint64(first) + } + + poly32 := uint32(crc32.IEEE) + if c.Is(ChecksumCRC32C) { + poly32 = crc32.Castagnoli + } + for _, part := range p[1:] { + if part.Size == 0 { + continue + } + sz += part.Size + pCrc, err := part.ChecksumRaw(c) + if err != nil { + return nil, err + } + switch c { + case ChecksumCRC32, ChecksumCRC32C: + merged = crc32Combine(poly32, merged, binary.BigEndian.Uint32(pCrc), part.Size) + case ChecksumCRC64NVME: + merged64 = crc64Combine(bits.Reverse64(crc64NVMEPolynomial), merged64, binary.BigEndian.Uint64(pCrc), part.Size) + } + } + var tmp [8]byte + switch c { + case ChecksumCRC32, ChecksumCRC32C: + binary.BigEndian.PutUint32(tmp[:], merged) + return &Checksum{ + Type: c, + r: tmp[:4], + }, nil + case ChecksumCRC64NVME: + binary.BigEndian.PutUint64(tmp[:], merged64) + return &Checksum{ + Type: c, + r: tmp[:8], + }, nil + default: + return nil, errors.New("unknown checksum type") + } +} + +func addAutoChecksumHeaders(opts *PutObjectOptions) { + if opts.UserMetadata == nil { + opts.UserMetadata = make(map[string]string, 1) + } + opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String() + if opts.AutoChecksum.FullObjectRequested() { + opts.UserMetadata["X-Amz-Checksum-Type"] = "FULL_OBJECT" + } +} + +func applyAutoChecksum(opts *PutObjectOptions, allParts []ObjectPart) { + if !opts.AutoChecksum.IsSet() { + return + } + if opts.AutoChecksum.CanComposite() && !opts.AutoChecksum.Is(ChecksumFullObject) { + // Add composite hash of hashes. 
+ crc, err := opts.AutoChecksum.CompositeChecksum(allParts) + if err == nil { + opts.UserMetadata = map[string]string{opts.AutoChecksum.Key(): crc.Encoded()} + } + } else if opts.AutoChecksum.CanMergeCRC() { + crc, err := opts.AutoChecksum.FullObjectChecksum(allParts) + if err == nil { + opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): crc.Encoded(), "X-Amz-Checksum-Type": "FULL_OBJECT"} + } + } +} diff --git a/vendor/github.com/minio/minio-go/v7/functional_tests.go b/vendor/github.com/minio/minio-go/v7/functional_tests.go index 43383d13486..33e87e6e162 100644 --- a/vendor/github.com/minio/minio-go/v7/functional_tests.go +++ b/vendor/github.com/minio/minio-go/v7/functional_tests.go @@ -2006,9 +2006,13 @@ func testPutObjectWithChecksums() { {cs: minio.ChecksumCRC32}, {cs: minio.ChecksumSHA1}, {cs: minio.ChecksumSHA256}, + {cs: minio.ChecksumCRC64NVME}, } for _, test := range tests { + if os.Getenv("MINT_NO_FULL_OBJECT") != "" && test.cs.FullObjectRequested() { + continue + } bufSize := dataFileMap["datafile-10-kB"] // Save the data @@ -2065,6 +2069,7 @@ func testPutObjectWithChecksums() { cmpChecksum(resp.ChecksumSHA1, meta["x-amz-checksum-sha1"]) cmpChecksum(resp.ChecksumCRC32, meta["x-amz-checksum-crc32"]) cmpChecksum(resp.ChecksumCRC32C, meta["x-amz-checksum-crc32c"]) + cmpChecksum(resp.ChecksumCRC64NVME, meta["x-amz-checksum-crc64nvme"]) // Read the data back gopts := minio.GetObjectOptions{Checksum: true} @@ -2084,6 +2089,7 @@ func testPutObjectWithChecksums() { cmpChecksum(st.ChecksumSHA1, meta["x-amz-checksum-sha1"]) cmpChecksum(st.ChecksumCRC32, meta["x-amz-checksum-crc32"]) cmpChecksum(st.ChecksumCRC32C, meta["x-amz-checksum-crc32c"]) + cmpChecksum(st.ChecksumCRC64NVME, meta["x-amz-checksum-crc64nvme"]) if st.Size != int64(bufSize) { logError(testName, function, args, startTime, "", "Number of bytes returned by PutObject does not match GetObject, expected "+string(bufSize)+" got "+string(st.Size), err) @@ -2127,12 +2133,12 @@ func testPutObjectWithChecksums() { cmpChecksum(st.ChecksumSHA1, "") cmpChecksum(st.ChecksumCRC32, "") cmpChecksum(st.ChecksumCRC32C, "") + cmpChecksum(st.ChecksumCRC64NVME, "") delete(args, "range") delete(args, "metadata") + logSuccess(testName, function, args, startTime) } - - logSuccess(testName, function, args, startTime) } // Test PutObject with custom checksums. 
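Editor's note on the refactor above: `applyAutoChecksum` centralizes what each multipart call site previously inlined. For composite types, `CompositeChecksum` sorts the parts by number, decodes each part's base64 digest via `ChecksumRaw`, concatenates the raw digests, and hashes the concatenation. Below is a minimal standalone sketch of that hash-of-hashes scheme using CRC32C directly; `compositeCRC32C` is an illustrative name for this sketch, not minio-go API:

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// compositeCRC32C computes an S3-style composite checksum: the CRC32C of the
// concatenated raw CRC32C digests of every part, reported as "<base64>-<parts>".
func compositeCRC32C(parts [][]byte) string {
	table := crc32.MakeTable(crc32.Castagnoli)
	digests := make([]byte, 0, 4*len(parts))
	for _, p := range parts {
		var raw [4]byte
		binary.BigEndian.PutUint32(raw[:], crc32.Checksum(p, table))
		digests = append(digests, raw[:]...) // concatenate per-part digests in part order
	}
	var final [4]byte
	binary.BigEndian.PutUint32(final[:], crc32.Checksum(digests, table))
	return fmt.Sprintf("%s-%d", base64.StdEncoding.EncodeToString(final[:]), len(parts))
}

func main() {
	parts := [][]byte{[]byte("first part"), []byte("second part")}
	fmt.Println(compositeCRC32C(parts)) // checksum of checksums, e.g. "xxxxxxxx-2"
}
```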
@@ -2173,13 +2179,16 @@ func testPutObjectWithTrailingChecksums() { tests := []struct { cs minio.ChecksumType }{ + {cs: minio.ChecksumCRC64NVME}, {cs: minio.ChecksumCRC32C}, {cs: minio.ChecksumCRC32}, {cs: minio.ChecksumSHA1}, {cs: minio.ChecksumSHA256}, } - for _, test := range tests { + if os.Getenv("MINT_NO_FULL_OBJECT") != "" && test.cs.FullObjectRequested() { + continue + } function := "PutObject(bucketName, objectName, reader,size, opts)" bufSize := dataFileMap["datafile-10-kB"] @@ -2227,6 +2236,7 @@ func testPutObjectWithTrailingChecksums() { cmpChecksum(resp.ChecksumSHA1, meta["x-amz-checksum-sha1"]) cmpChecksum(resp.ChecksumCRC32, meta["x-amz-checksum-crc32"]) cmpChecksum(resp.ChecksumCRC32C, meta["x-amz-checksum-crc32c"]) + cmpChecksum(resp.ChecksumCRC64NVME, meta["x-amz-checksum-crc64nvme"]) // Read the data back gopts := minio.GetObjectOptions{Checksum: true} @@ -2247,6 +2257,7 @@ func testPutObjectWithTrailingChecksums() { cmpChecksum(st.ChecksumSHA1, meta["x-amz-checksum-sha1"]) cmpChecksum(st.ChecksumCRC32, meta["x-amz-checksum-crc32"]) cmpChecksum(st.ChecksumCRC32C, meta["x-amz-checksum-crc32c"]) + cmpChecksum(st.ChecksumCRC64NVME, meta["x-amz-checksum-crc64nvme"]) if st.Size != int64(bufSize) { logError(testName, function, args, startTime, "", "Number of bytes returned by PutObject does not match GetObject, expected "+string(bufSize)+" got "+string(st.Size), err) @@ -2291,6 +2302,7 @@ func testPutObjectWithTrailingChecksums() { cmpChecksum(st.ChecksumSHA1, "") cmpChecksum(st.ChecksumCRC32, "") cmpChecksum(st.ChecksumCRC32C, "") + cmpChecksum(st.ChecksumCRC64NVME, "") function = "GetObjectAttributes(...)" s, err := c.GetObjectAttributes(context.Background(), bucketName, objectName, minio.ObjectAttributesOptions{}) @@ -2305,9 +2317,8 @@ func testPutObjectWithTrailingChecksums() { delete(args, "range") delete(args, "metadata") + logSuccess(testName, function, args, startTime) } - - logSuccess(testName, function, args, startTime) } // Test PutObject with custom checksums.
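Editor's note: `CanMergeCRC` restricts full-object merging to the CRC family because a CRC over concatenated data can be derived from the per-part CRCs and their lengths, which is what the `crc32Combine`/`crc64Combine` helpers further down in utils.go do with GF(2) zero-extension matrices. Cryptographic digests such as SHA-256 have no such property, so they fall back to the composite scheme. A small property check of that linearity, independent of the library (this uses `crc32.Update` to continue a running CRC, not the digest-combine routine itself):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	table := crc32.MakeTable(crc32.Castagnoli)
	partA := []byte("first part of the object")
	partB := []byte("second part of the object")

	// CRC32C of the whole object, computed in one pass.
	whole := crc32.Checksum(append(append([]byte{}, partA...), partB...), table)

	// The same value, obtained by continuing the CRC across the part boundary.
	merged := crc32.Update(crc32.Checksum(partA, table), table, partB)

	fmt.Println(whole == merged) // true: CRCs compose across parts; SHA digests do not
}
```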
@@ -2319,7 +2330,7 @@ func testPutMultipartObjectWithChecksums(trailing bool) { args := map[string]interface{}{ "bucketName": "", "objectName": "", - "opts": fmt.Sprintf("minio.PutObjectOptions{UserMetadata: metadata, Progress: progress Checksum: %v}", trailing), + "opts": fmt.Sprintf("minio.PutObjectOptions{UserMetadata: metadata, Trailing: %v}", trailing), } if !isFullMode() { @@ -2344,14 +2355,18 @@ func testPutMultipartObjectWithChecksums(trailing bool) { return } - hashMultiPart := func(b []byte, partSize int, hasher hash.Hash) string { + hashMultiPart := func(b []byte, partSize int, cs minio.ChecksumType) string { r := bytes.NewReader(b) + hasher := cs.Hasher() + if cs.FullObjectRequested() { + partSize = len(b) + } tmp := make([]byte, partSize) parts := 0 var all []byte for { n, err := io.ReadFull(r, tmp) - if err != nil && err != io.ErrUnexpectedEOF { + if err != nil && err != io.ErrUnexpectedEOF && err != io.EOF { logError(testName, function, args, startTime, "", "Calc crc failed", err) } if n == 0 { @@ -2365,6 +2380,9 @@ func testPutMultipartObjectWithChecksums(trailing bool) { break } } + if parts == 1 { + return base64.StdEncoding.EncodeToString(hasher.Sum(nil)) + } hasher.Reset() hasher.Write(all) return fmt.Sprintf("%s-%d", base64.StdEncoding.EncodeToString(hasher.Sum(nil)), parts) @@ -2373,6 +2391,9 @@ func testPutMultipartObjectWithChecksums(trailing bool) { tests := []struct { cs minio.ChecksumType }{ + {cs: minio.ChecksumFullObjectCRC32}, + {cs: minio.ChecksumFullObjectCRC32C}, + {cs: minio.ChecksumCRC64NVME}, {cs: minio.ChecksumCRC32C}, {cs: minio.ChecksumCRC32}, {cs: minio.ChecksumSHA1}, @@ -2380,8 +2401,12 @@ func testPutMultipartObjectWithChecksums(trailing bool) { } for _, test := range tests { - bufSize := dataFileMap["datafile-129-MB"] + if os.Getenv("MINT_NO_FULL_OBJECT") != "" && test.cs.FullObjectRequested() { + continue + } + args["section"] = "prep" + bufSize := dataFileMap["datafile-129-MB"] // Save the data objectName := randString(60, rand.NewSource(time.Now().UnixNano()), "") args["objectName"] = objectName @@ -2405,7 +2430,7 @@ func testPutMultipartObjectWithChecksums(trailing bool) { reader.Close() h := test.cs.Hasher() h.Reset() - want := hashMultiPart(b, partSize, test.cs.Hasher()) + want := hashMultiPart(b, partSize, test.cs) var cs minio.ChecksumType rd := io.Reader(io.NopCloser(bytes.NewReader(b))) @@ -2413,7 +2438,9 @@ func testPutMultipartObjectWithChecksums(trailing bool) { cs = test.cs rd = bytes.NewReader(b) } + // Set correct CRC. 
+ args["section"] = "PutObject" resp, err := c.PutObject(context.Background(), bucketName, objectName, rd, int64(bufSize), minio.PutObjectOptions{ DisableContentSha256: true, DisableMultipart: false, @@ -2427,7 +2454,7 @@ func testPutMultipartObjectWithChecksums(trailing bool) { return } - switch test.cs { + switch test.cs.Base() { case minio.ChecksumCRC32C: cmpChecksum(resp.ChecksumCRC32C, want) case minio.ChecksumCRC32: @@ -2436,15 +2463,41 @@ func testPutMultipartObjectWithChecksums(trailing bool) { cmpChecksum(resp.ChecksumSHA1, want) case minio.ChecksumSHA256: cmpChecksum(resp.ChecksumSHA256, want) + case minio.ChecksumCRC64NVME: + cmpChecksum(resp.ChecksumCRC64NVME, want) } + args["section"] = "HeadObject" + st, err := c.StatObject(context.Background(), bucketName, objectName, minio.StatObjectOptions{Checksum: true}) + if err != nil { + logError(testName, function, args, startTime, "", "StatObject failed", err) + return + } + switch test.cs.Base() { + case minio.ChecksumCRC32C: + cmpChecksum(st.ChecksumCRC32C, want) + case minio.ChecksumCRC32: + cmpChecksum(st.ChecksumCRC32, want) + case minio.ChecksumSHA1: + cmpChecksum(st.ChecksumSHA1, want) + case minio.ChecksumSHA256: + cmpChecksum(st.ChecksumSHA256, want) + case minio.ChecksumCRC64NVME: + cmpChecksum(st.ChecksumCRC64NVME, want) + } + + args["section"] = "GetObjectAttributes" s, err := c.GetObjectAttributes(context.Background(), bucketName, objectName, minio.ObjectAttributesOptions{}) if err != nil { logError(testName, function, args, startTime, "", "GetObjectAttributes failed", err) return } - want = want[:strings.IndexByte(want, '-')] + + if strings.ContainsRune(want, '-') { + want = want[:strings.IndexByte(want, '-')] + } switch test.cs { + // Full Object CRC does not return anything with GetObjectAttributes case minio.ChecksumCRC32C: cmpChecksum(s.Checksum.ChecksumCRC32C, want) case minio.ChecksumCRC32: @@ -2460,13 +2513,14 @@ func testPutMultipartObjectWithChecksums(trailing bool) { gopts.PartNumber = 2 // We cannot use StatObject, since it ignores partnumber. + args["section"] = "GetObject-Part" r, err := c.GetObject(context.Background(), bucketName, objectName, gopts) if err != nil { logError(testName, function, args, startTime, "", "GetObject failed", err) return } io.Copy(io.Discard, r) - st, err := r.Stat() + st, err = r.Stat() if err != nil { logError(testName, function, args, startTime, "", "Stat failed", err) return @@ -2478,6 +2532,7 @@ func testPutMultipartObjectWithChecksums(trailing bool) { want = base64.StdEncoding.EncodeToString(h.Sum(nil)) switch test.cs { + // Full Object CRC does not return any part CRC for whatever reason. case minio.ChecksumCRC32C: cmpChecksum(st.ChecksumCRC32C, want) case minio.ChecksumCRC32: @@ -2486,12 +2541,17 @@ func testPutMultipartObjectWithChecksums(trailing bool) { cmpChecksum(st.ChecksumSHA1, want) case minio.ChecksumSHA256: cmpChecksum(st.ChecksumSHA256, want) + case minio.ChecksumCRC64NVME: + // AWS doesn't return part checksum, but may in the future. + if st.ChecksumCRC64NVME != "" { + cmpChecksum(st.ChecksumCRC64NVME, want) + } } delete(args, "metadata") + delete(args, "section") + logSuccess(testName, function, args, startTime) } - - logSuccess(testName, function, args, startTime) } // Test PutObject with trailing checksums. @@ -2688,9 +2748,8 @@ func testTrailingChecksums() { } delete(args, "metadata") + logSuccess(testName, function, args, startTime) } - - logSuccess(testName, function, args, startTime) } // Test PutObject with custom checksums. 
@@ -5324,21 +5383,11 @@ func testPresignedPostPolicyWrongFile() { defer cleanupBucket(bucketName, c) - // Generate 33K of data. - reader := getDataReader("datafile-33-kB") - defer reader.Close() - objectName := randString(60, rand.NewSource(time.Now().UnixNano()), "") // Azure requires the key to not start with a number metadataKey := randString(60, rand.NewSource(time.Now().UnixNano()), "user") metadataValue := randString(60, rand.NewSource(time.Now().UnixNano()), "") - buf, err := io.ReadAll(reader) - if err != nil { - logError(testName, function, args, startTime, "", "ReadAll failed", err) - return - } - policy := minio.NewPostPolicy() policy.SetBucket(bucketName) policy.SetKey(objectName) @@ -5347,8 +5396,8 @@ func testPresignedPostPolicyWrongFile() { policy.SetContentLengthRange(10, 1024*1024) policy.SetUserMetadata(metadataKey, metadataValue) - // Add CRC32C of the 33kB file that the policy will explicitly allow. - checksum := minio.ChecksumCRC32C.ChecksumBytes(buf) + // Add CRC32C of some data that the policy will explicitly allow. + checksum := minio.ChecksumCRC32C.ChecksumBytes([]byte{0x01, 0x02, 0x03}) err = policy.SetChecksum(checksum) if err != nil { logError(testName, function, args, startTime, "", "SetChecksum failed", err) @@ -5363,7 +5412,7 @@ func testPresignedPostPolicyWrongFile() { return } - // At this stage, we have a policy that allows us to upload datafile-33-kB. + // At this stage, we have a policy that allows us to upload for a specific checksum. // Test that uploading datafile-10-kB, with a different checksum, fails as expected filePath := getMintDataDirFilePath("datafile-10-kB") if filePath == "" { @@ -5456,7 +5505,7 @@ func testPresignedPostPolicyWrongFile() { // Normalize the response body, because S3 uses quotes around the policy condition components // in the error message, MinIO does not. resBodyStr := strings.ReplaceAll(string(resBody), `"`, "") - if !strings.Contains(resBodyStr, "Policy Condition failed: [eq, $x-amz-checksum-crc32c, aHnJMw==]") { + if !strings.Contains(resBodyStr, "Policy Condition failed: [eq, $x-amz-checksum-crc32c, 8TDyHg=") { logError(testName, function, args, startTime, "", "Unexpected response body", errors.New(resBodyStr)) return } diff --git a/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go b/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go index 787f0a38d69..8c06bac60db 100644 --- a/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go +++ b/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go @@ -162,6 +162,10 @@ func getWebIdentityCredentials(clnt *http.Client, endpoint, roleARN, roleSession // Usually set when server is using extended userInfo endpoint. v.Set("WebIdentityAccessToken", idToken.AccessToken) } + if idToken.RefreshToken != "" { + // Usually set when server is using extended userInfo endpoint. 
+ v.Set("WebIdentityRefreshToken", idToken.RefreshToken) + } if idToken.Expiry > 0 { v.Set("DurationSeconds", fmt.Sprintf("%d", idToken.Expiry)) } diff --git a/vendor/github.com/minio/minio-go/v7/utils.go b/vendor/github.com/minio/minio-go/v7/utils.go index a5beb371f2c..cd7d2c27e62 100644 --- a/vendor/github.com/minio/minio-go/v7/utils.go +++ b/vendor/github.com/minio/minio-go/v7/utils.go @@ -378,10 +378,11 @@ func ToObjectInfo(bucketName, objectName string, h http.Header) (ObjectInfo, err Restore: restore, // Checksum values - ChecksumCRC32: h.Get("x-amz-checksum-crc32"), - ChecksumCRC32C: h.Get("x-amz-checksum-crc32c"), - ChecksumSHA1: h.Get("x-amz-checksum-sha1"), - ChecksumSHA256: h.Get("x-amz-checksum-sha256"), + ChecksumCRC32: h.Get(ChecksumCRC32.Key()), + ChecksumCRC32C: h.Get(ChecksumCRC32C.Key()), + ChecksumSHA1: h.Get(ChecksumSHA1.Key()), + ChecksumSHA256: h.Get(ChecksumSHA256.Key()), + ChecksumCRC64NVME: h.Get(ChecksumCRC64NVME.Key()), }, nil } @@ -698,3 +699,146 @@ func (h *hashReaderWrapper) Read(p []byte) (n int, err error) { } return n, err } + +// Following is ported from C to Go in 2016 by Justin Ruggles, with minimal alteration. +// Used uint for unsigned long. Used uint32 for input arguments in order to match +// the Go hash/crc32 package. zlib CRC32 combine (https://github.com/madler/zlib) +// Modified for hash/crc64 by Klaus Post, 2024. +func gf2MatrixTimes(mat []uint64, vec uint64) uint64 { + var sum uint64 + + for vec != 0 { + if vec&1 != 0 { + sum ^= mat[0] + } + vec >>= 1 + mat = mat[1:] + } + return sum +} + +func gf2MatrixSquare(square, mat []uint64) { + if len(square) != len(mat) { + panic("square matrix size mismatch") + } + for n := range mat { + square[n] = gf2MatrixTimes(mat, mat[n]) + } +} + +// crc32Combine returns the combined CRC-32 hash value of the two passed CRC-32 +// hash values crc1 and crc2. poly represents the generator polynomial +// and len2 specifies the byte length that the crc2 hash covers. 
+func crc32Combine(poly uint32, crc1, crc2 uint32, len2 int64) uint32 { + // degenerate case (also disallow negative lengths) + if len2 <= 0 { + return crc1 + } + + even := make([]uint64, 32) // even-power-of-two zeros operator + odd := make([]uint64, 32) // odd-power-of-two zeros operator + + // put operator for one zero bit in odd + odd[0] = uint64(poly) // CRC-32 polynomial + row := uint64(1) + for n := 1; n < 32; n++ { + odd[n] = row + row <<= 1 + } + + // put operator for two zero bits in even + gf2MatrixSquare(even, odd) + + // put operator for four zero bits in odd + gf2MatrixSquare(odd, even) + + // apply len2 zeros to crc1 (first square will put the operator for one + // zero byte, eight zero bits, in even) + crc1n := uint64(crc1) + for { + // apply zeros operator for this bit of len2 + gf2MatrixSquare(even, odd) + if len2&1 != 0 { + crc1n = gf2MatrixTimes(even, crc1n) + } + len2 >>= 1 + + // if no more bits set, then done + if len2 == 0 { + break + } + + // another iteration of the loop with odd and even swapped + gf2MatrixSquare(odd, even) + if len2&1 != 0 { + crc1n = gf2MatrixTimes(odd, crc1n) + } + len2 >>= 1 + + // if no more bits set, then done + if len2 == 0 { + break + } + } + + // return combined crc + crc1n ^= uint64(crc2) + return uint32(crc1n) +} + +func crc64Combine(poly uint64, crc1, crc2 uint64, len2 int64) uint64 { + // degenerate case (also disallow negative lengths) + if len2 <= 0 { + return crc1 + } + + even := make([]uint64, 64) // even-power-of-two zeros operator + odd := make([]uint64, 64) // odd-power-of-two zeros operator + + // put operator for one zero bit in odd + odd[0] = poly // CRC-64 polynomial + row := uint64(1) + for n := 1; n < 64; n++ { + odd[n] = row + row <<= 1 + } + + // put operator for two zero bits in even + gf2MatrixSquare(even, odd) + + // put operator for four zero bits in odd + gf2MatrixSquare(odd, even) + + // apply len2 zeros to crc1 (first square will put the operator for one + // zero byte, eight zero bits, in even) + crc1n := crc1 + for { + // apply zeros operator for this bit of len2 + gf2MatrixSquare(even, odd) + if len2&1 != 0 { + crc1n = gf2MatrixTimes(even, crc1n) + } + len2 >>= 1 + + // if no more bits set, then done + if len2 == 0 { + break + } + + // another iteration of the loop with odd and even swapped + gf2MatrixSquare(odd, even) + if len2&1 != 0 { + crc1n = gf2MatrixTimes(odd, crc1n) + } + len2 >>= 1 + + // if no more bits set, then done + if len2 == 0 { + break + } + } + + // return combined crc + crc1n ^= crc2 + return crc1n +} diff --git a/vendor/github.com/prometheus/prometheus/config/config.go b/vendor/github.com/prometheus/prometheus/config/config.go index ef3ea5e67ed..86d8563536a 100644 --- a/vendor/github.com/prometheus/prometheus/config/config.go +++ b/vendor/github.com/prometheus/prometheus/config/config.go @@ -30,7 +30,7 @@ import ( "github.com/grafana/regexp" "github.com/prometheus/common/config" "github.com/prometheus/common/model" - "github.com/prometheus/common/sigv4" + "github.com/prometheus/sigv4" "gopkg.in/yaml.v2" "github.com/prometheus/prometheus/discovery" @@ -1195,6 +1195,7 @@ type RemoteWriteConfig struct { Name string `yaml:"name,omitempty"` SendExemplars bool `yaml:"send_exemplars,omitempty"` SendNativeHistograms bool `yaml:"send_native_histograms,omitempty"` + RoundRobinDNS bool `yaml:"round_robin_dns,omitempty"` // ProtobufMessage specifies the protobuf message to use against the remote // receiver as specified in https://prometheus.io/docs/specs/remote_write_spec_2_0/ ProtobufMessage 
RemoteWriteProtoMsg `yaml:"protobuf_message,omitempty"` @@ -1428,8 +1429,9 @@ var ( // OTLPConfig is the configuration for writing to the OTLP endpoint. type OTLPConfig struct { - PromoteResourceAttributes []string `yaml:"promote_resource_attributes,omitempty"` - TranslationStrategy translationStrategyOption `yaml:"translation_strategy,omitempty"` + PromoteResourceAttributes []string `yaml:"promote_resource_attributes,omitempty"` + TranslationStrategy translationStrategyOption `yaml:"translation_strategy,omitempty"` + KeepIdentifyingResourceAttributes bool `yaml:"keep_identifying_resource_attributes,omitempty"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. diff --git a/vendor/github.com/prometheus/prometheus/model/labels/regexp.go b/vendor/github.com/prometheus/prometheus/model/labels/regexp.go index 0847c819c59..edad614a011 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/regexp.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/regexp.go @@ -848,7 +848,7 @@ type equalMultiStringMapMatcher struct { func (m *equalMultiStringMapMatcher) add(s string) { if !m.caseSensitive { - s = toNormalisedLower(s) + s = toNormalisedLower(s, nil) // Don't pass a stack buffer here - it will always escape to heap. } m.values[s] = struct{}{} @@ -886,15 +886,24 @@ func (m *equalMultiStringMapMatcher) setMatches() []string { } func (m *equalMultiStringMapMatcher) Matches(s string) bool { - if !m.caseSensitive { - s = toNormalisedLower(s) + if len(m.values) > 0 { + sNorm := s + var a [32]byte + if !m.caseSensitive { + sNorm = toNormalisedLower(s, a[:]) + } + if _, ok := m.values[sNorm]; ok { + return true + } } - if _, ok := m.values[s]; ok { - return true - } if m.minPrefixLen > 0 && len(s) >= m.minPrefixLen { - for _, matcher := range m.prefixes[s[:m.minPrefixLen]] { + prefix := s[:m.minPrefixLen] + var a [32]byte + if !m.caseSensitive { + prefix = toNormalisedLower(s[:m.minPrefixLen], a[:]) + } + for _, matcher := range m.prefixes[prefix] { if matcher.Matches(s) { return true } @@ -905,22 +914,37 @@ func (m *equalMultiStringMapMatcher) Matches(s string) bool { // toNormalisedLower normalise the input string using "Unicode Normalization Form D" and then convert // it to lower case. -func toNormalisedLower(s string) string { - var buf []byte +func toNormalisedLower(s string, a []byte) string { for i := 0; i < len(s); i++ { c := s[i] if c >= utf8.RuneSelf { return strings.Map(unicode.ToLower, norm.NFKD.String(s)) } if 'A' <= c && c <= 'Z' { - if buf == nil { - buf = []byte(s) - } - buf[i] = c + 'a' - 'A' + return toNormalisedLowerSlow(s, i, a) } } - if buf == nil { - return s + return s +} + +// toNormalisedLowerSlow is split from toNormalisedLower because having a call +// to `copy` slows it down even when it is not called. 
+func toNormalisedLowerSlow(s string, i int, a []byte) string { + var buf []byte + if cap(a) > len(s) { + buf = a[:len(s)] + copy(buf, s) + } else { + buf = []byte(s) + } + for ; i < len(s); i++ { + c := s[i] + if c >= utf8.RuneSelf { + return strings.Map(unicode.ToLower, norm.NFKD.String(s)) + } + if 'A' <= c && c <= 'Z' { + buf[i] = c + 'a' - 'A' + } } return yoloString(buf) } diff --git a/vendor/github.com/prometheus/prometheus/model/relabel/relabel.go b/vendor/github.com/prometheus/prometheus/model/relabel/relabel.go index 93331cf99f2..2bec6cfabb9 100644 --- a/vendor/github.com/prometheus/prometheus/model/relabel/relabel.go +++ b/vendor/github.com/prometheus/prometheus/model/relabel/relabel.go @@ -16,6 +16,7 @@ package relabel import ( "crypto/md5" "encoding/binary" + "encoding/json" "errors" "fmt" "strconv" @@ -84,20 +85,20 @@ func (a *Action) UnmarshalYAML(unmarshal func(interface{}) error) error { type Config struct { // A list of labels from which values are taken and concatenated // with the configured separator in order. - SourceLabels model.LabelNames `yaml:"source_labels,flow,omitempty"` + SourceLabels model.LabelNames `yaml:"source_labels,flow,omitempty" json:"sourceLabels,omitempty"` // Separator is the string between concatenated values from the source labels. - Separator string `yaml:"separator,omitempty"` + Separator string `yaml:"separator,omitempty" json:"separator,omitempty"` // Regex against which the concatenation is matched. - Regex Regexp `yaml:"regex,omitempty"` + Regex Regexp `yaml:"regex,omitempty" json:"regex,omitempty"` // Modulus to take of the hash of concatenated values from the source labels. - Modulus uint64 `yaml:"modulus,omitempty"` + Modulus uint64 `yaml:"modulus,omitempty" json:"modulus,omitempty"` // TargetLabel is the label to which the resulting string is written in a replacement. // Regexp interpolation is allowed for the replace action. - TargetLabel string `yaml:"target_label,omitempty"` + TargetLabel string `yaml:"target_label,omitempty" json:"targetLabel,omitempty"` // Replacement is the regex replacement pattern to be used. - Replacement string `yaml:"replacement,omitempty"` + Replacement string `yaml:"replacement,omitempty" json:"replacement,omitempty"` // Action is the action to be performed for the relabeling. - Action Action `yaml:"action,omitempty"` + Action Action `yaml:"action,omitempty" json:"action,omitempty"` } // UnmarshalYAML implements the yaml.Unmarshaler interface. @@ -207,6 +208,25 @@ func (re Regexp) MarshalYAML() (interface{}, error) { return nil, nil } +// UnmarshalJSON implements the json.Unmarshaler interface. +func (re *Regexp) UnmarshalJSON(b []byte) error { + var s string + if err := json.Unmarshal(b, &s); err != nil { + return err + } + r, err := NewRegexp(s) + if err != nil { + return err + } + *re = r + return nil +} + +// MarshalJSON implements the json.Marshaler interface. +func (re Regexp) MarshalJSON() ([]byte, error) { + return json.Marshal(re.String()) +} + // IsZero implements the yaml.IsZeroer interface. 
func (re Regexp) IsZero() bool { return re.Regexp == DefaultRelabelConfig.Regex.Regexp diff --git a/vendor/github.com/prometheus/prometheus/model/textparse/nhcbparse.go b/vendor/github.com/prometheus/prometheus/model/textparse/nhcbparse.go index 6fe2e8e54eb..ff756965f49 100644 --- a/vendor/github.com/prometheus/prometheus/model/textparse/nhcbparse.go +++ b/vendor/github.com/prometheus/prometheus/model/textparse/nhcbparse.go @@ -243,7 +243,8 @@ func (p *NHCBParser) compareLabels() bool { // Different metric type. return true } - if p.lastHistogramName != convertnhcb.GetHistogramMetricBaseName(p.lset.Get(labels.MetricName)) { + _, name := convertnhcb.GetHistogramMetricBaseName(p.lset.Get(labels.MetricName)) + if p.lastHistogramName != name { // Different metric name. return true } @@ -253,8 +254,8 @@ func (p *NHCBParser) compareLabels() bool { } // Save the label set of the classic histogram without suffix and bucket `le` label. -func (p *NHCBParser) storeClassicLabels() { - p.lastHistogramName = convertnhcb.GetHistogramMetricBaseName(p.lset.Get(labels.MetricName)) +func (p *NHCBParser) storeClassicLabels(name string) { + p.lastHistogramName = name p.lastHistogramLabelsHash, _ = p.lset.HashWithoutLabels(p.hBuffer, labels.BucketLabel) p.lastHistogramExponential = false } @@ -275,25 +276,30 @@ func (p *NHCBParser) handleClassicHistogramSeries(lset labels.Labels) bool { } mName := lset.Get(labels.MetricName) // Sanity check to ensure that the TYPE metadata entry name is the same as the base name. - if convertnhcb.GetHistogramMetricBaseName(mName) != string(p.bName) { + suffixType, name := convertnhcb.GetHistogramMetricBaseName(mName) + if name != string(p.bName) { return false } - switch { - case strings.HasSuffix(mName, "_bucket") && lset.Has(labels.BucketLabel): + switch suffixType { + case convertnhcb.SuffixBucket: + if !lset.Has(labels.BucketLabel) { + // This should not really happen. 
+ return false + } le, err := strconv.ParseFloat(lset.Get(labels.BucketLabel), 64) if err == nil && !math.IsNaN(le) { - p.processClassicHistogramSeries(lset, "_bucket", func(hist *convertnhcb.TempHistogram) { + p.processClassicHistogramSeries(lset, name, func(hist *convertnhcb.TempHistogram) { _ = hist.SetBucketCount(le, p.value) }) return true } - case strings.HasSuffix(mName, "_count"): - p.processClassicHistogramSeries(lset, "_count", func(hist *convertnhcb.TempHistogram) { + case convertnhcb.SuffixCount: + p.processClassicHistogramSeries(lset, name, func(hist *convertnhcb.TempHistogram) { _ = hist.SetCount(p.value) }) return true - case strings.HasSuffix(mName, "_sum"): - p.processClassicHistogramSeries(lset, "_sum", func(hist *convertnhcb.TempHistogram) { + case convertnhcb.SuffixSum: + p.processClassicHistogramSeries(lset, name, func(hist *convertnhcb.TempHistogram) { _ = hist.SetSum(p.value) }) return true @@ -301,12 +307,12 @@ func (p *NHCBParser) handleClassicHistogramSeries(lset labels.Labels) bool { return false } -func (p *NHCBParser) processClassicHistogramSeries(lset labels.Labels, suffix string, updateHist func(*convertnhcb.TempHistogram)) { +func (p *NHCBParser) processClassicHistogramSeries(lset labels.Labels, name string, updateHist func(*convertnhcb.TempHistogram)) { if p.state != stateCollecting { - p.storeClassicLabels() + p.storeClassicLabels(name) p.tempCT = p.parser.CreatedTimestamp() p.state = stateCollecting - p.tempLsetNHCB = convertnhcb.GetHistogramMetricBase(lset, suffix) + p.tempLsetNHCB = convertnhcb.GetHistogramMetricBase(lset, name) } p.storeExemplars() updateHist(&p.tempNHCB) diff --git a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l index 9afbbbd8bd5..09106c52ced 100644 --- a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l +++ b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l @@ -69,6 +69,7 @@ S [ ] {S}#{S}\{ l.state = sExemplar; return tComment {L}({L}|{D})* return tLName +\"(\\.|[^\\"\n])*\" l.state = sExemplar; return tQString \} l.state = sEValue; return tBraceClose = l.state = sEValue; return tEqual \"(\\.|[^\\"\n])*\" l.state = sExemplar; return tLValue diff --git a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l.go b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l.go index c8789ef60d4..c0b2fcdb4d8 100644 --- a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l.go +++ b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricslex.l.go @@ -53,9 +53,9 @@ yystate0: case 8: // start condition: sExemplar goto yystart57 case 9: // start condition: sEValue - goto yystart62 + goto yystart65 case 10: // start condition: sETimestamp - goto yystart68 + goto yystart71 } yystate1: @@ -538,125 +538,153 @@ yystart57: switch { default: goto yyabort - case c == ',': + case c == '"': goto yystate58 + case c == ',': + goto yystate61 case c == '=': - goto yystate59 + goto yystate62 case c == '}': - goto yystate61 + goto yystate64 case c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z': - goto yystate60 + goto yystate63 } yystate58: c = l.next() - goto yyrule26 + switch { + default: + goto yyabort + case c == '"': + goto yystate59 + case c == '\\': + goto yystate60 + case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '!' 
|| c >= '#' && c <= '[' || c >= ']' && c <= 'ÿ': + goto yystate58 + } yystate59: c = l.next() - goto yyrule24 + goto yyrule23 yystate60: c = l.next() switch { default: - goto yyrule22 - case c >= '0' && c <= '9' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z': - goto yystate60 + goto yyabort + case c >= '\x01' && c <= '\t' || c >= '\v' && c <= 'ÿ': + goto yystate58 } yystate61: c = l.next() - goto yyrule23 + goto yyrule27 yystate62: c = l.next() -yystart62: + goto yyrule25 + +yystate63: + c = l.next() + switch { + default: + goto yyrule22 + case c >= '0' && c <= '9' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z': + goto yystate63 + } + +yystate64: + c = l.next() + goto yyrule24 + +yystate65: + c = l.next() +yystart65: switch { default: goto yyabort case c == ' ': - goto yystate63 + goto yystate66 case c == '"': - goto yystate65 + goto yystate68 } -yystate63: +yystate66: c = l.next() switch { default: goto yyabort case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ': - goto yystate64 + goto yystate67 } -yystate64: +yystate67: c = l.next() switch { default: - goto yyrule27 + goto yyrule28 case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ': - goto yystate64 + goto yystate67 } -yystate65: +yystate68: c = l.next() switch { default: goto yyabort case c == '"': - goto yystate66 + goto yystate69 case c == '\\': - goto yystate67 + goto yystate70 case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '!' || c >= '#' && c <= '[' || c >= ']' && c <= 'ÿ': - goto yystate65 + goto yystate68 } -yystate66: +yystate69: c = l.next() - goto yyrule25 + goto yyrule26 -yystate67: +yystate70: c = l.next() switch { default: goto yyabort case c >= '\x01' && c <= '\t' || c >= '\v' && c <= 'ÿ': - goto yystate65 + goto yystate68 } -yystate68: +yystate71: c = l.next() -yystart68: +yystart71: switch { default: goto yyabort case c == ' ': - goto yystate70 + goto yystate73 case c == '\n': - goto yystate69 + goto yystate72 } -yystate69: +yystate72: c = l.next() - goto yyrule29 + goto yyrule30 -yystate70: +yystate73: c = l.next() switch { default: goto yyabort case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ': - goto yystate71 + goto yystate74 } -yystate71: +yystate74: c = l.next() switch { default: - goto yyrule28 + goto yyrule29 case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' 
&& c <= 'ÿ': - goto yystate71 + goto yystate74 } yyrule1: // #{S} @@ -782,39 +810,45 @@ yyrule22: // {L}({L}|{D})* { return tLName } -yyrule23: // \} +yyrule23: // \"(\\.|[^\\"\n])*\" + { + l.state = sExemplar + return tQString + goto yystate0 + } +yyrule24: // \} { l.state = sEValue return tBraceClose goto yystate0 } -yyrule24: // = +yyrule25: // = { l.state = sEValue return tEqual goto yystate0 } -yyrule25: // \"(\\.|[^\\"\n])*\" +yyrule26: // \"(\\.|[^\\"\n])*\" { l.state = sExemplar return tLValue goto yystate0 } -yyrule26: // , +yyrule27: // , { return tComma } -yyrule27: // {S}[^ \n]+ +yyrule28: // {S}[^ \n]+ { l.state = sETimestamp return tValue goto yystate0 } -yyrule28: // {S}[^ \n]+ +yyrule29: // {S}[^ \n]+ { return tTimestamp } -yyrule29: // \n +yyrule30: // \n if true { // avoid go vet determining the below panic will not be reached l.state = sInit return tLinebreak @@ -859,10 +893,10 @@ yyabort: // no lexem recognized goto yystate57 } if false { - goto yystate62 + goto yystate65 } if false { - goto yystate68 + goto yystate71 } } diff --git a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricsparse.go b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricsparse.go index 3ae9c7ddfc3..16e805f3a93 100644 --- a/vendor/github.com/prometheus/prometheus/model/textparse/openmetricsparse.go +++ b/vendor/github.com/prometheus/prometheus/model/textparse/openmetricsparse.go @@ -388,7 +388,7 @@ func (p *OpenMetricsParser) setCTParseValues(ct int64, ctHashSet uint64, mfName p.skipCTSeries = skipCTSeries // Do we need to set it? } -// resetCtParseValues resets the parser to the state before CreatedTimestamp method was called. +// resetCTParseValues resets the parser to the state before CreatedTimestamp method was called. func (p *OpenMetricsParser) resetCTParseValues() { p.ctHashSet = 0 p.skipCTSeries = true diff --git a/vendor/github.com/prometheus/prometheus/model/value/value.go b/vendor/github.com/prometheus/prometheus/model/value/value.go index d3dd9b996fe..655ce852d51 100644 --- a/vendor/github.com/prometheus/prometheus/model/value/value.go +++ b/vendor/github.com/prometheus/prometheus/model/value/value.go @@ -26,9 +26,6 @@ const ( // complicated values in the future. It is 2 rather than 1 to make // it easier to distinguish from the NormalNaN by a human when debugging. StaleNaN uint64 = 0x7ff0000000000002 - - // QuietZeroNaN signals TSDB to add a zero, but do nothing if there is already a value at that timestamp. - QuietZeroNaN uint64 = 0x7ff0000000000003 ) // IsStaleNaN returns true when the provided NaN value is a stale marker. 
diff --git a/vendor/github.com/prometheus/prometheus/notifier/notifier.go b/vendor/github.com/prometheus/prometheus/notifier/notifier.go index e970b67e6d2..956fd4652ac 100644 --- a/vendor/github.com/prometheus/prometheus/notifier/notifier.go +++ b/vendor/github.com/prometheus/prometheus/notifier/notifier.go @@ -34,8 +34,8 @@ import ( config_util "github.com/prometheus/common/config" "github.com/prometheus/common/model" "github.com/prometheus/common/promslog" - "github.com/prometheus/common/sigv4" "github.com/prometheus/common/version" + "github.com/prometheus/sigv4" "go.uber.org/atomic" "gopkg.in/yaml.v2" @@ -160,7 +160,7 @@ func newAlertMetrics(r prometheus.Registerer, queueCap int, queueLen, alertmanag Namespace: namespace, Subsystem: subsystem, Name: "errors_total", - Help: "Total number of errors sending alert notifications.", + Help: "Total number of sent alerts affected by errors.", }, []string{alertmanagerLabel}, ), @@ -619,13 +619,13 @@ func (n *Manager) sendAll(alerts ...*Alert) bool { go func(ctx context.Context, client *http.Client, url string, payload []byte, count int) { if err := n.sendOne(ctx, client, url, payload); err != nil { - n.logger.Error("Error sending alert", "alertmanager", url, "count", count, "err", err) - n.metrics.errors.WithLabelValues(url).Inc() + n.logger.Error("Error sending alerts", "alertmanager", url, "count", count, "err", err) + n.metrics.errors.WithLabelValues(url).Add(float64(count)) } else { numSuccess.Inc() } n.metrics.latency.WithLabelValues(url).Observe(time.Since(begin).Seconds()) - n.metrics.sent.WithLabelValues(url).Add(float64(len(amAlerts))) + n.metrics.sent.WithLabelValues(url).Add(float64(count)) wg.Done() }(ctx, ams.client, am.url().String(), payload, len(amAlerts)) diff --git a/vendor/github.com/prometheus/prometheus/promql/engine.go b/vendor/github.com/prometheus/prometheus/promql/engine.go index b2bba56a2f1..3f336188e9b 100644 --- a/vendor/github.com/prometheus/prometheus/promql/engine.go +++ b/vendor/github.com/prometheus/prometheus/promql/engine.go @@ -126,10 +126,7 @@ type QueryEngine interface { // QueryLogger is an interface that can be used to log all the queries logged // by the engine. type QueryLogger interface { - Error(msg string, args ...any) - Info(msg string, args ...any) - Debug(msg string, args ...any) - Warn(msg string, args ...any) + Log(context.Context, slog.Level, string, ...any) With(args ...any) Close() error } @@ -638,20 +635,20 @@ func (ng *Engine) exec(ctx context.Context, q *query) (v parser.Value, ws annota // The step provided by the user is in seconds. params["step"] = int64(eq.Interval / (time.Second / time.Nanosecond)) } - l.With("params", params) + f := []interface{}{"params", params} if err != nil { - l.With("error", err) + f = append(f, "error", err) } - l.With("stats", stats.NewQueryStats(q.Stats())) + f = append(f, "stats", stats.NewQueryStats(q.Stats())) if span := trace.SpanFromContext(ctx); span != nil { - l.With("spanID", span.SpanContext().SpanID()) + f = append(f, "spanID", span.SpanContext().SpanID()) } if origin := ctx.Value(QueryOrigin{}); origin != nil { for k, v := range origin.(map[string]interface{}) { - l.With(k, v) + f = append(f, k, v) } } - l.Info("promql query logged") + l.Log(context.Background(), slog.LevelInfo, "promql query logged", f...) // TODO: @tjhop -- do we still need this metric/error log if logger doesn't return errors? 
// ng.metrics.queryLogFailures.Inc() // ng.logger.Error("can't log query", "err", err) @@ -3188,18 +3185,19 @@ func (ev *evaluator) aggregationK(e *parser.AggregateExpr, k int, r float64, inp seriesLoop: for si := range inputMatrix { - f, _, ok := ev.nextValues(enh.Ts, &inputMatrix[si]) + f, h, ok := ev.nextValues(enh.Ts, &inputMatrix[si]) if !ok { continue } - s = Sample{Metric: inputMatrix[si].Metric, F: f, DropName: inputMatrix[si].DropName} + s = Sample{Metric: inputMatrix[si].Metric, F: f, H: h, DropName: inputMatrix[si].DropName} group := &groups[seriesToResult[si]] // Initialize this group if it's the first time we've seen it. if !group.seen { // LIMIT_RATIO is a special case, as we may not add this very sample to the heap, // while we also don't know the final size of it. - if op == parser.LIMIT_RATIO { + switch op { + case parser.LIMIT_RATIO: *group = groupedAggregation{ seen: true, heap: make(vectorByValueHeap, 0), @@ -3207,12 +3205,34 @@ seriesLoop: if ratiosampler.AddRatioSample(r, &s) { heap.Push(&group.heap, &s) } - } else { + case parser.LIMITK: *group = groupedAggregation{ seen: true, heap: make(vectorByValueHeap, 1, k), } group.heap[0] = s + case parser.TOPK: + *group = groupedAggregation{ + seen: true, + heap: make(vectorByValueHeap, 0, k), + } + if s.H != nil { + group.seen = false + annos.Add(annotations.NewHistogramIgnoredInAggregationInfo("topk", e.PosRange)) + } else { + heap.Push(&group.heap, &s) + } + case parser.BOTTOMK: + *group = groupedAggregation{ + seen: true, + heap: make(vectorByValueHeap, 0, k), + } + if s.H != nil { + group.seen = false + annos.Add(annotations.NewHistogramIgnoredInAggregationInfo("bottomk", e.PosRange)) + } else { + heap.Push(&group.heap, &s) + } } continue } @@ -3221,6 +3241,9 @@ seriesLoop: case parser.TOPK: // We build a heap of up to k elements, with the smallest element at heap[0]. switch { + case s.H != nil: + // Ignore histogram sample and add info annotation. + annos.Add(annotations.NewHistogramIgnoredInAggregationInfo("topk", e.PosRange)) case len(group.heap) < k: heap.Push(&group.heap, &s) case group.heap[0].F < s.F || (math.IsNaN(group.heap[0].F) && !math.IsNaN(s.F)): @@ -3234,6 +3257,9 @@ seriesLoop: case parser.BOTTOMK: // We build a heap of up to k elements, with the biggest element at heap[0]. switch { + case s.H != nil: + // Ignore histogram sample and add info annotation. + annos.Add(annotations.NewHistogramIgnoredInAggregationInfo("bottomk", e.PosRange)) case len(group.heap) < k: heap.Push((*vectorByReverseValueHeap)(&group.heap), &s) case group.heap[0].F > s.F || (math.IsNaN(group.heap[0].F) && !math.IsNaN(s.F)): @@ -3276,10 +3302,14 @@ seriesLoop: mat = make(Matrix, 0, len(groups)) } - add := func(lbls labels.Labels, f float64, dropName bool) { + add := func(lbls labels.Labels, f float64, h *histogram.FloatHistogram, dropName bool) { // If this could be an instant query, add directly to the matrix so the result is in consistent order. if ev.endTimestamp == ev.startTimestamp { - mat = append(mat, Series{Metric: lbls, Floats: []FPoint{{T: enh.Ts, F: f}}, DropName: dropName}) + if h != nil { + mat = append(mat, Series{Metric: lbls, Histograms: []HPoint{{T: enh.Ts, H: h}}, DropName: dropName}) + } else { + mat = append(mat, Series{Metric: lbls, Floats: []FPoint{{T: enh.Ts, F: f}}, DropName: dropName}) + } } else { // Otherwise the results are added into seriess elements. 
hash := lbls.Hash() @@ -3287,7 +3317,7 @@ seriesLoop: if !ok { ss = Series{Metric: lbls, DropName: dropName} } - addToSeries(&ss, enh.Ts, f, nil, numSteps) + addToSeries(&ss, enh.Ts, f, h, numSteps) seriess[hash] = ss } } @@ -3302,7 +3332,7 @@ seriesLoop: sort.Sort(sort.Reverse(aggr.heap)) } for _, v := range aggr.heap { - add(v.Metric, v.F, v.DropName) + add(v.Metric, v.F, v.H, v.DropName) } case parser.BOTTOMK: @@ -3311,12 +3341,12 @@ seriesLoop: sort.Sort(sort.Reverse((*vectorByReverseValueHeap)(&aggr.heap))) } for _, v := range aggr.heap { - add(v.Metric, v.F, v.DropName) + add(v.Metric, v.F, v.H, v.DropName) } case parser.LIMITK, parser.LIMIT_RATIO: for _, v := range aggr.heap { - add(v.Metric, v.F, v.DropName) + add(v.Metric, v.F, v.H, v.DropName) } } } @@ -3336,7 +3366,11 @@ func (ev *evaluator) aggregationCountValues(e *parser.AggregateExpr, grouping [] var buf []byte for _, s := range vec { enh.resetBuilder(s.Metric) - enh.lb.Set(valueLabel, strconv.FormatFloat(s.F, 'f', -1, 64)) + if s.H == nil { + enh.lb.Set(valueLabel, strconv.FormatFloat(s.F, 'f', -1, 64)) + } else { + enh.lb.Set(valueLabel, s.H.String()) + } metric := enh.lb.Labels() // Considering the count_values() @@ -3448,7 +3482,7 @@ func handleVectorBinopError(err error, e *parser.BinaryExpr) annotations.Annotat return nil } -// groupingKey builds and returns the grouping key for the given metric and +// generateGroupingKey builds and returns the grouping key for the given metric and // grouping labels. func generateGroupingKey(metric labels.Labels, grouping []string, without bool, buf []byte) (uint64, []byte) { if without { diff --git a/vendor/github.com/prometheus/prometheus/promql/functions.go b/vendor/github.com/prometheus/prometheus/promql/functions.go index f9af4fbe092..016e676d316 100644 --- a/vendor/github.com/prometheus/prometheus/promql/functions.go +++ b/vendor/github.com/prometheus/prometheus/promql/functions.go @@ -465,15 +465,15 @@ func funcSortByLabelDesc(vals []parser.Value, args parser.Expressions, enh *Eval return vals[0].(Vector), nil } -// === clamp(Vector parser.ValueTypeVector, min, max Scalar) (Vector, Annotations) === -func funcClamp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) { - vec := vals[0].(Vector) - minVal := vals[1].(Vector)[0].F - maxVal := vals[2].(Vector)[0].F +func clamp(vec Vector, minVal, maxVal float64, enh *EvalNodeHelper) (Vector, annotations.Annotations) { if maxVal < minVal { return enh.Out, nil } for _, el := range vec { + if el.H != nil { + // Process only float samples. 
+ continue + } if !enh.enableDelayedNameRemoval { el.Metric = el.Metric.DropMetricName() } @@ -486,38 +486,26 @@ func funcClamp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper return enh.Out, nil } +// === clamp(Vector parser.ValueTypeVector, min, max Scalar) (Vector, Annotations) === +func funcClamp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) { + vec := vals[0].(Vector) + minVal := vals[1].(Vector)[0].F + maxVal := vals[2].(Vector)[0].F + return clamp(vec, minVal, maxVal, enh) +} + // === clamp_max(Vector parser.ValueTypeVector, max Scalar) (Vector, Annotations) === func funcClampMax(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) { vec := vals[0].(Vector) maxVal := vals[1].(Vector)[0].F - for _, el := range vec { - if !enh.enableDelayedNameRemoval { - el.Metric = el.Metric.DropMetricName() - } - enh.Out = append(enh.Out, Sample{ - Metric: el.Metric, - F: math.Min(maxVal, el.F), - DropName: true, - }) - } - return enh.Out, nil + return clamp(vec, math.Inf(-1), maxVal, enh) } // === clamp_min(Vector parser.ValueTypeVector, min Scalar) (Vector, Annotations) === func funcClampMin(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) { vec := vals[0].(Vector) minVal := vals[1].(Vector)[0].F - for _, el := range vec { - if !enh.enableDelayedNameRemoval { - el.Metric = el.Metric.DropMetricName() - } - enh.Out = append(enh.Out, Sample{ - Metric: el.Metric, - F: math.Max(minVal, el.F), - DropName: true, - }) - } - return enh.Out, nil + return clamp(vec, minVal, math.Inf(+1), enh) } // === round(Vector parser.ValueTypeVector, toNearest=1 Scalar) (Vector, Annotations) === @@ -1312,7 +1300,7 @@ func funcHistogramFraction(vals []parser.Value, args parser.Expressions, enh *Ev } enh.Out = append(enh.Out, Sample{ Metric: sample.Metric, - F: histogramFraction(lower, upper, sample.H), + F: HistogramFraction(lower, upper, sample.H), DropName: true, }) } @@ -1364,7 +1352,7 @@ func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *Ev mb = &metricWithBuckets{sample.Metric, nil} enh.signatureToMetricWithBuckets[string(enh.lblBuf)] = mb } - mb.buckets = append(mb.buckets, bucket{upperBound, sample.F}) + mb.buckets = append(mb.buckets, Bucket{upperBound, sample.F}) } // Now deal with the histograms. 
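The refactor above folds `clamp_max` and `clamp_min` into the shared `clamp` helper by passing an infinite sentinel for the missing bound, skips native-histogram samples (clamping is only defined for floats), and preserves the rule that an inverted range yields an empty result. The neighbouring hunks also export the quantile helpers (`bucketQuantile` becomes `BucketQuantile`, and so on) for reuse outside the package. A standalone sketch of the same consolidation trick, stdlib only and detached from the engine's Vector types:

    package main

    import (
    	"fmt"
    	"math"
    )

    // clamp bounds each value to [minVal, maxVal]; an inverted range
    // produces no output, mirroring the engine's behaviour.
    func clamp(vals []float64, minVal, maxVal float64) []float64 {
    	if maxVal < minVal {
    		return nil
    	}
    	out := make([]float64, 0, len(vals))
    	for _, v := range vals {
    		out = append(out, math.Max(minVal, math.Min(maxVal, v)))
    	}
    	return out
    }

    // The one-sided variants reuse clamp with an infinite missing bound,
    // exactly the sentinel trick used in the diff above.
    func clampMax(vals []float64, maxVal float64) []float64 { return clamp(vals, math.Inf(-1), maxVal) }
    func clampMin(vals []float64, minVal float64) []float64 { return clamp(vals, minVal, math.Inf(+1)) }

    func main() {
    	fmt.Println(clampMax([]float64{1, 5, 9}, 4)) // [1 4 4]
    	fmt.Println(clampMin([]float64{1, 5, 9}, 4)) // [4 5 9]
    }

Folding three near-identical loops into one helper also means the histogram guard and the delayed-name-removal handling live in a single place.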
@@ -1386,14 +1374,14 @@ func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *Ev } enh.Out = append(enh.Out, Sample{ Metric: sample.Metric, - F: histogramQuantile(q, sample.H), + F: HistogramQuantile(q, sample.H), DropName: true, }) } for _, mb := range enh.signatureToMetricWithBuckets { if len(mb.buckets) > 0 { - res, forcedMonotonicity, _ := bucketQuantile(q, mb.buckets) + res, forcedMonotonicity, _ := BucketQuantile(q, mb.buckets) enh.Out = append(enh.Out, Sample{ Metric: mb.metric, F: res, @@ -1412,27 +1400,41 @@ func funcResets(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelpe floats := vals[0].(Matrix)[0].Floats histograms := vals[0].(Matrix)[0].Histograms resets := 0 + if len(floats) == 0 && len(histograms) == 0 { + return enh.Out, nil + } - if len(floats) > 1 { - prev := floats[0].F - for _, sample := range floats[1:] { - current := sample.F - if current < prev { + var prevSample, curSample Sample + for iFloat, iHistogram := 0, 0; iFloat < len(floats) || iHistogram < len(histograms); { + switch { + // Process a float sample if no histogram sample remains or its timestamp is earlier. + // Process a histogram sample if no float sample remains or its timestamp is earlier. + case iHistogram >= len(histograms) || iFloat < len(floats) && floats[iFloat].T < histograms[iHistogram].T: + curSample.F = floats[iFloat].F + curSample.H = nil + iFloat++ + case iFloat >= len(floats) || iHistogram < len(histograms) && floats[iFloat].T > histograms[iHistogram].T: + curSample.H = histograms[iHistogram].H + iHistogram++ + } + // Skip the comparison for the first sample, just initialize prevSample. + if iFloat+iHistogram == 1 { + prevSample = curSample + continue + } + switch { + case prevSample.H == nil && curSample.H == nil: + if curSample.F < prevSample.F { resets++ } - prev = current - } - } - - if len(histograms) > 1 { - prev := histograms[0].H - for _, sample := range histograms[1:] { - current := sample.H - if current.DetectReset(prev) { + case prevSample.H != nil && curSample.H == nil, prevSample.H == nil && curSample.H != nil: + resets++ + case prevSample.H != nil && curSample.H != nil: + if curSample.H.DetectReset(prevSample.H) { resets++ } - prev = current } + prevSample = curSample } return append(enh.Out, Sample{F: float64(resets)}), nil @@ -1441,20 +1443,43 @@ func funcResets(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelpe // === changes(Matrix parser.ValueTypeMatrix) (Vector, Annotations) === func funcChanges(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) { floats := vals[0].(Matrix)[0].Floats + histograms := vals[0].(Matrix)[0].Histograms changes := 0 - - if len(floats) == 0 { - // TODO(beorn7): Only histogram values, still need to add support. + if len(floats) == 0 && len(histograms) == 0 { return enh.Out, nil } - prev := floats[0].F - for _, sample := range floats[1:] { - current := sample.F - if current != prev && !(math.IsNaN(current) && math.IsNaN(prev)) { + var prevSample, curSample Sample + for iFloat, iHistogram := 0, 0; iFloat < len(floats) || iHistogram < len(histograms); { + switch { + // Process a float sample if no histogram sample remains or its timestamp is earlier. + // Process a histogram sample if no float sample remains or its timestamp is earlier. 
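+			// The two cases below effectively merge the float and histogram
+			// samples of the series in timestamp order. The switch further
+			// down then counts a change whenever the sample type flips
+			// between float and histogram, mirroring funcResets above,
+			// where such a flip is counted as a counter reset.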
+ case iHistogram >= len(histograms) || iFloat < len(floats) && floats[iFloat].T < histograms[iHistogram].T: + curSample.F = floats[iFloat].F + curSample.H = nil + iFloat++ + case iFloat >= len(floats) || iHistogram < len(histograms) && floats[iFloat].T > histograms[iHistogram].T: + curSample.H = histograms[iHistogram].H + iHistogram++ + } + // Skip the comparison for the first sample, just initialize prevSample. + if iFloat+iHistogram == 1 { + prevSample = curSample + continue + } + switch { + case prevSample.H == nil && curSample.H == nil: + if curSample.F != prevSample.F && !(math.IsNaN(curSample.F) && math.IsNaN(prevSample.F)) { + changes++ + } + case prevSample.H != nil && curSample.H == nil, prevSample.H == nil && curSample.H != nil: changes++ + case prevSample.H != nil && curSample.H != nil: + if !curSample.H.Equals(prevSample.H) { + changes++ + } } - prev = current + prevSample = curSample } return append(enh.Out, Sample{F: float64(changes)}), nil @@ -1565,6 +1590,10 @@ func dateWrapper(vals []parser.Value, enh *EvalNodeHelper, f func(time.Time) flo } for _, el := range vals[0].(Vector) { + if el.H != nil { + // Ignore histogram sample. + continue + } t := time.Unix(int64(el.F), 0).UTC() if !enh.enableDelayedNameRemoval { el.Metric = el.Metric.DropMetricName() diff --git a/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y b/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y index befb9bdf3e6..c321a1e9735 100644 --- a/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y +++ b/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y @@ -667,10 +667,16 @@ label_set_list : label_set_list COMMA label_set_item label_set_item : IDENTIFIER EQL STRING { $$ = labels.Label{Name: $1.Val, Value: yylex.(*parser).unquoteString($3.Val) } } + | string_identifier EQL STRING + { $$ = labels.Label{Name: $1.Val, Value: yylex.(*parser).unquoteString($3.Val) } } | IDENTIFIER EQL error { yylex.(*parser).unexpected("label set", "string"); $$ = labels.Label{}} + | string_identifier EQL error + { yylex.(*parser).unexpected("label set", "string"); $$ = labels.Label{}} | IDENTIFIER error { yylex.(*parser).unexpected("label set", "\"=\""); $$ = labels.Label{}} + | string_identifier error + { yylex.(*parser).unexpected("label set", "\"=\""); $$ = labels.Label{}} | error { yylex.(*parser).unexpected("label set", "identifier or \"}\""); $$ = labels.Label{} } ; diff --git a/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y.go b/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y.go index ad58a52976e..8979410ceb4 100644 --- a/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y.go +++ b/vendor/github.com/prometheus/prometheus/promql/parser/generated_parser.y.go @@ -251,293 +251,295 @@ var yyExca = [...]int16{ 1, -1, -2, 0, -1, 37, - 1, 138, - 10, 138, - 24, 138, + 1, 141, + 10, 141, + 24, 141, -2, 0, -1, 61, - 2, 181, - 15, 181, - 79, 181, - 85, 181, - -2, 102, - -1, 62, - 2, 182, - 15, 182, - 79, 182, - 85, 182, - -2, 103, - -1, 63, - 2, 183, - 15, 183, - 79, 183, - 85, 183, - -2, 105, - -1, 64, 2, 184, 15, 184, 79, 184, 85, 184, - -2, 106, - -1, 65, + -2, 102, + -1, 62, 2, 185, 15, 185, 79, 185, 85, 185, - -2, 107, - -1, 66, + -2, 103, + -1, 63, 2, 186, 15, 186, 79, 186, 85, 186, - -2, 112, - -1, 67, + -2, 105, + -1, 64, 2, 187, 15, 187, 79, 187, 85, 187, - -2, 114, - -1, 68, + -2, 106, + -1, 65, 2, 188, 15, 188, 79, 188, 85, 188, - -2, 116, - -1, 69, + -2, 107, + -1, 66, 
2, 189, 15, 189, 79, 189, 85, 189, - -2, 117, - -1, 70, + -2, 112, + -1, 67, 2, 190, 15, 190, 79, 190, 85, 190, - -2, 118, - -1, 71, + -2, 114, + -1, 68, 2, 191, 15, 191, 79, 191, 85, 191, - -2, 119, - -1, 72, + -2, 116, + -1, 69, 2, 192, 15, 192, 79, 192, 85, 192, - -2, 120, - -1, 73, + -2, 117, + -1, 70, 2, 193, 15, 193, 79, 193, 85, 193, - -2, 124, - -1, 74, + -2, 118, + -1, 71, 2, 194, 15, 194, 79, 194, 85, 194, + -2, 119, + -1, 72, + 2, 195, + 15, 195, + 79, 195, + 85, 195, + -2, 120, + -1, 73, + 2, 196, + 15, 196, + 79, 196, + 85, 196, + -2, 124, + -1, 74, + 2, 197, + 15, 197, + 79, 197, + 85, 197, -2, 125, - -1, 200, - 9, 243, - 12, 243, - 13, 243, - 18, 243, - 19, 243, - 25, 243, - 41, 243, - 47, 243, - 48, 243, - 51, 243, - 57, 243, - 62, 243, - 63, 243, - 64, 243, - 65, 243, - 66, 243, - 67, 243, - 68, 243, - 69, 243, - 70, 243, - 71, 243, - 72, 243, - 73, 243, - 74, 243, - 75, 243, - 79, 243, - 83, 243, - 85, 243, - 88, 243, - 89, 243, + -1, 205, + 9, 246, + 12, 246, + 13, 246, + 18, 246, + 19, 246, + 25, 246, + 41, 246, + 47, 246, + 48, 246, + 51, 246, + 57, 246, + 62, 246, + 63, 246, + 64, 246, + 65, 246, + 66, 246, + 67, 246, + 68, 246, + 69, 246, + 70, 246, + 71, 246, + 72, 246, + 73, 246, + 74, 246, + 75, 246, + 79, 246, + 83, 246, + 85, 246, + 88, 246, + 89, 246, -2, 0, - -1, 201, - 9, 243, - 12, 243, - 13, 243, - 18, 243, - 19, 243, - 25, 243, - 41, 243, - 47, 243, - 48, 243, - 51, 243, - 57, 243, - 62, 243, - 63, 243, - 64, 243, - 65, 243, - 66, 243, - 67, 243, - 68, 243, - 69, 243, - 70, 243, - 71, 243, - 72, 243, - 73, 243, - 74, 243, - 75, 243, - 79, 243, - 83, 243, - 85, 243, - 88, 243, - 89, 243, + -1, 206, + 9, 246, + 12, 246, + 13, 246, + 18, 246, + 19, 246, + 25, 246, + 41, 246, + 47, 246, + 48, 246, + 51, 246, + 57, 246, + 62, 246, + 63, 246, + 64, 246, + 65, 246, + 66, 246, + 67, 246, + 68, 246, + 69, 246, + 70, 246, + 71, 246, + 72, 246, + 73, 246, + 74, 246, + 75, 246, + 79, 246, + 83, 246, + 85, 246, + 88, 246, + 89, 246, -2, 0, } const yyPrivate = 57344 -const yyLast = 799 +const yyLast = 804 var yyAct = [...]int16{ - 152, 334, 332, 155, 339, 226, 39, 192, 276, 44, - 291, 290, 118, 82, 178, 229, 107, 106, 346, 347, - 348, 349, 109, 108, 198, 239, 199, 156, 110, 105, - 6, 245, 200, 201, 133, 325, 111, 329, 228, 60, - 357, 293, 328, 304, 267, 160, 266, 128, 55, 151, - 302, 311, 302, 196, 340, 159, 55, 89, 54, 356, - 241, 242, 355, 113, 243, 114, 54, 98, 99, 265, - 112, 101, 256, 104, 88, 230, 232, 234, 235, 236, - 244, 246, 249, 250, 251, 252, 253, 257, 258, 105, - 333, 231, 233, 237, 238, 240, 247, 248, 103, 115, - 109, 254, 255, 324, 150, 218, 110, 264, 111, 270, - 77, 35, 7, 149, 188, 163, 322, 321, 173, 320, - 167, 170, 323, 165, 271, 166, 2, 3, 4, 5, - 263, 101, 194, 104, 180, 184, 197, 187, 186, 319, - 272, 202, 203, 204, 205, 206, 207, 208, 209, 210, - 211, 212, 213, 214, 215, 216, 195, 299, 103, 318, - 217, 36, 298, 1, 190, 219, 220, 317, 160, 160, - 316, 193, 160, 154, 182, 196, 229, 297, 159, 159, - 160, 358, 159, 268, 181, 183, 239, 260, 296, 262, - 159, 315, 245, 129, 314, 55, 225, 313, 161, 228, - 161, 161, 259, 312, 161, 54, 86, 295, 310, 288, - 289, 8, 161, 292, 162, 37, 162, 162, 49, 269, - 162, 241, 242, 309, 179, 243, 180, 127, 162, 126, - 308, 223, 294, 256, 48, 222, 230, 232, 234, 235, - 236, 244, 246, 249, 250, 251, 252, 253, 257, 258, - 221, 169, 231, 233, 237, 238, 240, 247, 248, 157, - 158, 164, 254, 255, 168, 10, 182, 300, 55, 301, - 303, 47, 305, 46, 132, 79, 181, 183, 54, 306, - 307, 45, 134, 135, 136, 137, 138, 139, 140, 141, - 
142, 143, 144, 145, 146, 147, 148, 43, 59, 50, - 84, 9, 9, 121, 326, 78, 327, 130, 171, 121, - 83, 42, 131, 119, 335, 336, 337, 331, 185, 119, - 338, 261, 342, 341, 344, 343, 122, 117, 41, 177, - 350, 351, 122, 55, 176, 352, 53, 77, 40, 56, - 125, 354, 22, 54, 84, 124, 172, 175, 51, 57, - 191, 353, 273, 85, 83, 189, 359, 224, 123, 80, - 345, 120, 81, 153, 58, 75, 227, 52, 116, 0, - 0, 18, 19, 0, 0, 20, 0, 0, 0, 0, - 0, 76, 0, 0, 0, 0, 61, 62, 63, 64, - 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, - 0, 0, 0, 13, 0, 0, 0, 24, 0, 30, - 0, 0, 31, 32, 55, 38, 105, 53, 77, 0, - 56, 275, 0, 22, 54, 0, 0, 0, 274, 0, - 57, 0, 278, 279, 277, 284, 286, 283, 285, 280, - 281, 282, 287, 87, 89, 0, 75, 0, 0, 0, - 0, 0, 18, 19, 98, 99, 20, 0, 101, 102, - 104, 88, 76, 0, 0, 0, 0, 61, 62, 63, - 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 0, 0, 0, 13, 103, 0, 0, 24, 0, - 30, 0, 55, 31, 32, 53, 77, 0, 56, 330, - 0, 22, 54, 0, 0, 0, 0, 0, 57, 0, - 278, 279, 277, 284, 286, 283, 285, 280, 281, 282, - 287, 0, 0, 0, 75, 0, 0, 0, 0, 0, - 18, 19, 0, 0, 20, 0, 0, 0, 17, 77, - 76, 0, 0, 0, 22, 61, 62, 63, 64, 65, - 66, 67, 68, 69, 70, 71, 72, 73, 74, 0, - 0, 0, 13, 0, 0, 0, 24, 0, 30, 0, - 0, 31, 32, 18, 19, 0, 0, 20, 0, 0, - 0, 17, 35, 0, 0, 0, 0, 22, 11, 12, - 14, 15, 16, 21, 23, 25, 26, 27, 28, 29, - 33, 34, 0, 0, 0, 13, 0, 0, 0, 24, - 0, 30, 0, 0, 31, 32, 18, 19, 0, 0, - 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 11, 12, 14, 15, 16, 21, 23, 25, 26, - 27, 28, 29, 33, 34, 105, 0, 0, 13, 0, - 0, 0, 24, 174, 30, 0, 0, 31, 32, 0, + 155, 339, 337, 158, 344, 231, 39, 197, 281, 44, + 296, 295, 84, 120, 82, 181, 109, 108, 351, 352, + 353, 354, 107, 111, 203, 136, 204, 159, 154, 112, + 205, 206, 234, 6, 271, 55, 163, 163, 107, 334, + 333, 307, 244, 275, 309, 54, 162, 162, 250, 363, + 91, 272, 330, 131, 362, 233, 60, 270, 276, 110, + 100, 101, 298, 115, 103, 116, 106, 90, 164, 164, + 114, 265, 113, 361, 277, 307, 360, 246, 247, 338, + 103, 248, 106, 153, 165, 165, 264, 316, 201, 261, + 122, 105, 235, 237, 239, 240, 241, 249, 251, 254, + 255, 256, 257, 258, 262, 263, 273, 105, 236, 238, + 242, 243, 245, 252, 253, 152, 117, 166, 259, 260, + 176, 164, 170, 173, 163, 168, 223, 169, 172, 2, + 3, 4, 5, 107, 162, 199, 111, 165, 187, 202, + 189, 171, 112, 269, 207, 208, 209, 210, 211, 212, + 213, 214, 215, 216, 217, 218, 219, 220, 221, 200, + 89, 91, 113, 222, 123, 193, 268, 329, 224, 225, + 183, 100, 101, 191, 121, 103, 104, 106, 90, 7, + 85, 234, 266, 182, 55, 183, 328, 86, 192, 123, + 83, 244, 122, 267, 54, 132, 190, 250, 188, 121, + 345, 230, 105, 86, 233, 77, 35, 119, 304, 10, + 185, 327, 86, 303, 293, 294, 157, 315, 297, 79, + 184, 186, 326, 163, 274, 185, 246, 247, 302, 325, + 248, 324, 314, 162, 323, 184, 186, 299, 261, 313, + 322, 235, 237, 239, 240, 241, 249, 251, 254, 255, + 256, 257, 258, 262, 263, 164, 321, 236, 238, 242, + 243, 245, 252, 253, 180, 126, 320, 259, 260, 179, + 125, 165, 305, 319, 306, 308, 318, 310, 317, 130, + 88, 129, 178, 124, 311, 312, 137, 138, 139, 140, + 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, + 151, 195, 160, 161, 50, 163, 36, 167, 198, 331, + 78, 332, 201, 228, 55, 162, 85, 227, 1, 340, + 341, 342, 336, 49, 54, 343, 83, 347, 346, 349, + 348, 48, 226, 47, 81, 355, 356, 164, 55, 86, + 357, 53, 77, 301, 56, 8, 359, 22, 54, 37, + 55, 175, 46, 165, 57, 128, 135, 127, 45, 43, + 54, 364, 300, 59, 133, 174, 9, 9, 42, 134, + 75, 41, 40, 51, 196, 358, 18, 19, 278, 87, + 20, 194, 229, 80, 350, 156, 76, 58, 232, 52, + 118, 61, 62, 63, 64, 65, 66, 67, 68, 69, + 70, 71, 72, 73, 74, 0, 0, 0, 13, 0, + 0, 0, 24, 
0, 30, 0, 0, 31, 32, 55, + 38, 0, 53, 77, 0, 56, 280, 0, 22, 54, + 0, 0, 0, 279, 0, 57, 0, 283, 284, 282, + 289, 291, 288, 290, 285, 286, 287, 292, 0, 0, + 0, 75, 0, 0, 0, 0, 0, 18, 19, 0, + 0, 20, 0, 0, 0, 0, 0, 76, 0, 0, + 0, 0, 61, 62, 63, 64, 65, 66, 67, 68, + 69, 70, 71, 72, 73, 74, 0, 0, 0, 13, + 0, 0, 0, 24, 0, 30, 0, 55, 31, 32, + 53, 77, 0, 56, 335, 0, 22, 54, 0, 0, + 0, 0, 0, 57, 0, 283, 284, 282, 289, 291, + 288, 290, 285, 286, 287, 292, 0, 0, 0, 75, + 0, 0, 0, 0, 0, 18, 19, 0, 0, 20, + 0, 0, 0, 17, 77, 76, 0, 0, 0, 22, + 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, + 71, 72, 73, 74, 0, 0, 0, 13, 0, 0, + 0, 24, 0, 30, 0, 0, 31, 32, 18, 19, + 0, 0, 20, 0, 0, 0, 17, 35, 0, 0, + 0, 0, 22, 11, 12, 14, 15, 16, 21, 23, + 25, 26, 27, 28, 29, 33, 34, 0, 0, 0, + 13, 0, 0, 0, 24, 0, 30, 0, 0, 31, + 32, 18, 19, 0, 0, 20, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 11, 12, 14, 15, + 16, 21, 23, 25, 26, 27, 28, 29, 33, 34, + 107, 0, 0, 13, 0, 0, 0, 24, 177, 30, + 0, 0, 31, 32, 0, 0, 0, 0, 0, 107, + 0, 0, 0, 0, 0, 0, 0, 89, 91, 92, + 0, 93, 94, 95, 96, 97, 98, 99, 100, 101, + 102, 0, 103, 104, 106, 90, 89, 91, 92, 0, + 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, + 0, 103, 104, 106, 90, 107, 0, 0, 0, 105, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 107, 0, 0, 0, 105, 0, + 0, 0, 89, 91, 92, 0, 93, 94, 95, 0, + 97, 98, 99, 100, 101, 102, 0, 103, 104, 106, + 90, 89, 91, 92, 0, 93, 94, 0, 0, 97, + 98, 0, 100, 101, 102, 0, 103, 104, 106, 90, 0, 0, 0, 0, 105, 0, 0, 0, 0, 0, - 0, 0, 87, 89, 90, 0, 91, 92, 93, 94, - 95, 96, 97, 98, 99, 100, 0, 101, 102, 104, - 88, 87, 89, 90, 0, 91, 92, 93, 94, 95, - 96, 97, 98, 99, 100, 0, 101, 102, 104, 88, - 105, 0, 0, 0, 103, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 105, - 0, 0, 0, 103, 0, 0, 0, 87, 89, 90, - 0, 91, 92, 93, 0, 95, 96, 97, 98, 99, - 100, 0, 101, 102, 104, 88, 87, 89, 90, 0, - 91, 92, 0, 0, 95, 96, 0, 98, 99, 100, - 0, 101, 102, 104, 88, 0, 0, 0, 0, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 103, + 0, 0, 0, 105, } var yyPact = [...]int16{ - 28, 102, 569, 569, 405, 526, -1000, -1000, -1000, 98, + 31, 169, 574, 574, 410, 531, -1000, -1000, -1000, 193, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, - -1000, -1000, -1000, -1000, -1000, 342, -1000, 204, -1000, 650, + -1000, -1000, -1000, -1000, -1000, 314, -1000, 278, -1000, 655, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, - -1000, -1000, 21, 93, -1000, -1000, 483, -1000, 483, 97, + -1000, -1000, 57, 147, -1000, -1000, 488, -1000, 488, 192, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, - -1000, -1000, -1000, -1000, -1000, -1000, -1000, 307, -1000, -1000, - 338, -1000, -1000, 225, -1000, 23, -1000, -44, -44, -44, - -44, -44, -44, -44, -44, -44, -44, -44, -44, -44, - -44, -44, -44, 47, 171, 259, 93, -57, -1000, 249, - 249, 324, -1000, 631, 75, -1000, 327, -1000, -1000, 222, - 130, -1000, -1000, -1000, 298, -1000, 112, -1000, 159, 483, - -1000, -58, -48, -1000, 483, 483, 483, 483, 483, 483, - 483, 483, 483, 483, 483, 483, 483, 483, 483, -1000, - 39, -1000, -1000, 90, -1000, -1000, -1000, -1000, -1000, -1000, - -1000, 36, 36, 229, -1000, -1000, -1000, -1000, 174, -1000, - -1000, 180, -1000, 650, -1000, -1000, 301, -1000, 105, -1000, - -1000, -1000, -1000, -1000, 44, -1000, -1000, -1000, -1000, -1000, - 18, 157, 83, -1000, -1000, -1000, 404, 15, 249, 249, - 249, 249, 75, 75, 402, 402, 402, 715, 696, 402, - 402, 715, 75, 75, 402, 75, 15, -1000, 19, -1000, - -1000, 
-1000, 186, -1000, 155, -1000, -1000, -1000, -1000, -1000, + -1000, -1000, -1000, -1000, -1000, -1000, -1000, 187, -1000, -1000, + 263, -1000, -1000, 353, 277, -1000, -1000, 29, -1000, -53, + -53, -53, -53, -53, -53, -53, -53, -53, -53, -53, + -53, -53, -53, -53, -53, 26, 214, 305, 147, -56, + -1000, 126, 126, 329, -1000, 636, 24, -1000, 262, -1000, + -1000, 181, 166, -1000, -1000, 178, -1000, 171, -1000, 163, + -1000, 296, 488, -1000, -58, -50, -1000, 488, 488, 488, + 488, 488, 488, 488, 488, 488, 488, 488, 488, 488, + 488, 488, -1000, 175, -1000, -1000, 111, -1000, -1000, -1000, + -1000, -1000, -1000, -1000, 115, 115, 311, -1000, -1000, -1000, + -1000, 179, -1000, -1000, 64, -1000, 655, -1000, -1000, 162, + -1000, 141, -1000, -1000, -1000, -1000, -1000, 32, -1000, -1000, + -1000, -1000, -1000, -1000, -1000, 25, 80, 17, -1000, -1000, + -1000, 409, 8, 126, 126, 126, 126, 24, 24, 119, + 119, 119, 720, 701, 119, 119, 720, 24, 24, 119, + 24, 8, -1000, 40, -1000, -1000, -1000, 341, -1000, 206, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, - 483, -1000, -1000, -1000, -1000, -1000, -1000, 31, 31, 17, - 31, 37, 37, 206, 34, -1000, -1000, 197, 191, 188, - 185, 164, 161, 153, 133, 113, 111, 110, -1000, -1000, - -1000, -1000, -1000, -1000, 101, -1000, -1000, -1000, 13, -1000, - 650, -1000, -1000, -1000, 31, -1000, 16, 11, 482, -1000, - -1000, -1000, 33, 163, 163, 163, 36, 40, 40, 33, - 40, 33, -74, -1000, -1000, -1000, -1000, -1000, 31, 31, - -1000, -1000, -1000, 31, -1000, -1000, -1000, -1000, -1000, -1000, - 163, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, - -1000, -1000, -1000, 38, -1000, 160, -1000, -1000, -1000, -1000, + -1000, -1000, -1000, -1000, -1000, 488, -1000, -1000, -1000, -1000, + -1000, -1000, 56, 56, 18, 56, 72, 72, 215, 70, + -1000, -1000, 272, 270, 267, 260, 250, 234, 228, 225, + 223, 216, 205, -1000, -1000, -1000, -1000, -1000, -1000, 165, + -1000, -1000, -1000, 30, -1000, 655, -1000, -1000, -1000, 56, + -1000, 14, 13, 487, -1000, -1000, -1000, 22, 27, 27, + 27, 115, 186, 186, 22, 186, 22, -74, -1000, -1000, + -1000, -1000, -1000, 56, 56, -1000, -1000, -1000, 56, -1000, + -1000, -1000, -1000, -1000, -1000, 27, -1000, -1000, -1000, -1000, + -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, 52, -1000, + 28, -1000, -1000, -1000, -1000, } var yyPgo = [...]int16{ - 0, 368, 12, 367, 5, 14, 366, 298, 364, 363, - 361, 360, 265, 211, 359, 13, 357, 10, 11, 355, - 353, 7, 352, 8, 4, 351, 2, 1, 3, 350, - 27, 0, 348, 338, 17, 193, 328, 312, 6, 311, - 308, 16, 307, 39, 297, 9, 281, 274, 273, 271, - 234, 218, 299, 163, 161, + 0, 390, 13, 389, 5, 15, 388, 363, 387, 385, + 12, 384, 209, 345, 383, 14, 382, 10, 11, 381, + 379, 7, 378, 8, 4, 375, 2, 1, 3, 374, + 27, 0, 373, 372, 17, 195, 371, 369, 6, 368, + 365, 16, 364, 56, 359, 9, 358, 356, 352, 333, + 331, 323, 304, 318, 306, } var yyR1 = [...]int8{ @@ -554,18 +556,18 @@ var yyR1 = [...]int8{ 13, 13, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 12, 12, 12, 12, - 14, 14, 14, 15, 15, 15, 15, 54, 20, 20, - 20, 20, 19, 19, 19, 19, 19, 19, 19, 19, - 19, 29, 29, 29, 21, 21, 21, 21, 22, 22, - 22, 23, 23, 23, 23, 23, 23, 23, 23, 23, - 23, 23, 24, 24, 25, 25, 25, 11, 11, 11, - 11, 3, 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 6, 6, 6, 6, 6, + 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, + 54, 20, 20, 20, 20, 19, 19, 19, 19, 19, + 19, 19, 19, 19, 29, 
29, 29, 21, 21, 21, + 21, 22, 22, 22, 23, 23, 23, 23, 23, 23, + 23, 23, 23, 23, 23, 24, 24, 25, 25, 25, + 11, 11, 11, 11, 3, 3, 3, 3, 3, 3, + 3, 3, 3, 3, 3, 3, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, - 6, 6, 6, 6, 8, 8, 5, 5, 5, 5, - 45, 45, 28, 28, 30, 30, 31, 31, 27, 26, - 26, 49, 10, 18, 18, + 6, 6, 6, 6, 6, 6, 6, 8, 8, 5, + 5, 5, 5, 45, 45, 28, 28, 30, 30, 31, + 31, 27, 26, 26, 49, 10, 18, 18, } var yyR2 = [...]int8{ @@ -582,18 +584,18 @@ var yyR2 = [...]int8{ 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 4, 2, 0, - 3, 1, 2, 3, 3, 2, 1, 2, 0, 3, - 2, 1, 1, 3, 1, 3, 4, 1, 3, 5, - 5, 1, 1, 1, 4, 3, 3, 2, 3, 1, - 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 4, 3, 3, 1, 2, 1, 1, 1, + 3, 1, 2, 3, 3, 3, 3, 2, 2, 1, + 2, 0, 3, 2, 1, 1, 3, 1, 3, 4, + 1, 3, 5, 5, 1, 1, 1, 4, 3, 3, + 2, 3, 1, 2, 3, 3, 3, 3, 3, 3, + 3, 3, 3, 3, 3, 4, 3, 3, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 2, 2, 1, 1, 1, 2, - 1, 1, 1, 0, 1, + 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, + 1, 1, 2, 1, 1, 1, 0, 1, } var yyChk = [...]int16{ @@ -605,34 +607,35 @@ var yyChk = [...]int16{ -52, -32, -3, 12, 19, 9, 15, 25, -8, -7, -43, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 41, 57, 13, -52, -12, - -14, 20, -15, 12, 2, -20, 2, 41, 59, 42, - 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, - 54, 56, 57, 83, 58, 14, -34, -41, 2, 79, - 85, 15, -41, -38, -38, -43, -1, 20, -2, 12, - -10, 2, 25, 20, 7, 2, 4, 2, 24, -35, - -42, -37, -47, 78, -35, -35, -35, -35, -35, -35, - -35, -35, -35, -35, -35, -35, -35, -35, -35, -45, - 57, 2, -31, -9, 2, -28, -30, 88, 89, 19, - 9, 41, 57, -45, 2, -41, -34, -17, 15, 2, - -17, -40, 22, -38, 22, 20, 7, 2, -5, 2, - 4, 54, 44, 55, -5, 20, -15, 25, 2, -19, - 5, -29, -21, 12, -28, -30, 16, -38, 82, 84, - 80, 81, -38, -38, -38, -38, -38, -38, -38, -38, - -38, -38, -38, -38, -38, -38, -38, -45, 15, -28, - -28, 21, 6, 2, -16, 22, -4, -6, 25, 2, - 62, 78, 63, 79, 64, 65, 66, 80, 81, 12, - 82, 47, 48, 51, 67, 18, 68, 83, 84, 69, - 70, 71, 72, 73, 88, 89, 59, 74, 75, 22, - 7, 20, -2, 25, 2, 25, 2, 26, 26, -30, - 26, 41, 57, -22, 24, 17, -23, 30, 28, 29, - 35, 36, 37, 33, 31, 34, 32, 38, -17, -17, - -18, -17, -18, 22, -45, 21, 2, 22, 7, 2, - -38, -27, 19, -27, 26, -27, -21, -21, 24, 17, - 2, 17, 6, 6, 6, 6, 6, 6, 6, 6, - 6, 6, 6, 21, 2, 22, -4, -27, 26, 26, - 17, -23, -26, 57, -27, -31, -31, -31, -28, -24, - 14, -24, -26, -24, -26, -11, 92, 93, 94, 95, - -27, -27, -27, -25, -31, 24, 21, 2, 21, -31, + -14, 20, -15, 12, -10, 2, 25, -20, 2, 41, + 59, 42, 43, 45, 46, 47, 48, 49, 50, 51, + 52, 53, 54, 56, 57, 83, 58, 14, -34, -41, + 2, 79, 85, 15, -41, -38, -38, -43, -1, 20, + -2, 12, -10, 2, 20, 7, 2, 4, 2, 4, + 2, 24, -35, -42, -37, -47, 78, -35, -35, -35, + -35, -35, -35, -35, -35, -35, -35, -35, -35, -35, + -35, -35, -45, 57, 2, -31, -9, 2, -28, -30, + 88, 89, 19, 9, 41, 57, -45, 2, -41, -34, + -17, 15, 2, -17, -40, 22, -38, 22, 20, 7, + 2, -5, 2, 4, 54, 44, 55, -5, 20, -15, + 25, 2, 25, 2, -19, 5, -29, -21, 12, -28, + -30, 16, -38, 82, 84, 80, 81, -38, -38, -38, + -38, -38, -38, -38, -38, -38, -38, -38, -38, -38, + -38, -38, -45, 15, -28, -28, 21, 6, 2, -16, + 22, -4, -6, 25, 2, 62, 78, 63, 79, 64, + 65, 66, 80, 81, 12, 82, 47, 48, 51, 67, + 18, 68, 83, 84, 69, 70, 71, 72, 73, 88, + 89, 59, 74, 75, 22, 7, 20, -2, 25, 2, + 25, 2, 26, 26, -30, 26, 41, 57, -22, 24, + 17, -23, 30, 28, 29, 35, 36, 37, 33, 31, + 34, 32, 38, -17, -17, -18, 
-17, -18, 22, -45, + 21, 2, 22, 7, 2, -38, -27, 19, -27, 26, + -27, -21, -21, 24, 17, 2, 17, 6, 6, 6, + 6, 6, 6, 6, 6, 6, 6, 6, 21, 2, + 22, -4, -27, 26, 26, 17, -23, -26, 57, -27, + -31, -31, -31, -28, -24, 14, -24, -26, -24, -26, + -11, 92, 93, 94, 95, -27, -27, -27, -25, -31, + 24, 21, 2, 21, -31, } var yyDef = [...]int16{ @@ -641,37 +644,38 @@ var yyDef = [...]int16{ 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 0, 2, -2, 3, 4, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, - 18, 19, 0, 108, 230, 231, 0, 241, 0, 85, + 18, 19, 0, 108, 233, 234, 0, 244, 0, 85, 86, -2, -2, -2, -2, -2, -2, -2, -2, -2, - -2, -2, -2, -2, -2, 224, 225, 0, 5, 100, - 0, 128, 131, 0, 136, 137, 141, 43, 43, 43, + -2, -2, -2, -2, -2, 227, 228, 0, 5, 100, + 0, 128, 131, 0, 0, 139, 245, 140, 144, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, - 43, 43, 43, 0, 0, 0, 0, 22, 23, 0, - 0, 0, 61, 0, 83, 84, 0, 89, 91, 0, - 95, 99, 242, 126, 0, 132, 0, 135, 140, 0, - 42, 47, 48, 44, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 68, - 0, 70, 71, 0, 73, 236, 237, 74, 75, 232, - 233, 0, 0, 0, 82, 20, 21, 24, 0, 54, - 25, 0, 63, 65, 67, 87, 0, 92, 0, 98, - 226, 227, 228, 229, 0, 127, 130, 133, 134, 139, - 142, 144, 147, 151, 152, 153, 0, 26, 0, 0, - -2, -2, 27, 28, 29, 30, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 69, 0, 234, - 235, 76, 0, 81, 0, 53, 56, 58, 59, 60, - 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, - 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, - 215, 216, 217, 218, 219, 220, 221, 222, 223, 62, - 66, 88, 90, 93, 97, 94, 96, 0, 0, 0, - 0, 0, 0, 0, 0, 157, 159, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 45, 46, - 49, 244, 50, 72, 0, 78, 80, 51, 0, 57, - 64, 143, 238, 145, 0, 148, 0, 0, 0, 155, - 160, 156, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 77, 79, 52, 55, 146, 0, 0, - 154, 158, 161, 0, 240, 162, 163, 164, 165, 166, - 0, 167, 168, 169, 170, 171, 177, 178, 179, 180, - 149, 150, 239, 0, 175, 0, 173, 176, 172, 174, + 43, 43, 43, 43, 43, 0, 0, 0, 0, 22, + 23, 0, 0, 0, 61, 0, 83, 84, 0, 89, + 91, 0, 95, 99, 126, 0, 132, 0, 137, 0, + 138, 143, 0, 42, 47, 48, 44, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 68, 0, 70, 71, 0, 73, 239, 240, + 74, 75, 235, 236, 0, 0, 0, 82, 20, 21, + 24, 0, 54, 25, 0, 63, 65, 67, 87, 0, + 92, 0, 98, 229, 230, 231, 232, 0, 127, 130, + 133, 135, 134, 136, 142, 145, 147, 150, 154, 155, + 156, 0, 26, 0, 0, -2, -2, 27, 28, 29, + 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, + 40, 41, 69, 0, 237, 238, 76, 0, 81, 0, + 53, 56, 58, 59, 60, 198, 199, 200, 201, 202, + 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, + 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, + 223, 224, 225, 226, 62, 66, 88, 90, 93, 97, + 94, 96, 0, 0, 0, 0, 0, 0, 0, 0, + 160, 162, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 45, 46, 49, 247, 50, 72, 0, + 78, 80, 51, 0, 57, 64, 146, 241, 148, 0, + 151, 0, 0, 0, 158, 163, 159, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 77, 79, + 52, 55, 149, 0, 0, 157, 161, 164, 0, 243, + 165, 166, 167, 168, 169, 0, 170, 171, 172, 173, + 174, 180, 181, 182, 183, 152, 153, 242, 0, 178, + 0, 176, 179, 175, 177, } var yyTok1 = [...]int8{ @@ -1614,24 +1618,41 @@ yydefault: yyVAL.label = labels.Label{Name: yyDollar[1].item.Val, Value: yylex.(*parser).unquoteString(yyDollar[3].item.Val)} } case 134: + yyDollar = yyS[yypt-3 : yypt+1] + { + yyVAL.label = labels.Label{Name: yyDollar[1].item.Val, Value: yylex.(*parser).unquoteString(yyDollar[3].item.Val)} + } + case 135: yyDollar = yyS[yypt-3 : yypt+1] { yylex.(*parser).unexpected("label set", "string") yyVAL.label = labels.Label{} } - case 135: + 
case 136: + yyDollar = yyS[yypt-3 : yypt+1] + { + yylex.(*parser).unexpected("label set", "string") + yyVAL.label = labels.Label{} + } + case 137: yyDollar = yyS[yypt-2 : yypt+1] { yylex.(*parser).unexpected("label set", "\"=\"") yyVAL.label = labels.Label{} } - case 136: + case 138: + yyDollar = yyS[yypt-2 : yypt+1] + { + yylex.(*parser).unexpected("label set", "\"=\"") + yyVAL.label = labels.Label{} + } + case 139: yyDollar = yyS[yypt-1 : yypt+1] { yylex.(*parser).unexpected("label set", "identifier or \"}\"") yyVAL.label = labels.Label{} } - case 137: + case 140: yyDollar = yyS[yypt-2 : yypt+1] { yylex.(*parser).generatedParserResult = &seriesDescription{ @@ -1639,33 +1660,33 @@ yydefault: values: yyDollar[2].series, } } - case 138: + case 141: yyDollar = yyS[yypt-0 : yypt+1] { yyVAL.series = []SequenceValue{} } - case 139: + case 142: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.series = append(yyDollar[1].series, yyDollar[3].series...) } - case 140: + case 143: yyDollar = yyS[yypt-2 : yypt+1] { yyVAL.series = yyDollar[1].series } - case 141: + case 144: yyDollar = yyS[yypt-1 : yypt+1] { yylex.(*parser).unexpected("series values", "") yyVAL.series = nil } - case 142: + case 145: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.series = []SequenceValue{{Omitted: true}} } - case 143: + case 146: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.series = []SequenceValue{} @@ -1673,12 +1694,12 @@ yydefault: yyVAL.series = append(yyVAL.series, SequenceValue{Omitted: true}) } } - case 144: + case 147: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.series = []SequenceValue{{Value: yyDollar[1].float}} } - case 145: + case 148: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.series = []SequenceValue{} @@ -1687,7 +1708,7 @@ yydefault: yyVAL.series = append(yyVAL.series, SequenceValue{Value: yyDollar[1].float}) } } - case 146: + case 149: yyDollar = yyS[yypt-4 : yypt+1] { yyVAL.series = []SequenceValue{} @@ -1697,12 +1718,12 @@ yydefault: yyDollar[1].float += yyDollar[2].float } } - case 147: + case 150: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.series = []SequenceValue{{Histogram: yyDollar[1].histogram}} } - case 148: + case 151: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.series = []SequenceValue{} @@ -1712,7 +1733,7 @@ yydefault: //$1 += $2 } } - case 149: + case 152: yyDollar = yyS[yypt-5 : yypt+1] { val, err := yylex.(*parser).histogramsIncreaseSeries(yyDollar[1].histogram, yyDollar[3].histogram, yyDollar[5].uint) @@ -1721,7 +1742,7 @@ yydefault: } yyVAL.series = val } - case 150: + case 153: yyDollar = yyS[yypt-5 : yypt+1] { val, err := yylex.(*parser).histogramsDecreaseSeries(yyDollar[1].histogram, yyDollar[3].histogram, yyDollar[5].uint) @@ -1730,7 +1751,7 @@ yydefault: } yyVAL.series = val } - case 151: + case 154: yyDollar = yyS[yypt-1 : yypt+1] { if yyDollar[1].item.Val != "stale" { @@ -1738,130 +1759,130 @@ yydefault: } yyVAL.float = math.Float64frombits(value.StaleNaN) } - case 154: + case 157: yyDollar = yyS[yypt-4 : yypt+1] { yyVAL.histogram = yylex.(*parser).buildHistogramFromMap(&yyDollar[2].descriptors) } - case 155: + case 158: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.histogram = yylex.(*parser).buildHistogramFromMap(&yyDollar[2].descriptors) } - case 156: + case 159: yyDollar = yyS[yypt-3 : yypt+1] { m := yylex.(*parser).newMap() yyVAL.histogram = yylex.(*parser).buildHistogramFromMap(&m) } - case 157: + case 160: yyDollar = yyS[yypt-2 : yypt+1] { m := yylex.(*parser).newMap() yyVAL.histogram = yylex.(*parser).buildHistogramFromMap(&m) } - case 158: + case 161: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = 
*(yylex.(*parser).mergeMaps(&yyDollar[1].descriptors, &yyDollar[3].descriptors)) } - case 159: + case 162: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.descriptors = yyDollar[1].descriptors } - case 160: + case 163: yyDollar = yyS[yypt-2 : yypt+1] { yylex.(*parser).unexpected("histogram description", "histogram description key, e.g. buckets:[5 10 7]") } - case 161: + case 164: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["schema"] = yyDollar[3].int } - case 162: + case 165: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["sum"] = yyDollar[3].float } - case 163: + case 166: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["count"] = yyDollar[3].float } - case 164: + case 167: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["z_bucket"] = yyDollar[3].float } - case 165: + case 168: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["z_bucket_w"] = yyDollar[3].float } - case 166: + case 169: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["custom_values"] = yyDollar[3].bucket_set } - case 167: + case 170: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["buckets"] = yyDollar[3].bucket_set } - case 168: + case 171: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["offset"] = yyDollar[3].int } - case 169: + case 172: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["n_buckets"] = yyDollar[3].bucket_set } - case 170: + case 173: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["n_offset"] = yyDollar[3].int } - case 171: + case 174: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.descriptors = yylex.(*parser).newMap() yyVAL.descriptors["counter_reset_hint"] = yyDollar[3].item } - case 172: + case 175: yyDollar = yyS[yypt-4 : yypt+1] { yyVAL.bucket_set = yyDollar[2].bucket_set } - case 173: + case 176: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.bucket_set = yyDollar[2].bucket_set } - case 174: + case 177: yyDollar = yyS[yypt-3 : yypt+1] { yyVAL.bucket_set = append(yyDollar[1].bucket_set, yyDollar[3].float) } - case 175: + case 178: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.bucket_set = []float64{yyDollar[1].float} } - case 230: + case 233: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.node = &NumberLiteral{ @@ -1869,7 +1890,7 @@ yydefault: PosRange: yyDollar[1].item.PositionRange(), } } - case 231: + case 234: yyDollar = yyS[yypt-1 : yypt+1] { var err error @@ -1883,12 +1904,12 @@ yydefault: PosRange: yyDollar[1].item.PositionRange(), } } - case 232: + case 235: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.float = yylex.(*parser).number(yyDollar[1].item.Val) } - case 233: + case 236: yyDollar = yyS[yypt-1 : yypt+1] { var err error @@ -1899,17 +1920,17 @@ yydefault: } yyVAL.float = dur.Seconds() } - case 234: + case 237: yyDollar = yyS[yypt-2 : yypt+1] { yyVAL.float = yyDollar[2].float } - case 235: + case 238: yyDollar = yyS[yypt-2 : yypt+1] { yyVAL.float = -yyDollar[2].float } - case 238: + case 241: yyDollar = yyS[yypt-1 : yypt+1] { var err error @@ -1918,17 +1939,17 @@ yydefault: yylex.(*parser).addParseErrf(yyDollar[1].item.PositionRange(), "invalid repetition in series values: %s", err) } } - case 239: + case 242: yyDollar = yyS[yypt-2 : yypt+1] 
{ yyVAL.int = -int64(yyDollar[2].uint) } - case 240: + case 243: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.int = int64(yyDollar[1].uint) } - case 241: + case 244: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.node = &StringLiteral{ @@ -1936,7 +1957,7 @@ yydefault: PosRange: yyDollar[1].item.PositionRange(), } } - case 242: + case 245: yyDollar = yyS[yypt-1 : yypt+1] { yyVAL.item = Item{ @@ -1945,7 +1966,7 @@ yydefault: Val: yylex.(*parser).unquoteString(yyDollar[1].item.Val), } } - case 243: + case 246: yyDollar = yyS[yypt-0 : yypt+1] { yyVAL.strings = nil diff --git a/vendor/github.com/prometheus/prometheus/promql/parser/lex.go b/vendor/github.com/prometheus/prometheus/promql/parser/lex.go index 82bf0367b83..7210d51b7bd 100644 --- a/vendor/github.com/prometheus/prometheus/promql/parser/lex.go +++ b/vendor/github.com/prometheus/prometheus/promql/parser/lex.go @@ -68,6 +68,12 @@ func (i ItemType) IsAggregatorWithParam() bool { return i == TOPK || i == BOTTOMK || i == COUNT_VALUES || i == QUANTILE || i == LIMITK || i == LIMIT_RATIO } +// IsExperimentalAggregator defines the experimental aggregation functions that are controlled +// with EnableExperimentalFunctions. +func (i ItemType) IsExperimentalAggregator() bool { + return i == LIMITK || i == LIMIT_RATIO +} + // IsKeyword returns true if the Item corresponds to a keyword. // Returns false otherwise. func (i ItemType) IsKeyword() bool { return i > keywordsStart && i < keywordsEnd } diff --git a/vendor/github.com/prometheus/prometheus/promql/parser/parse.go b/vendor/github.com/prometheus/prometheus/promql/parser/parse.go index 3f50cf2b8ac..e8abe90194f 100644 --- a/vendor/github.com/prometheus/prometheus/promql/parser/parse.go +++ b/vendor/github.com/prometheus/prometheus/promql/parser/parse.go @@ -447,7 +447,7 @@ func (p *parser) newAggregateExpr(op Item, modifier, args Node) (ret *AggregateE desiredArgs := 1 if ret.Op.IsAggregatorWithParam() { - if !EnableExperimentalFunctions && (ret.Op == LIMITK || ret.Op == LIMIT_RATIO) { + if !EnableExperimentalFunctions && ret.Op.IsExperimentalAggregator() { // In mimir we return a custom message which doesn't mention the CLI flag that should be used to enable // experimental functions, given it's different (and in SaaS customers don't even have access to it). p.addParseErrf(ret.PositionRange(), "limitk() and limit_ratio() functions are not enabled") diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/test.go b/vendor/github.com/prometheus/prometheus/promql/promqltest/test.go index 4cbd39b403a..f208b4f3135 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/test.go +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/test.go @@ -66,7 +66,7 @@ var testStartTime = time.Unix(0, 0).UTC() // LoadedStorage returns storage with generated data using the provided load statements. // Non-load statements will cause test errors. func LoadedStorage(t testutil.T, input string) *teststorage.TestStorage { - test, err := newTest(t, input) + test, err := newTest(t, input, false) require.NoError(t, err) for _, cmd := range test.cmds { @@ -125,11 +125,17 @@ func RunBuiltinTests(t TBRun, engine promql.QueryEngine) { // RunTest parses and runs the test against the provided engine. 
func RunTest(t testutil.T, input string, engine promql.QueryEngine) { - require.NoError(t, runTest(t, input, engine)) + require.NoError(t, runTest(t, input, engine, false)) } -func runTest(t testutil.T, input string, engine promql.QueryEngine) error { - test, err := newTest(t, input) +// testTest allows tests to be run in "test-the-test" mode (true for +// testingMode). This is a special mode for testing test code execution itself. +func testTest(t testutil.T, input string, engine promql.QueryEngine) error { + return runTest(t, input, engine, true) +} + +func runTest(t testutil.T, input string, engine promql.QueryEngine, testingMode bool) error { + test, err := newTest(t, input, testingMode) // Why do this before checking err? newTest() can create the test storage and then return an error, // and we want to make sure to clean that up to avoid leaking goroutines. @@ -164,6 +170,8 @@ func runTest(t testutil.T, input string, engine promql.QueryEngine) error { // against a test storage. type test struct { testutil.T + // testingMode distinguishes between normal execution and test-execution mode. + testingMode bool cmds []testCommand @@ -174,10 +182,11 @@ type test struct { } // newTest returns an initialized empty Test. -func newTest(t testutil.T, input string) (*test, error) { +func newTest(t testutil.T, input string, testingMode bool) (*test, error) { test := &test{ - T: t, - cmds: []testCommand{}, + T: t, + cmds: []testCommand{}, + testingMode: testingMode, } err := test.parse(input) test.clear() @@ -491,8 +500,8 @@ func newTempHistogramWrapper() tempHistogramWrapper { } } -func processClassicHistogramSeries(m labels.Labels, suffix string, histogramMap map[uint64]tempHistogramWrapper, smpls []promql.Sample, updateHistogram func(*convertnhcb.TempHistogram, float64)) { - m2 := convertnhcb.GetHistogramMetricBase(m, suffix) +func processClassicHistogramSeries(m labels.Labels, name string, histogramMap map[uint64]tempHistogramWrapper, smpls []promql.Sample, updateHistogram func(*convertnhcb.TempHistogram, float64)) { + m2 := convertnhcb.GetHistogramMetricBase(m, name) m2hash := m2.Hash() histogramWrapper, exists := histogramMap[m2hash] if !exists { @@ -523,21 +532,25 @@ func (cmd *loadCmd) appendCustomHistogram(a storage.Appender) error { for hash, smpls := range cmd.defs { m := cmd.metrics[hash] mName := m.Get(labels.MetricName) - switch { - case strings.HasSuffix(mName, "_bucket") && m.Has(labels.BucketLabel): + suffixType, name := convertnhcb.GetHistogramMetricBaseName(mName) + switch suffixType { + case convertnhcb.SuffixBucket: + if !m.Has(labels.BucketLabel) { + panic(fmt.Sprintf("expected bucket label in metric %s", m)) + } le, err := strconv.ParseFloat(m.Get(labels.BucketLabel), 64) if err != nil || math.IsNaN(le) { continue } - processClassicHistogramSeries(m, "_bucket", histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f float64) { + processClassicHistogramSeries(m, name, histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f float64) { _ = histogram.SetBucketCount(le, f) }) - case strings.HasSuffix(mName, "_count"): - processClassicHistogramSeries(m, "_count", histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f float64) { + case convertnhcb.SuffixCount: + processClassicHistogramSeries(m, name, histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f float64) { _ = histogram.SetCount(f) }) - case strings.HasSuffix(mName, "_sum"): - processClassicHistogramSeries(m, "_sum", histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f 
float64) { + case convertnhcb.SuffixSum: + processClassicHistogramSeries(m, name, histogramMap, smpls, func(histogram *convertnhcb.TempHistogram, f float64) { _ = histogram.SetSum(f) }) } @@ -1074,11 +1087,25 @@ func (t *test) exec(tc testCommand, engine promql.QueryEngine) error { } func (t *test) execEval(cmd *evalCmd, engine promql.QueryEngine) error { - if cmd.isRange { - return t.execRangeEval(cmd, engine) + do := func() error { + if cmd.isRange { + return t.execRangeEval(cmd, engine) + } + + return t.execInstantEval(cmd, engine) + } + + if t.testingMode { + return do() } - return t.execInstantEval(cmd, engine) + if tt, ok := t.T.(*testing.T); ok { + tt.Run(fmt.Sprintf("line %d/%s", cmd.line, cmd.expr), func(t *testing.T) { + require.NoError(t, do()) + }) + return nil + } + return errors.New("t.T is not testing.T") } func (t *test) execRangeEval(cmd *evalCmd, engine promql.QueryEngine) error { @@ -1097,12 +1124,16 @@ func (t *test) execRangeEval(cmd *evalCmd, engine promql.QueryEngine) error { if res.Err == nil && cmd.fail { return fmt.Errorf("expected error evaluating query %q (line %d) but got none", cmd.expr, cmd.line) } - countWarnings, _ := res.Warnings.CountWarningsAndInfo() - if !cmd.warn && countWarnings > 0 { + countWarnings, countInfo := res.Warnings.CountWarningsAndInfo() + switch { + case !cmd.warn && countWarnings > 0: return fmt.Errorf("unexpected warnings evaluating query %q (line %d): %v", cmd.expr, cmd.line, res.Warnings) - } - if cmd.warn && countWarnings == 0 { + case cmd.warn && countWarnings == 0: return fmt.Errorf("expected warnings evaluating query %q (line %d) but got none", cmd.expr, cmd.line) + case !cmd.info && countInfo > 0: + return fmt.Errorf("unexpected info annotations evaluating query %q (line %d): %v", cmd.expr, cmd.line, res.Warnings) + case cmd.info && countInfo == 0: + return fmt.Errorf("expected info annotations evaluating query %q (line %d) but got none", cmd.expr, cmd.line) } defer q.Close() @@ -1148,13 +1179,14 @@ func (t *test) runInstantQuery(iq atModifierTestCase, cmd *evalCmd, engine promq return fmt.Errorf("expected error evaluating query %q (line %d) but got none", iq.expr, cmd.line) } countWarnings, countInfo := res.Warnings.CountWarningsAndInfo() - if !cmd.warn && countWarnings > 0 { + switch { + case !cmd.warn && countWarnings > 0: return fmt.Errorf("unexpected warnings evaluating query %q (line %d): %v", iq.expr, cmd.line, res.Warnings) - } - if cmd.warn && countWarnings == 0 { + case cmd.warn && countWarnings == 0: return fmt.Errorf("expected warnings evaluating query %q (line %d) but got none", iq.expr, cmd.line) - } - if cmd.info && countInfo == 0 { + case !cmd.info && countInfo > 0: + return fmt.Errorf("unexpected info annotations evaluating query %q (line %d): %v", iq.expr, cmd.line, res.Warnings) + case cmd.info && countInfo == 0: return fmt.Errorf("expected info annotations evaluating query %q (line %d) but got none", iq.expr, cmd.line) } err = cmd.compareResult(res.Value) diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/aggregators.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/aggregators.test index 19a896a6fbb..7479d2c3d3b 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/aggregators.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/aggregators.test @@ -272,6 +272,8 @@ load 5m http_requests{job="app-server", instance="1", group="production"} 0+60x10 http_requests{job="app-server", instance="0", group="canary"} 
0+70x10 http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + http_requests_histogram{job="app-server", instance="2", group="canary"} {{schema:0 sum:10 count:10}}x11 + http_requests_histogram{job="api-server", instance="3", group="production"} {{schema:0 sum:20 count:20}}x11 foo 3+0x10 eval_ordered instant at 50m topk(3, http_requests) @@ -338,6 +340,47 @@ eval_ordered instant at 50m topk(scalar(foo), http_requests) http_requests{group="canary", instance="0", job="app-server"} 700 http_requests{group="production", instance="1", job="app-server"} 600 +# Tests for histogram: should ignore histograms. +eval_info instant at 50m topk(100, http_requests_histogram) + #empty + +eval_info range from 0 to 50m step 5m topk(100, http_requests_histogram) + #empty + +eval_info instant at 50m topk(1, {__name__=~"http_requests(_histogram)?"}) + {__name__="http_requests", group="canary", instance="1", job="app-server"} 800 + +eval_info instant at 50m count(topk(1000, {__name__=~"http_requests(_histogram)?"})) + {} 9 + +eval_info range from 0 to 50m step 5m count(topk(1000, {__name__=~"http_requests(_histogram)?"})) + {} 9x10 + +eval_info instant at 50m topk by (instance) (1, {__name__=~"http_requests(_histogram)?"}) + {__name__="http_requests", group="canary", instance="0", job="app-server"} 700 + {__name__="http_requests", group="canary", instance="1", job="app-server"} 800 + {__name__="http_requests", group="production", instance="2", job="api-server"} NaN + +eval_info instant at 50m bottomk(100, http_requests_histogram) + #empty + +eval_info range from 0 to 50m step 5m bottomk(100, http_requests_histogram) + #empty + +eval_info instant at 50m bottomk(1, {__name__=~"http_requests(_histogram)?"}) + {__name__="http_requests", group="production", instance="0", job="api-server"} 100 + +eval_info instant at 50m count(bottomk(1000, {__name__=~"http_requests(_histogram)?"})) + {} 9 + +eval_info range from 0 to 50m step 5m count(bottomk(1000, {__name__=~"http_requests(_histogram)?"})) + {} 9x10 + +eval_info instant at 50m bottomk by (instance) (1, {__name__=~"http_requests(_histogram)?"}) + {__name__="http_requests", group="production", instance="0", job="api-server"} 100 + {__name__="http_requests", group="production", instance="1", job="api-server"} 200 + {__name__="http_requests", group="production", instance="2", job="api-server"} NaN + clear # Tests for count_values. 
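The `eval_info` markers in the topk/bottomk cases above are load-bearing: with the promqltest changes earlier in this diff, an info annotation the test didn't declare now fails the run, symmetrically with warnings, and each eval command executes as its own Go subtest so one failing query no longer aborts the whole file. A condensed sketch of that subtest pattern; `evalCase` and `runCases` are illustrative stand-ins for the harness's parsed commands, and testify is already vendored in this repository:

    package promqltest_sketch

    import (
    	"fmt"
    	"testing"

    	"github.com/stretchr/testify/require"
    )

    // evalCase stands in for one parsed eval command from a .test file.
    type evalCase struct {
    	line int
    	expr string
    	run  func() error
    }

    // runCases mirrors the new execEval flow outside testingMode: each
    // command becomes a subtest named "line N/<expr>", so failures point
    // at the exact test line and the remaining commands still execute.
    func runCases(t *testing.T, cases []evalCase) {
    	for _, c := range cases {
    		t.Run(fmt.Sprintf("line %d/%s", c.line, c.expr), func(t *testing.T) {
    			require.NoError(t, c.run())
    		})
    	}
    }

The `testingMode` escape hatch threaded through `newTest` exists because the harness's own tests need failures returned as plain errors rather than reported to a real `*testing.T`.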
@@ -351,37 +394,41 @@ load 5m version{job="app-server", instance="1", group="production"} 6 version{job="app-server", instance="0", group="canary"} 7 version{job="app-server", instance="1", group="canary"} 7 + version{job="app-server", instance="2", group="canary"} {{schema:0 sum:10 count:20 z_bucket_w:0.001 z_bucket:2 buckets:[1 2] n_buckets:[1 2]}} + version{job="app-server", instance="3", group="canary"} {{schema:0 sum:10 count:20 z_bucket_w:0.001 z_bucket:2 buckets:[1 2] n_buckets:[1 2]}} eval instant at 1m count_values("version", version) {version="6"} 5 {version="7"} 2 {version="8"} 2 - + {version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2 eval instant at 1m count_values(((("version"))), version) - {version="6"} 5 - {version="7"} 2 - {version="8"} 2 - + {version="6"} 5 + {version="7"} 2 + {version="8"} 2 + {version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2 eval instant at 1m count_values without (instance)("version", version) {job="api-server", group="production", version="6"} 3 {job="api-server", group="canary", version="8"} 2 {job="app-server", group="production", version="6"} 2 {job="app-server", group="canary", version="7"} 2 + {job="app-server", group="canary", version="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}"} 2 # Overwrite label with output. Don't do this. eval instant at 1m count_values without (instance)("job", version) {job="6", group="production"} 5 {job="8", group="canary"} 2 {job="7", group="canary"} 2 + {job="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}", group="canary"} 2 # Overwrite label with output. Don't do this. eval instant at 1m count_values by (job, group)("job", version) {job="6", group="production"} 5 {job="8", group="canary"} 2 {job="7", group="canary"} 2 - + {job="{count:20, sum:10, [-2,-1):2, [-1,-0.5):1, [-0.001,0.001]:2, (0.5,1]:1, (1,2]:2}", group="canary"} 2 # Tests for quantile. clear @@ -441,12 +488,14 @@ load 10s data{test="uneven samples",point="a"} 0 data{test="uneven samples",point="b"} 1 data{test="uneven samples",point="c"} 4 + data{test="histogram sample",point="c"} {{schema:0 sum:0 count:0}} foo .8 eval instant at 1m group without(point)(data) {test="two samples"} 1 {test="three samples"} 1 {test="uneven samples"} 1 + {test="histogram sample"} 1 eval instant at 1m group(foo) {} 1 @@ -494,6 +543,7 @@ load 10s data{test="bigzero",point="b"} -9.988465674311579e+307 data{test="bigzero",point="c"} 9.988465674311579e+307 data{test="bigzero",point="d"} 9.988465674311579e+307 + data{test="value is nan"} NaN eval instant at 1m avg(data{test="ten"}) {} 10 @@ -528,6 +578,10 @@ eval instant at 1m avg(data{test="-big"}) eval instant at 1m avg(data{test="bigzero"}) {} 0 +# If NaN is in the mix, the result is NaN. +eval instant at 1m avg(data) + {} NaN + # Test summing and averaging extreme values. 
clear @@ -618,11 +672,11 @@ eval_info instant at 0m stddev({label="c"}) eval_info instant at 0m stdvar({label="c"}) -eval instant at 0m stddev by (label) (series) +eval_info instant at 0m stddev by (label) (series) {label="a"} 0 {label="b"} 0 -eval instant at 0m stdvar by (label) (series) +eval_info instant at 0m stdvar by (label) (series) {label="a"} 0 {label="b"} 0 @@ -633,17 +687,17 @@ load 5m series{label="b"} 1 series{label="c"} 2 -eval instant at 0m stddev(series) +eval_info instant at 0m stddev(series) {} 0.5 -eval instant at 0m stdvar(series) +eval_info instant at 0m stdvar(series) {} 0.25 -eval instant at 0m stddev by (label) (series) +eval_info instant at 0m stddev by (label) (series) {label="b"} 0 {label="c"} 0 -eval instant at 0m stdvar by (label) (series) +eval_info instant at 0m stdvar by (label) (series) {label="b"} 0 {label="c"} 0 diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/at_modifier.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/at_modifier.test index 4091f7eabf2..1ad301bdb7d 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/at_modifier.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/at_modifier.test @@ -90,7 +90,8 @@ eval instant at 25s sum_over_time(metric{job="1"}[100] offset 50s @ 100) eval instant at 25s metric{job="1"} @ 50 + metric{job="1"} @ 100 {job="1"} 15 -eval instant at 25s rate(metric{job="1"}[100s] @ 100) + label_replace(rate(metric{job="2"}[123s] @ 200), "job", "1", "", "") +# Note that this triggers an info annotation because we are rate'ing a metric that does not end in `_total`. +eval_info instant at 25s rate(metric{job="1"}[100s] @ 100) + label_replace(rate(metric{job="2"}[123s] @ 200), "job", "1", "", "") {job="1"} 0.3 eval instant at 25s sum_over_time(metric{job="1"}[100s] @ 100) + label_replace(sum_over_time(metric{job="2"}[100s] @ 100), "job", "1", "", "") diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test index fb1d1696244..2ed7ffb6a45 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test @@ -2,7 +2,10 @@ load 5m http_requests{path="/foo"} 1 2 3 0 1 0 0 1 2 0 http_requests{path="/bar"} 1 2 3 4 5 1 2 3 4 5 - http_requests{path="/biz"} 0 0 0 0 0 1 1 1 1 1 + http_requests{path="/biz"} 0 0 0 0 0 1 1 1 1 1 + http_requests_histogram{path="/foo"} {{schema:0 sum:1 count:1}}x9 + http_requests_histogram{path="/bar"} 0 0 0 0 0 0 0 0 {{schema:0 sum:1 count:1}} {{schema:0 sum:1 count:1}} + http_requests_histogram{path="/biz"} 0 1 0 2 0 3 0 {{schema:0 sum:1 count:1 z_bucket_w:0.001 z_bucket:2}} {{schema:0 sum:2 count:2 z_bucket_w:0.001 z_bucket:1}} {{schema:0 sum:1 count:1 z_bucket_w:0.001 z_bucket:2}} # Tests for resets(). eval instant at 50m resets(http_requests[5m]) @@ -39,6 +42,18 @@ eval instant at 50m resets(http_requests[50m]) eval instant at 50m resets(nonexistent_metric[50m]) +# Test for mix of floats and histograms. + +eval instant at 50m resets(http_requests_histogram[6m]) + {path="/foo"} 0 + {path="/bar"} 0 + {path="/biz"} 0 + +eval instant at 50m resets(http_requests_histogram[60m]) + {path="/foo"} 0 + {path="/bar"} 1 + {path="/biz"} 6 + # Tests for changes(). 
eval instant at 50m changes(http_requests[5m]) @@ -69,6 +84,21 @@ eval instant at 50m changes((http_requests[50m])) eval instant at 50m changes(nonexistent_metric[50m]) +# Test for mix of floats and histograms. +# Because of bug #14172 we are not able to test more complex cases like below: +# 0 1 2 {{schema:0 sum:1 count:1}} 3 {{schema:0 sum:2 count:2}} 4 {{schema:0 sum:3 count:3}}. +eval instant at 50m changes(http_requests_histogram[5m]) + +eval instant at 50m changes(http_requests_histogram[6m]) + {path="/foo"} 0 + {path="/bar"} 0 + {path="/biz"} 0 + +eval instant at 50m changes(http_requests_histogram[60m]) + {path="/foo"} 0 + {path="/bar"} 1 + {path="/biz"} 9 + clear load 5m @@ -83,13 +113,13 @@ clear # Tests for increase(). load 5m - http_requests{path="/foo"} 0+10x10 - http_requests{path="/bar"} 0+18x5 0+18x5 - http_requests{path="/dings"} 10+10x10 - http_requests{path="/bumms"} 1+10x10 + http_requests_total{path="/foo"} 0+10x10 + http_requests_total{path="/bar"} 0+18x5 0+18x5 + http_requests_total{path="/dings"} 10+10x10 + http_requests_total{path="/bumms"} 1+10x10 # Tests for increase(). -eval instant at 50m increase(http_requests[50m]) +eval instant at 50m increase(http_requests_total[50m]) {path="/foo"} 100 {path="/bar"} 160 {path="/dings"} 100 @@ -102,7 +132,7 @@ eval instant at 50m increase(http_requests[50m]) # chosen. However, "bumms" has value 1 at t=0 and would reach 0 at # t=-30s. Here the extrapolation to t=-2m30s would reach a negative # value, and therefore the extrapolation happens only by 30s. -eval instant at 50m increase(http_requests[100m]) +eval instant at 50m increase(http_requests_total[100m]) {path="/foo"} 100 {path="/bar"} 162 {path="/dings"} 105 @@ -115,57 +145,57 @@ clear # So the sequence 3 2 (decreasing counter = reset) is interpreted the same as 3 0 1 2. # Prometheus assumes it missed the intermediate values 0 and 1. load 5m - http_requests{path="/foo"} 0 1 2 3 2 3 4 + http_requests_total{path="/foo"} 0 1 2 3 2 3 4 -eval instant at 30m increase(http_requests[30m]) +eval instant at 30m increase(http_requests_total[30m]) {path="/foo"} 7 clear # Tests for rate(). load 5m - testcounter_reset_middle 0+27x4 0+27x5 - testcounter_reset_end 0+10x9 0 10 + testcounter_reset_middle_total 0+27x4 0+27x5 + testcounter_reset_end_total 0+10x9 0 10 # Counter resets at in the middle of range are handled correctly by rate(). -eval instant at 50m rate(testcounter_reset_middle[50m]) +eval instant at 50m rate(testcounter_reset_middle_total[50m]) {} 0.08 # Counter resets at end of range are ignored by rate(). -eval instant at 50m rate(testcounter_reset_end[5m]) +eval instant at 50m rate(testcounter_reset_end_total[5m]) -eval instant at 50m rate(testcounter_reset_end[6m]) +eval instant at 50m rate(testcounter_reset_end_total[6m]) {} 0 clear load 5m - calculate_rate_offset{x="a"} 0+10x10 - calculate_rate_offset{x="b"} 0+20x10 - calculate_rate_window 0+80x10 + calculate_rate_offset_total{x="a"} 0+10x10 + calculate_rate_offset_total{x="b"} 0+20x10 + calculate_rate_window_total 0+80x10 # Rates should calculate per-second rates. 
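+# (Each load step is 5m = 300s, so a per-step increase of 80 gives a
+# per-second rate of 80/300 = 0.26666..., and per-step increases of 10
+# and 20 give 10/300 = 0.03333... and 20/300 = 0.06666... respectively,
+# matching the expected values below.)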
-eval instant at 50m rate(calculate_rate_window[50m]) +eval instant at 50m rate(calculate_rate_window_total[50m]) {} 0.26666666666666666 -eval instant at 50m rate(calculate_rate_offset[10m] offset 5m) +eval instant at 50m rate(calculate_rate_offset_total[10m] offset 5m) {x="a"} 0.03333333333333333 {x="b"} 0.06666666666666667 clear load 4m - testcounter_zero_cutoff{start="0m"} 0+240x10 - testcounter_zero_cutoff{start="1m"} 60+240x10 - testcounter_zero_cutoff{start="2m"} 120+240x10 - testcounter_zero_cutoff{start="3m"} 180+240x10 - testcounter_zero_cutoff{start="4m"} 240+240x10 - testcounter_zero_cutoff{start="5m"} 300+240x10 + testcounter_zero_cutoff_total{start="0m"} 0+240x10 + testcounter_zero_cutoff_total{start="1m"} 60+240x10 + testcounter_zero_cutoff_total{start="2m"} 120+240x10 + testcounter_zero_cutoff_total{start="3m"} 180+240x10 + testcounter_zero_cutoff_total{start="4m"} 240+240x10 + testcounter_zero_cutoff_total{start="5m"} 300+240x10 # Zero cutoff for left-side extrapolation happens until we # reach half a sampling interval (2m). Beyond that, we only # extrapolate by half a sampling interval. -eval instant at 10m rate(testcounter_zero_cutoff[20m]) +eval instant at 10m rate(testcounter_zero_cutoff_total[20m]) {start="0m"} 0.5 {start="1m"} 0.55 {start="2m"} 0.6 @@ -174,7 +204,7 @@ eval instant at 10m rate(testcounter_zero_cutoff[20m]) {start="5m"} 0.6 # Normal half-interval cutoff for left-side extrapolation. -eval instant at 50m rate(testcounter_zero_cutoff[20m]) +eval instant at 50m rate(testcounter_zero_cutoff_total[20m]) {start="0m"} 0.6 {start="1m"} 0.6 {start="2m"} 0.6 @@ -186,15 +216,15 @@ clear # Tests for irate(). load 5m - http_requests{path="/foo"} 0+10x10 - http_requests{path="/bar"} 0+10x5 0+10x5 + http_requests_total{path="/foo"} 0+10x10 + http_requests_total{path="/bar"} 0+10x5 0+10x5 -eval instant at 50m irate(http_requests[50m]) +eval instant at 50m irate(http_requests_total[50m]) {path="/foo"} .03333333333333333333 {path="/bar"} .03333333333333333333 # Counter reset. -eval instant at 30m irate(http_requests[50m]) +eval instant at 30m irate(http_requests_total[50m]) {path="/foo"} .03333333333333333333 {path="/bar"} 0 @@ -224,18 +254,18 @@ clear # Tests for deriv() and predict_linear(). load 5m - testcounter_reset_middle 0+10x4 0+10x5 - http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + testcounter_reset_middle_total 0+10x4 0+10x5 + http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10 # deriv should return the same as rate in simple cases. -eval instant at 50m rate(http_requests{group="canary", instance="1", job="app-server"}[50m]) +eval instant at 50m rate(http_requests_total{group="canary", instance="1", job="app-server"}[50m]) {group="canary", instance="1", job="app-server"} 0.26666666666666666 -eval instant at 50m deriv(http_requests{group="canary", instance="1", job="app-server"}[50m]) +eval instant at 50m deriv(http_requests_total{group="canary", instance="1", job="app-server"}[50m]) {group="canary", instance="1", job="app-server"} 0.26666666666666666 # deriv should return correct result. -eval instant at 50m deriv(testcounter_reset_middle[100m]) +eval instant at 50m deriv(testcounter_reset_middle_total[100m]) {} 0.010606060606060607 # predict_linear should return correct result. 
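+# (deriv and predict_linear both fit the samples in the window by simple
+# least squares: slope = sum((t-tbar)*(v-vbar)) / sum((t-tbar)^2). Over
+# the 11 samples at t=0..3000s this gives 105000/9900000 = 0.0106060...,
+# matching the deriv result above; the intercepts derived in the next
+# hunk follow from vbar - slope*tbar.)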
@@ -252,31 +282,31 @@ eval instant at 50m deriv(testcounter_reset_middle[100m]) # intercept at t=0: 6.818181818181818 # intercept at t=3000: 38.63636363636364 # intercept at t=3000+3600: 76.81818181818181 -eval instant at 50m predict_linear(testcounter_reset_middle[50m], 3600) +eval instant at 50m predict_linear(testcounter_reset_middle_total[50m], 3600) {} 70 -eval instant at 50m predict_linear(testcounter_reset_middle[50m], 1h) +eval instant at 50m predict_linear(testcounter_reset_middle_total[50m], 1h) {} 70 # intercept at t = 3000+3600 = 6600 -eval instant at 50m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +eval instant at 50m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) {} 76.81818181818181 -eval instant at 50m predict_linear(testcounter_reset_middle[55m] @ 3000, 1h) +eval instant at 50m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 1h) {} 76.81818181818181 # intercept at t = 600+3600 = 4200 -eval instant at 10m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +eval instant at 10m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) {} 51.36363636363637 # intercept at t = 4200+3600 = 7800 -eval instant at 70m predict_linear(testcounter_reset_middle[55m] @ 3000, 3600) +eval instant at 70m predict_linear(testcounter_reset_middle_total[55m] @ 3000, 3600) {} 89.54545454545455 -# With http_requests, there is a sample value exactly at the end of +# With http_requests_total, there is a sample value exactly at the end of # the range, and it has exactly the predicted value, so predict_linear # can be emulated with deriv. -eval instant at 50m predict_linear(http_requests[50m], 3600) - (http_requests + deriv(http_requests[50m]) * 3600) +eval instant at 50m predict_linear(http_requests_total[50m], 3600) - (http_requests_total + deriv(http_requests_total[50m]) * 3600) {group="canary", instance="1", job="app-server"} 0 clear @@ -452,6 +482,21 @@ eval instant at 0m clamp(test_clamp, NaN, 0) eval instant at 0m clamp(test_clamp, 5, -5) +clear + +load 1m + mixed_metric {{schema:0 sum:5 count:4 buckets:[1 2 1]}} 1 2 3 {{schema:0 sum:5 count:4 buckets:[1 2 1]}} {{schema:0 sum:8 count:6 buckets:[1 4 1]}} + +# clamp ignores any histograms +eval range from 0 to 5m step 1m clamp(mixed_metric, 2, 5) + {} _ 2 2 3 + +eval range from 0 to 5m step 1m clamp_min(mixed_metric, 2) + {} _ 2 2 3 + +eval range from 0 to 5m step 1m clamp_max(mixed_metric, 2) + {} _ 1 2 2 + # Test cases for sgn. clear load 5m @@ -461,6 +506,7 @@ load 5m test_sgn{src="sgn-d"} -50 test_sgn{src="sgn-e"} 0 test_sgn{src="sgn-f"} 100 + test_sgn{src="sgn-histogram"} {{schema:1 sum:1 count:1}} eval instant at 0m sgn(test_sgn) {src="sgn-a"} -1 @@ -698,6 +744,7 @@ load 10s metric8 9.988465674311579e+307 9.988465674311579e+307 metric9 -9.988465674311579e+307 -9.988465674311579e+307 -9.988465674311579e+307 metric10 -9.988465674311579e+307 9.988465674311579e+307 + metric11 1 2 3 NaN NaN eval instant at 55s avg_over_time(metric[1m]) {} 3 @@ -791,6 +838,19 @@ eval instant at 45s sum_over_time(metric10[1m])/count_over_time(metric10[1m]) eval instant at 1m sum_over_time(metric10[2m])/count_over_time(metric10[2m]) {} 0 +# NaN behavior. +eval instant at 20s avg_over_time(metric11[1m]) + {} 2 + +eval instant at 30s avg_over_time(metric11[1m]) + {} NaN + +eval instant at 1m avg_over_time(metric11[1m]) + {} NaN + +eval instant at 1m sum_over_time(metric11[1m])/count_over_time(metric11[1m]) + {} NaN + # Test if very big intermediate values cause loss of detail. 
clear load 10s @@ -886,6 +946,9 @@ eval_warn instant at 1m (quantile_over_time(2, (data[2m]))) clear # Test time-related functions. +load 5m + histogram_sample {{schema:0 sum:1 count:1}} + eval instant at 0m year() {} 1970 @@ -967,6 +1030,23 @@ eval instant at 0m days_in_month(vector(1454284800)) eval instant at 0m days_in_month(vector(1485907200)) {} 28 +# Test for histograms. +eval instant at 0m day_of_month(histogram_sample) + +eval instant at 0m day_of_week(histogram_sample) + +eval instant at 0m day_of_year(histogram_sample) + +eval instant at 0m days_in_month(histogram_sample) + +eval instant at 0m hour(histogram_sample) + +eval instant at 0m minute(histogram_sample) + +eval instant at 0m month(histogram_sample) + +eval instant at 0m year(histogram_sample) + clear # Test duplicate labelset in promql output. @@ -1058,49 +1138,49 @@ eval instant at 50m absent(rate(nonexistant[5m])) clear # Testdata for absent_over_time() -eval instant at 1m absent_over_time(http_requests[5m]) +eval instant at 1m absent_over_time(http_requests_total[5m]) {} 1 -eval instant at 1m absent_over_time(http_requests{handler="/foo"}[5m]) +eval instant at 1m absent_over_time(http_requests_total{handler="/foo"}[5m]) {handler="/foo"} 1 -eval instant at 1m absent_over_time(http_requests{handler!="/foo"}[5m]) +eval instant at 1m absent_over_time(http_requests_total{handler!="/foo"}[5m]) {} 1 -eval instant at 1m absent_over_time(http_requests{handler="/foo", handler="/bar", handler="/foobar"}[5m]) +eval instant at 1m absent_over_time(http_requests_total{handler="/foo", handler="/bar", handler="/foobar"}[5m]) {} 1 eval instant at 1m absent_over_time(rate(nonexistant[5m])[5m:]) {} 1 -eval instant at 1m absent_over_time(http_requests{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) +eval instant at 1m absent_over_time(http_requests_total{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) {instance="127.0.0.1"} 1 load 1m - http_requests{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 - http_requests{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 httpd_handshake_failures_total{instance="127.0.0.1",job="node"} 1+1x15 httpd_log_lines_total{instance="127.0.0.1",job="node"} 1 ssl_certificate_expiry_seconds{job="ingress"} NaN NaN NaN NaN NaN -eval instant at 5m absent_over_time(http_requests[5m]) +eval instant at 5m absent_over_time(http_requests_total[5m]) -eval instant at 5m absent_over_time(rate(http_requests[5m])[5m:1m]) +eval instant at 5m absent_over_time(rate(http_requests_total[5m])[5m:1m]) eval instant at 0m absent_over_time(httpd_log_lines_total[30s]) eval instant at 1m absent_over_time(httpd_log_lines_total[30s]) {} 1 -eval instant at 15m absent_over_time(http_requests[5m]) +eval instant at 15m absent_over_time(http_requests_total[5m]) {} 1 -eval instant at 15m absent_over_time(http_requests[10m]) +eval instant at 15m absent_over_time(http_requests_total[10m]) -eval instant at 16m absent_over_time(http_requests[6m]) +eval instant at 16m absent_over_time(http_requests_total[6m]) {} 1 -eval instant at 16m absent_over_time(http_requests[16m]) +eval instant at 16m absent_over_time(http_requests_total[16m]) eval instant at 16m absent_over_time(httpd_handshake_failures_total[1m]) {} 1 @@ -1128,30 +1208,30 @@ eval instant at 10m absent_over_time({job="ingress"}[4m]) clear # Testdata for present_over_time() -eval instant at 1m 
present_over_time(http_requests[5m]) +eval instant at 1m present_over_time(http_requests_total[5m]) -eval instant at 1m present_over_time(http_requests{handler="/foo"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo"}[5m]) -eval instant at 1m present_over_time(http_requests{handler!="/foo"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler!="/foo"}[5m]) -eval instant at 1m present_over_time(http_requests{handler="/foo", handler="/bar", handler="/foobar"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo", handler="/bar", handler="/foobar"}[5m]) eval instant at 1m present_over_time(rate(nonexistant[5m])[5m:]) -eval instant at 1m present_over_time(http_requests{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) +eval instant at 1m present_over_time(http_requests_total{handler="/foo", handler="/bar", instance="127.0.0.1"}[5m]) load 1m - http_requests{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 - http_requests{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/foo",instance="127.0.0.1",job="httpd"} 1+1x10 + http_requests_total{path="/bar",instance="127.0.0.1",job="httpd"} 1+1x10 httpd_handshake_failures_total{instance="127.0.0.1",job="node"} 1+1x15 httpd_log_lines_total{instance="127.0.0.1",job="node"} 1 ssl_certificate_expiry_seconds{job="ingress"} NaN NaN NaN NaN NaN -eval instant at 5m present_over_time(http_requests[5m]) +eval instant at 5m present_over_time(http_requests_total[5m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 -eval instant at 5m present_over_time(rate(http_requests[5m])[5m:1m]) +eval instant at 5m present_over_time(rate(http_requests_total[5m])[5m:1m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 @@ -1160,15 +1240,15 @@ eval instant at 0m present_over_time(httpd_log_lines_total[30s]) eval instant at 1m present_over_time(httpd_log_lines_total[30s]) -eval instant at 15m present_over_time(http_requests[5m]) +eval instant at 15m present_over_time(http_requests_total[5m]) -eval instant at 15m present_over_time(http_requests[10m]) +eval instant at 15m present_over_time(http_requests_total[10m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 -eval instant at 16m present_over_time(http_requests[6m]) +eval instant at 16m present_over_time(http_requests_total[6m]) -eval instant at 16m present_over_time(http_requests[16m]) +eval instant at 16m present_over_time(http_requests_total[16m]) {instance="127.0.0.1", job="httpd", path="/bar"} 1 {instance="127.0.0.1", job="httpd", path="/foo"} 1 @@ -1192,11 +1272,16 @@ clear load 5m exp_root_log{l="x"} 10 exp_root_log{l="y"} 20 + exp_root_log_h{l="z"} {{schema:1 sum:1 count:1}} eval instant at 1m exp(exp_root_log) {l="x"} 22026.465794806718 {l="y"} 485165195.4097903 +eval instant at 1m exp({__name__=~"exp_root_log(_h)?"}) + {l="x"} 22026.465794806718 + {l="y"} 485165195.4097903 + eval instant at 1m exp(exp_root_log - 10) {l="y"} 22026.465794806718 {l="x"} 1 @@ -1209,6 +1294,10 @@ eval instant at 1m ln(exp_root_log) {l="x"} 2.302585092994046 {l="y"} 2.995732273553991 +eval instant at 1m ln({__name__=~"exp_root_log(_h)?"}) + {l="x"} 2.302585092994046 + {l="y"} 2.995732273553991 + eval instant at 1m ln(exp_root_log - 10) {l="y"} 2.302585092994046 {l="x"} -Inf @@ -1221,14 +1310,26 @@ eval instant at 1m exp(ln(exp_root_log)) {l="y"} 20 {l="x"} 10 +eval 
instant at 1m exp(ln({__name__=~"exp_root_log(_h)?"})) + {l="y"} 20 + {l="x"} 10 + eval instant at 1m sqrt(exp_root_log) {l="x"} 3.1622776601683795 {l="y"} 4.47213595499958 +eval instant at 1m sqrt({__name__=~"exp_root_log(_h)?"}) + {l="x"} 3.1622776601683795 + {l="y"} 4.47213595499958 + eval instant at 1m log2(exp_root_log) {l="x"} 3.3219280948873626 {l="y"} 4.321928094887363 +eval instant at 1m log2({__name__=~"exp_root_log(_h)?"}) + {l="x"} 3.3219280948873626 + {l="y"} 4.321928094887363 + eval instant at 1m log2(exp_root_log - 10) {l="y"} 3.3219280948873626 {l="x"} -Inf @@ -1241,6 +1342,10 @@ eval instant at 1m log10(exp_root_log) {l="x"} 1 {l="y"} 1.301029995663981 +eval instant at 1m log10({__name__=~"exp_root_log(_h)?"}) + {l="x"} 1 + {l="y"} 1.301029995663981 + eval instant at 1m log10(exp_root_log - 10) {l="y"} 1 {l="x"} -Inf diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/histograms.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/histograms.test index 6089fd01d20..8ab23640af1 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/histograms.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/histograms.test @@ -452,14 +452,14 @@ load 5m nonmonotonic_bucket{le="1000"} 0+9x10 nonmonotonic_bucket{le="+Inf"} 0+8x10 -# Nonmonotonic buckets -eval instant at 50m histogram_quantile(0.01, nonmonotonic_bucket) +# Nonmonotonic buckets, triggering an info annotation. +eval_info instant at 50m histogram_quantile(0.01, nonmonotonic_bucket) {} 0.0045 -eval instant at 50m histogram_quantile(0.5, nonmonotonic_bucket) +eval_info instant at 50m histogram_quantile(0.5, nonmonotonic_bucket) {} 8.5 -eval instant at 50m histogram_quantile(0.99, nonmonotonic_bucket) +eval_info instant at 50m histogram_quantile(0.99, nonmonotonic_bucket) {} 979.75 # Buckets with different representations of the same upper bound. diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/limit.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/limit.test index 0ab363f9ae1..e6dd007af47 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/limit.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/limit.test @@ -9,6 +9,8 @@ load 5m http_requests{job="api-server", instance="1", group="canary"} 0+40x10 http_requests{job="api-server", instance="2", group="canary"} 0+50x10 http_requests{job="api-server", instance="3", group="canary"} 0+60x10 + http_requests{job="api-server", instance="histogram_1", group="canary"} {{schema:0 sum:10 count:10}}x11 + http_requests{job="api-server", instance="histogram_2", group="canary"} {{schema:0 sum:20 count:20}}x11 eval instant at 50m count(limitk by (group) (0, http_requests)) # empty @@ -16,25 +18,56 @@ eval instant at 50m count(limitk by (group) (0, http_requests)) eval instant at 50m count(limitk by (group) (-1, http_requests)) # empty -# Exercise k==1 special case (as sample is added before the main series loop +# Exercise k==1 special case (as sample is added before the main series loop). 
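+# (limitk(k, expr) keeps at most k series per aggregation group; AND'ing
+# the result with the original selector verifies that the kept series are
+# a subset of the input, and count() verifies how many were kept.)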
eval instant at 50m count(limitk by (group) (1, http_requests) and http_requests) - {} 2 + {} 2 eval instant at 50m count(limitk by (group) (2, http_requests) and http_requests) - {} 4 + {} 4 eval instant at 50m count(limitk(100, http_requests) and http_requests) - {} 6 + {} 8 -# Exercise k==1 special case (as sample is added before the main series loop +# Exercise k==1 special case (as sample is added before the main series loop). eval instant at 50m count(limitk by (group) (1, http_requests) and http_requests) - {} 2 + {} 2 eval instant at 50m count(limitk by (group) (2, http_requests) and http_requests) - {} 4 + {} 4 eval instant at 50m count(limitk(100, http_requests) and http_requests) - {} 6 + {} 8 + +# Test for histograms. +# k==1: verify that histogram is included in the result. +eval instant at 50m limitk(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}} + +eval range from 0 to 50m step 5m limitk(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}}x10 + +# Histogram is included with a mix of floats as well. +eval instant at 50m limitk(8, http_requests{instance=~"(histogram_2|0)"}) + {__name__="http_requests", group="canary", instance="histogram_2", job="api-server"} {{count:20 sum:20}} + {__name__="http_requests", group="production", instance="0", job="api-server"} 100 + {__name__="http_requests", group="canary", instance="0", job="api-server"} 300 + +eval range from 0 to 50m step 5m limitk(8, http_requests{instance=~"(histogram_2|0)"}) + {__name__="http_requests", group="canary", instance="histogram_2", job="api-server"} {{count:20 sum:20}}x10 + {__name__="http_requests", group="production", instance="0", job="api-server"} 0+10x10 + {__name__="http_requests", group="canary", instance="0", job="api-server"} 0+30x10 + +eval instant at 50m count(limitk(2, http_requests{instance=~"histogram_[0-9]"})) + {} 2 + +eval range from 0 to 50m step 5m count(limitk(2, http_requests{instance=~"histogram_[0-9]"})) + {} 2+0x10 + +eval instant at 50m count(limitk(1000, http_requests{instance=~"histogram_[0-9]"})) + {} 2 + +eval range from 0 to 50m step 5m count(limitk(1000, http_requests{instance=~"histogram_[0-9]"})) + {} 2+0x10 # limit_ratio eval range from 0 to 50m step 5m count(limit_ratio(0.0, http_requests)) @@ -42,9 +75,9 @@ eval range from 0 to 50m step 5m count(limit_ratio(0.0, http_requests)) # limitk(2, ...) should always return a 2-count subset of the timeseries (hence the AND'ing) eval range from 0 to 50m step 5m count(limitk(2, http_requests) and http_requests) - {} 2+0x10 + {} 2+0x10 -# Tests for limit_ratio +# Tests for limit_ratio. # # NB: below 0.5 ratio will depend on some hashing "luck" (also there's no guarantee that # an integer comes from: total number of series * ratio), as it depends on: @@ -56,50 +89,50 @@ eval range from 0 to 50m step 5m count(limitk(2, http_requests) and http_request # # See `AddRatioSample()` in promql/engine.go for more details. -# Half~ish samples: verify we get "near" 3 (of 0.5 * 6) -eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) <= bool (3+1) - {} 1+0x10 +# Half~ish samples: verify we get "near" 4 (of 0.5 * 8).
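+# (With 8 series loaded, a 0.5 ratio keeps about 4 of them; the +/-1
+# tolerance in the bounds below absorbs the hash-dependent variation
+# described in the NB above.)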
+eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) <= bool (4+1) + {} 1+0x10 -eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) >= bool (3-1) - {} 1+0x10 +eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and http_requests) >= bool (4-1) + {} 1+0x10 -# All samples +# All samples. eval range from 0 to 50m step 5m count(limit_ratio(1.0, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# All samples +# All samples. eval range from 0 to 50m step 5m count(limit_ratio(-1.0, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Capped to 1.0 -> all samples +# Capped to 1.0 -> all samples. eval_warn range from 0 to 50m step 5m count(limit_ratio(1.1, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Capped to -1.0 -> all samples +# Capped to -1.0 -> all samples. eval_warn range from 0 to 50m step 5m count(limit_ratio(-1.1, http_requests) and http_requests) - {} 6+0x10 + {} 8+0x10 -# Verify that limit_ratio(value) and limit_ratio(1.0-value) return the "complement" of each other -# Complement below for [0.2, -0.8] +# Verify that limit_ratio(value) and limit_ratio(1.0-value) return the "complement" of each other. +# Complement below for [0.2, -0.8]. # -# Complement 1of2: `or` should return all samples +# Complement 1of2: `or` should return all samples. eval range from 0 to 50m step 5m count(limit_ratio(0.2, http_requests) or limit_ratio(-0.8, http_requests)) - {} 6+0x10 + {} 8+0x10 -# Complement 2of2: `and` should return no samples +# Complement 2of2: `and` should return no samples. eval range from 0 to 50m step 5m count(limit_ratio(0.2, http_requests) and limit_ratio(-0.8, http_requests)) # empty -# Complement below for [0.5, -0.5] +# Complement below for [0.5, -0.5]. eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) or limit_ratio(-0.5, http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(0.5, http_requests) and limit_ratio(-0.5, http_requests)) # empty -# Complement below for [0.8, -0.2] +# Complement below for [0.8, -0.2]. eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) or limit_ratio(-0.2, http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) and limit_ratio(-0.2, http_requests)) # empty @@ -107,13 +140,19 @@ eval range from 0 to 50m step 5m count(limit_ratio(0.8, http_requests) and limit # Complement below for [some_ratio, 1.0 - some_ratio], some_ratio derived from time(), # using a small prime number to avoid rounded ratio values, and a small set of them. eval range from 0 to 50m step 5m count(limit_ratio(time() % 17/17, http_requests) or limit_ratio(1.0 - (time() % 17/17), http_requests)) - {} 6+0x10 + {} 8+0x10 eval range from 0 to 50m step 5m count(limit_ratio(time() % 17/17, http_requests) and limit_ratio(1.0 - (time() % 17/17), http_requests)) # empty -# Poor man's normality check: ok (loaded samples follow a nice linearity over labels and time) -# The check giving: 1 (i.e. true) -eval range from 0 to 50m step 5m abs(avg(limit_ratio(0.5, http_requests)) - avg(limit_ratio(-0.5, http_requests))) <= bool stddev(http_requests) +# Poor man's normality check: ok (loaded samples follow a nice linearity over labels and time). +# The check giving: 1 (i.e. true). 
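+# (That is, the means of the two complementary 0.5-ratio halves should
+# differ by no more than one standard deviation. The histogram series are
+# excluded so that the check runs on the float samples only.)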
+eval range from 0 to 50m step 5m abs(avg(limit_ratio(0.5, http_requests{instance!~"histogram_[0-9]"})) - avg(limit_ratio(-0.5, http_requests{instance!~"histogram_[0-9]"}))) <= bool stddev(http_requests{instance!~"histogram_[0-9]"}) {} 1+0x10 +# All specified histograms are included for r=1. +eval instant at 50m limit_ratio(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}} + +eval range from 0 to 50m step 5m limit_ratio(1, http_requests{instance="histogram_1"}) + {__name__="http_requests", group="canary", instance="histogram_1", job="api-server"} {{count:10 sum:10}}x10 diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/name_label_dropping.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/name_label_dropping.test index d4a2ad257e1..9af45a73240 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/name_label_dropping.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/name_label_dropping.test @@ -1,88 +1,88 @@ # Test for __name__ label drop. load 5m - metric{env="1"} 0 60 120 - another_metric{env="1"} 60 120 180 + metric_total{env="1"} 0 60 120 + another_metric_total{env="1"} 60 120 180 -# Does not drop __name__ for vector selector -eval instant at 10m metric{env="1"} - metric{env="1"} 120 +# Does not drop __name__ for vector selector. +eval instant at 10m metric_total{env="1"} + metric_total{env="1"} 120 -# Drops __name__ for unary operators -eval instant at 10m -metric +# Drops __name__ for unary operators. +eval instant at 10m -metric_total {env="1"} -120 -# Drops __name__ for binary operators -eval instant at 10m metric + another_metric +# Drops __name__ for binary operators. +eval instant at 10m metric_total + another_metric_total {env="1"} 300 -# Does not drop __name__ for binary comparison operators -eval instant at 10m metric <= another_metric - metric{env="1"} 120 +# Does not drop __name__ for binary comparison operators. +eval instant at 10m metric_total <= another_metric_total + metric_total{env="1"} 120 -# Drops __name__ for binary comparison operators with "bool" modifier -eval instant at 10m metric <= bool another_metric +# Drops __name__ for binary comparison operators with "bool" modifier. +eval instant at 10m metric_total <= bool another_metric_total {env="1"} 1 -# Drops __name__ for vector-scalar operations -eval instant at 10m metric * 2 +# Drops __name__ for vector-scalar operations. +eval instant at 10m metric_total * 2 {env="1"} 240 -# Drops __name__ for instant-vector functions -eval instant at 10m clamp(metric, 0, 100) +# Drops __name__ for instant-vector functions. +eval instant at 10m clamp(metric_total, 0, 100) {env="1"} 100 -# Drops __name__ for round function -eval instant at 10m round(metric) +# Drops __name__ for round function. +eval instant at 10m round(metric_total) {env="1"} 120 -# Drops __name__ for range-vector functions -eval instant at 10m rate(metric{env="1"}[10m]) +# Drops __name__ for range-vector functions. +eval instant at 10m rate(metric_total{env="1"}[10m]) {env="1"} 0.2 -# Does not drop __name__ for last_over_time function -eval instant at 10m last_over_time(metric{env="1"}[10m]) - metric{env="1"} 120 +# Does not drop __name__ for last_over_time function. 
+eval instant at 10m last_over_time(metric_total{env="1"}[10m]) + metric_total{env="1"} 120 -# Drops name for other _over_time functions -eval instant at 10m max_over_time(metric{env="1"}[10m]) +# Drops name for other _over_time functions. +eval instant at 10m max_over_time(metric_total{env="1"}[10m]) {env="1"} 120 -# Allows relabeling (to-be-dropped) __name__ via label_replace +# Allows relabeling (to-be-dropped) __name__ via label_replace. eval instant at 10m label_replace(rate({env="1"}[10m]), "my_name", "rate_$1", "__name__", "(.+)") - {my_name="rate_metric", env="1"} 0.2 - {my_name="rate_another_metric", env="1"} 0.2 + {my_name="rate_metric_total", env="1"} 0.2 + {my_name="rate_another_metric_total", env="1"} 0.2 -# Allows preserving __name__ via label_replace +# Allows preserving __name__ via label_replace. eval instant at 10m label_replace(rate({env="1"}[10m]), "__name__", "rate_$1", "__name__", "(.+)") - rate_metric{env="1"} 0.2 - rate_another_metric{env="1"} 0.2 + rate_metric_total{env="1"} 0.2 + rate_another_metric_total{env="1"} 0.2 -# Allows relabeling (to-be-dropped) __name__ via label_join +# Allows relabeling (to-be-dropped) __name__ via label_join. eval instant at 10m label_join(rate({env="1"}[10m]), "my_name", "_", "__name__") - {my_name="metric", env="1"} 0.2 - {my_name="another_metric", env="1"} 0.2 + {my_name="metric_total", env="1"} 0.2 + {my_name="another_metric_total", env="1"} 0.2 -# Allows preserving __name__ via label_join +# Allows preserving __name__ via label_join. eval instant at 10m label_join(rate({env="1"}[10m]), "__name__", "_", "__name__", "env") - metric_1{env="1"} 0.2 - another_metric_1{env="1"} 0.2 + metric_total_1{env="1"} 0.2 + another_metric_total_1{env="1"} 0.2 -# Does not drop metric names fro aggregation operators -eval instant at 10m sum by (__name__, env) (metric{env="1"}) - metric{env="1"} 120 +# Does not drop metric names from aggregation operators. +eval instant at 10m sum by (__name__, env) (metric_total{env="1"}) + metric_total{env="1"} 120 -# Aggregation operators by __name__ lead to duplicate labelset errors (aggregation is partitioned by not yet removed __name__ label) +# Aggregation operators by __name__ lead to duplicate labelset errors (aggregation is partitioned by not yet removed __name__ label). # This is an accidental side effect of delayed __name__ label dropping eval_fail instant at 10m sum by (__name__) (rate({env="1"}[10m])) -# Aggregation operators aggregate metrics with same labelset and to-be-dropped names +# Aggregation operators aggregate metrics with same labelset and to-be-dropped names. # This is an accidental side effect of delayed __name__ label dropping eval instant at 10m sum(rate({env="1"}[10m])) by (env) {env="1"} 0.4 -# Aggregationk operators propagate __name__ label dropping information -eval instant at 10m topk(10, sum by (__name__, env) (metric{env="1"})) - metric{env="1"} 120 +# Aggregation operators propagate __name__ label dropping information.
+eval instant at 10m topk(10, sum by (__name__, env) (metric_total{env="1"})) + metric_total{env="1"} 120 -eval instant at 10m topk(10, sum by (__name__, env) (rate(metric{env="1"}[10m]))) +eval instant at 10m topk(10, sum by (__name__, env) (rate(metric_total{env="1"}[10m]))) {env="1"} 0.2 diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/native_histograms.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/native_histograms.test index 0463384e2e9..6be298cf7d6 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/native_histograms.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/native_histograms.test @@ -1186,3 +1186,23 @@ eval instant at 3m avg_over_time(histogram_sum_over_time[4m:1m]) {} {{schema:0 count:26.75 sum:1172.8 z_bucket:3.5 z_bucket_w:0.001 buckets:[0.75 2 0.5 1.25 0.75 0.5 0.5] n_buckets:[0.5 1.5 2 1 3.75 2.25 0 0 0 2.5 2.5 1]}} clear + +# Test native histograms with sub operator. +load 10m + histogram_sub_1{idx="0"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + histogram_sub_1{idx="1"} {{schema:0 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_2{idx="0"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + histogram_sub_2{idx="1"} {{schema:1 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_3{idx="0"} {{schema:1 count:11 sum:1234.5 z_bucket:3 z_bucket_w:0.001 buckets:[0 2 1] n_buckets:[0 0 3 2]}}x1 + histogram_sub_3{idx="1"} {{schema:0 count:41 sum:2345.6 z_bucket:5 z_bucket_w:0.001 buckets:[1 3 1 2 1 1 1] n_buckets:[0 1 4 2 7 0 0 0 0 5 5 2]}}x1 + +eval instant at 10m histogram_sub_1{idx="0"} - ignoring(idx) histogram_sub_1{idx="1"} + {} {{schema:0 count:30 sum:1111.1 z_bucket:2 z_bucket_w:0.001 buckets:[1 1 0 2 1 1 1] n_buckets:[0 1 1 0 7 0 0 0 0 5 5 2]}} + +eval instant at 10m histogram_sub_2{idx="0"} - ignoring(idx) histogram_sub_2{idx="1"} + {} {{schema:0 count:30 sum:1111.1 z_bucket:2 z_bucket_w:0.001 buckets:[1 0 1 2 1 1 1] n_buckets:[0 -2 2 2 7 0 0 0 0 5 5 2]}} + +eval instant at 10m histogram_sub_3{idx="0"} - ignoring(idx) histogram_sub_3{idx="1"} + {} {{schema:0 count:-30 sum:-1111.1 z_bucket:-2 z_bucket_w:0.001 buckets:[-1 0 -1 -2 -1 -1 -1] n_buckets:[0 2 -2 -2 -7 0 0 0 0 -5 -5 -2]}} + +clear diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/operators.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/operators.test index 4b00831dfc8..667989ca77d 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/operators.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/operators.test @@ -1,12 +1,12 @@ load 5m - http_requests{job="api-server", instance="0", group="production"} 0+10x10 - http_requests{job="api-server", instance="1", group="production"} 0+20x10 - http_requests{job="api-server", instance="0", group="canary"} 0+30x10 - http_requests{job="api-server", instance="1", group="canary"} 0+40x10 - http_requests{job="app-server", instance="0", group="production"} 0+50x10 - http_requests{job="app-server", instance="1", group="production"} 0+60x10 - http_requests{job="app-server", instance="0", group="canary"} 0+70x10 - http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + 
http_requests_total{job="api-server", instance="0", group="production"} 0+10x10 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x10 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x10 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x10 + http_requests_total{job="app-server", instance="0", group="production"} 0+50x10 + http_requests_total{job="app-server", instance="1", group="production"} 0+60x10 + http_requests_total{job="app-server", instance="0", group="canary"} 0+70x10 + http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10 http_requests_histogram{job="app-server", instance="1", group="production"} {{schema:1 sum:15 count:10 buckets:[3 2 5 7 9]}}x11 load 5m @@ -15,21 +15,21 @@ load 5m vector_matching_b{l="x"} 0+4x25 -eval instant at 50m SUM(http_requests) BY (job) - COUNT(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) - COUNT(http_requests_total) BY (job) {job="api-server"} 996 {job="app-server"} 2596 -eval instant at 50m 2 - SUM(http_requests) BY (job) +eval instant at 50m 2 - SUM(http_requests_total) BY (job) {job="api-server"} -998 {job="app-server"} -2598 -eval instant at 50m -http_requests{job="api-server",instance="0",group="production"} +eval instant at 50m -http_requests_total{job="api-server",instance="0",group="production"} {job="api-server",instance="0",group="production"} -100 -eval instant at 50m +http_requests{job="api-server",instance="0",group="production"} - http_requests{job="api-server",instance="0",group="production"} 100 +eval instant at 50m +http_requests_total{job="api-server",instance="0",group="production"} + http_requests_total{job="api-server",instance="0",group="production"} 100 -eval instant at 50m - - - SUM(http_requests) BY (job) +eval instant at 50m - - - SUM(http_requests_total) BY (job) {job="api-server"} -1000 {job="app-server"} -2600 @@ -42,83 +42,83 @@ eval instant at 50m -2^---1*3 eval instant at 50m 2/-2^---1*3+2 -10 -eval instant at 50m -10^3 * - SUM(http_requests) BY (job) ^ -1 +eval instant at 50m -10^3 * - SUM(http_requests_total) BY (job) ^ -1 {job="api-server"} 1 {job="app-server"} 0.38461538461538464 -eval instant at 50m 1000 / SUM(http_requests) BY (job) +eval instant at 50m 1000 / SUM(http_requests_total) BY (job) {job="api-server"} 1 {job="app-server"} 0.38461538461538464 -eval instant at 50m SUM(http_requests) BY (job) - 2 +eval instant at 50m SUM(http_requests_total) BY (job) - 2 {job="api-server"} 998 {job="app-server"} 2598 -eval instant at 50m SUM(http_requests) BY (job) % 3 +eval instant at 50m SUM(http_requests_total) BY (job) % 3 {job="api-server"} 1 {job="app-server"} 2 -eval instant at 50m SUM(http_requests) BY (job) % 0.3 +eval instant at 50m SUM(http_requests_total) BY (job) % 0.3 {job="api-server"} 0.1 {job="app-server"} 0.2 -eval instant at 50m SUM(http_requests) BY (job) ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) ^ 2 {job="api-server"} 1000000 {job="app-server"} 6760000 -eval instant at 50m SUM(http_requests) BY (job) % 3 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 3 ^ 2 {job="api-server"} 1 {job="app-server"} 8 -eval instant at 50m SUM(http_requests) BY (job) % 2 ^ (3 ^ 2) +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ (3 ^ 2) {job="api-server"} 488 {job="app-server"} 40 -eval instant at 50m SUM(http_requests) BY (job) % 2 ^ 3 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ 3 ^ 2 {job="api-server"} 488 {job="app-server"} 40 
-eval instant at 50m SUM(http_requests) BY (job) % 2 ^ 3 ^ 2 ^ 2 +eval instant at 50m SUM(http_requests_total) BY (job) % 2 ^ 3 ^ 2 ^ 2 {job="api-server"} 1000 {job="app-server"} 2600 -eval instant at 50m COUNT(http_requests) BY (job) ^ COUNT(http_requests) BY (job) +eval instant at 50m COUNT(http_requests_total) BY (job) ^ COUNT(http_requests_total) BY (job) {job="api-server"} 256 {job="app-server"} 256 -eval instant at 50m SUM(http_requests) BY (job) / 0 +eval instant at 50m SUM(http_requests_total) BY (job) / 0 {job="api-server"} +Inf {job="app-server"} +Inf -eval instant at 50m http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} +Inf -eval instant at 50m -1 * http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m -1 * http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} -Inf -eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} / 0 +eval instant at 50m 0 * http_requests_total{group="canary", instance="0", job="api-server"} / 0 {group="canary", instance="0", job="api-server"} NaN -eval instant at 50m 0 * http_requests{group="canary", instance="0", job="api-server"} % 0 +eval instant at 50m 0 * http_requests_total{group="canary", instance="0", job="api-server"} % 0 {group="canary", instance="0", job="api-server"} NaN -eval instant at 50m SUM(http_requests) BY (job) + SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) + SUM(http_requests_total) BY (job) {job="api-server"} 2000 {job="app-server"} 5200 -eval instant at 50m (SUM((http_requests)) BY (job)) + SUM(http_requests) BY (job) +eval instant at 50m (SUM((http_requests_total)) BY (job)) + SUM(http_requests_total) BY (job) {job="api-server"} 2000 {job="app-server"} 5200 -eval instant at 50m http_requests{job="api-server", group="canary"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="1", job="api-server"} 400 +eval instant at 50m http_requests_total{job="api-server", group="canary"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="1", job="api-server"} 400 -eval instant at 50m http_requests{job="api-server", group="canary"} + rate(http_requests{job="api-server"}[10m]) * 5 * 60 +eval instant at 50m http_requests_total{job="api-server", group="canary"} + rate(http_requests_total{job="api-server"}[10m]) * 5 * 60 {group="canary", instance="0", job="api-server"} 330 {group="canary", instance="1", job="api-server"} 440 -eval instant at 50m rate(http_requests[25m]) * 25 * 60 +eval instant at 50m rate(http_requests_total[25m]) * 25 * 60 {group="canary", instance="0", job="api-server"} 150 {group="canary", instance="0", job="app-server"} 350 {group="canary", instance="1", job="api-server"} 200 @@ -128,7 +128,7 @@ eval instant at 50m rate(http_requests[25m]) * 25 * 60 {group="production", instance="1", job="api-server"} 100 {group="production", instance="1", job="app-server"} 300 -eval instant at 50m (rate((http_requests[25m])) * 25) * 60 +eval instant at 50m (rate((http_requests_total[25m])) * 25) * 60 {group="canary", instance="0", job="api-server"} 150 {group="canary", instance="0", job="app-server"} 350 {group="canary", instance="1", job="api-server"} 200 @@ -139,53 +139,53 @@ eval instant at 
50m (rate((http_requests[25m])) * 25) * 60 {group="production", instance="1", job="app-server"} 300 -eval instant at 50m http_requests{group="canary"} and http_requests{instance="0"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 +eval instant at 50m http_requests_total{group="canary"} and http_requests_total{instance="0"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 -eval instant at 50m (http_requests{group="canary"} + 1) and http_requests{instance="0"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and http_requests_total{instance="0"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and on(instance, job) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and on(instance, job) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and on(instance) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and on(instance) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and ignoring(group) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and ignoring(group) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m (http_requests{group="canary"} + 1) and ignoring(group, job) http_requests{instance="0", group="production"} +eval instant at 50m (http_requests_total{group="canary"} + 1) and ignoring(group, job) http_requests_total{instance="0", group="production"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 -eval instant at 50m http_requests{group="canary"} or http_requests{group="production"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 +eval instant at 50m http_requests_total{group="canary"} or http_requests_total{group="production"} + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + 
http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # On overlap the rhs samples must be dropped. -eval instant at 50m (http_requests{group="canary"} + 1) or http_requests{instance="1"} +eval instant at 50m (http_requests_total{group="canary"} + 1) or http_requests_total{instance="1"} {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 {group="canary", instance="1", job="app-server"} 801 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # Matching only on instance excludes everything that has instance=0/1 but includes # entries without the instance label. -eval instant at 50m (http_requests{group="canary"} + 1) or on(instance) (http_requests or cpu_count or vector_matching_a) +eval instant at 50m (http_requests_total{group="canary"} + 1) or on(instance) (http_requests_total or cpu_count or vector_matching_a) {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 @@ -193,7 +193,7 @@ eval instant at 50m (http_requests{group="canary"} + 1) or on(instance) (http_re vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 -eval instant at 50m (http_requests{group="canary"} + 1) or ignoring(l, group, job) (http_requests or cpu_count or vector_matching_a) +eval instant at 50m (http_requests_total{group="canary"} + 1) or ignoring(l, group, job) (http_requests_total or cpu_count or vector_matching_a) {group="canary", instance="0", job="api-server"} 301 {group="canary", instance="0", job="app-server"} 701 {group="canary", instance="1", job="api-server"} 401 @@ -201,81 +201,81 @@ eval instant at 50m (http_requests{group="canary"} + 1) or ignoring(l, group, jo vector_matching_a{l="x"} 10 vector_matching_a{l="y"} 20 -eval instant at 50m http_requests{group="canary"} unless http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} unless on(job) http_requests{instance="0"} +eval instant at 50m http_requests_total{group="canary"} unless on(job) http_requests_total{instance="0"} -eval instant at 50m http_requests{group="canary"} unless on(job, instance) http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless on(job, instance) http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} / on(instance,job) http_requests{group="production"} +eval instant at 50m http_requests_total{group="canary"} / on(instance,job) 
http_requests_total{group="production"} {instance="0", job="api-server"} 3 {instance="0", job="app-server"} 1.4 {instance="1", job="api-server"} 2 {instance="1", job="app-server"} 1.3333333333333333 -eval instant at 50m http_requests{group="canary"} unless ignoring(group, instance) http_requests{instance="0"} +eval instant at 50m http_requests_total{group="canary"} unless ignoring(group, instance) http_requests_total{instance="0"} -eval instant at 50m http_requests{group="canary"} unless ignoring(group) http_requests{instance="0"} - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 +eval instant at 50m http_requests_total{group="canary"} unless ignoring(group) http_requests_total{instance="0"} + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 -eval instant at 50m http_requests{group="canary"} / ignoring(group) http_requests{group="production"} +eval instant at 50m http_requests_total{group="canary"} / ignoring(group) http_requests_total{group="production"} {instance="0", job="api-server"} 3 {instance="0", job="app-server"} 1.4 {instance="1", job="api-server"} 2 {instance="1", job="app-server"} 1.3333333333333333 # https://github.com/prometheus/prometheus/issues/1489 -eval instant at 50m http_requests AND ON (dummy) vector(1) - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 - -eval instant at 50m http_requests AND IGNORING (group, instance, job) vector(1) - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 +eval instant at 50m http_requests_total AND ON (dummy) vector(1) + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 + +eval instant at 50m http_requests_total AND IGNORING (group, instance, job) vector(1) + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + 
http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 # Comparisons. -eval instant at 50m SUM(http_requests) BY (job) > 1000 +eval instant at 50m SUM(http_requests_total) BY (job) > 1000 {job="app-server"} 2600 -eval instant at 50m 1000 < SUM(http_requests) BY (job) +eval instant at 50m 1000 < SUM(http_requests_total) BY (job) {job="app-server"} 2600 -eval instant at 50m SUM(http_requests) BY (job) <= 1000 +eval instant at 50m SUM(http_requests_total) BY (job) <= 1000 {job="api-server"} 1000 -eval instant at 50m SUM(http_requests) BY (job) != 1000 +eval instant at 50m SUM(http_requests_total) BY (job) != 1000 {job="app-server"} 2600 -eval instant at 50m SUM(http_requests) BY (job) == 1000 +eval instant at 50m SUM(http_requests_total) BY (job) == 1000 {job="api-server"} 1000 -eval instant at 50m SUM(http_requests) BY (job) == bool 1000 +eval instant at 50m SUM(http_requests_total) BY (job) == bool 1000 {job="api-server"} 1 {job="app-server"} 0 -eval instant at 50m SUM(http_requests) BY (job) == bool SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) == bool SUM(http_requests_total) BY (job) {job="api-server"} 1 {job="app-server"} 1 -eval instant at 50m SUM(http_requests) BY (job) != bool SUM(http_requests) BY (job) +eval instant at 50m SUM(http_requests_total) BY (job) != bool SUM(http_requests_total) BY (job) {job="api-server"} 0 {job="app-server"} 0 @@ -285,12 +285,12 @@ eval instant at 50m 0 == bool 1 eval instant at 50m 1 == bool 1 1 -eval instant at 50m http_requests{job="api-server", instance="0", group="production"} == bool 100 +eval instant at 50m http_requests_total{job="api-server", instance="0", group="production"} == bool 100 {job="api-server", instance="0", group="production"} 1 # The histogram is ignored here so the result doesn't change but it has an info annotation now. eval_info instant at 5m {job="app-server"} == 80 - http_requests{group="canary", instance="1", job="app-server"} 80 + http_requests_total{group="canary", instance="1", job="app-server"} 80 eval_info instant at 5m http_requests_histogram != 80 @@ -673,7 +673,7 @@ eval_info range from 0 to 24m step 6m left_histograms == 0 eval_info range from 0 to 24m step 6m left_histograms != 3 # No results. -eval range from 0 to 24m step 6m left_histograms != 0 +eval_info range from 0 to 24m step 6m left_histograms != 0 # No results. eval_info range from 0 to 24m step 6m left_histograms > 3 @@ -682,7 +682,7 @@ eval_info range from 0 to 24m step 6m left_histograms > 3 eval_info range from 0 to 24m step 6m left_histograms > 0 # No results. -eval range from 0 to 24m step 6m left_histograms >= 3 +eval_info range from 0 to 24m step 6m left_histograms >= 3 # No results. eval_info range from 0 to 24m step 6m left_histograms >= 0 @@ -697,7 +697,7 @@ eval_info range from 0 to 24m step 6m left_histograms < 0 eval_info range from 0 to 24m step 6m left_histograms <= 3 # No results. -eval range from 0 to 24m step 6m left_histograms <= 0 +eval_info range from 0 to 24m step 6m left_histograms <= 0 # No results. 
eval_info range from 0 to 24m step 6m left_histograms == bool 3 @@ -770,40 +770,87 @@ eval range from 0 to 60m step 6m NaN == left_floats eval range from 0 to 60m step 6m NaN == bool left_floats {} 0 0 _ _ 0 _ 0 0 0 0 0 -eval range from 0 to 24m step 6m 3 == left_histograms +eval_info range from 0 to 24m step 6m 3 == left_histograms # No results. -eval range from 0 to 24m step 6m 0 == left_histograms +eval_info range from 0 to 24m step 6m 0 == left_histograms # No results. -eval range from 0 to 24m step 6m 3 != left_histograms +eval_info range from 0 to 24m step 6m 3 != left_histograms # No results. -eval range from 0 to 24m step 6m 0 != left_histograms +eval_info range from 0 to 24m step 6m 0 != left_histograms # No results. -eval range from 0 to 24m step 6m 3 < left_histograms +eval_info range from 0 to 24m step 6m 3 < left_histograms # No results. -eval range from 0 to 24m step 6m 0 < left_histograms +eval_info range from 0 to 24m step 6m 0 < left_histograms # No results. -eval range from 0 to 24m step 6m 3 < left_histograms +eval_info range from 0 to 24m step 6m 3 < left_histograms # No results. -eval range from 0 to 24m step 6m 0 < left_histograms +eval_info range from 0 to 24m step 6m 0 < left_histograms # No results. -eval range from 0 to 24m step 6m 3 > left_histograms +eval_info range from 0 to 24m step 6m 3 > left_histograms # No results. -eval range from 0 to 24m step 6m 0 > left_histograms +eval_info range from 0 to 24m step 6m 0 > left_histograms # No results. -eval range from 0 to 24m step 6m 3 >= left_histograms +eval_info range from 0 to 24m step 6m 3 >= left_histograms # No results. -eval range from 0 to 24m step 6m 0 >= left_histograms +eval_info range from 0 to 24m step 6m 0 >= left_histograms # No results. clear + +# Test completely discarding or completely including series in results with "and on" +load_with_nhcb 5m + testhistogram_bucket{le="0.1", id="1"} 0+5x10 + testhistogram_bucket{le="0.2", id="1"} 0+7x10 + testhistogram_bucket{le="+Inf", id="1"} 0+12x10 + testhistogram_bucket{le="0.1", id="2"} 0+4x10 + testhistogram_bucket{le="0.2", id="2"} 0+6x10 + testhistogram_bucket{le="+Inf", id="2"} 0+11x10 + +# Include all series when "and on" with non-empty vector. +eval instant at 10m (testhistogram_bucket) and on() (vector(1) == 1) + {__name__="testhistogram_bucket", le="0.1", id="1"} 10.0 + {__name__="testhistogram_bucket", le="0.2", id="1"} 14.0 + {__name__="testhistogram_bucket", le="+Inf", id="1"} 24.0 + {__name__="testhistogram_bucket", le="0.1", id="2"} 8.0 + {__name__="testhistogram_bucket", le="0.2", id="2"} 12.0 + {__name__="testhistogram_bucket", le="+Inf", id="2"} 22.0 + +eval range from 0 to 10m step 5m (testhistogram_bucket) and on() (vector(1) == 1) + {__name__="testhistogram_bucket", le="0.1", id="1"} 0.0 5.0 10.0 + {__name__="testhistogram_bucket", le="0.2", id="1"} 0.0 7.0 14.0 + {__name__="testhistogram_bucket", le="+Inf", id="1"} 0.0 12.0 24.0 + {__name__="testhistogram_bucket", le="0.1", id="2"} 0.0 4.0 8.0 + {__name__="testhistogram_bucket", le="0.2", id="2"} 0.0 6.0 12.0 + {__name__="testhistogram_bucket", le="+Inf", id="2"} 0.0 11.0 22.0 + +# Exclude all series when "and on" with empty vector. +eval instant at 10m (testhistogram_bucket) and on() (vector(-1) == 1) + +eval range from 0 to 10m step 5m (testhistogram_bucket) and on() (vector(-1) == 1) + +# Include all native histogram series when "and on" with non-empty vector. 
+eval instant at 10m (testhistogram) and on() (vector(1) == 1) + {__name__="testhistogram", id="1"} {{schema:-53 sum:0 count:24 buckets:[10 4 10] custom_values:[0.1 0.2]}} + {__name__="testhistogram", id="2"} {{schema:-53 sum:0 count:22 buckets:[8 4 10] custom_values:[0.1 0.2]}} + +eval range from 0 to 10m step 5m (testhistogram) and on() (vector(1) == 1) + {__name__="testhistogram", id="1"} {{schema:-53 sum:0 count:0 custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:12 buckets:[5 2 5] custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:24 buckets:[10 4 10] custom_values:[0.1 0.2]}} + {__name__="testhistogram", id="2"} {{schema:-53 sum:0 count:0 custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:11 buckets:[4 2 5] custom_values:[0.1 0.2]}} {{schema:-53 sum:0 count:22 buckets:[8 4 10] custom_values:[0.1 0.2]}} + +# Exclude all native histogram series when "and on" with empty vector. +eval instant at 10m (testhistogram) and on() (vector(-1) == 1) + +eval range from 0 to 10m step 5m (testhistogram) and on() (vector(-1) == 1) + +clear diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/selectors.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/selectors.test index 6742d83e99c..3a1f5263dd3 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/selectors.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/selectors.test @@ -1,109 +1,109 @@ load 10s - http_requests{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 - http_requests{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 - http_requests{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 - http_requests{job="api-server", instance="1", group="canary"} 0+40x2000 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x2000 -eval instant at 8000s rate(http_requests[1m]) +eval instant at 8000s rate(http_requests_total[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests[1m]) +eval instant at 18000s rate(http_requests_total[1m]) {job="api-server", instance="0", group="production"} 3 {job="api-server", instance="1", group="production"} 3 {job="api-server", instance="0", group="canary"} 8 {job="api-server", instance="1", group="canary"} 4 -eval instant at 8000s rate(http_requests{group=~"pro.*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"pro.*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 18000s rate(http_requests{group=~".*ry", instance="1"}[1m]) +eval instant at 18000s rate(http_requests_total{group=~".*ry", instance="1"}[1m]) {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests{instance!="3"}[1m] offset 10000s) +eval instant at 18000s rate(http_requests_total{instance!="3"}[1m] offset 10000s) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", 
group="canary"} 3 {job="api-server", instance="1", group="canary"} 4 -eval instant at 4000s rate(http_requests{instance!="3"}[1m] offset -4000s) +eval instant at 4000s rate(http_requests_total{instance!="3"}[1m] offset -4000s) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 {job="api-server", instance="1", group="canary"} 4 -eval instant at 18000s rate(http_requests[40s]) - rate(http_requests[1m] offset 10000s) +eval instant at 18000s rate(http_requests_total[40s]) - rate(http_requests_total[1m] offset 10000s) {job="api-server", instance="0", group="production"} 2 {job="api-server", instance="1", group="production"} 1 {job="api-server", instance="0", group="canary"} 5 {job="api-server", instance="1", group="canary"} 0 # https://github.com/prometheus/prometheus/issues/3575 -eval instant at 0s http_requests{foo!="bar"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!="bar", job="api-server"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!~"bar", job="api-server"} - http_requests{job="api-server", instance="0", group="production"} 0 - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="0", group="canary"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 - -eval instant at 0s http_requests{foo!~"bar", job="api-server", instance="1", x!="y", z="", group!=""} - http_requests{job="api-server", instance="1", group="production"} 0 - http_requests{job="api-server", instance="1", group="canary"} 0 +eval instant at 0s http_requests_total{foo!="bar"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!="bar", job="api-server"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!~"bar", job="api-server"} + http_requests_total{job="api-server", instance="0", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="0", group="canary"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 + +eval instant at 0s http_requests_total{foo!~"bar", job="api-server", instance="1", x!="y", z="", group!=""} + http_requests_total{job="api-server", instance="1", group="production"} 0 + http_requests_total{job="api-server", instance="1", group="canary"} 0 # https://github.com/prometheus/prometheus/issues/7994 -eval instant at 8000s 
rate(http_requests{group=~"(?i:PRO).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"(?i:PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*?(?i:PRO).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*?(?i:PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:DUC).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:DUC).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:TION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:TION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:TION).*?"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:TION).*?"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~"((?i)PRO).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"((?i)PRO).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*((?i)DUC).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*((?i)DUC).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*((?i)TION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*((?i)TION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~"(?i:PRODUCTION)"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~"(?i:PRODUCTION)"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 8000s rate(http_requests{group=~".*(?i:C).*"}[1m]) +eval instant at 8000s rate(http_requests_total{group=~".*(?i:C).*"}[1m]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 {job="api-server", instance="0", group="canary"} 3 @@ -133,14 +133,14 @@ load 5m label_grouping_test{a="a", b="abb"} 0+20x10 load 5m - http_requests{job="api-server", instance="0", group="production"} 0+10x10 - http_requests{job="api-server", instance="1", group="production"} 0+20x10 - http_requests{job="api-server", instance="0", group="canary"} 0+30x10 - http_requests{job="api-server", instance="1", group="canary"} 0+40x10 - http_requests{job="app-server", instance="0", group="production"} 0+50x10 - http_requests{job="app-server", instance="1", group="production"} 0+60x10 - http_requests{job="app-server", instance="0", group="canary"} 0+70x10 - http_requests{job="app-server", instance="1", group="canary"} 0+80x10 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x10 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x10 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x10 + http_requests_total{job="api-server", 
instance="1", group="canary"} 0+40x10 + http_requests_total{job="app-server", instance="0", group="production"} 0+50x10 + http_requests_total{job="app-server", instance="1", group="production"} 0+60x10 + http_requests_total{job="app-server", instance="0", group="canary"} 0+70x10 + http_requests_total{job="app-server", instance="1", group="canary"} 0+80x10 # Single-letter label names and values. eval instant at 50m x{y="testvalue"} @@ -148,14 +148,14 @@ eval instant at 50m x{y="testvalue"} # Basic Regex eval instant at 50m {__name__=~".+"} - http_requests{group="canary", instance="0", job="api-server"} 300 - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="api-server"} 400 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="canary", instance="0", job="api-server"} 300 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="api-server"} 400 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="1", job="app-server"} 600 x{y="testvalue"} 100 label_grouping_test{a="a", b="abb"} 200 label_grouping_test{a="aa", b="bb"} 100 @@ -164,34 +164,34 @@ eval instant at 50m {__name__=~".+"} cpu_count{instance="0", type="numa"} 300 eval instant at 50m {job=~".+-server", job!~"api-.+"} - http_requests{group="canary", instance="0", job="app-server"} 700 - http_requests{group="canary", instance="1", job="app-server"} 800 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="app-server"} 600 - -eval instant at 50m http_requests{group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="0", job="api-server"} 100 - -eval instant at 50m http_requests{job=~".+-server",group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - http_requests{group="production", instance="1", job="api-server"} 200 - http_requests{group="production", instance="0", job="api-server"} 100 - -eval instant at 50m http_requests{job!~"api-.+",group!="canary"} - http_requests{group="production", instance="1", job="app-server"} 600 - http_requests{group="production", instance="0", job="app-server"} 500 - -eval instant at 50m http_requests{group="production",job=~"api-.+"} - http_requests{group="production", instance="0", job="api-server"} 100 - http_requests{group="production", instance="1", job="api-server"} 200 - -eval instant at 50m http_requests{group="production",job="api-server"} offset 5m - http_requests{group="production", instance="0", job="api-server"} 90 - 
http_requests{group="production", instance="1", job="api-server"} 180 + http_requests_total{group="canary", instance="0", job="app-server"} 700 + http_requests_total{group="canary", instance="1", job="app-server"} 800 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="app-server"} 600 + +eval instant at 50m http_requests_total{group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="0", job="api-server"} 100 + +eval instant at 50m http_requests_total{job=~".+-server",group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + http_requests_total{group="production", instance="1", job="api-server"} 200 + http_requests_total{group="production", instance="0", job="api-server"} 100 + +eval instant at 50m http_requests_total{job!~"api-.+",group!="canary"} + http_requests_total{group="production", instance="1", job="app-server"} 600 + http_requests_total{group="production", instance="0", job="app-server"} 500 + +eval instant at 50m http_requests_total{group="production",job=~"api-.+"} + http_requests_total{group="production", instance="0", job="api-server"} 100 + http_requests_total{group="production", instance="1", job="api-server"} 200 + +eval instant at 50m http_requests_total{group="production",job="api-server"} offset 5m + http_requests_total{group="production", instance="0", job="api-server"} 90 + http_requests_total{group="production", instance="1", job="api-server"} 180 clear diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/subquery.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/subquery.test index 3ac547a2b57..5a5e4e00920 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/subquery.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/subquery.test @@ -1,41 +1,41 @@ load 10s - metric 1 2 + metric_total 1 2 # Evaluation before 0s gets no sample. -eval instant at 10s sum_over_time(metric[50s:10s]) +eval instant at 10s sum_over_time(metric_total[50s:10s]) {} 3 -eval instant at 10s sum_over_time(metric[50s:5s]) +eval instant at 10s sum_over_time(metric_total[50s:5s]) {} 4 # Every evaluation yields the last value, i.e. 2 -eval instant at 5m sum_over_time(metric[50s:10s]) +eval instant at 5m sum_over_time(metric_total[50s:10s]) {} 10 -# Series becomes stale at 5m10s (5m after last sample) +# Series becomes stale at 5m10s (5m after last sample). # Hence subquery gets a single sample at 5m10s. 
-eval instant at 5m59s sum_over_time(metric[60s:10s]) +eval instant at 5m59s sum_over_time(metric_total[60s:10s]) {} 2 -eval instant at 10s rate(metric[20s:10s]) +eval instant at 10s rate(metric_total[20s:10s]) {} 0.1 -eval instant at 20s rate(metric[20s:5s]) +eval instant at 20s rate(metric_total[20s:5s]) {} 0.06666666666666667 clear load 10s - http_requests{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 - http_requests{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 - http_requests{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 - http_requests{job="api-server", instance="1", group="canary"} 0+40x2000 + http_requests_total{job="api-server", instance="1", group="production"} 0+20x1000 200+30x1000 + http_requests_total{job="api-server", instance="0", group="production"} 0+10x1000 100+30x1000 + http_requests_total{job="api-server", instance="0", group="canary"} 0+30x1000 300+80x1000 + http_requests_total{job="api-server", instance="1", group="canary"} 0+40x2000 -eval instant at 8000s rate(http_requests{group=~"pro.*"}[1m:10s]) +eval instant at 8000s rate(http_requests_total{group=~"pro.*"}[1m:10s]) {job="api-server", instance="0", group="production"} 1 {job="api-server", instance="1", group="production"} 2 -eval instant at 20000s avg_over_time(rate(http_requests[1m])[1m:1s]) +eval instant at 20000s avg_over_time(rate(http_requests_total[1m])[1m:1s]) {job="api-server", instance="0", group="canary"} 8 {job="api-server", instance="1", group="canary"} 4 {job="api-server", instance="1", group="production"} 3 @@ -44,64 +44,64 @@ eval instant at 20000s avg_over_time(rate(http_requests[1m])[1m:1s]) clear load 10s - metric1 0+1x1000 - metric2 0+2x1000 - metric3 0+3x1000 + metric1_total 0+1x1000 + metric2_total 0+2x1000 + metric3_total 0+3x1000 -eval instant at 1000s sum_over_time(metric1[30s:10s]) +eval instant at 1000s sum_over_time(metric1_total[30s:10s]) {} 297 # This is (97 + 98*2 + 99*2 + 100), because other than 97@975s and 100@1000s, # everything else is repeated with the 5s step. -eval instant at 1000s sum_over_time(metric1[30s:5s]) +eval instant at 1000s sum_over_time(metric1_total[30s:5s]) {} 591 # Offset is aligned with the step, so this is from [98@980s, 99@990s, 100@1000s]. -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 10s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 10s) {} 297 # Same result for different offsets due to step alignment. 
-eval instant at 1010s sum_over_time(metric1[30s:10s] offset 9s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 9s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 7s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 7s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 5s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 5s) {} 297 -eval instant at 1010s sum_over_time(metric1[30s:10s] offset 3s) +eval instant at 1010s sum_over_time(metric1_total[30s:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30s:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30s:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time(metric1[30:10] offset 3) +eval instant at 1010s sum_over_time(metric1_total[30:10] offset 3) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10s] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10s] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10] offset 3s) +eval instant at 1010s sum_over_time((metric1_total)[30:10] offset 3s) {} 297 -eval instant at 1010s sum_over_time((metric1)[30:10] offset 3) +eval instant at 1010s sum_over_time((metric1_total)[30:10] offset 3) {} 297 -# Nested subqueries -eval instant at 1000s rate(sum_over_time(metric1[30s:10s])[50s:10s]) +# Nested subqueries. +eval instant at 1000s rate(sum_over_time(metric1_total[30s:10s])[50s:10s]) {} 0.30000000000000004 -eval instant at 1000s rate(sum_over_time(metric2[30s:10s])[50s:10s]) +eval instant at 1000s rate(sum_over_time(metric2_total[30s:10s])[50s:10s]) {} 0.6000000000000001 -eval instant at 1000s rate(sum_over_time(metric3[30s:10s])[50s:10s]) +eval instant at 1000s rate(sum_over_time(metric3_total[30s:10s])[50s:10s]) {} 0.9 -eval instant at 1000s rate(sum_over_time((metric1+metric2+metric3)[30s:10s])[30s:10s]) +eval instant at 1000s rate(sum_over_time((metric1_total+metric2_total+metric3_total)[30s:10s])[30s:10s]) {} 1.8 clear @@ -109,28 +109,28 @@ clear # Fibonacci sequence, to ensure the rate is not constant. # Additional note: using subqueries unnecessarily is unwise. 
load 7s - metric 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 4807526976 7778742049 12586269025 20365011074 32951280099 53316291173 86267571272 139583862445 225851433717 365435296162 591286729879 956722026041 1548008755920 2504730781961 4052739537881 6557470319842 10610209857723 17167680177565 27777890035288 44945570212853 72723460248141 117669030460994 190392490709135 308061521170129 498454011879264 806515533049393 1304969544928657 2111485077978050 3416454622906707 5527939700884757 8944394323791464 14472334024676221 23416728348467685 37889062373143906 61305790721611591 99194853094755497 160500643816367088 259695496911122585 420196140727489673 679891637638612258 1100087778366101931 1779979416004714189 2880067194370816120 4660046610375530309 7540113804746346429 12200160415121876738 19740274219868223167 31940434634990099905 51680708854858323072 83621143489848422977 135301852344706746049 218922995834555169026 354224848179261915075 573147844013817084101 927372692193078999176 1500520536206896083277 2427893228399975082453 3928413764606871165730 6356306993006846248183 10284720757613717413913 16641027750620563662096 26925748508234281076009 43566776258854844738105 70492524767089125814114 114059301025943970552219 184551825793033096366333 298611126818977066918552 483162952612010163284885 781774079430987230203437 1264937032042997393488322 2046711111473984623691759 3311648143516982017180081 5358359254990966640871840 8670007398507948658051921 14028366653498915298923761 22698374052006863956975682 36726740705505779255899443 59425114757512643212875125 96151855463018422468774568 155576970220531065681649693 251728825683549488150424261 407305795904080553832073954 659034621587630041982498215 1066340417491710595814572169 1725375039079340637797070384 2791715456571051233611642553 4517090495650391871408712937 7308805952221443105020355490 11825896447871834976429068427 19134702400093278081449423917 30960598847965113057878492344 50095301248058391139327916261 81055900096023504197206408605 131151201344081895336534324866 212207101440105399533740733471 343358302784187294870275058337 555565404224292694404015791808 898923707008479989274290850145 1454489111232772683678306641953 2353412818241252672952597492098 3807901929474025356630904134051 6161314747715278029583501626149 9969216677189303386214405760200 16130531424904581415797907386349 26099748102093884802012313146549 42230279526998466217810220532898 68330027629092351019822533679447 110560307156090817237632754212345 178890334785183168257455287891792 289450641941273985495088042104137 468340976726457153752543329995929 757791618667731139247631372100066 1226132595394188293000174702095995 1983924214061919432247806074196061 3210056809456107725247980776292056 5193981023518027157495786850488117 8404037832974134882743767626780173 13598018856492162040239554477268290 22002056689466296922983322104048463 35600075545958458963222876581316753 57602132235424755886206198685365216 93202207781383214849429075266681969 150804340016807970735635273952047185 244006547798191185585064349218729154 394810887814999156320699623170776339 638817435613190341905763972389505493 1033628323428189498226463595560281832 1672445759041379840132227567949787325 2706074082469569338358691163510069157 4378519841510949178490918731459856482 7084593923980518516849609894969925639 
11463113765491467695340528626429782121 18547707689471986212190138521399707760 + metric_total 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 4807526976 7778742049 12586269025 20365011074 32951280099 53316291173 86267571272 139583862445 225851433717 365435296162 591286729879 956722026041 1548008755920 2504730781961 4052739537881 6557470319842 10610209857723 17167680177565 27777890035288 44945570212853 72723460248141 117669030460994 190392490709135 308061521170129 498454011879264 806515533049393 1304969544928657 2111485077978050 3416454622906707 5527939700884757 8944394323791464 14472334024676221 23416728348467685 37889062373143906 61305790721611591 99194853094755497 160500643816367088 259695496911122585 420196140727489673 679891637638612258 1100087778366101931 1779979416004714189 2880067194370816120 4660046610375530309 7540113804746346429 12200160415121876738 19740274219868223167 31940434634990099905 51680708854858323072 83621143489848422977 135301852344706746049 218922995834555169026 354224848179261915075 573147844013817084101 927372692193078999176 1500520536206896083277 2427893228399975082453 3928413764606871165730 6356306993006846248183 10284720757613717413913 16641027750620563662096 26925748508234281076009 43566776258854844738105 70492524767089125814114 114059301025943970552219 184551825793033096366333 298611126818977066918552 483162952612010163284885 781774079430987230203437 1264937032042997393488322 2046711111473984623691759 3311648143516982017180081 5358359254990966640871840 8670007398507948658051921 14028366653498915298923761 22698374052006863956975682 36726740705505779255899443 59425114757512643212875125 96151855463018422468774568 155576970220531065681649693 251728825683549488150424261 407305795904080553832073954 659034621587630041982498215 1066340417491710595814572169 1725375039079340637797070384 2791715456571051233611642553 4517090495650391871408712937 7308805952221443105020355490 11825896447871834976429068427 19134702400093278081449423917 30960598847965113057878492344 50095301248058391139327916261 81055900096023504197206408605 131151201344081895336534324866 212207101440105399533740733471 343358302784187294870275058337 555565404224292694404015791808 898923707008479989274290850145 1454489111232772683678306641953 2353412818241252672952597492098 3807901929474025356630904134051 6161314747715278029583501626149 9969216677189303386214405760200 16130531424904581415797907386349 26099748102093884802012313146549 42230279526998466217810220532898 68330027629092351019822533679447 110560307156090817237632754212345 178890334785183168257455287891792 289450641941273985495088042104137 468340976726457153752543329995929 757791618667731139247631372100066 1226132595394188293000174702095995 1983924214061919432247806074196061 3210056809456107725247980776292056 5193981023518027157495786850488117 8404037832974134882743767626780173 13598018856492162040239554477268290 22002056689466296922983322104048463 35600075545958458963222876581316753 57602132235424755886206198685365216 93202207781383214849429075266681969 150804340016807970735635273952047185 244006547798191185585064349218729154 394810887814999156320699623170776339 638817435613190341905763972389505493 1033628323428189498226463595560281832 1672445759041379840132227567949787325 2706074082469569338358691163510069157 
4378519841510949178490918731459856482 7084593923980518516849609894969925639 11463113765491467695340528626429782121 18547707689471986212190138521399707760 # Extrapolated from [3@21, 144@77]: (144 - 3) / (77 - 21) -eval instant at 80s rate(metric[1m]) +eval instant at 80s rate(metric_total[1m]) {} 2.517857143 # Extrapolated to range start for counter, [2@20, 144@80]: (144 - 2) / (80 - 20) -eval instant at 80s rate(metric[1m500ms:10s]) +eval instant at 80s rate(metric_total[1m500ms:10s]) {} 2.3666666666666667 # Extrapolated to zero value for counter, [2@20, 144@80]: (144 - 0) / 61 -eval instant at 80s rate(metric[1m1s:10s]) +eval instant at 80s rate(metric_total[1m1s:10s]) {} 2.360655737704918 # Only one value between 10s and 20s, 2@14 -eval instant at 20s min_over_time(metric[10s]) +eval instant at 20s min_over_time(metric_total[10s]) {} 2 # min(2@20) -eval instant at 20s min_over_time(metric[15s:10s]) +eval instant at 20s min_over_time(metric_total[15s:10s]) {} 1 -eval instant at 20m min_over_time(rate(metric[5m])[20m:1m]) +eval instant at 20m min_over_time(rate(metric_total[5m])[20m:1m]) {} 0.12119047619047618 diff --git a/vendor/github.com/prometheus/prometheus/promql/quantile.go b/vendor/github.com/prometheus/prometheus/promql/quantile.go index 06775d3ae67..a6851810c5a 100644 --- a/vendor/github.com/prometheus/prometheus/promql/quantile.go +++ b/vendor/github.com/prometheus/prometheus/promql/quantile.go @@ -51,20 +51,22 @@ var excludedLabels = []string{ labels.BucketLabel, } -type bucket struct { - upperBound float64 - count float64 +// Bucket represents a bucket of a classic histogram. It is used internally by the promql +// package, but it is nevertheless exported for potential use in other PromQL engines. +type Bucket struct { + UpperBound float64 + Count float64 } -// buckets implements sort.Interface. -type buckets []bucket +// Buckets implements sort.Interface. +type Buckets []Bucket type metricWithBuckets struct { metric labels.Labels - buckets buckets + buckets Buckets } -// bucketQuantile calculates the quantile 'q' based on the given buckets. The +// BucketQuantile calculates the quantile 'q' based on the given buckets. The // buckets will be sorted by upperBound by this function (i.e. no sorting // needed before calling this function). The quantile value is interpolated // assuming a linear distribution within a bucket. However, if the quantile @@ -95,7 +97,14 @@ type metricWithBuckets struct { // and another bool to indicate if small differences between buckets (that // are likely artifacts of floating point precision issues) have been // ignored. -func bucketQuantile(q float64, buckets buckets) (float64, bool, bool) { +// +// Generically speaking, BucketQuantile is for calculating the +// histogram_quantile() of classic histograms. See also: HistogramQuantile +// for native histograms. +// +// BucketQuantile is exported as a useful quantile function over a set of +// given buckets. It may be used by other PromQL engine implementations. +func BucketQuantile(q float64, buckets Buckets) (float64, bool, bool) { if math.IsNaN(q) { return math.NaN(), false, false } @@ -105,17 +114,17 @@ func bucketQuantile(q float64, buckets buckets) (float64, bool, bool) { if q > 1 { return math.Inf(+1), false, false } - slices.SortFunc(buckets, func(a, b bucket) int { + slices.SortFunc(buckets, func(a, b Bucket) int { // We don't expect the bucket boundary to be a NaN. 
- if a.upperBound < b.upperBound { + if a.UpperBound < b.UpperBound { return -1 } - if a.upperBound > b.upperBound { + if a.UpperBound > b.UpperBound { return +1 } return 0 }) - if !math.IsInf(buckets[len(buckets)-1].upperBound, +1) { + if !math.IsInf(buckets[len(buckets)-1].UpperBound, +1) { return math.NaN(), false, false } @@ -125,33 +134,33 @@ func bucketQuantile(q float64, buckets buckets) (float64, bool, bool) { if len(buckets) < 2 { return math.NaN(), false, false } - observations := buckets[len(buckets)-1].count + observations := buckets[len(buckets)-1].Count if observations == 0 { return math.NaN(), false, false } rank := q * observations - b := sort.Search(len(buckets)-1, func(i int) bool { return buckets[i].count >= rank }) + b := sort.Search(len(buckets)-1, func(i int) bool { return buckets[i].Count >= rank }) if b == len(buckets)-1 { - return buckets[len(buckets)-2].upperBound, forcedMonotonic, fixedPrecision + return buckets[len(buckets)-2].UpperBound, forcedMonotonic, fixedPrecision } - if b == 0 && buckets[0].upperBound <= 0 { - return buckets[0].upperBound, forcedMonotonic, fixedPrecision + if b == 0 && buckets[0].UpperBound <= 0 { + return buckets[0].UpperBound, forcedMonotonic, fixedPrecision } var ( bucketStart float64 - bucketEnd = buckets[b].upperBound - count = buckets[b].count + bucketEnd = buckets[b].UpperBound + count = buckets[b].Count ) if b > 0 { - bucketStart = buckets[b-1].upperBound - count -= buckets[b-1].count - rank -= buckets[b-1].count + bucketStart = buckets[b-1].UpperBound + count -= buckets[b-1].Count + rank -= buckets[b-1].Count } return bucketStart + (bucketEnd-bucketStart)*(rank/count), forcedMonotonic, fixedPrecision } -// histogramQuantile calculates the quantile 'q' based on the given histogram. +// HistogramQuantile calculates the quantile 'q' based on the given histogram. // // For custom buckets, the result is interpolated linearly, i.e. it is assumed // the observations are uniformly distributed within each bucket. (This is a @@ -186,7 +195,13 @@ func bucketQuantile(q float64, buckets buckets) (float64, bool, bool) { // If q>1, +Inf is returned. // // If q is NaN, NaN is returned. -func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 { +// +// HistogramQuantile is for calculating the histogram_quantile() of native +// histograms. See also: BucketQuantile for classic histograms. +// +// HistogramQuantile is exported as it may be used by other PromQL engine +// implementations. +func HistogramQuantile(q float64, h *histogram.FloatHistogram) float64 { if q < 0 { return math.Inf(-1) } @@ -297,11 +312,11 @@ func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 { return -math.Exp2(logUpper + (logLower-logUpper)*(1-fraction)) } -// histogramFraction calculates the fraction of observations between the +// HistogramFraction calculates the fraction of observations between the // provided lower and upper bounds, based on the provided histogram. // -// histogramFraction is in a certain way the inverse of histogramQuantile. If -// histogramQuantile(0.9, h) returns 123.4, then histogramFraction(-Inf, 123.4, h) +// HistogramFraction is in a certain way the inverse of histogramQuantile. If +// HistogramQuantile(0.9, h) returns 123.4, then HistogramFraction(-Inf, 123.4, h) // returns 0.9. // // The same notes with regard to interpolation and assumptions about the zero @@ -328,7 +343,10 @@ func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 { // If lower or upper is NaN, NaN is returned. 
// // If lower >= upper and the histogram has at least 1 observation, zero is returned. -func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float64 { +// +// HistogramFraction is exported as it may be used by other PromQL engine +// implementations. +func HistogramFraction(lower, upper float64, h *histogram.FloatHistogram) float64 { if h.Count == 0 || math.IsNaN(lower) || math.IsNaN(upper) { return math.NaN() } @@ -434,12 +452,12 @@ func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float6 // coalesceBuckets merges buckets with the same upper bound. // // The input buckets must be sorted. -func coalesceBuckets(buckets buckets) buckets { +func coalesceBuckets(buckets Buckets) Buckets { last := buckets[0] i := 0 for _, b := range buckets[1:] { - if b.upperBound == last.upperBound { - last.count += b.count + if b.UpperBound == last.UpperBound { + last.Count += b.Count } else { buckets[i] = last last = b @@ -476,11 +494,11 @@ func coalesceBuckets(buckets buckets) buckets { // // We return a bool to indicate if this monotonicity was forced or not, and // another bool to indicate if small deltas were ignored or not. -func ensureMonotonicAndIgnoreSmallDeltas(buckets buckets, tolerance float64) (bool, bool) { +func ensureMonotonicAndIgnoreSmallDeltas(buckets Buckets, tolerance float64) (bool, bool) { var forcedMonotonic, fixedPrecision bool - prev := buckets[0].count + prev := buckets[0].Count for i := 1; i < len(buckets); i++ { - curr := buckets[i].count // Assumed always positive. + curr := buckets[i].Count // Assumed always positive. if curr == prev { // No correction needed if the counts are identical between buckets. continue @@ -489,14 +507,14 @@ func ensureMonotonicAndIgnoreSmallDeltas(buckets buckets, tolerance float64) (bo // Silently correct numerically insignificant differences from floating // point precision errors, regardless of direction. // Do not update the 'prev' value as we are ignoring the difference. - buckets[i].count = prev + buckets[i].Count = prev fixedPrecision = true continue } if curr < prev { // Force monotonicity by removing any decreases regardless of magnitude. // Do not update the 'prev' value as we are ignoring the decrease. 
-			buckets[i].count = prev
+			buckets[i].Count = prev
 			forcedMonotonic = true
 			continue
 		}
 	}
diff --git a/vendor/github.com/prometheus/prometheus/scrape/scrape.go b/vendor/github.com/prometheus/prometheus/scrape/scrape.go
index c5e34498209..5c6063fa586 100644
--- a/vendor/github.com/prometheus/prometheus/scrape/scrape.go
+++ b/vendor/github.com/prometheus/prometheus/scrape/scrape.go
@@ -330,6 +330,8 @@ func (sp *scrapePool) restartLoops(reuseCache bool) {
 		trackTimestampsStaleness = sp.config.TrackTimestampsStaleness
 		mrc                      = sp.config.MetricRelabelConfigs
 		fallbackScrapeProtocol   = sp.config.ScrapeFallbackProtocol.HeaderMediaType()
+		alwaysScrapeClassicHist  = sp.config.AlwaysScrapeClassicHistograms
+		convertClassicHistToNHCB = sp.config.ConvertClassicHistogramsToNHCB
 	)
 	validationScheme := model.UTF8Validation
@@ -350,12 +352,12 @@ func (sp *scrapePool) restartLoops(reuseCache bool) {
 		}
 		t := sp.activeTargets[fp]
-		interval, timeout, err := t.intervalAndTimeout(interval, timeout)
+		targetInterval, targetTimeout, err := t.intervalAndTimeout(interval, timeout)
 		var (
 			s = &targetScraper{
 				Target:               t,
 				client:               sp.client,
-				timeout:              timeout,
+				timeout:              targetTimeout,
 				bodySizeLimit:        bodySizeLimit,
 				acceptHeader:         acceptHeader(sp.config.ScrapeProtocols, validationScheme),
 				acceptEncodingHeader: acceptEncodingHeader(enableCompression),
@@ -373,10 +375,12 @@ func (sp *scrapePool) restartLoops(reuseCache bool) {
 				trackTimestampsStaleness: trackTimestampsStaleness,
 				mrc:                      mrc,
 				cache:                    cache,
-				interval:                 interval,
-				timeout:                  timeout,
+				interval:                 targetInterval,
+				timeout:                  targetTimeout,
 				validationScheme:         validationScheme,
 				fallbackScrapeProtocol:   fallbackScrapeProtocol,
+				alwaysScrapeClassicHist:  alwaysScrapeClassicHist,
+				convertClassicHistToNHCB: convertClassicHistToNHCB,
 			})
 		)
 		if err != nil {
@@ -1421,7 +1425,7 @@ func (sl *scrapeLoop) scrapeAndReport(last, appendTime time.Time, errc chan<- er
 		sl.l.Debug("Scrape failed", "err", scrapeErr)
 		sl.scrapeFailureLoggerMtx.RLock()
 		if sl.scrapeFailureLogger != nil {
-			sl.scrapeFailureLogger.Error(scrapeErr.Error())
+			sl.scrapeFailureLogger.Log(context.Background(), slog.LevelError, scrapeErr.Error())
 		}
 		sl.scrapeFailureLoggerMtx.RUnlock()
 		if errc != nil {
@@ -1552,7 +1556,34 @@ type appendErrors struct {
 	numExemplarOutOfOrder int
 }
 
+// Update the stale markers.
+func (sl *scrapeLoop) updateStaleMarkers(app storage.Appender, defTime int64) (err error) {
+	sl.cache.forEachStale(func(lset labels.Labels) bool {
+		// Series no longer exposed, mark it stale.
+		app.SetOptions(&storage.AppendOptions{DiscardOutOfOrder: true})
+		_, err = app.Append(0, lset, defTime, math.Float64frombits(value.StaleNaN))
+		app.SetOptions(nil)
+		switch {
+		case errors.Is(err, storage.ErrOutOfOrderSample), errors.Is(err, storage.ErrDuplicateSampleForTimestamp):
+			// Do not count these in logging, as this is expected if a target
+			// goes away and comes back again with a new scrape loop.
+			err = nil
+		}
+		return err == nil
+	})
+	return
+}
+
 func (sl *scrapeLoop) append(app storage.Appender, b []byte, contentType string, ts time.Time) (total, added, seriesAdded int, err error) {
+	defTime := timestamp.FromTime(ts)
+
+	if len(b) == 0 {
+		// Empty scrape. Just update the stale markers and swap the cache (but don't flush it).
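+		// An empty scrape body (usually) signals a failed scrape, so every
+		// series seen in the previous scrape is given a stale marker below.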
+ err = sl.updateStaleMarkers(app, defTime) + sl.cache.iterDone(false) + return + } + p, err := textparse.New(b, contentType, sl.fallbackScrapeProtocol, sl.alwaysScrapeClassicHist, sl.enableCTZeroIngestion, sl.symbolTable) if p == nil { sl.l.Error( @@ -1574,9 +1605,7 @@ func (sl *scrapeLoop) append(app storage.Appender, b []byte, contentType string, "err", err, ) } - var ( - defTime = timestamp.FromTime(ts) appErrs = appendErrors{} sampleLimitErr error bucketLimitErr error @@ -1617,9 +1646,8 @@ func (sl *scrapeLoop) append(app storage.Appender, b []byte, contentType string, if err != nil { return } - // Only perform cache cleaning if the scrape was not empty. - // An empty scrape (usually) is used to indicate a failed scrape. - sl.cache.iterDone(len(b) > 0) + // Flush and swap the cache as the scrape was non-empty. + sl.cache.iterDone(true) }() loop: @@ -1862,19 +1890,7 @@ loop: sl.l.Warn("Error on ingesting out-of-order exemplars", "num_dropped", appErrs.numExemplarOutOfOrder) } if err == nil { - sl.cache.forEachStale(func(lset labels.Labels) bool { - // Series no longer exposed, mark it stale. - app.SetOptions(&storage.AppendOptions{DiscardOutOfOrder: true}) - _, err = app.Append(0, lset, defTime, math.Float64frombits(value.StaleNaN)) - app.SetOptions(nil) - switch { - case errors.Is(err, storage.ErrOutOfOrderSample), errors.Is(err, storage.ErrDuplicateSampleForTimestamp): - // Do not count these in logging, as this is expected if a target - // goes away and comes back again with a new scrape loop. - err = nil - } - return err == nil - }) + err = sl.updateStaleMarkers(app, defTime) } return } diff --git a/vendor/github.com/prometheus/prometheus/storage/merge.go b/vendor/github.com/prometheus/prometheus/storage/merge.go index 3144a0f648e..2f7f661adb7 100644 --- a/vendor/github.com/prometheus/prometheus/storage/merge.go +++ b/vendor/github.com/prometheus/prometheus/storage/merge.go @@ -19,7 +19,6 @@ import ( "context" "fmt" "math" - "slices" "sync" "github.com/prometheus/prometheus/model/histogram" @@ -136,13 +135,17 @@ func filterChunkQueriers(qs []ChunkQuerier) []ChunkQuerier { // Select returns a set of series that matches the given label matchers. func (q *mergeGenericQuerier) Select(ctx context.Context, sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) genericSeriesSet { seriesSets := make([]genericSeriesSet, 0, len(q.queriers)) + var limit int + if hints != nil { + limit = hints.Limit + } if !q.concurrentSelect { for _, querier := range q.queriers { // We need to sort for merge to work. seriesSets = append(seriesSets, querier.Select(ctx, true, hints, matchers...)) } return &lazyGenericSeriesSet{init: func() (genericSeriesSet, bool) { - s := newGenericMergeSeriesSet(seriesSets, q.mergeFn) + s := newGenericMergeSeriesSet(seriesSets, limit, q.mergeFn) return s, s.Next() }} } @@ -175,7 +178,7 @@ func (q *mergeGenericQuerier) Select(ctx context.Context, sortSeries bool, hints seriesSets = append(seriesSets, r) } return &lazyGenericSeriesSet{init: func() (genericSeriesSet, bool) { - s := newGenericMergeSeriesSet(seriesSets, q.mergeFn) + s := newGenericMergeSeriesSet(seriesSets, limit, q.mergeFn) return s, s.Next() }} } @@ -193,35 +196,44 @@ func (l labelGenericQueriers) SplitByHalf() (labelGenericQueriers, labelGenericQ // If matchers are specified the returned result set is reduced // to label values of metrics matching the matchers. 
func (q *mergeGenericQuerier) LabelValues(ctx context.Context, name string, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) { - res, ws, err := q.lvals(ctx, q.queriers, name, hints, matchers...) + res, ws, err := q.mergeResults(q.queriers, hints, func(q LabelQuerier) ([]string, annotations.Annotations, error) { + return q.LabelValues(ctx, name, hints, matchers...) + }) if err != nil { return nil, nil, fmt.Errorf("LabelValues() from merge generic querier for label %s: %w", name, err) } return res, ws, nil } -// lvals performs merge sort for LabelValues from multiple queriers. -func (q *mergeGenericQuerier) lvals(ctx context.Context, lq labelGenericQueriers, n string, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) { +// mergeResults performs merge sort on the results of invoking the resultsFn against multiple queriers. +func (q *mergeGenericQuerier) mergeResults(lq labelGenericQueriers, hints *LabelHints, resultsFn func(q LabelQuerier) ([]string, annotations.Annotations, error)) ([]string, annotations.Annotations, error) { if lq.Len() == 0 { return nil, nil, nil } if lq.Len() == 1 { - return lq.Get(0).LabelValues(ctx, n, hints, matchers...) + return resultsFn(lq.Get(0)) } a, b := lq.SplitByHalf() var ws annotations.Annotations - s1, w, err := q.lvals(ctx, a, n, hints, matchers...) + s1, w, err := q.mergeResults(a, hints, resultsFn) ws.Merge(w) if err != nil { return nil, ws, err } - s2, ws, err := q.lvals(ctx, b, n, hints, matchers...) + s2, w, err := q.mergeResults(b, hints, resultsFn) ws.Merge(w) if err != nil { return nil, ws, err } - return mergeStrings(s1, s2), ws, nil + + s1 = truncateToLimit(s1, hints) + s2 = truncateToLimit(s2, hints) + + merged := mergeStrings(s1, s2) + merged = truncateToLimit(merged, hints) + + return merged, ws, nil } func mergeStrings(a, b []string) []string { @@ -253,33 +265,13 @@ func mergeStrings(a, b []string) []string { // LabelNames returns all the unique label names present in all queriers in sorted order. func (q *mergeGenericQuerier) LabelNames(ctx context.Context, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) { - var ( - labelNamesMap = make(map[string]struct{}) - warnings annotations.Annotations - ) - for _, querier := range q.queriers { - names, wrn, err := querier.LabelNames(ctx, hints, matchers...) - if wrn != nil { - // TODO(bwplotka): We could potentially wrap warnings. - warnings.Merge(wrn) - } - if err != nil { - return nil, nil, fmt.Errorf("LabelNames() from merge generic querier: %w", err) - } - for _, name := range names { - labelNamesMap[name] = struct{}{} - } - } - if len(labelNamesMap) == 0 { - return nil, warnings, nil - } - - labelNames := make([]string, 0, len(labelNamesMap)) - for name := range labelNamesMap { - labelNames = append(labelNames, name) + res, ws, err := q.mergeResults(q.queriers, hints, func(q LabelQuerier) ([]string, annotations.Annotations, error) { + return q.LabelNames(ctx, hints, matchers...) + }) + if err != nil { + return nil, nil, fmt.Errorf("LabelNames() from merge generic querier: %w", err) } - slices.Sort(labelNames) - return labelNames, warnings, nil + return res, ws, nil } // Close releases the resources of the generic querier. 
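The hunks above funnel both `LabelValues` and `LabelNames` through the shared `mergeResults` helper, which truncates each half to `hints.Limit` before the merge-sort and the merged slice once more afterwards. A minimal sketch of exercising that limit through the exported API follows; `q1` and `q2` are placeholders for queriers obtained from real storages, and the label name is illustrative:

```go
package example

import (
	"context"
	"fmt"

	"github.com/prometheus/prometheus/storage"
)

// printJobValues asks a merge querier for at most ten values of the "job"
// label. With the change above, the Limit hint is honoured during merging
// rather than only at the caller.
func printJobValues(ctx context.Context, q1, q2 storage.Querier) error {
	merged := storage.NewMergeQuerier([]storage.Querier{q1, q2}, nil, storage.ChainedSeriesMerge)
	vals, _, err := merged.LabelValues(ctx, "job", &storage.LabelHints{Limit: 10})
	if err != nil {
		return err
	}
	fmt.Println(vals) // at most 10 sorted, deduplicated values
	return nil
}
```

Truncating both halves before merging keeps the recursive merge bounded by the limit without changing the sorted, deduplicated prefix that is returned.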
@@ -293,17 +285,25 @@ func (q *mergeGenericQuerier) Close() error {
 	return errs.Err()
 }
 
+func truncateToLimit(s []string, hints *LabelHints) []string {
+	if hints != nil && hints.Limit > 0 && len(s) > hints.Limit {
+		s = s[:hints.Limit]
+	}
+	return s
+}
+
 // VerticalSeriesMergeFunc returns merged series implementation that merges series with same labels together.
 // It has to handle time-overlapped series as well.
 type VerticalSeriesMergeFunc func(...Series) Series
 
 // NewMergeSeriesSet returns a new SeriesSet that merges many SeriesSets together.
-func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) SeriesSet {
+// If limit is set, the SeriesSet will be limited up to the limit. 0 means disabled.
+func NewMergeSeriesSet(sets []SeriesSet, limit int, mergeFunc VerticalSeriesMergeFunc) SeriesSet {
 	genericSets := make([]genericSeriesSet, 0, len(sets))
 	for _, s := range sets {
 		genericSets = append(genericSets, &genericSeriesSetAdapter{s})
 	}
-	return &seriesSetAdapter{newGenericMergeSeriesSet(genericSets, (&seriesMergerAdapter{VerticalSeriesMergeFunc: mergeFunc}).Merge)}
+	return &seriesSetAdapter{newGenericMergeSeriesSet(genericSets, limit, (&seriesMergerAdapter{VerticalSeriesMergeFunc: mergeFunc}).Merge)}
 }
 
 // VerticalChunkSeriesMergeFunc returns merged chunk series implementation that merges potentially time-overlapping
@@ -313,12 +313,12 @@ func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) Seri
 type VerticalChunkSeriesMergeFunc func(...ChunkSeries) ChunkSeries
 
 // NewMergeChunkSeriesSet returns a new ChunkSeriesSet that merges many SeriesSet together.
-func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet {
+func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, limit int, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet {
 	genericSets := make([]genericSeriesSet, 0, len(sets))
 	for _, s := range sets {
 		genericSets = append(genericSets, &genericChunkSeriesSetAdapter{s})
 	}
-	return &chunkSeriesSetAdapter{newGenericMergeSeriesSet(genericSets, (&chunkSeriesMergerAdapter{VerticalChunkSeriesMergeFunc: mergeFunc}).Merge)}
+	return &chunkSeriesSetAdapter{newGenericMergeSeriesSet(genericSets, limit, (&chunkSeriesMergerAdapter{VerticalChunkSeriesMergeFunc: mergeFunc}).Merge)}
 }
 
 // genericMergeSeriesSet implements genericSeriesSet.
@@ -326,9 +326,11 @@ type genericMergeSeriesSet struct {
 	currentLabels labels.Labels
 	mergeFunc     genericSeriesMergeFunc
 
-	heap        genericSeriesSetHeap
-	sets        []genericSeriesSet
-	currentSets []genericSeriesSet
+	heap         genericSeriesSetHeap
+	sets         []genericSeriesSet
+	currentSets  []genericSeriesSet
+	seriesLimit  int
+	mergedSeries int // tracks the total number of series merged and returned.
 }
 
 // newGenericMergeSeriesSet returns a new genericSeriesSet that merges (and deduplicates)
@@ -336,7 +338,8 @@ type genericMergeSeriesSet struct {
 // Each series set must return its series in labels order, otherwise
 // merged series set will be incorrect.
 // Overlapped situations are merged using provided mergeFunc.
-func newGenericMergeSeriesSet(sets []genericSeriesSet, mergeFunc genericSeriesMergeFunc) genericSeriesSet {
+// If seriesLimit is set, only limited series are returned.
+func newGenericMergeSeriesSet(sets []genericSeriesSet, seriesLimit int, mergeFunc genericSeriesMergeFunc) genericSeriesSet { if len(sets) == 1 { return sets[0] } @@ -356,13 +359,19 @@ func newGenericMergeSeriesSet(sets []genericSeriesSet, mergeFunc genericSeriesMe } } return &genericMergeSeriesSet{ - mergeFunc: mergeFunc, - sets: sets, - heap: h, + mergeFunc: mergeFunc, + sets: sets, + heap: h, + seriesLimit: seriesLimit, } } func (c *genericMergeSeriesSet) Next() bool { + if c.seriesLimit > 0 && c.mergedSeries >= c.seriesLimit { + // Exit early if seriesLimit is set. + return false + } + // Run in a loop because the "next" series sets may not be valid anymore. // If, for the current label set, all the next series sets come from // failed remote storage sources, we want to keep trying with the next label set. @@ -393,6 +402,7 @@ func (c *genericMergeSeriesSet) Next() bool { break } } + c.mergedSeries++ return true } diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/client.go b/vendor/github.com/prometheus/prometheus/storage/remote/client.go index 23775122e56..ad766be9bf4 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/client.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/client.go @@ -30,8 +30,8 @@ import ( "github.com/prometheus/client_golang/prometheus" config_util "github.com/prometheus/common/config" "github.com/prometheus/common/model" - "github.com/prometheus/common/sigv4" "github.com/prometheus/common/version" + "github.com/prometheus/sigv4" "go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace" "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" "go.opentelemetry.io/otel" @@ -145,6 +145,7 @@ type ClientConfig struct { RetryOnRateLimit bool WriteProtoMsg config.RemoteWriteProtoMsg ChunkedReadLimit uint64 + RoundRobinDNS bool } // ReadClient will request the STREAMED_XOR_CHUNKS method of remote read but can @@ -180,7 +181,11 @@ func NewReadClient(name string, conf *ClientConfig) (ReadClient, error) { // NewWriteClient creates a new client for remote write. func NewWriteClient(name string, conf *ClientConfig) (WriteClient, error) { - httpClient, err := config_util.NewClientFromConfig(conf.HTTPClientConfig, "remote_storage_write_client") + var httpOpts []config_util.HTTPClientOption + if conf.RoundRobinDNS { + httpOpts = []config_util.HTTPClientOption{config_util.WithDialContextFunc(newDialContextWithRoundRobinDNS().dialContextFn())} + } + httpClient, err := config_util.NewClientFromConfig(conf.HTTPClientConfig, "remote_storage_write_client", httpOpts...) if err != nil { return nil, err } diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/dial_context.go b/vendor/github.com/prometheus/prometheus/storage/remote/dial_context.go new file mode 100644 index 00000000000..b842728e4ce --- /dev/null +++ b/vendor/github.com/prometheus/prometheus/storage/remote/dial_context.go @@ -0,0 +1,62 @@ +// Copyright 2024 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package remote + +import ( + "context" + "math/rand" + "net" + "net/http" + "time" + + "github.com/prometheus/common/config" +) + +type hostResolver interface { + LookupHost(context.Context, string) ([]string, error) +} + +type dialContextWithRoundRobinDNS struct { + dialContext config.DialContextFunc + resolver hostResolver + rand *rand.Rand +} + +// newDialContextWithRoundRobinDNS creates a new dialContextWithRoundRobinDNS. +// We discourage creating new instances of struct dialContextWithRoundRobinDNS by explicitly setting its members, +// except for testing purposes, and recommend using newDialContextWithRoundRobinDNS. +func newDialContextWithRoundRobinDNS() *dialContextWithRoundRobinDNS { + return &dialContextWithRoundRobinDNS{ + dialContext: http.DefaultTransport.(*http.Transport).DialContext, + resolver: net.DefaultResolver, + rand: rand.New(rand.NewSource(time.Now().Unix())), + } +} + +func (dc *dialContextWithRoundRobinDNS) dialContextFn() config.DialContextFunc { + return func(ctx context.Context, network, addr string) (net.Conn, error) { + host, port, err := net.SplitHostPort(addr) + if err != nil { + return dc.dialContext(ctx, network, addr) + } + + addrs, err := dc.resolver.LookupHost(ctx, host) + if err != nil || len(addrs) == 0 { + return dc.dialContext(ctx, network, addr) + } + + randomAddr := net.JoinHostPort(addrs[dc.rand.Intn(len(addrs))], port) + return dc.dialContext(ctx, network, randomAddr) + } +} diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus/normalize_label.go b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus/normalize_label.go index b928e6888d1..b51b5e945a3 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus/normalize_label.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus/normalize_label.go @@ -29,15 +29,15 @@ import ( // // Labels that start with non-letter rune will be prefixed with "key_". // An exception is made for double-underscores which are allowed. -func NormalizeLabel(label string, allowUTF8 bool) string { - // Trivial case - if len(label) == 0 || allowUTF8 { +func NormalizeLabel(label string) string { + // Trivial case. + if len(label) == 0 { return label } label = strutil.SanitizeLabelName(label) - // If label starts with a number, prepend with "key_" + // If label starts with a number, prepend with "key_". if unicode.IsDigit(rune(label[0])) { label = "key_" + label } else if strings.HasPrefix(label, "_") && !strings.HasPrefix(label, "__") { diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/helper.go b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/helper.go index d08de7feced..1f9c8b6570c 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/helper.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/helper.go @@ -158,7 +158,10 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting // map ensures no duplicate label names. 
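
The new `dial_context.go` above degrades gracefully: if the address is not `host:port`, or DNS resolution fails, it dials the original address unchanged; otherwise it dials a randomly chosen resolved address so connections spread across A records. The selection step, extracted into a standalone sketch (the `pickAddr` helper is illustrative, not part of the vendored file):

```go
// Standalone sketch of the round-robin DNS idea behind
// newDialContextWithRoundRobinDNS: resolve the host, then pick one of the
// returned addresses at random, falling back to the original address on error.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"net"
)

func pickAddr(ctx context.Context, addr string, r *rand.Rand) string {
	host, port, err := net.SplitHostPort(addr)
	if err != nil {
		return addr // not host:port; dial as-is
	}
	addrs, err := net.DefaultResolver.LookupHost(ctx, host)
	if err != nil || len(addrs) == 0 {
		return addr // resolution failed; fall back to the original address
	}
	return net.JoinHostPort(addrs[r.Intn(len(addrs))], port)
}

func main() {
	r := rand.New(rand.NewSource(1))
	fmt.Println(pickAddr(context.Background(), "localhost:9090", r))
}
```
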
l := make(map[string]string, maxLabelCount) for _, label := range labels { - var finalKey = prometheustranslator.NormalizeLabel(label.Name, settings.AllowUTF8) + finalKey := label.Name + if !settings.AllowUTF8 { + finalKey = prometheustranslator.NormalizeLabel(finalKey) + } if existingValue, alreadyExists := l[finalKey]; alreadyExists { l[finalKey] = existingValue + ";" + label.Value } else { @@ -167,7 +170,10 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting } for _, lbl := range promotedAttrs { - normalized := prometheustranslator.NormalizeLabel(lbl.Name, settings.AllowUTF8) + normalized := lbl.Name + if !settings.AllowUTF8 { + normalized = prometheustranslator.NormalizeLabel(normalized) + } if _, exists := l[normalized]; !exists { l[normalized] = lbl.Value } @@ -205,8 +211,8 @@ func createAttributes(resource pcommon.Resource, attributes pcommon.Map, setting log.Println("label " + name + " is overwritten. Check if Prometheus reserved labels are used.") } // internal labels should be maintained - if !(len(name) > 4 && name[:2] == "__" && name[len(name)-2:] == "__") { - name = prometheustranslator.NormalizeLabel(name, settings.AllowUTF8) + if !settings.AllowUTF8 && !(len(name) > 4 && name[:2] == "__" && name[len(name)-2:] == "__") { + name = prometheustranslator.NormalizeLabel(name) } l[name] = extras[i+1] } @@ -589,10 +595,9 @@ const defaultIntervalForStartTimestamps = int64(300_000) // handleStartTime adds a zero sample at startTs only if startTs is within validIntervalForStartTimestamps of the sample timestamp. // The reason for doing this is that PRW v1 doesn't support Created Timestamps. After switching to PRW v2's direct CT support, // make use of its direct support fort Created Timestamps instead. -// See https://github.com/prometheus/prometheus/issues/14600 for context. // See https://opentelemetry.io/docs/specs/otel/metrics/data-model/#resets-and-gaps to know more about how OTel handles // resets for cumulative metrics. -func (c *PrometheusConverter) handleStartTime(startTs, ts int64, labels []prompb.Label, settings Settings, typ string, val float64, logger *slog.Logger) { +func (c *PrometheusConverter) handleStartTime(startTs, ts int64, labels []prompb.Label, settings Settings, typ string, value float64, logger *slog.Logger) { if !settings.EnableCreatedTimestampZeroIngestion { return } @@ -614,9 +619,10 @@ func (c *PrometheusConverter) handleStartTime(startTs, ts int64, labels []prompb return } - logger.Debug("adding zero value at start_ts", "type", typ, "labels", labelsStringer(labels), "start_ts", startTs, "sample_ts", ts, "sample_value", val) + logger.Debug("adding zero value at start_ts", "type", typ, "labels", labelsStringer(labels), "start_ts", startTs, "sample_ts", ts, "sample_value", value) - c.addSample(&prompb.Sample{Timestamp: startTs, Value: math.Float64frombits(value.QuietZeroNaN)}, labels) + // See https://github.com/prometheus/prometheus/issues/14600 for context. + c.addSample(&prompb.Sample{Timestamp: startTs}, labels) } // handleHistogramStartTime similar to the method above but for native histograms.. @@ -675,6 +681,10 @@ func addResourceTargetInfo(resource pcommon.Resource, settings Settings, timesta } settings.PromoteResourceAttributes = nil + if settings.KeepIdentifyingResourceAttributes { + // Do not pass identifying attributes as ignoreAttrs below. 
+ identifyingAttrs = nil + } labels := createAttributes(resource, attributes, settings, identifyingAttrs, false, model.MetricNameLabel, name) haveIdentifier := false for _, l := range labels { diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/metrics_to_prw.go b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/metrics_to_prw.go index 3dde6ce6a17..7f0cc04a106 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/metrics_to_prw.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/metrics_to_prw.go @@ -35,14 +35,17 @@ import ( ) type Settings struct { - Namespace string - ExternalLabels map[string]string - DisableTargetInfo bool - ExportCreatedMetric bool - AddMetricSuffixes bool - SendMetadata bool - AllowUTF8 bool - PromoteResourceAttributes []string + Namespace string + ExternalLabels map[string]string + DisableTargetInfo bool + ExportCreatedMetric bool + AddMetricSuffixes bool + SendMetadata bool + AllowUTF8 bool + PromoteResourceAttributes []string + KeepIdentifyingResourceAttributes bool + + // Mimir specifics. EnableCreatedTimestampZeroIngestion bool ValidIntervalCreatedTimestampZeroIngestion time.Duration } @@ -58,6 +61,7 @@ type PrometheusConverter struct { unique map[uint64]*prompb.TimeSeries conflicts map[uint64][]*prompb.TimeSeries everyN everyNTimes + metadata []prompb.MetricMetadata } func NewPrometheusConverter() *PrometheusConverter { @@ -71,6 +75,16 @@ func NewPrometheusConverter() *PrometheusConverter { func (c *PrometheusConverter) FromMetrics(ctx context.Context, md pmetric.Metrics, settings Settings, logger *slog.Logger) (annots annotations.Annotations, errs error) { c.everyN = everyNTimes{n: 128} resourceMetricsSlice := md.ResourceMetrics() + + numMetrics := 0 + for i := 0; i < resourceMetricsSlice.Len(); i++ { + scopeMetricsSlice := resourceMetricsSlice.At(i).ScopeMetrics() + for j := 0; j < scopeMetricsSlice.Len(); j++ { + numMetrics += scopeMetricsSlice.At(j).Metrics().Len() + } + } + c.metadata = make([]prompb.MetricMetadata, 0, numMetrics) + for i := 0; i < resourceMetricsSlice.Len(); i++ { resourceMetrics := resourceMetricsSlice.At(i) resource := resourceMetrics.Resource() @@ -97,6 +111,12 @@ func (c *PrometheusConverter) FromMetrics(ctx context.Context, md pmetric.Metric } promName := prometheustranslator.BuildCompliantName(metric, settings.Namespace, settings.AddMetricSuffixes, settings.AllowUTF8) + c.metadata = append(c.metadata, prompb.MetricMetadata{ + Type: otelMetricTypeToPromMetricType(metric), + MetricFamilyName: promName, + Help: metric.Description(), + Unit: metric.Unit(), + }) // handle individual metrics based on type //exhaustive:enforce diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/otlp_to_openmetrics_metadata.go b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/otlp_to_openmetrics_metadata.go index b423d2cc6e4..359fc525220 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/otlp_to_openmetrics_metadata.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/otlp_to_openmetrics_metadata.go @@ -20,7 +20,6 @@ import ( "go.opentelemetry.io/collector/pdata/pmetric" "github.com/prometheus/prometheus/prompb" - prometheustranslator 
"github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus" ) func otelMetricTypeToPromMetricType(otelMetric pmetric.Metric) prompb.MetricMetadata_MetricType { @@ -42,36 +41,3 @@ func otelMetricTypeToPromMetricType(otelMetric pmetric.Metric) prompb.MetricMeta } return prompb.MetricMetadata_UNKNOWN } - -func OtelMetricsToMetadata(md pmetric.Metrics, addMetricSuffixes, allowUTF8 bool) []*prompb.MetricMetadata { - resourceMetricsSlice := md.ResourceMetrics() - - metadataLength := 0 - for i := 0; i < resourceMetricsSlice.Len(); i++ { - scopeMetricsSlice := resourceMetricsSlice.At(i).ScopeMetrics() - for j := 0; j < scopeMetricsSlice.Len(); j++ { - metadataLength += scopeMetricsSlice.At(j).Metrics().Len() - } - } - - var metadata = make([]*prompb.MetricMetadata, 0, metadataLength) - for i := 0; i < resourceMetricsSlice.Len(); i++ { - resourceMetrics := resourceMetricsSlice.At(i) - scopeMetricsSlice := resourceMetrics.ScopeMetrics() - - for j := 0; j < scopeMetricsSlice.Len(); j++ { - scopeMetrics := scopeMetricsSlice.At(j) - for k := 0; k < scopeMetrics.Metrics().Len(); k++ { - metric := scopeMetrics.Metrics().At(k) - entry := prompb.MetricMetadata{ - Type: otelMetricTypeToPromMetricType(metric), - MetricFamilyName: prometheustranslator.BuildCompliantName(metric, "", addMetricSuffixes, allowUTF8), - Help: metric.Description(), - } - metadata = append(metadata, &entry) - } - } - } - - return metadata -} diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/timeseries.go b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/timeseries.go index fe973761a25..abffbe61054 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/timeseries.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheusremotewrite/timeseries.go @@ -39,3 +39,8 @@ func (c *PrometheusConverter) TimeSeries() []prompb.TimeSeries { return allTS } + +// Metadata returns a slice of the prompb.Metadata that were converted from OTel format. +func (c *PrometheusConverter) Metadata() []prompb.MetricMetadata { + return c.metadata +} diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go b/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go index 9f27c333a6d..f6d6cbc7e91 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go @@ -957,13 +957,6 @@ func (t *QueueManager) Stop() { if t.mcfg.Send { t.metadataWatcher.Stop() } - - // On shutdown, release the strings in the labels from the intern pool. - t.seriesMtx.Lock() - for _, labels := range t.seriesLabels { - t.releaseLabels(labels) - } - t.seriesMtx.Unlock() t.metrics.unregister() } @@ -985,14 +978,6 @@ func (t *QueueManager) StoreSeries(series []record.RefSeries, index int) { continue } lbls := t.builder.Labels() - t.internLabels(lbls) - - // We should not ever be replacing a series labels in the map, but just - // in case we do we need to ensure we do not leak the replaced interned - // strings. 
- if orig, ok := t.seriesLabels[s.Ref]; ok { - t.releaseLabels(orig) - } t.seriesLabels[s.Ref] = lbls } } @@ -1037,7 +1022,6 @@ func (t *QueueManager) SeriesReset(index int) { for k, v := range t.seriesSegmentIndexes { if v < index { delete(t.seriesSegmentIndexes, k) - t.releaseLabels(t.seriesLabels[k]) delete(t.seriesLabels, k) delete(t.seriesMetadata, k) delete(t.droppedSeries, k) @@ -1059,14 +1043,6 @@ func (t *QueueManager) client() WriteClient { return t.storeClient } -func (t *QueueManager) internLabels(lbls labels.Labels) { - lbls.InternStrings(t.interner.intern) -} - -func (t *QueueManager) releaseLabels(ls labels.Labels) { - ls.ReleaseStrings(t.interner.release) -} - // processExternalLabels merges externalLabels into b. If b contains // a label in externalLabels, the value in b wins. func processExternalLabels(b *labels.Builder, externalLabels []labels.Label) { diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/write.go b/vendor/github.com/prometheus/prometheus/storage/remote/write.go index 639f3445209..03630954447 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/write.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/write.go @@ -180,6 +180,7 @@ func (rws *WriteStorage) ApplyConfig(conf *config.Config) error { GoogleIAMConfig: rwConf.GoogleIAMConfig, Headers: rwConf.Headers, RetryOnRateLimit: rwConf.QueueConfig.RetryOnRateLimit, + RoundRobinDNS: rwConf.RoundRobinDNS, }) if err != nil { return err diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go b/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go index e15c01bed7a..49ec44dc086 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go @@ -516,9 +516,12 @@ func (h *otlpWriteHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) { converter := otlptranslator.NewPrometheusConverter() annots, err := converter.FromMetrics(r.Context(), req.Metrics(), otlptranslator.Settings{ - AddMetricSuffixes: true, - AllowUTF8: otlpCfg.TranslationStrategy == config.NoUTF8EscapingWithSuffixes, - PromoteResourceAttributes: otlpCfg.PromoteResourceAttributes, + AddMetricSuffixes: true, + AllowUTF8: otlpCfg.TranslationStrategy == config.NoUTF8EscapingWithSuffixes, + PromoteResourceAttributes: otlpCfg.PromoteResourceAttributes, + KeepIdentifyingResourceAttributes: otlpCfg.KeepIdentifyingResourceAttributes, + + // Mimir specifics. 
EnableCreatedTimestampZeroIngestion: h.enableCTZeroIngestion, ValidIntervalCreatedTimestampZeroIngestion: h.validIntervalCTZeroIngestion, }, h.logger) @@ -532,6 +535,7 @@ func (h *otlpWriteHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) { err = h.rwHandler.write(r.Context(), &prompb.WriteRequest{ Timeseries: converter.TimeSeries(), + Metadata: converter.Metadata(), }) switch { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/float_histogram.go b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/float_histogram.go index 1c7e2c3ace8..0da00dcda44 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/float_histogram.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/float_histogram.go @@ -976,6 +976,7 @@ func (it *floatHistogramIterator) Reset(b []byte) { it.atFloatHistogramCalled = false it.pBuckets, it.nBuckets = nil, nil it.pSpans, it.nSpans = nil, nil + it.customValues = nil } else { it.pBuckets, it.nBuckets = it.pBuckets[:0], it.nBuckets[:0] } @@ -1071,7 +1072,7 @@ func (it *floatHistogramIterator) Next() ValueType { // The case for the 2nd sample with single deltas is implicitly handled correctly with the double delta code, // so we don't need a separate single delta logic for the 2nd sample. - // Recycle bucket and span slices that have not been returned yet. Otherwise, copy them. + // Recycle bucket, span and custom value slices that have not been returned yet. Otherwise, copy them. // We can always recycle the slices for leading and trailing bits as they are // never returned to the caller. if it.atFloatHistogramCalled { @@ -1104,6 +1105,13 @@ func (it *floatHistogramIterator) Next() ValueType { } else { it.nSpans = nil } + if len(it.customValues) > 0 { + newCustomValues := make([]float64, len(it.customValues)) + copy(newCustomValues, it.customValues) + it.customValues = newCustomValues + } else { + it.customValues = nil + } } tDod, err := readVarbitInt(&it.br) diff --git a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/histogram.go b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/histogram.go index bb747e135f0..d2eec6b75ae 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/histogram.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/histogram.go @@ -1075,6 +1075,7 @@ func (it *histogramIterator) Reset(b []byte) { it.atHistogramCalled = false it.pBuckets, it.nBuckets = nil, nil it.pSpans, it.nSpans = nil, nil + it.customValues = nil } else { it.pBuckets = it.pBuckets[:0] it.nBuckets = it.nBuckets[:0] @@ -1082,6 +1083,7 @@ func (it *histogramIterator) Reset(b []byte) { if it.atFloatHistogramCalled { it.atFloatHistogramCalled = false it.pFloatBuckets, it.nFloatBuckets = nil, nil + it.customValues = nil } else { it.pFloatBuckets = it.pFloatBuckets[:0] it.nFloatBuckets = it.nFloatBuckets[:0] @@ -1187,8 +1189,7 @@ func (it *histogramIterator) Next() ValueType { // The case for the 2nd sample with single deltas is implicitly handled correctly with the double delta code, // so we don't need a separate single delta logic for the 2nd sample. - // Recycle bucket and span slices that have not been returned yet. Otherwise, copy them. - // copy them. + // Recycle bucket, span and custom value slices that have not been returned yet. Otherwise, copy them. 
if it.atFloatHistogramCalled || it.atHistogramCalled { if len(it.pSpans) > 0 { newSpans := make([]histogram.Span, len(it.pSpans)) @@ -1204,6 +1205,13 @@ func (it *histogramIterator) Next() ValueType { } else { it.nSpans = nil } + if len(it.customValues) > 0 { + newCustomValues := make([]float64, len(it.customValues)) + copy(newCustomValues, it.customValues) + it.customValues = newCustomValues + } else { + it.customValues = nil + } } if it.atHistogramCalled { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/compact.go b/vendor/github.com/prometheus/prometheus/tsdb/compact.go index 8042dae1962..651736ba005 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/compact.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/compact.go @@ -1149,7 +1149,7 @@ func (c DefaultBlockPopulator) PopulateBlock(ctx context.Context, metrics *Compa if len(sets) > 1 { // Merge series using specified chunk series merger. // The default one is the compacting series merger. - set = storage.NewMergeChunkSeriesSet(sets, mergeFunc) + set = storage.NewMergeChunkSeriesSet(sets, 0, mergeFunc) } // Iterate over all sorted chunk series. @@ -1238,7 +1238,7 @@ func populateSymbols(ctx context.Context, mergeFunc storage.VerticalChunkSeriesM seriesSet := sets[0] if len(sets) > 1 { - seriesSet = storage.NewMergeChunkSeriesSet(sets, mergeFunc) + seriesSet = storage.NewMergeChunkSeriesSet(sets, 0, mergeFunc) } for seriesSet.Next() { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/head_append.go b/vendor/github.com/prometheus/prometheus/tsdb/head_append.go index e16bd73b295..b64607a417e 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/head_append.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/head_append.go @@ -497,7 +497,7 @@ func (s *memSeries) appendable(t int64, v float64, headMaxt, minValidTime, oooTi if s.lastHistogramValue != nil || s.lastFloatHistogramValue != nil { return false, 0, storage.NewDuplicateHistogramToFloatErr(t, v) } - if math.Float64bits(s.lastValue) != math.Float64bits(v) && math.Float64bits(v) != value.QuietZeroNaN { + if math.Float64bits(s.lastValue) != math.Float64bits(v) { return false, 0, storage.NewDuplicateFloatErr(t, s.lastValue, v) } // Sample is identical (ts + value) with most current (highest ts) sample in sampleBuf. @@ -505,10 +505,6 @@ func (s *memSeries) appendable(t int64, v float64, headMaxt, minValidTime, oooTi } } - if math.Float64bits(v) == value.QuietZeroNaN { // Say it's allowed; it will be dropped later in commitSamples. - return true, 0, nil - } - // The sample cannot go in the in-order chunk. Check if it can go in the out-of-order chunk. if oooTimeWindow > 0 && t >= headMaxt-oooTimeWindow { return true, headMaxt - t, nil @@ -1148,8 +1144,6 @@ func (a *headAppender) commitSamples(acc *appenderCommitContext) { switch { case err != nil: // Do nothing here. - case oooSample && math.Float64bits(s.V) == value.QuietZeroNaN: - // No-op: we don't store quiet zeros out-of-order. case oooSample: // Sample is OOO and OOO handling is enabled // and the delta is within the OOO tolerance. 
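
One reason `memSeries.appendable` above compares `math.Float64bits` rather than using `==` (a standalone illustration, not vendored code): bit-level equality recognises a duplicated NaN sample, which `==` cannot, and it tells `+0` and `-0` apart.

```go
// Demonstrates why duplicate-sample detection uses bit equality.
package main

import (
	"fmt"
	"math"
)

func sameSample(a, b float64) bool { return math.Float64bits(a) == math.Float64bits(b) }

func main() {
	nan := math.NaN()
	fmt.Println(nan == nan)           // false: == can never match a repeated NaN
	fmt.Println(sameSample(nan, nan)) // true: the bit patterns are identical

	negZero := math.Copysign(0, -1)
	fmt.Println(sameSample(0.0, negZero)) // false: +0 and -0 differ bitwise
}
```
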
@@ -1196,9 +1190,6 @@ func (a *headAppender) commitSamples(acc *appenderCommitContext) { acc.floatsAppended-- } default: - if math.Float64bits(s.V) == value.QuietZeroNaN { - s.V = 0 - } ok, chunkCreated = series.append(s.T, s.V, a.appendID, acc.appendChunkOpts) if ok { if s.T < acc.inOrderMint { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/head_wal.go b/vendor/github.com/prometheus/prometheus/tsdb/head_wal.go index 6729d770901..b1f3abd1545 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/head_wal.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/head_wal.go @@ -658,32 +658,15 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch concurrency = h.opts.WALReplayConcurrency processors = make([]wblSubsetProcessor, concurrency) - dec record.Decoder shards = make([][]record.RefSample, concurrency) histogramShards = make([][]histogramRecord, concurrency) - decodedCh = make(chan interface{}, 10) - decodeErr error - samplesPool = sync.Pool{ - New: func() interface{} { - return []record.RefSample{} - }, - } - markersPool = sync.Pool{ - New: func() interface{} { - return []record.RefMmapMarker{} - }, - } - histogramSamplesPool = sync.Pool{ - New: func() interface{} { - return []record.RefHistogramSample{} - }, - } - floatHistogramSamplesPool = sync.Pool{ - New: func() interface{} { - return []record.RefFloatHistogramSample{} - }, - } + decodedCh = make(chan interface{}, 10) + decodeErr error + samplesPool zeropool.Pool[[]record.RefSample] + markersPool zeropool.Pool[[]record.RefMmapMarker] + histogramSamplesPool zeropool.Pool[[]record.RefHistogramSample] + floatHistogramSamplesPool zeropool.Pool[[]record.RefFloatHistogramSample] ) defer func() { @@ -713,11 +696,13 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch go func() { defer close(decodedCh) + var err error + dec := record.NewDecoder(syms) for r.Next() { rec := r.Record() switch dec.Type(rec) { case record.Samples: - samples := samplesPool.Get().([]record.RefSample)[:0] + samples := samplesPool.Get()[:0] samples, err = dec.Samples(rec, samples) if err != nil { decodeErr = &wlog.CorruptionErr{ @@ -729,7 +714,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch } decodedCh <- samples case record.MmapMarkers: - markers := markersPool.Get().([]record.RefMmapMarker)[:0] + markers := markersPool.Get()[:0] markers, err = dec.MmapMarkers(rec, markers) if err != nil { decodeErr = &wlog.CorruptionErr{ @@ -741,7 +726,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch } decodedCh <- markers case record.HistogramSamples: - hists := histogramSamplesPool.Get().([]record.RefHistogramSample)[:0] + hists := histogramSamplesPool.Get()[:0] hists, err = dec.HistogramSamples(rec, hists) if err != nil { decodeErr = &wlog.CorruptionErr{ @@ -753,7 +738,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch } decodedCh <- hists case record.FloatHistogramSamples: - hists := floatHistogramSamplesPool.Get().([]record.RefFloatHistogramSample)[:0] + hists := floatHistogramSamplesPool.Get()[:0] hists, err = dec.FloatHistogramSamples(rec, hists) if err != nil { decodeErr = &wlog.CorruptionErr{ @@ -804,7 +789,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch } samples = samples[m:] } - samplesPool.Put(d) + samplesPool.Put(v) case []record.RefMmapMarker: markers := v for _, rm := range markers { @@ -859,7 +844,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms 
*labels.SymbolTable, multiRef map[ch } samples = samples[m:] } - histogramSamplesPool.Put(v) //nolint:staticcheck + histogramSamplesPool.Put(v) case []record.RefFloatHistogramSample: samples := v // We split up the samples into chunks of 5000 samples or less. @@ -891,7 +876,7 @@ func (h *Head) loadWBL(r *wlog.Reader, syms *labels.SymbolTable, multiRef map[ch } samples = samples[m:] } - floatHistogramSamplesPool.Put(v) //nolint:staticcheck + floatHistogramSamplesPool.Put(v) default: panic(fmt.Errorf("unexpected decodedCh type: %T", d)) } diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/index.go b/vendor/github.com/prometheus/prometheus/tsdb/index/index.go index 6a1064b356f..5878d934a9b 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/index/index.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/index/index.go @@ -111,12 +111,6 @@ func newCRC32() hash.Hash32 { return crc32.New(castagnoliTable) } -type symbolCacheEntry struct { - index uint32 - lastValueIndex uint32 - lastValue string -} - type PostingsEncoder func(*encoding.Encbuf, []uint32) error type PostingsDecoder func(encoding.Decbuf) (int, Postings, error) @@ -147,7 +141,7 @@ type Writer struct { symbols *Symbols symbolFile *fileutil.MmapFile lastSymbol string - symbolCache map[string]symbolCacheEntry + symbolCache map[string]uint32 // From symbol to index in table. labelIndexes []labelIndexHashEntry // Label index offsets. labelNames map[string]uint64 // Label names, and their usage. @@ -247,7 +241,7 @@ func NewWriterWithEncoder(ctx context.Context, fn string, encoder PostingsEncode buf1: encoding.Encbuf{B: make([]byte, 0, 1<<22)}, buf2: encoding.Encbuf{B: make([]byte, 0, 1<<22)}, - symbolCache: make(map[string]symbolCacheEntry, 1<<8), + symbolCache: make(map[string]uint32, 1<<16), labelNames: make(map[string]uint64, 1<<8), crc32: newCRC32(), postingsEncoder: encoder, @@ -479,29 +473,16 @@ func (w *Writer) AddSeries(ref storage.SeriesRef, lset labels.Labels, chunks ... w.buf2.PutUvarint(lset.Len()) if err := lset.Validate(func(l labels.Label) error { - var err error - cacheEntry, ok := w.symbolCache[l.Name] - nameIndex := cacheEntry.index + nameIndex, ok := w.symbolCache[l.Name] if !ok { - nameIndex, err = w.symbols.ReverseLookup(l.Name) - if err != nil { - return fmt.Errorf("symbol entry for %q does not exist, %w", l.Name, err) - } + return fmt.Errorf("symbol entry for %q does not exist", l.Name) } w.labelNames[l.Name]++ w.buf2.PutUvarint32(nameIndex) - valueIndex := cacheEntry.lastValueIndex - if !ok || cacheEntry.lastValue != l.Value { - valueIndex, err = w.symbols.ReverseLookup(l.Value) - if err != nil { - return fmt.Errorf("symbol entry for %q does not exist, %w", l.Value, err) - } - w.symbolCache[l.Name] = symbolCacheEntry{ - index: nameIndex, - lastValueIndex: valueIndex, - lastValue: l.Value, - } + valueIndex, ok := w.symbolCache[l.Value] + if !ok { + return fmt.Errorf("symbol entry for %q does not exist", l.Value) } w.buf2.PutUvarint32(valueIndex) return nil @@ -560,6 +541,7 @@ func (w *Writer) AddSymbol(sym string) error { return fmt.Errorf("symbol %q out-of-order", sym) } w.lastSymbol = sym + w.symbolCache[sym] = uint32(w.numSymbols) w.numSymbols++ w.buf1.Reset() w.buf1.PutUvarintStr(sym) @@ -629,10 +611,10 @@ func (w *Writer) writeLabelIndices() error { values := []uint32{} for d.Err() == nil && cnt > 0 { cnt-- - d.Uvarint() // Keycount. - name := d.UvarintBytes() // Label name. - value := yoloString(d.UvarintBytes()) // Label value. - d.Uvarint64() // Offset. + d.Uvarint() // Keycount. 
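
The `zeropool.Pool[T]` swap in `loadWBL` above removes the `interface{}` type assertions the old `sync.Pool` code needed on every `Get`. A toy generic wrapper showing the idea; this is my own sketch under that assumption, and the real `util/zeropool` implementation differs:

```go
// Toy typed pool: Get returns a concrete T (the zero value when the pool is
// empty), so callers can write pool.Get()[:0] instead of asserting types.
package main

import (
	"fmt"
	"sync"
)

type Pool[T any] struct{ p sync.Pool }

func (p *Pool[T]) Get() T {
	v, ok := p.p.Get().(T)
	if !ok {
		var zero T
		return zero // pool empty: e.g. a nil slice, ready for append
	}
	return v
}

func (p *Pool[T]) Put(v T) { p.p.Put(v) }

func main() {
	var samples Pool[[]int]
	buf := samples.Get()[:0] // nil slice: append allocates on first use
	buf = append(buf, 1, 2, 3)
	samples.Put(buf)
	fmt.Println(len(samples.Get())) // typically 3: the buffer is reused
}
```
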
+ name := d.UvarintBytes() // Label name. + value := d.UvarintBytes() // Label value. + d.Uvarint64() // Offset. if len(name) == 0 { continue // All index is ignored. } @@ -645,9 +627,9 @@ func (w *Writer) writeLabelIndices() error { values = values[:0] } current = name - sid, err := w.symbols.ReverseLookup(value) - if err != nil { - return err + sid, ok := w.symbolCache[string(value)] + if !ok { + return fmt.Errorf("symbol entry for %q does not exist", string(value)) } values = append(values, sid) } @@ -919,9 +901,9 @@ func (w *Writer) writePostingsToTmpFiles() error { nameSymbols := map[uint32]string{} for _, name := range batchNames { - sid, err := w.symbols.ReverseLookup(name) - if err != nil { - return err + sid, ok := w.symbolCache[name] + if !ok { + return fmt.Errorf("symbol entry for %q does not exist", name) } nameSymbols[sid] = name } @@ -958,9 +940,9 @@ func (w *Writer) writePostingsToTmpFiles() error { for _, name := range batchNames { // Write out postings for this label name. - sid, err := w.symbols.ReverseLookup(name) - if err != nil { - return err + sid, ok := w.symbolCache[name] + if !ok { + return fmt.Errorf("symbol entry for %q does not exist", name) } values := make([]uint32, 0, len(postings[sid])) for v := range postings[sid] { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go index 67c8f53bfed..b44b4089275 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go @@ -514,10 +514,14 @@ func (p *MemPostings) PostingsForAllLabelValues(ctx context.Context, name string e := p.m[name] its := make([]Postings, 0, len(e)) + lps := make([]ListPostings, len(e)) + i := 0 for _, refs := range e { if len(refs) > 0 { - its = append(its, NewListPostings(refs)) + lps[i] = ListPostings{list: refs} + its = append(its, &lps[i]) } + i++ } // Let the mutex go before merging. diff --git a/vendor/github.com/prometheus/prometheus/tsdb/querier.go b/vendor/github.com/prometheus/prometheus/tsdb/querier.go index 36b22567911..7f4c4317f23 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/querier.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/querier.go @@ -255,10 +255,23 @@ func PostingsForMatchers(ctx context.Context, ix IndexPostingsReader, ms ...*lab // We already handled the case at the top of the function, // and it is unexpected to get all postings again here. return nil, errors.New("unexpected all postings") + case m.Type == labels.MatchRegexp && m.Value == ".*": // .* regexp matches any string: do nothing. case m.Type == labels.MatchNotRegexp && m.Value == ".*": return index.EmptyPostings(), nil + + case m.Type == labels.MatchRegexp && m.Value == ".+": + // .+ regexp matches any non-empty string: get postings for all label values. + it := ix.PostingsForAllLabelValues(ctx, m.Name) + if index.IsEmptyPostingsType(it) { + return index.EmptyPostings(), nil + } + its = append(its, it) + case m.Type == labels.MatchNotRegexp && m.Value == ".+": + // .+ regexp matches any non-empty string: get postings for all label values and remove them. + its = append(notIts, ix.PostingsForAllLabelValues(ctx, m.Name)) + case labelMustBeSet[m.Name]: // If this matcher must be non-empty, we can be smarter. 
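
For the `.+`/`.*` fast paths added to `PostingsForMatchers` above, recall that Prometheus regex matchers are fully anchored: `.+` requires a non-empty label value, while `.*` also matches series where the label is absent (a missing label is treated as the empty string). A quick illustration of that distinction:

```go
// Anchored matching, as Prometheus applies it to label matchers.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	anchored := func(expr string) *regexp.Regexp {
		return regexp.MustCompile("^(?:" + expr + ")$")
	}
	fmt.Println(anchored(".+").MatchString("")) // false: {job=~".+"} needs the label to be set
	fmt.Println(anchored(".*").MatchString("")) // true: {job=~".*"} also matches series without job
}
```
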
matchesEmpty := m.Matches("") @@ -293,7 +306,7 @@ func PostingsForMatchers(ctx context.Context, ix IndexPostingsReader, ms ...*lab return index.EmptyPostings(), nil } its = append(its, it) - default: // l="a" + default: // l="a", l=~"a|b", l=~"a.b", etc. // Non-Not matcher, use normal postingsForMatcher. it, err := postingsForMatcher(ctx, ix, m) if err != nil { diff --git a/vendor/github.com/prometheus/prometheus/tsdb/wlog/watcher.go b/vendor/github.com/prometheus/prometheus/tsdb/wlog/watcher.go index d68ef2accb8..89db5d2dd72 100644 --- a/vendor/github.com/prometheus/prometheus/tsdb/wlog/watcher.go +++ b/vendor/github.com/prometheus/prometheus/tsdb/wlog/watcher.go @@ -206,7 +206,7 @@ func (w *Watcher) Notify() { } } -func (w *Watcher) setMetrics() { +func (w *Watcher) SetMetrics() { // Setup the WAL Watchers metrics. We do this here rather than in the // constructor because of the ordering of creating Queue Managers's, // stopping them, and then starting new ones in storage/remote/storage.go ApplyConfig. @@ -221,7 +221,7 @@ func (w *Watcher) setMetrics() { // Start the Watcher. func (w *Watcher) Start() { - w.setMetrics() + w.SetMetrics() w.logger.Info("Starting WAL watcher", "queue", w.name) go w.loop() diff --git a/vendor/github.com/prometheus/prometheus/util/convertnhcb/convertnhcb.go b/vendor/github.com/prometheus/prometheus/util/convertnhcb/convertnhcb.go index ee5bcb72def..21ae62b3cb3 100644 --- a/vendor/github.com/prometheus/prometheus/util/convertnhcb/convertnhcb.go +++ b/vendor/github.com/prometheus/prometheus/util/convertnhcb/convertnhcb.go @@ -139,21 +139,18 @@ func (h TempHistogram) Convert() (*histogram.Histogram, *histogram.FloatHistogra return nil, nil, h.err } + if !h.hasCount && len(h.buckets) > 0 { + // No count, so set count to the highest known bucket's count. + h.count = h.buckets[len(h.buckets)-1].count + h.hasCount = true + } + if len(h.buckets) == 0 || h.buckets[len(h.buckets)-1].le != math.Inf(1) { // No +Inf bucket. - if !h.hasCount && len(h.buckets) > 0 { - // No count either, so set count to the last known bucket's count. - h.count = h.buckets[len(h.buckets)-1].count - } // Let the last bucket be +Inf with the overall count. h.buckets = append(h.buckets, tempHistogramBucket{le: math.Inf(1), count: h.count}) } - if !h.hasCount { - h.count = h.buckets[len(h.buckets)-1].count - h.hasCount = true - } - for _, b := range h.buckets { intCount := int64(math.Round(b.count)) if b.count != float64(intCount) { @@ -232,26 +229,34 @@ func (h TempHistogram) convertToFloatHistogram() (*histogram.Histogram, *histogr return nil, rh.Compact(0), nil } -func GetHistogramMetricBase(m labels.Labels, suffix string) labels.Labels { - mName := m.Get(labels.MetricName) +func GetHistogramMetricBase(m labels.Labels, name string) labels.Labels { return labels.NewBuilder(m). - Set(labels.MetricName, strings.TrimSuffix(mName, suffix)). + Set(labels.MetricName, name). Del(labels.BucketLabel). Labels() } +type SuffixType int + +const ( + SuffixNone SuffixType = iota + SuffixBucket + SuffixSum + SuffixCount +) + // GetHistogramMetricBaseName removes the suffixes _bucket, _sum, _count from // the metric name. We specifically do not remove the _created suffix as that // should be removed by the caller. 
-func GetHistogramMetricBaseName(s string) string { +func GetHistogramMetricBaseName(s string) (SuffixType, string) { if r, ok := strings.CutSuffix(s, "_bucket"); ok { - return r + return SuffixBucket, r } if r, ok := strings.CutSuffix(s, "_sum"); ok { - return r + return SuffixSum, r } if r, ok := strings.CutSuffix(s, "_count"); ok { - return r + return SuffixCount, r } - return s + return SuffixNone, s } diff --git a/vendor/github.com/prometheus/prometheus/util/logging/dedupe.go b/vendor/github.com/prometheus/prometheus/util/logging/dedupe.go index d5aee5c095c..e7dff20f786 100644 --- a/vendor/github.com/prometheus/prometheus/util/logging/dedupe.go +++ b/vendor/github.com/prometheus/prometheus/util/logging/dedupe.go @@ -26,6 +26,8 @@ const ( maxEntries = 1024 ) +var _ slog.Handler = (*Deduper)(nil) + // Deduper implements *slog.Handler, dedupes log lines based on a time duration. type Deduper struct { next *slog.Logger diff --git a/vendor/github.com/prometheus/prometheus/util/logging/file.go b/vendor/github.com/prometheus/prometheus/util/logging/file.go index f20927bedaf..9db7fb722be 100644 --- a/vendor/github.com/prometheus/prometheus/util/logging/file.go +++ b/vendor/github.com/prometheus/prometheus/util/logging/file.go @@ -14,6 +14,7 @@ package logging import ( + "context" "fmt" "log/slog" "os" @@ -57,26 +58,8 @@ func (l *JSONFileLogger) With(args ...any) { l.logger = l.logger.With(args...) } -// Info calls the `Info()` method on the underlying `log/slog.Logger` with the +// Log calls the `Log()` method on the underlying `log/slog.Logger` with the // provided msg and args. It implements the promql.QueryLogger interface. -func (l *JSONFileLogger) Info(msg string, args ...any) { - l.logger.Info(msg, args...) -} - -// Error calls the `Error()` method on the underlying `log/slog.Logger` with the -// provided msg and args. It implements the promql.QueryLogger interface. -func (l *JSONFileLogger) Error(msg string, args ...any) { - l.logger.Error(msg, args...) -} - -// Debug calls the `Debug()` method on the underlying `log/slog.Logger` with the -// provided msg and args. It implements the promql.QueryLogger interface. -func (l *JSONFileLogger) Debug(msg string, args ...any) { - l.logger.Debug(msg, args...) -} - -// Warn calls the `Warn()` method on the underlying `log/slog.Logger` with the -// provided msg and args. It implements the promql.QueryLogger interface. -func (l *JSONFileLogger) Warn(msg string, args ...any) { - l.logger.Warn(msg, args...) +func (l *JSONFileLogger) Log(ctx context.Context, level slog.Level, msg string, args ...any) { + l.logger.Log(ctx, level, msg, args...) } diff --git a/vendor/github.com/prometheus/prometheus/web/api/v1/api.go b/vendor/github.com/prometheus/prometheus/web/api/v1/api.go index 76a8c5864e1..10c2ba2e0a4 100644 --- a/vendor/github.com/prometheus/prometheus/web/api/v1/api.go +++ b/vendor/github.com/prometheus/prometheus/web/api/v1/api.go @@ -746,7 +746,12 @@ func (api *API) labelValues(r *http.Request) (result apiFuncResult) { ctx := r.Context() name := route.Param(ctx, "name") - if !model.LabelNameRE.MatchString(name) { + if strings.HasPrefix(name, "U__") { + name = model.UnescapeName(name, model.ValueEncodingEscaping) + } + + label := model.LabelName(name) + if !label.IsValid() { return apiFuncResult{nil, &apiError{errorBadData, fmt.Errorf("invalid label name: %q", name)}, nil, nil} } @@ -919,7 +924,7 @@ func (api *API) series(r *http.Request) (result apiFuncResult) { s := q.Select(ctx, true, hints, mset...) 
sets = append(sets, s) } - set = storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge) + set = storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge) } else { // At this point at least one match exists. set = q.Select(ctx, false, hints, matcherSets[0]...) @@ -1376,7 +1381,7 @@ func (api *API) metricMetadata(r *http.Request) apiFuncResult { // RuleDiscovery has info for all rules. type RuleDiscovery struct { RuleGroups []*RuleGroup `json:"groups"` - GroupNextToken string `json:"groupNextToken:omitempty"` + GroupNextToken string `json:"groupNextToken,omitempty"` } // RuleGroup has info for rules which are part of a group. diff --git a/vendor/github.com/prometheus/sigv4/.golangci.yml b/vendor/github.com/prometheus/sigv4/.golangci.yml new file mode 100644 index 00000000000..9dfe1a89af9 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/.golangci.yml @@ -0,0 +1,41 @@ +issues: + max-issues-per-linter: 0 + max-same-issues: 0 +linters: + enable: + - errcheck + - errorlint + - gofumpt + - goimports + - gosimple + - govet + - ineffassign + - misspell + - revive + - staticcheck + - testifylint + - unused +linters-settings: + goimports: + local-prefixes: github.com/prometheus/sigv4 + revive: + rules: + # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unused-parameter + - name: unused-parameter + severity: warning + disabled: true + testifylint: + disable: + - float-compare + - go-require + enable: + - bool-compare + - compares + - empty + - error-is-as + - error-nil + - expected-actual + - len + - require-error + - suite-dont-use-pkg + - suite-extra-assert-call diff --git a/vendor/github.com/prometheus/sigv4/.yamllint b/vendor/github.com/prometheus/sigv4/.yamllint new file mode 100644 index 00000000000..281c9464634 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/.yamllint @@ -0,0 +1,27 @@ +--- +extends: default + +rules: + braces: + max-spaces-inside: 1 + level: error + brackets: + max-spaces-inside: 1 + level: error + commas: disable + comments: disable + comments-indentation: disable + document-start: disable + indentation: + spaces: consistent + key-duplicates: + ignore: | + config/testdata/section_key_dup.bad.yml + line-length: disable + truthy: + ignore: | + .github/workflows/codeql-analysis.yml + .github/workflows/funcbench.yml + .github/workflows/fuzzing.yml + .github/workflows/prombench.yml + .github/workflows/golangci-lint.yml diff --git a/vendor/github.com/prometheus/sigv4/CODE_OF_CONDUCT.md b/vendor/github.com/prometheus/sigv4/CODE_OF_CONDUCT.md new file mode 100644 index 00000000000..d325872bdfa --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/CODE_OF_CONDUCT.md @@ -0,0 +1,3 @@ +# Prometheus Community Code of Conduct + +Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md). diff --git a/vendor/github.com/prometheus/sigv4/CONTRIBUTING.md b/vendor/github.com/prometheus/sigv4/CONTRIBUTING.md new file mode 100644 index 00000000000..40503edbf18 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/CONTRIBUTING.md @@ -0,0 +1,18 @@ +# Contributing + +Prometheus uses GitHub to manage reviews of pull requests. + +* If you have a trivial fix or improvement, go ahead and create a pull request, + addressing (with `@...`) the maintainer of this repository (see + [MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request. 
+ +* If you plan to do something more involved, first discuss your ideas + on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers). + This will avoid unnecessary work and surely give you and us a good deal + of inspiration. + +* Relevant coding style guidelines are the [Go Code Review + Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) + and the _Formatting and style_ section of Peter Bourgon's [Go: Best + Practices for Production + Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style). diff --git a/vendor/github.com/prometheus/sigv4/LICENSE b/vendor/github.com/prometheus/sigv4/LICENSE new file mode 100644 index 00000000000..261eeb9e9f8 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/prometheus/sigv4/MAINTAINERS.md b/vendor/github.com/prometheus/sigv4/MAINTAINERS.md new file mode 100644 index 00000000000..3981a2ebc1e --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/MAINTAINERS.md @@ -0,0 +1,2 @@ +* Julien Pivotto @roidelapluie +* Josue (Josh) Abreu @gotjosh diff --git a/vendor/github.com/prometheus/sigv4/Makefile b/vendor/github.com/prometheus/sigv4/Makefile new file mode 100644 index 00000000000..fbed72d67a9 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/Makefile @@ -0,0 +1,17 @@ +# Copyright 2018 The Prometheus Authors +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +include Makefile.common + +.PHONY: test +test:: deps check_license unused common-test lint diff --git a/vendor/github.com/prometheus/sigv4/Makefile.common b/vendor/github.com/prometheus/sigv4/Makefile.common new file mode 100644 index 00000000000..cbb5d86382a --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/Makefile.common @@ -0,0 +1,283 @@ +# Copyright 2018 The Prometheus Authors +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +# A common Makefile that includes rules to be reused in different prometheus projects. +# !!! Open PRs only against the prometheus/prometheus/Makefile.common repository! + +# Example usage : +# Create the main Makefile in the root project directory. 
+# include Makefile.common +# customTarget: +# @echo ">> Running customTarget" +# + +# Ensure GOBIN is not set during build so that promu is installed to the correct path +unexport GOBIN + +GO ?= go +GOFMT ?= $(GO)fmt +FIRST_GOPATH := $(firstword $(subst :, ,$(shell $(GO) env GOPATH))) +GOOPTS ?= +GOHOSTOS ?= $(shell $(GO) env GOHOSTOS) +GOHOSTARCH ?= $(shell $(GO) env GOHOSTARCH) + +GO_VERSION ?= $(shell $(GO) version) +GO_VERSION_NUMBER ?= $(word 3, $(GO_VERSION)) +PRE_GO_111 ?= $(shell echo $(GO_VERSION_NUMBER) | grep -E 'go1\.(10|[0-9])\.') + +PROMU := $(FIRST_GOPATH)/bin/promu +pkgs = ./... + +ifeq (arm, $(GOHOSTARCH)) + GOHOSTARM ?= $(shell GOARM= $(GO) env GOARM) + GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH)v$(GOHOSTARM) +else + GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH) +endif + +GOTEST := $(GO) test +GOTEST_DIR := +ifneq ($(CIRCLE_JOB),) +ifneq ($(shell command -v gotestsum 2> /dev/null),) + GOTEST_DIR := test-results + GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml -- +endif +endif + +PROMU_VERSION ?= 0.17.0 +PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz + +SKIP_GOLANGCI_LINT := +GOLANGCI_LINT := +GOLANGCI_LINT_OPTS ?= +GOLANGCI_LINT_VERSION ?= v1.60.2 +# golangci-lint only supports linux, darwin and windows platforms on i386/amd64/arm64. +# windows isn't included here because of the path separator being different. +ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin)) + ifeq ($(GOHOSTARCH),$(filter $(GOHOSTARCH),amd64 i386 arm64)) + # If we're in CI and there is an Actions file, that means the linter + # is being run in Actions, so we don't need to run it here. + ifneq (,$(SKIP_GOLANGCI_LINT)) + GOLANGCI_LINT := + else ifeq (,$(CIRCLE_JOB)) + GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint + else ifeq (,$(wildcard .github/workflows/golangci-lint.yml)) + GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint + endif + endif +endif + +PREFIX ?= $(shell pwd) +BIN_DIR ?= $(shell pwd) +DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD)) +DOCKERFILE_PATH ?= ./Dockerfile +DOCKERBUILD_CONTEXT ?= ./ +DOCKER_REPO ?= prom + +DOCKER_ARCHS ?= amd64 + +BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS)) +PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS)) +TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS)) + +SANITIZED_DOCKER_IMAGE_TAG := $(subst +,-,$(DOCKER_IMAGE_TAG)) + +ifeq ($(GOHOSTARCH),amd64) + ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows)) + # Only supported on amd64 + test-flags := -race + endif +endif + +# This rule is used to forward a target like "build" to "common-build". This +# allows a new "build" target to be defined in a Makefile which includes this +# one and override "common-build" without override warnings. +%: common-% ; + +.PHONY: common-all +common-all: precheck style check_license lint yamllint unused build test + +.PHONY: common-style +common-style: + @echo ">> checking code style" + @fmtRes=$$($(GOFMT) -d $$(find . -path ./vendor -prune -o -name '*.go' -print)); \ + if [ -n "$${fmtRes}" ]; then \ + echo "gofmt checking failed!"; echo "$${fmtRes}"; echo; \ + echo "Please ensure you are using $$($(GO) version) for formatting code."; \ + exit 1; \ + fi + +.PHONY: common-check_license +common-check_license: + @echo ">> checking license header" + @licRes=$$(for file in $$(find . -type f -iname '*.go' ! 
-path './vendor/*') ; do \ + awk 'NR<=3' $$file | grep -Eq "(Copyright|generated|GENERATED)" || echo $$file; \ + done); \ + if [ -n "$${licRes}" ]; then \ + echo "license header checking failed:"; echo "$${licRes}"; \ + exit 1; \ + fi + +.PHONY: common-deps +common-deps: + @echo ">> getting dependencies" + $(GO) mod download + +.PHONY: update-go-deps +update-go-deps: + @echo ">> updating Go dependencies" + @for m in $$($(GO) list -mod=readonly -m -f '{{ if and (not .Indirect) (not .Main)}}{{.Path}}{{end}}' all); do \ + $(GO) get -d $$m; \ + done + $(GO) mod tidy + +.PHONY: common-test-short +common-test-short: $(GOTEST_DIR) + @echo ">> running short tests" + $(GOTEST) -short $(GOOPTS) $(pkgs) + +.PHONY: common-test +common-test: $(GOTEST_DIR) + @echo ">> running all tests" + $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs) + +$(GOTEST_DIR): + @mkdir -p $@ + +.PHONY: common-format +common-format: + @echo ">> formatting code" + $(GO) fmt $(pkgs) + +.PHONY: common-vet +common-vet: + @echo ">> vetting code" + $(GO) vet $(GOOPTS) $(pkgs) + +.PHONY: common-lint +common-lint: $(GOLANGCI_LINT) +ifdef GOLANGCI_LINT + @echo ">> running golangci-lint" + $(GOLANGCI_LINT) run $(GOLANGCI_LINT_OPTS) $(pkgs) +endif + +.PHONY: common-lint-fix +common-lint-fix: $(GOLANGCI_LINT) +ifdef GOLANGCI_LINT + @echo ">> running golangci-lint fix" + $(GOLANGCI_LINT) run --fix $(GOLANGCI_LINT_OPTS) $(pkgs) +endif + +.PHONY: common-yamllint +common-yamllint: + @echo ">> running yamllint on all YAML files in the repository" +ifeq (, $(shell command -v yamllint 2> /dev/null)) + @echo "yamllint not installed so skipping" +else + yamllint . +endif + +# For backward-compatibility. +.PHONY: common-staticcheck +common-staticcheck: lint + +.PHONY: common-unused +common-unused: + @echo ">> running check for unused/missing packages in go.mod" + $(GO) mod tidy + @git diff --exit-code -- go.sum go.mod + +.PHONY: common-build +common-build: promu + @echo ">> building binaries" + $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES) + +.PHONY: common-tarball +common-tarball: promu + @echo ">> building release tarball" + $(PROMU) tarball --prefix $(PREFIX) $(BIN_DIR) + +.PHONY: common-docker-repo-name +common-docker-repo-name: + @echo "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)" + +.PHONY: common-docker $(BUILD_DOCKER_ARCHS) +common-docker: $(BUILD_DOCKER_ARCHS) +$(BUILD_DOCKER_ARCHS): common-docker-%: + docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" \ + -f $(DOCKERFILE_PATH) \ + --build-arg ARCH="$*" \ + --build-arg OS="linux" \ + $(DOCKERBUILD_CONTEXT) + +.PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS) +common-docker-publish: $(PUBLISH_DOCKER_ARCHS) +$(PUBLISH_DOCKER_ARCHS): common-docker-publish-%: + docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" + +DOCKER_MAJOR_VERSION_TAG = $(firstword $(subst ., ,$(shell cat VERSION))) +.PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS) +common-docker-tag-latest: $(TAG_DOCKER_ARCHS) +$(TAG_DOCKER_ARCHS): common-docker-tag-latest-%: + docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest" + docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)" + +.PHONY: common-docker-manifest +common-docker-manifest: + DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a 
"$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(SANITIZED_DOCKER_IMAGE_TAG)) + DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" + +.PHONY: promu +promu: $(PROMU) + +$(PROMU): + $(eval PROMU_TMP := $(shell mktemp -d)) + curl -s -L $(PROMU_URL) | tar -xvzf - -C $(PROMU_TMP) + mkdir -p $(FIRST_GOPATH)/bin + cp $(PROMU_TMP)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM)/promu $(FIRST_GOPATH)/bin/promu + rm -r $(PROMU_TMP) + +.PHONY: proto +proto: + @echo ">> generating code from proto files" + @./scripts/genproto.sh + +ifdef GOLANGCI_LINT +$(GOLANGCI_LINT): + mkdir -p $(FIRST_GOPATH)/bin + curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/$(GOLANGCI_LINT_VERSION)/install.sh \ + | sed -e '/install -d/d' \ + | sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION) +endif + +.PHONY: precheck +precheck:: + +define PRECHECK_COMMAND_template = +precheck:: $(1)_precheck + +PRECHECK_COMMAND_$(1) ?= $(1) $$(strip $$(PRECHECK_OPTIONS_$(1))) +.PHONY: $(1)_precheck +$(1)_precheck: + @if ! $$(PRECHECK_COMMAND_$(1)) 1>/dev/null 2>&1; then \ + echo "Execution of '$$(PRECHECK_COMMAND_$(1))' command failed. Is $(1) installed?"; \ + exit 1; \ + fi +endef + +govulncheck: install-govulncheck + govulncheck ./... + +install-govulncheck: + command -v govulncheck > /dev/null || go install golang.org/x/vuln/cmd/govulncheck@latest diff --git a/vendor/github.com/prometheus/sigv4/NOTICE b/vendor/github.com/prometheus/sigv4/NOTICE new file mode 100644 index 00000000000..cded3c2dfe1 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/NOTICE @@ -0,0 +1,2 @@ +Common libraries shared by Prometheus Go components. +Copyright 2015 The Prometheus Authors diff --git a/vendor/github.com/prometheus/sigv4/README.md b/vendor/github.com/prometheus/sigv4/README.md new file mode 100644 index 00000000000..e4e8b8de4fb --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/README.md @@ -0,0 +1,12 @@ +github.com/prometheus/sigv4 module +========================================= + +sigv4 provides a http.RoundTripper that will sign requests using +Amazon's Signature Verification V4 signing procedure, using credentials +from the default AWS credential chain. + +This is a separate module from github.com/prometheus/common to prevent +it from having and propagating a dependency on the AWS SDK. + +This module is considered internal to Prometheus, without any stability +guarantees for external usage. diff --git a/vendor/github.com/prometheus/sigv4/RELEASE.md b/vendor/github.com/prometheus/sigv4/RELEASE.md new file mode 100644 index 00000000000..116f067643f --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/RELEASE.md @@ -0,0 +1,18 @@ +# Releases + +## What to do know before cutting a release + +While `prometheus/sigv4` does not have a formal release process. We strongly encourage you follow these steps: + +1. Scan the list of available issues / PRs and make sure that You attempt to merge any pull requests that appear to be ready or almost ready +2. Notify the maintainers listed as part of [`MANTAINERS.md`](MAINTAINERS.md) that you're going to do a release. + +With those steps done, you can proceed to cut a release. + +## How to cut an individual release + +There is no automated process for cutting a release in `prometheus/sigv4`. 
A manual release using GitHub's release feature via [this link](https://github.com/prometheus/prometheus/releases/new) is the best way to go. The tag name must be prefixed with a `v` e.g. `v0.53.0` and then you can use the "Generate release notes" button to generate the release note automagically ✨. No need to create a discussion or mark it a pre-release, please do mark it as the latest release if needed. + +## Versioning strategy + +We aim to adhere to [Semantic Versioning](https://semver.org/) as much as possible. For example, patch version (e.g. v0.0.x) releases should contain bugfixes only and any sort of major or minor version bump should be a minor or major release respectively. diff --git a/vendor/github.com/prometheus/sigv4/SECURITY.md b/vendor/github.com/prometheus/sigv4/SECURITY.md new file mode 100644 index 00000000000..fed02d85c79 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/SECURITY.md @@ -0,0 +1,6 @@ +# Reporting a security issue + +The Prometheus security policy, including how to report vulnerabilities, can be +found here: + + diff --git a/vendor/github.com/prometheus/sigv4/sigv4.go b/vendor/github.com/prometheus/sigv4/sigv4.go new file mode 100644 index 00000000000..ae0f76e52fe --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/sigv4.go @@ -0,0 +1,152 @@ +// Copyright 2021 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sigv4 + +import ( + "bytes" + "fmt" + "io" + "net/http" + "net/textproto" + "path" + "sync" + "time" + + "github.com/aws/aws-sdk-go/aws/endpoints" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/stscreds" + "github.com/aws/aws-sdk-go/aws/session" + signer "github.com/aws/aws-sdk-go/aws/signer/v4" +) + +var sigv4HeaderDenylist = []string{ + "uber-trace-id", +} + +type sigV4RoundTripper struct { + region string + next http.RoundTripper + pool sync.Pool + + signer *signer.Signer +} + +// NewSigV4RoundTripper returns a new http.RoundTripper that will sign requests +// using Amazon's Signature Verification V4 signing procedure. The request will +// then be handed off to the next RoundTripper provided by next. If next is nil, +// http.DefaultTransport will be used. +// +// Credentials for signing are retrieved using the the default AWS credential +// chain. If credentials cannot be found, an error will be returned. 
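+//
+// A minimal usage sketch (illustrative only; the region value below is an
+// assumption for the example, not a package default):
+//
+//	cfg := &SigV4Config{Region: "us-east-1"}
+//	rt, err := NewSigV4RoundTripper(cfg, nil) // nil next falls back to http.DefaultTransport
+//	if err != nil {
+//		// credentials or region could not be resolved
+//	}
+//	// Requests sent through this client are signed for the "aps" service
+//	// in the configured region before being forwarded.
+//	client := &http.Client{Transport: rt}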
+func NewSigV4RoundTripper(cfg *SigV4Config, next http.RoundTripper) (http.RoundTripper, error) { + if next == nil { + next = http.DefaultTransport + } + + creds := credentials.NewStaticCredentials(cfg.AccessKey, string(cfg.SecretKey), "") + if cfg.AccessKey == "" && cfg.SecretKey == "" { + creds = nil + } + + useFIPSSTSEndpoint := endpoints.FIPSEndpointStateDisabled + if cfg.UseFIPSSTSEndpoint { + useFIPSSTSEndpoint = endpoints.FIPSEndpointStateEnabled + } + + sess, err := session.NewSessionWithOptions(session.Options{ + Config: aws.Config{ + Region: aws.String(cfg.Region), + Credentials: creds, + UseFIPSEndpoint: useFIPSSTSEndpoint, + }, + Profile: cfg.Profile, + }) + if err != nil { + return nil, fmt.Errorf("could not create new AWS session: %w", err) + } + if _, err := sess.Config.Credentials.Get(); err != nil { + return nil, fmt.Errorf("could not get SigV4 credentials: %w", err) + } + if aws.StringValue(sess.Config.Region) == "" { + return nil, fmt.Errorf("region not configured in sigv4 or in default credentials chain") + } + + signerCreds := sess.Config.Credentials + if cfg.RoleARN != "" { + signerCreds = stscreds.NewCredentials(sess, cfg.RoleARN) + } + + rt := &sigV4RoundTripper{ + region: cfg.Region, + next: next, + signer: signer.NewSigner(signerCreds), + } + rt.pool.New = rt.newBuf + return rt, nil +} + +func (rt *sigV4RoundTripper) newBuf() interface{} { + return bytes.NewBuffer(make([]byte, 0, 1024)) +} + +func (rt *sigV4RoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { + // rt.signer.Sign needs a seekable body, so we replace the body with a + // buffered reader filled with the contents of original body. + buf := rt.pool.Get().(*bytes.Buffer) + defer func() { + buf.Reset() + rt.pool.Put(buf) + }() + + if req.Body != nil { + if _, err := io.Copy(buf, req.Body); err != nil { + return nil, err + } + // Close the original body since we don't need it anymore. + _ = req.Body.Close() + } + + // Ensure our seeker is back at the start of the buffer once we return. + var seeker io.ReadSeeker = bytes.NewReader(buf.Bytes()) + defer func() { + _, _ = seeker.Seek(0, io.SeekStart) + }() + req.Body = io.NopCloser(seeker) + + // Clean path like documented in AWS documentation. + // https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html + req.URL.Path = path.Clean(req.URL.Path) + + // Clone the request and trim out headers that we don't want to sign. + signReq := req.Clone(req.Context()) + for _, header := range sigv4HeaderDenylist { + signReq.Header.Del(header) + } + + headers, err := rt.signer.Sign(signReq, seeker, "aps", rt.region, time.Now().UTC()) + if err != nil { + return nil, fmt.Errorf("failed to sign request: %w", err) + } + + // Copy over signed headers. Authorization header is not returned by + // rt.signer.Sign and needs to be copied separately. + for k, v := range headers { + req.Header[textproto.CanonicalMIMEHeaderKey(k)] = v + } + req.Header.Set("Authorization", signReq.Header.Get("Authorization")) + + return rt.next.RoundTrip(req) +} diff --git a/vendor/github.com/prometheus/sigv4/sigv4_config.go b/vendor/github.com/prometheus/sigv4/sigv4_config.go new file mode 100644 index 00000000000..83ef73d8914 --- /dev/null +++ b/vendor/github.com/prometheus/sigv4/sigv4_config.go @@ -0,0 +1,48 @@ +// Copyright 2021 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sigv4 + +import ( + "fmt" + + "github.com/prometheus/common/config" +) + +// SigV4Config is the configuration for signing remote write requests with +// AWS's SigV4 verification process. Empty values will be retrieved using the +// AWS default credentials chain. +type SigV4Config struct { + Region string `yaml:"region,omitempty"` + AccessKey string `yaml:"access_key,omitempty"` + SecretKey config.Secret `yaml:"secret_key,omitempty"` + Profile string `yaml:"profile,omitempty"` + RoleARN string `yaml:"role_arn,omitempty"` + UseFIPSSTSEndpoint bool `yaml:"use_fips_sts_endpoint,omitempty"` +} + +func (c *SigV4Config) Validate() error { + if (c.AccessKey == "") != (c.SecretKey == "") { + return fmt.Errorf("must provide a AWS SigV4 Access key and Secret Key if credentials are specified in the SigV4 config") + } + return nil +} + +func (c *SigV4Config) UnmarshalYAML(unmarshal func(interface{}) error) error { + type plain SigV4Config + *c = SigV4Config{} + if err := unmarshal((*plain)(c)); err != nil { + return err + } + return c.Validate() +} diff --git a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace/version.go b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace/version.go index a978bb5b288..d14cc52ff7a 100644 --- a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace/version.go +++ b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace/version.go @@ -5,7 +5,7 @@ package otelhttptrace // import "go.opentelemetry.io/contrib/instrumentation/net // Version is the current release version of the httptrace instrumentation. func Version() string { - return "0.56.0" + return "0.57.0" // This string is updated by the pre_release.sh script during release } diff --git a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go index e4236ab398c..e555a475f13 100644 --- a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go +++ b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go @@ -117,6 +117,11 @@ func (h *middleware) serveHTTP(w http.ResponseWriter, r *http.Request, next http } } + if startTime := StartTimeFromContext(ctx); !startTime.IsZero() { + opts = append(opts, trace.WithTimestamp(startTime)) + requestStartTime = startTime + } + ctx, span := tracer.Start(ctx, h.spanNameFormatter(h.operation, r), opts...) 
defer span.End() diff --git a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/start_time_context.go b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/start_time_context.go new file mode 100644 index 00000000000..9476ef01b01 --- /dev/null +++ b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/start_time_context.go @@ -0,0 +1,29 @@ +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" + +import ( + "context" + "time" +) + +type startTimeContextKeyType int + +const startTimeContextKey startTimeContextKeyType = 0 + +// ContextWithStartTime returns a new context with the provided start time. The +// start time will be used for metrics and traces emitted by the +// instrumentation. Only one labeller can be injected into the context. +// Injecting it multiple times will override the previous calls. +func ContextWithStartTime(parent context.Context, start time.Time) context.Context { + return context.WithValue(parent, startTimeContextKey, start) +} + +// StartTimeFromContext retrieves a time.Time from the provided context if one +// is available. If no start time was found in the provided context, a new, +// zero start time is returned and the second return value is false. +func StartTimeFromContext(ctx context.Context) time.Time { + t, _ := ctx.Value(startTimeContextKey).(time.Time) + return t +} diff --git a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go index a07d8689d47..16ef3cb9b94 100644 --- a/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go +++ b/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go @@ -5,7 +5,7 @@ package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http // Version is the current release version of the otelhttp instrumentation. func Version() string { - return "0.56.0" + return "0.57.0" // This string is updated by the pre_release.sh script during release } diff --git a/vendor/golang.org/x/exp/constraints/constraints.go b/vendor/golang.org/x/exp/constraints/constraints.go index 2c033dff47e..a9392af7c1a 100644 --- a/vendor/golang.org/x/exp/constraints/constraints.go +++ b/vendor/golang.org/x/exp/constraints/constraints.go @@ -45,6 +45,8 @@ type Complex interface { // that supports the operators < <= >= >. // If future releases of Go add new ordered types, // this constraint will be modified to include them. +// +// This type is redundant since Go 1.21 introduced [cmp.Ordered]. type Ordered interface { Integer | Float | ~string } diff --git a/vendor/golang.org/x/exp/slices/cmp.go b/vendor/golang.org/x/exp/slices/cmp.go deleted file mode 100644 index fbf1934a061..00000000000 --- a/vendor/golang.org/x/exp/slices/cmp.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2023 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package slices - -import "golang.org/x/exp/constraints" - -// min is a version of the predeclared function from the Go 1.21 release. -func min[T constraints.Ordered](a, b T) T { - if a < b || isNaN(a) { - return a - } - return b -} - -// max is a version of the predeclared function from the Go 1.21 release. 
-func max[T constraints.Ordered](a, b T) T { - if a > b || isNaN(a) { - return a - } - return b -} - -// cmpLess is a copy of cmp.Less from the Go 1.21 release. -func cmpLess[T constraints.Ordered](x, y T) bool { - return (isNaN(x) && !isNaN(y)) || x < y -} - -// cmpCompare is a copy of cmp.Compare from the Go 1.21 release. -func cmpCompare[T constraints.Ordered](x, y T) int { - xNaN := isNaN(x) - yNaN := isNaN(y) - if xNaN && yNaN { - return 0 - } - if xNaN || x < y { - return -1 - } - if yNaN || x > y { - return +1 - } - return 0 -} diff --git a/vendor/golang.org/x/exp/slices/slices.go b/vendor/golang.org/x/exp/slices/slices.go index 46ceac34399..757383ea1c0 100644 --- a/vendor/golang.org/x/exp/slices/slices.go +++ b/vendor/golang.org/x/exp/slices/slices.go @@ -6,26 +6,22 @@ package slices import ( - "unsafe" - - "golang.org/x/exp/constraints" + "cmp" + "slices" ) +// TODO(adonovan): when https://go.dev/issue/32816 is accepted, all of +// these functions should be annotated (provisionally with "//go:fix +// inline") so that tools can safely and automatically replace calls +// to exp/slices with calls to std slices by inlining them. + // Equal reports whether two slices are equal: the same length and all // elements equal. If the lengths are different, Equal returns false. // Otherwise, the elements are compared in increasing index order, and the // comparison stops at the first unequal pair. // Floating point NaNs are not considered equal. func Equal[S ~[]E, E comparable](s1, s2 S) bool { - if len(s1) != len(s2) { - return false - } - for i := range s1 { - if s1[i] != s2[i] { - return false - } - } - return true + return slices.Equal(s1, s2) } // EqualFunc reports whether two slices are equal using an equality @@ -34,16 +30,7 @@ func Equal[S ~[]E, E comparable](s1, s2 S) bool { // increasing index order, and the comparison stops at the first index // for which eq returns false. func EqualFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](s1 S1, s2 S2, eq func(E1, E2) bool) bool { - if len(s1) != len(s2) { - return false - } - for i, v1 := range s1 { - v2 := s2[i] - if !eq(v1, v2) { - return false - } - } - return true + return slices.EqualFunc(s1, s2, eq) } // Compare compares the elements of s1 and s2, using [cmp.Compare] on each pair @@ -53,20 +40,8 @@ func EqualFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](s1 S1, s2 S2, eq func(E1, E2) boo // If both slices are equal until one of them ends, the shorter slice is // considered less than the longer one. // The result is 0 if s1 == s2, -1 if s1 < s2, and +1 if s1 > s2. -func Compare[S ~[]E, E constraints.Ordered](s1, s2 S) int { - for i, v1 := range s1 { - if i >= len(s2) { - return +1 - } - v2 := s2[i] - if c := cmpCompare(v1, v2); c != 0 { - return c - } - } - if len(s1) < len(s2) { - return -1 - } - return 0 +func Compare[S ~[]E, E cmp.Ordered](s1, s2 S) int { + return slices.Compare(s1, s2) } // CompareFunc is like [Compare] but uses a custom comparison function on each @@ -75,52 +50,30 @@ func Compare[S ~[]E, E constraints.Ordered](s1, s2 S) int { // returns 0 the result is 0 if len(s1) == len(s2), -1 if len(s1) < len(s2), // and +1 if len(s1) > len(s2). func CompareFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](s1 S1, s2 S2, cmp func(E1, E2) int) int { - for i, v1 := range s1 { - if i >= len(s2) { - return +1 - } - v2 := s2[i] - if c := cmp(v1, v2); c != 0 { - return c - } - } - if len(s1) < len(s2) { - return -1 - } - return 0 + return slices.CompareFunc(s1, s2, cmp) } // Index returns the index of the first occurrence of v in s, // or -1 if not present. 
func Index[S ~[]E, E comparable](s S, v E) int { - for i := range s { - if v == s[i] { - return i - } - } - return -1 + return slices.Index(s, v) } // IndexFunc returns the first index i satisfying f(s[i]), // or -1 if none do. func IndexFunc[S ~[]E, E any](s S, f func(E) bool) int { - for i := range s { - if f(s[i]) { - return i - } - } - return -1 + return slices.IndexFunc(s, f) } // Contains reports whether v is present in s. func Contains[S ~[]E, E comparable](s S, v E) bool { - return Index(s, v) >= 0 + return slices.Contains(s, v) } // ContainsFunc reports whether at least one // element e of s satisfies f(e). func ContainsFunc[S ~[]E, E any](s S, f func(E) bool) bool { - return IndexFunc(s, f) >= 0 + return slices.ContainsFunc(s, f) } // Insert inserts the values v... into s at index i, @@ -131,92 +84,7 @@ func ContainsFunc[S ~[]E, E any](s S, f func(E) bool) bool { // Insert panics if i is out of range. // This function is O(len(s) + len(v)). func Insert[S ~[]E, E any](s S, i int, v ...E) S { - m := len(v) - if m == 0 { - return s - } - n := len(s) - if i == n { - return append(s, v...) - } - if n+m > cap(s) { - // Use append rather than make so that we bump the size of - // the slice up to the next storage class. - // This is what Grow does but we don't call Grow because - // that might copy the values twice. - s2 := append(s[:i], make(S, n+m-i)...) - copy(s2[i:], v) - copy(s2[i+m:], s[i:]) - return s2 - } - s = s[:n+m] - - // before: - // s: aaaaaaaabbbbccccccccdddd - // ^ ^ ^ ^ - // i i+m n n+m - // after: - // s: aaaaaaaavvvvbbbbcccccccc - // ^ ^ ^ ^ - // i i+m n n+m - // - // a are the values that don't move in s. - // v are the values copied in from v. - // b and c are the values from s that are shifted up in index. - // d are the values that get overwritten, never to be seen again. - - if !overlaps(v, s[i+m:]) { - // Easy case - v does not overlap either the c or d regions. - // (It might be in some of a or b, or elsewhere entirely.) - // The data we copy up doesn't write to v at all, so just do it. - - copy(s[i+m:], s[i:]) - - // Now we have - // s: aaaaaaaabbbbbbbbcccccccc - // ^ ^ ^ ^ - // i i+m n n+m - // Note the b values are duplicated. - - copy(s[i:], v) - - // Now we have - // s: aaaaaaaavvvvbbbbcccccccc - // ^ ^ ^ ^ - // i i+m n n+m - // That's the result we want. - return s - } - - // The hard case - v overlaps c or d. We can't just shift up - // the data because we'd move or clobber the values we're trying - // to insert. - // So instead, write v on top of d, then rotate. - copy(s[n:], v) - - // Now we have - // s: aaaaaaaabbbbccccccccvvvv - // ^ ^ ^ ^ - // i i+m n n+m - - rotateRight(s[i:], m) - - // Now we have - // s: aaaaaaaavvvvbbbbcccccccc - // ^ ^ ^ ^ - // i i+m n n+m - // That's the result we want. - return s -} - -// clearSlice sets all elements up to the length of s to the zero value of E. -// We may use the builtin clear func instead, and remove clearSlice, when upgrading -// to Go 1.21+. -func clearSlice[S ~[]E, E any](s S) { - var zero E - for i := range s { - s[i] = zero - } + return slices.Insert(s, i, v...) } // Delete removes the elements s[i:j] from s, returning the modified slice. @@ -225,135 +93,27 @@ func clearSlice[S ~[]E, E any](s S) { // make a single call deleting them all together than to delete one at a time. // Delete zeroes the elements s[len(s)-(j-i):len(s)]. func Delete[S ~[]E, E any](s S, i, j int) S { - _ = s[i:j:len(s)] // bounds check - - if i == j { - return s - } - - oldlen := len(s) - s = append(s[:i], s[j:]...) 
- clearSlice(s[len(s):oldlen]) // zero/nil out the obsolete elements, for GC - return s + return slices.Delete(s, i, j) } // DeleteFunc removes any elements from s for which del returns true, // returning the modified slice. // DeleteFunc zeroes the elements between the new length and the original length. func DeleteFunc[S ~[]E, E any](s S, del func(E) bool) S { - i := IndexFunc(s, del) - if i == -1 { - return s - } - // Don't start copying elements until we find one to delete. - for j := i + 1; j < len(s); j++ { - if v := s[j]; !del(v) { - s[i] = v - i++ - } - } - clearSlice(s[i:]) // zero/nil out the obsolete elements, for GC - return s[:i] + return slices.DeleteFunc(s, del) } // Replace replaces the elements s[i:j] by the given v, and returns the // modified slice. Replace panics if s[i:j] is not a valid slice of s. // When len(v) < (j-i), Replace zeroes the elements between the new length and the original length. func Replace[S ~[]E, E any](s S, i, j int, v ...E) S { - _ = s[i:j] // verify that i:j is a valid subslice - - if i == j { - return Insert(s, i, v...) - } - if j == len(s) { - return append(s[:i], v...) - } - - tot := len(s[:i]) + len(v) + len(s[j:]) - if tot > cap(s) { - // Too big to fit, allocate and copy over. - s2 := append(s[:i], make(S, tot-i)...) // See Insert - copy(s2[i:], v) - copy(s2[i+len(v):], s[j:]) - return s2 - } - - r := s[:tot] - - if i+len(v) <= j { - // Easy, as v fits in the deleted portion. - copy(r[i:], v) - if i+len(v) != j { - copy(r[i+len(v):], s[j:]) - } - clearSlice(s[tot:]) // zero/nil out the obsolete elements, for GC - return r - } - - // We are expanding (v is bigger than j-i). - // The situation is something like this: - // (example has i=4,j=8,len(s)=16,len(v)=6) - // s: aaaaxxxxbbbbbbbbyy - // ^ ^ ^ ^ - // i j len(s) tot - // a: prefix of s - // x: deleted range - // b: more of s - // y: area to expand into - - if !overlaps(r[i+len(v):], v) { - // Easy, as v is not clobbered by the first copy. - copy(r[i+len(v):], s[j:]) - copy(r[i:], v) - return r - } - - // This is a situation where we don't have a single place to which - // we can copy v. Parts of it need to go to two different places. - // We want to copy the prefix of v into y and the suffix into x, then - // rotate |y| spots to the right. - // - // v[2:] v[:2] - // | | - // s: aaaavvvvbbbbbbbbvv - // ^ ^ ^ ^ - // i j len(s) tot - // - // If either of those two destinations don't alias v, then we're good. - y := len(v) - (j - i) // length of y portion - - if !overlaps(r[i:j], v) { - copy(r[i:j], v[y:]) - copy(r[len(s):], v[:y]) - rotateRight(r[i:], y) - return r - } - if !overlaps(r[len(s):], v) { - copy(r[len(s):], v[:y]) - copy(r[i:j], v[y:]) - rotateRight(r[i:], y) - return r - } - - // Now we know that v overlaps both x and y. - // That means that the entirety of b is *inside* v. - // So we don't need to preserve b at all; instead we - // can copy v first, then copy the b part of v out of - // v to the right destination. - k := startIdx(v, s[j:]) - copy(r[i:], v) - copy(r[i+len(v):], r[i+k:]) - return r + return slices.Replace(s, i, j, v...) } // Clone returns a copy of the slice. // The elements are copied using assignment, so this is a shallow clone. func Clone[S ~[]E, E any](s S) S { - // Preserve nil in case it matters. - if s == nil { - return nil - } - return append(S([]E{}), s...) + return slices.Clone(s) } // Compact replaces consecutive runs of equal elements with a single copy. @@ -362,40 +122,14 @@ func Clone[S ~[]E, E any](s S) S { // which may have a smaller length. 
// Compact zeroes the elements between the new length and the original length. func Compact[S ~[]E, E comparable](s S) S { - if len(s) < 2 { - return s - } - i := 1 - for k := 1; k < len(s); k++ { - if s[k] != s[k-1] { - if i != k { - s[i] = s[k] - } - i++ - } - } - clearSlice(s[i:]) // zero/nil out the obsolete elements, for GC - return s[:i] + return slices.Compact(s) } // CompactFunc is like [Compact] but uses an equality function to compare elements. // For runs of elements that compare equal, CompactFunc keeps the first one. // CompactFunc zeroes the elements between the new length and the original length. func CompactFunc[S ~[]E, E any](s S, eq func(E, E) bool) S { - if len(s) < 2 { - return s - } - i := 1 - for k := 1; k < len(s); k++ { - if !eq(s[k], s[k-1]) { - if i != k { - s[i] = s[k] - } - i++ - } - } - clearSlice(s[i:]) // zero/nil out the obsolete elements, for GC - return s[:i] + return slices.CompactFunc(s, eq) } // Grow increases the slice's capacity, if necessary, to guarantee space for @@ -403,113 +137,15 @@ func CompactFunc[S ~[]E, E any](s S, eq func(E, E) bool) S { // to the slice without another allocation. If n is negative or too large to // allocate the memory, Grow panics. func Grow[S ~[]E, E any](s S, n int) S { - if n < 0 { - panic("cannot be negative") - } - if n -= cap(s) - len(s); n > 0 { - // TODO(https://go.dev/issue/53888): Make using []E instead of S - // to workaround a compiler bug where the runtime.growslice optimization - // does not take effect. Revert when the compiler is fixed. - s = append([]E(s)[:cap(s)], make([]E, n)...)[:len(s)] - } - return s + return slices.Grow(s, n) } // Clip removes unused capacity from the slice, returning s[:len(s):len(s)]. func Clip[S ~[]E, E any](s S) S { - return s[:len(s):len(s)] -} - -// Rotation algorithm explanation: -// -// rotate left by 2 -// start with -// 0123456789 -// split up like this -// 01 234567 89 -// swap first 2 and last 2 -// 89 234567 01 -// join first parts -// 89234567 01 -// recursively rotate first left part by 2 -// 23456789 01 -// join at the end -// 2345678901 -// -// rotate left by 8 -// start with -// 0123456789 -// split up like this -// 01 234567 89 -// swap first 2 and last 2 -// 89 234567 01 -// join last parts -// 89 23456701 -// recursively rotate second part left by 6 -// 89 01234567 -// join at the end -// 8901234567 - -// TODO: There are other rotate algorithms. -// This algorithm has the desirable property that it moves each element exactly twice. -// The triple-reverse algorithm is simpler and more cache friendly, but takes more writes. -// The follow-cycles algorithm can be 1-write but it is not very cache friendly. - -// rotateLeft rotates b left by n spaces. -// s_final[i] = s_orig[i+r], wrapping around. -func rotateLeft[E any](s []E, r int) { - for r != 0 && r != len(s) { - if r*2 <= len(s) { - swap(s[:r], s[len(s)-r:]) - s = s[:len(s)-r] - } else { - swap(s[:len(s)-r], s[r:]) - s, r = s[len(s)-r:], r*2-len(s) - } - } -} -func rotateRight[E any](s []E, r int) { - rotateLeft(s, len(s)-r) -} - -// swap swaps the contents of x and y. x and y must be equal length and disjoint. -func swap[E any](x, y []E) { - for i := 0; i < len(x); i++ { - x[i], y[i] = y[i], x[i] - } -} - -// overlaps reports whether the memory ranges a[0:len(a)] and b[0:len(b)] overlap. 
-func overlaps[E any](a, b []E) bool { - if len(a) == 0 || len(b) == 0 { - return false - } - elemSize := unsafe.Sizeof(a[0]) - if elemSize == 0 { - return false - } - // TODO: use a runtime/unsafe facility once one becomes available. See issue 12445. - // Also see crypto/internal/alias/alias.go:AnyOverlap - return uintptr(unsafe.Pointer(&a[0])) <= uintptr(unsafe.Pointer(&b[len(b)-1]))+(elemSize-1) && - uintptr(unsafe.Pointer(&b[0])) <= uintptr(unsafe.Pointer(&a[len(a)-1]))+(elemSize-1) -} - -// startIdx returns the index in haystack where the needle starts. -// prerequisite: the needle must be aliased entirely inside the haystack. -func startIdx[E any](haystack, needle []E) int { - p := &needle[0] - for i := range haystack { - if p == &haystack[i] { - return i - } - } - // TODO: what if the overlap is by a non-integral number of Es? - panic("needle not found") + return slices.Clip(s) } // Reverse reverses the elements of the slice in place. func Reverse[S ~[]E, E any](s S) { - for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 { - s[i], s[j] = s[j], s[i] - } + slices.Reverse(s) } diff --git a/vendor/golang.org/x/exp/slices/sort.go b/vendor/golang.org/x/exp/slices/sort.go index f58bbc7ba4d..e270a74652f 100644 --- a/vendor/golang.org/x/exp/slices/sort.go +++ b/vendor/golang.org/x/exp/slices/sort.go @@ -2,21 +2,20 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -//go:generate go run $GOROOT/src/sort/gen_sort_variants.go -exp - package slices import ( - "math/bits" - - "golang.org/x/exp/constraints" + "cmp" + "slices" ) +// TODO(adonovan): add a "//go:fix inline" annotation to each function +// in this file; see https://go.dev/issue/32816. + // Sort sorts a slice of any ordered type in ascending order. // When sorting floating-point numbers, NaNs are ordered before other values. -func Sort[S ~[]E, E constraints.Ordered](x S) { - n := len(x) - pdqsortOrdered(x, 0, n, bits.Len(uint(n))) +func Sort[S ~[]E, E cmp.Ordered](x S) { + slices.Sort(x) } // SortFunc sorts the slice x in ascending order as determined by the cmp @@ -29,118 +28,60 @@ func Sort[S ~[]E, E constraints.Ordered](x S) { // See https://en.wikipedia.org/wiki/Weak_ordering#Strict_weak_orderings. // To indicate 'uncomparable', return 0 from the function. func SortFunc[S ~[]E, E any](x S, cmp func(a, b E) int) { - n := len(x) - pdqsortCmpFunc(x, 0, n, bits.Len(uint(n)), cmp) + slices.SortFunc(x, cmp) } // SortStableFunc sorts the slice x while keeping the original order of equal // elements, using cmp to compare elements in the same way as [SortFunc]. func SortStableFunc[S ~[]E, E any](x S, cmp func(a, b E) int) { - stableCmpFunc(x, len(x), cmp) + slices.SortStableFunc(x, cmp) } // IsSorted reports whether x is sorted in ascending order. -func IsSorted[S ~[]E, E constraints.Ordered](x S) bool { - for i := len(x) - 1; i > 0; i-- { - if cmpLess(x[i], x[i-1]) { - return false - } - } - return true +func IsSorted[S ~[]E, E cmp.Ordered](x S) bool { + return slices.IsSorted(x) } // IsSortedFunc reports whether x is sorted in ascending order, with cmp as the // comparison function as defined by [SortFunc]. func IsSortedFunc[S ~[]E, E any](x S, cmp func(a, b E) int) bool { - for i := len(x) - 1; i > 0; i-- { - if cmp(x[i], x[i-1]) < 0 { - return false - } - } - return true + return slices.IsSortedFunc(x, cmp) } // Min returns the minimal value in x. It panics if x is empty. // For floating-point numbers, Min propagates NaNs (any NaN value in x // forces the output to be NaN). 
-func Min[S ~[]E, E constraints.Ordered](x S) E { - if len(x) < 1 { - panic("slices.Min: empty list") - } - m := x[0] - for i := 1; i < len(x); i++ { - m = min(m, x[i]) - } - return m +func Min[S ~[]E, E cmp.Ordered](x S) E { + return slices.Min(x) } // MinFunc returns the minimal value in x, using cmp to compare elements. // It panics if x is empty. If there is more than one minimal element // according to the cmp function, MinFunc returns the first one. func MinFunc[S ~[]E, E any](x S, cmp func(a, b E) int) E { - if len(x) < 1 { - panic("slices.MinFunc: empty list") - } - m := x[0] - for i := 1; i < len(x); i++ { - if cmp(x[i], m) < 0 { - m = x[i] - } - } - return m + return slices.MinFunc(x, cmp) } // Max returns the maximal value in x. It panics if x is empty. // For floating-point E, Max propagates NaNs (any NaN value in x // forces the output to be NaN). -func Max[S ~[]E, E constraints.Ordered](x S) E { - if len(x) < 1 { - panic("slices.Max: empty list") - } - m := x[0] - for i := 1; i < len(x); i++ { - m = max(m, x[i]) - } - return m +func Max[S ~[]E, E cmp.Ordered](x S) E { + return slices.Max(x) } // MaxFunc returns the maximal value in x, using cmp to compare elements. // It panics if x is empty. If there is more than one maximal element // according to the cmp function, MaxFunc returns the first one. func MaxFunc[S ~[]E, E any](x S, cmp func(a, b E) int) E { - if len(x) < 1 { - panic("slices.MaxFunc: empty list") - } - m := x[0] - for i := 1; i < len(x); i++ { - if cmp(x[i], m) > 0 { - m = x[i] - } - } - return m + return slices.MaxFunc(x, cmp) } // BinarySearch searches for target in a sorted slice and returns the position // where target is found, or the position where target would appear in the // sort order; it also returns a bool saying whether the target is really found // in the slice. The slice must be sorted in increasing order. -func BinarySearch[S ~[]E, E constraints.Ordered](x S, target E) (int, bool) { - // Inlining is faster than calling BinarySearchFunc with a lambda. - n := len(x) - // Define x[-1] < target and x[n] >= target. - // Invariant: x[i-1] < target, x[j] >= target. - i, j := 0, n - for i < j { - h := int(uint(i+j) >> 1) // avoid overflow when computing h - // i ≤ h < j - if cmpLess(x[h], target) { - i = h + 1 // preserves x[i-1] < target - } else { - j = h // preserves x[j] >= target - } - } - // i == j, x[i-1] < target, and x[j] (= x[i]) >= target => answer is i. - return i, i < n && (x[i] == target || (isNaN(x[i]) && isNaN(target))) +func BinarySearch[S ~[]E, E cmp.Ordered](x S, target E) (int, bool) { + return slices.BinarySearch(x, target) } // BinarySearchFunc works like [BinarySearch], but uses a custom comparison @@ -151,47 +92,5 @@ func BinarySearch[S ~[]E, E constraints.Ordered](x S, target E) (int, bool) { // cmp must implement the same ordering as the slice, such that if // cmp(a, t) < 0 and cmp(b, t) >= 0, then a must precede b in the slice. func BinarySearchFunc[S ~[]E, E, T any](x S, target T, cmp func(E, T) int) (int, bool) { - n := len(x) - // Define cmp(x[-1], target) < 0 and cmp(x[n], target) >= 0 . - // Invariant: cmp(x[i - 1], target) < 0, cmp(x[j], target) >= 0. - i, j := 0, n - for i < j { - h := int(uint(i+j) >> 1) // avoid overflow when computing h - // i ≤ h < j - if cmp(x[h], target) < 0 { - i = h + 1 // preserves cmp(x[i - 1], target) < 0 - } else { - j = h // preserves cmp(x[j], target) >= 0 - } - } - // i == j, cmp(x[i-1], target) < 0, and cmp(x[j], target) (= cmp(x[i], target)) >= 0 => answer is i. 
- return i, i < n && cmp(x[i], target) == 0 -} - -type sortedHint int // hint for pdqsort when choosing the pivot - -const ( - unknownHint sortedHint = iota - increasingHint - decreasingHint -) - -// xorshift paper: https://www.jstatsoft.org/article/view/v008i14/xorshift.pdf -type xorshift uint64 - -func (r *xorshift) Next() uint64 { - *r ^= *r << 13 - *r ^= *r >> 17 - *r ^= *r << 5 - return uint64(*r) -} - -func nextPowerOfTwo(length int) uint { - return 1 << bits.Len(uint(length)) -} - -// isNaN reports whether x is a NaN without requiring the math package. -// This will always return false if T is not floating-point. -func isNaN[T constraints.Ordered](x T) bool { - return x != x + return slices.BinarySearchFunc(x, target, cmp) } diff --git a/vendor/golang.org/x/exp/slices/zsortanyfunc.go b/vendor/golang.org/x/exp/slices/zsortanyfunc.go deleted file mode 100644 index 06f2c7a2481..00000000000 --- a/vendor/golang.org/x/exp/slices/zsortanyfunc.go +++ /dev/null @@ -1,479 +0,0 @@ -// Code generated by gen_sort_variants.go; DO NOT EDIT. - -// Copyright 2022 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package slices - -// insertionSortCmpFunc sorts data[a:b] using insertion sort. -func insertionSortCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) { - for i := a + 1; i < b; i++ { - for j := i; j > a && (cmp(data[j], data[j-1]) < 0); j-- { - data[j], data[j-1] = data[j-1], data[j] - } - } -} - -// siftDownCmpFunc implements the heap property on data[lo:hi]. -// first is an offset into the array where the root of the heap lies. -func siftDownCmpFunc[E any](data []E, lo, hi, first int, cmp func(a, b E) int) { - root := lo - for { - child := 2*root + 1 - if child >= hi { - break - } - if child+1 < hi && (cmp(data[first+child], data[first+child+1]) < 0) { - child++ - } - if !(cmp(data[first+root], data[first+child]) < 0) { - return - } - data[first+root], data[first+child] = data[first+child], data[first+root] - root = child - } -} - -func heapSortCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) { - first := a - lo := 0 - hi := b - a - - // Build heap with greatest element at top. - for i := (hi - 1) / 2; i >= 0; i-- { - siftDownCmpFunc(data, i, hi, first, cmp) - } - - // Pop elements, largest first, into end of data. - for i := hi - 1; i >= 0; i-- { - data[first], data[first+i] = data[first+i], data[first] - siftDownCmpFunc(data, lo, i, first, cmp) - } -} - -// pdqsortCmpFunc sorts data[a:b]. -// The algorithm based on pattern-defeating quicksort(pdqsort), but without the optimizations from BlockQuicksort. -// pdqsort paper: https://arxiv.org/pdf/2106.05123.pdf -// C++ implementation: https://github.com/orlp/pdqsort -// Rust implementation: https://docs.rs/pdqsort/latest/pdqsort/ -// limit is the number of allowed bad (very unbalanced) pivots before falling back to heapsort. -func pdqsortCmpFunc[E any](data []E, a, b, limit int, cmp func(a, b E) int) { - const maxInsertion = 12 - - var ( - wasBalanced = true // whether the last partitioning was reasonably balanced - wasPartitioned = true // whether the slice was already partitioned - ) - - for { - length := b - a - - if length <= maxInsertion { - insertionSortCmpFunc(data, a, b, cmp) - return - } - - // Fall back to heapsort if too many bad choices were made. - if limit == 0 { - heapSortCmpFunc(data, a, b, cmp) - return - } - - // If the last partitioning was imbalanced, we need to breaking patterns. 
- if !wasBalanced { - breakPatternsCmpFunc(data, a, b, cmp) - limit-- - } - - pivot, hint := choosePivotCmpFunc(data, a, b, cmp) - if hint == decreasingHint { - reverseRangeCmpFunc(data, a, b, cmp) - // The chosen pivot was pivot-a elements after the start of the array. - // After reversing it is pivot-a elements before the end of the array. - // The idea came from Rust's implementation. - pivot = (b - 1) - (pivot - a) - hint = increasingHint - } - - // The slice is likely already sorted. - if wasBalanced && wasPartitioned && hint == increasingHint { - if partialInsertionSortCmpFunc(data, a, b, cmp) { - return - } - } - - // Probably the slice contains many duplicate elements, partition the slice into - // elements equal to and elements greater than the pivot. - if a > 0 && !(cmp(data[a-1], data[pivot]) < 0) { - mid := partitionEqualCmpFunc(data, a, b, pivot, cmp) - a = mid - continue - } - - mid, alreadyPartitioned := partitionCmpFunc(data, a, b, pivot, cmp) - wasPartitioned = alreadyPartitioned - - leftLen, rightLen := mid-a, b-mid - balanceThreshold := length / 8 - if leftLen < rightLen { - wasBalanced = leftLen >= balanceThreshold - pdqsortCmpFunc(data, a, mid, limit, cmp) - a = mid + 1 - } else { - wasBalanced = rightLen >= balanceThreshold - pdqsortCmpFunc(data, mid+1, b, limit, cmp) - b = mid - } - } -} - -// partitionCmpFunc does one quicksort partition. -// Let p = data[pivot] -// Moves elements in data[a:b] around, so that data[i]
<p and data[j]>
=p for inewpivot. -// On return, data[newpivot] = p -func partitionCmpFunc[E any](data []E, a, b, pivot int, cmp func(a, b E) int) (newpivot int, alreadyPartitioned bool) { - data[a], data[pivot] = data[pivot], data[a] - i, j := a+1, b-1 // i and j are inclusive of the elements remaining to be partitioned - - for i <= j && (cmp(data[i], data[a]) < 0) { - i++ - } - for i <= j && !(cmp(data[j], data[a]) < 0) { - j-- - } - if i > j { - data[j], data[a] = data[a], data[j] - return j, true - } - data[i], data[j] = data[j], data[i] - i++ - j-- - - for { - for i <= j && (cmp(data[i], data[a]) < 0) { - i++ - } - for i <= j && !(cmp(data[j], data[a]) < 0) { - j-- - } - if i > j { - break - } - data[i], data[j] = data[j], data[i] - i++ - j-- - } - data[j], data[a] = data[a], data[j] - return j, false -} - -// partitionEqualCmpFunc partitions data[a:b] into elements equal to data[pivot] followed by elements greater than data[pivot]. -// It assumed that data[a:b] does not contain elements smaller than the data[pivot]. -func partitionEqualCmpFunc[E any](data []E, a, b, pivot int, cmp func(a, b E) int) (newpivot int) { - data[a], data[pivot] = data[pivot], data[a] - i, j := a+1, b-1 // i and j are inclusive of the elements remaining to be partitioned - - for { - for i <= j && !(cmp(data[a], data[i]) < 0) { - i++ - } - for i <= j && (cmp(data[a], data[j]) < 0) { - j-- - } - if i > j { - break - } - data[i], data[j] = data[j], data[i] - i++ - j-- - } - return i -} - -// partialInsertionSortCmpFunc partially sorts a slice, returns true if the slice is sorted at the end. -func partialInsertionSortCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) bool { - const ( - maxSteps = 5 // maximum number of adjacent out-of-order pairs that will get shifted - shortestShifting = 50 // don't shift any elements on short arrays - ) - i := a + 1 - for j := 0; j < maxSteps; j++ { - for i < b && !(cmp(data[i], data[i-1]) < 0) { - i++ - } - - if i == b { - return true - } - - if b-a < shortestShifting { - return false - } - - data[i], data[i-1] = data[i-1], data[i] - - // Shift the smaller one to the left. - if i-a >= 2 { - for j := i - 1; j >= 1; j-- { - if !(cmp(data[j], data[j-1]) < 0) { - break - } - data[j], data[j-1] = data[j-1], data[j] - } - } - // Shift the greater one to the right. - if b-i >= 2 { - for j := i + 1; j < b; j++ { - if !(cmp(data[j], data[j-1]) < 0) { - break - } - data[j], data[j-1] = data[j-1], data[j] - } - } - } - return false -} - -// breakPatternsCmpFunc scatters some elements around in an attempt to break some patterns -// that might cause imbalanced partitions in quicksort. -func breakPatternsCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) { - length := b - a - if length >= 8 { - random := xorshift(length) - modulus := nextPowerOfTwo(length) - - for idx := a + (length/4)*2 - 1; idx <= a+(length/4)*2+1; idx++ { - other := int(uint(random.Next()) & (modulus - 1)) - if other >= length { - other -= length - } - data[idx], data[a+other] = data[a+other], data[idx] - } - } -} - -// choosePivotCmpFunc chooses a pivot in data[a:b]. -// -// [0,8): chooses a static pivot. -// [8,shortestNinther): uses the simple median-of-three method. -// [shortestNinther,∞): uses the Tukey ninther method. 
-func choosePivotCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) (pivot int, hint sortedHint) { - const ( - shortestNinther = 50 - maxSwaps = 4 * 3 - ) - - l := b - a - - var ( - swaps int - i = a + l/4*1 - j = a + l/4*2 - k = a + l/4*3 - ) - - if l >= 8 { - if l >= shortestNinther { - // Tukey ninther method, the idea came from Rust's implementation. - i = medianAdjacentCmpFunc(data, i, &swaps, cmp) - j = medianAdjacentCmpFunc(data, j, &swaps, cmp) - k = medianAdjacentCmpFunc(data, k, &swaps, cmp) - } - // Find the median among i, j, k and stores it into j. - j = medianCmpFunc(data, i, j, k, &swaps, cmp) - } - - switch swaps { - case 0: - return j, increasingHint - case maxSwaps: - return j, decreasingHint - default: - return j, unknownHint - } -} - -// order2CmpFunc returns x,y where data[x] <= data[y], where x,y=a,b or x,y=b,a. -func order2CmpFunc[E any](data []E, a, b int, swaps *int, cmp func(a, b E) int) (int, int) { - if cmp(data[b], data[a]) < 0 { - *swaps++ - return b, a - } - return a, b -} - -// medianCmpFunc returns x where data[x] is the median of data[a],data[b],data[c], where x is a, b, or c. -func medianCmpFunc[E any](data []E, a, b, c int, swaps *int, cmp func(a, b E) int) int { - a, b = order2CmpFunc(data, a, b, swaps, cmp) - b, c = order2CmpFunc(data, b, c, swaps, cmp) - a, b = order2CmpFunc(data, a, b, swaps, cmp) - return b -} - -// medianAdjacentCmpFunc finds the median of data[a - 1], data[a], data[a + 1] and stores the index into a. -func medianAdjacentCmpFunc[E any](data []E, a int, swaps *int, cmp func(a, b E) int) int { - return medianCmpFunc(data, a-1, a, a+1, swaps, cmp) -} - -func reverseRangeCmpFunc[E any](data []E, a, b int, cmp func(a, b E) int) { - i := a - j := b - 1 - for i < j { - data[i], data[j] = data[j], data[i] - i++ - j-- - } -} - -func swapRangeCmpFunc[E any](data []E, a, b, n int, cmp func(a, b E) int) { - for i := 0; i < n; i++ { - data[a+i], data[b+i] = data[b+i], data[a+i] - } -} - -func stableCmpFunc[E any](data []E, n int, cmp func(a, b E) int) { - blockSize := 20 // must be > 0 - a, b := 0, blockSize - for b <= n { - insertionSortCmpFunc(data, a, b, cmp) - a = b - b += blockSize - } - insertionSortCmpFunc(data, a, n, cmp) - - for blockSize < n { - a, b = 0, 2*blockSize - for b <= n { - symMergeCmpFunc(data, a, a+blockSize, b, cmp) - a = b - b += 2 * blockSize - } - if m := a + blockSize; m < n { - symMergeCmpFunc(data, a, m, n, cmp) - } - blockSize *= 2 - } -} - -// symMergeCmpFunc merges the two sorted subsequences data[a:m] and data[m:b] using -// the SymMerge algorithm from Pok-Son Kim and Arne Kutzner, "Stable Minimum -// Storage Merging by Symmetric Comparisons", in Susanne Albers and Tomasz -// Radzik, editors, Algorithms - ESA 2004, volume 3221 of Lecture Notes in -// Computer Science, pages 714-723. Springer, 2004. -// -// Let M = m-a and N = b-n. Wolog M < N. -// The recursion depth is bound by ceil(log(N+M)). -// The algorithm needs O(M*log(N/M + 1)) calls to data.Less. -// The algorithm needs O((M+N)*log(M)) calls to data.Swap. -// -// The paper gives O((M+N)*log(M)) as the number of assignments assuming a -// rotation algorithm which uses O(M+N+gcd(M+N)) assignments. The argumentation -// in the paper carries through for Swap operations, especially as the block -// swapping rotate uses only O(M+N) Swaps. -// -// symMerge assumes non-degenerate arguments: a < m && m < b. -// Having the caller check this condition eliminates many leaf recursion calls, -// which improves performance. 
-func symMergeCmpFunc[E any](data []E, a, m, b int, cmp func(a, b E) int) { - // Avoid unnecessary recursions of symMerge - // by direct insertion of data[a] into data[m:b] - // if data[a:m] only contains one element. - if m-a == 1 { - // Use binary search to find the lowest index i - // such that data[i] >= data[a] for m <= i < b. - // Exit the search loop with i == b in case no such index exists. - i := m - j := b - for i < j { - h := int(uint(i+j) >> 1) - if cmp(data[h], data[a]) < 0 { - i = h + 1 - } else { - j = h - } - } - // Swap values until data[a] reaches the position before i. - for k := a; k < i-1; k++ { - data[k], data[k+1] = data[k+1], data[k] - } - return - } - - // Avoid unnecessary recursions of symMerge - // by direct insertion of data[m] into data[a:m] - // if data[m:b] only contains one element. - if b-m == 1 { - // Use binary search to find the lowest index i - // such that data[i] > data[m] for a <= i < m. - // Exit the search loop with i == m in case no such index exists. - i := a - j := m - for i < j { - h := int(uint(i+j) >> 1) - if !(cmp(data[m], data[h]) < 0) { - i = h + 1 - } else { - j = h - } - } - // Swap values until data[m] reaches the position i. - for k := m; k > i; k-- { - data[k], data[k-1] = data[k-1], data[k] - } - return - } - - mid := int(uint(a+b) >> 1) - n := mid + m - var start, r int - if m > mid { - start = n - b - r = mid - } else { - start = a - r = m - } - p := n - 1 - - for start < r { - c := int(uint(start+r) >> 1) - if !(cmp(data[p-c], data[c]) < 0) { - start = c + 1 - } else { - r = c - } - } - - end := n - start - if start < m && m < end { - rotateCmpFunc(data, start, m, end, cmp) - } - if a < start && start < mid { - symMergeCmpFunc(data, a, start, mid, cmp) - } - if mid < end && end < b { - symMergeCmpFunc(data, mid, end, b, cmp) - } -} - -// rotateCmpFunc rotates two consecutive blocks u = data[a:m] and v = data[m:b] in data: -// Data of the form 'x u v y' is changed to 'x v u y'. -// rotate performs at most b-a many calls to data.Swap, -// and it assumes non-degenerate arguments: a < m && m < b. -func rotateCmpFunc[E any](data []E, a, m, b int, cmp func(a, b E) int) { - i := m - a - j := b - m - - for i != j { - if i > j { - swapRangeCmpFunc(data, m-i, m, j, cmp) - i -= j - } else { - swapRangeCmpFunc(data, m-i, m+j-i, i, cmp) - j -= i - } - } - // i == j - swapRangeCmpFunc(data, m-i, m, i, cmp) -} diff --git a/vendor/golang.org/x/exp/slices/zsortordered.go b/vendor/golang.org/x/exp/slices/zsortordered.go deleted file mode 100644 index 99b47c3986a..00000000000 --- a/vendor/golang.org/x/exp/slices/zsortordered.go +++ /dev/null @@ -1,481 +0,0 @@ -// Code generated by gen_sort_variants.go; DO NOT EDIT. - -// Copyright 2022 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package slices - -import "golang.org/x/exp/constraints" - -// insertionSortOrdered sorts data[a:b] using insertion sort. -func insertionSortOrdered[E constraints.Ordered](data []E, a, b int) { - for i := a + 1; i < b; i++ { - for j := i; j > a && cmpLess(data[j], data[j-1]); j-- { - data[j], data[j-1] = data[j-1], data[j] - } - } -} - -// siftDownOrdered implements the heap property on data[lo:hi]. -// first is an offset into the array where the root of the heap lies. 
-func siftDownOrdered[E constraints.Ordered](data []E, lo, hi, first int) { - root := lo - for { - child := 2*root + 1 - if child >= hi { - break - } - if child+1 < hi && cmpLess(data[first+child], data[first+child+1]) { - child++ - } - if !cmpLess(data[first+root], data[first+child]) { - return - } - data[first+root], data[first+child] = data[first+child], data[first+root] - root = child - } -} - -func heapSortOrdered[E constraints.Ordered](data []E, a, b int) { - first := a - lo := 0 - hi := b - a - - // Build heap with greatest element at top. - for i := (hi - 1) / 2; i >= 0; i-- { - siftDownOrdered(data, i, hi, first) - } - - // Pop elements, largest first, into end of data. - for i := hi - 1; i >= 0; i-- { - data[first], data[first+i] = data[first+i], data[first] - siftDownOrdered(data, lo, i, first) - } -} - -// pdqsortOrdered sorts data[a:b]. -// The algorithm based on pattern-defeating quicksort(pdqsort), but without the optimizations from BlockQuicksort. -// pdqsort paper: https://arxiv.org/pdf/2106.05123.pdf -// C++ implementation: https://github.com/orlp/pdqsort -// Rust implementation: https://docs.rs/pdqsort/latest/pdqsort/ -// limit is the number of allowed bad (very unbalanced) pivots before falling back to heapsort. -func pdqsortOrdered[E constraints.Ordered](data []E, a, b, limit int) { - const maxInsertion = 12 - - var ( - wasBalanced = true // whether the last partitioning was reasonably balanced - wasPartitioned = true // whether the slice was already partitioned - ) - - for { - length := b - a - - if length <= maxInsertion { - insertionSortOrdered(data, a, b) - return - } - - // Fall back to heapsort if too many bad choices were made. - if limit == 0 { - heapSortOrdered(data, a, b) - return - } - - // If the last partitioning was imbalanced, we need to breaking patterns. - if !wasBalanced { - breakPatternsOrdered(data, a, b) - limit-- - } - - pivot, hint := choosePivotOrdered(data, a, b) - if hint == decreasingHint { - reverseRangeOrdered(data, a, b) - // The chosen pivot was pivot-a elements after the start of the array. - // After reversing it is pivot-a elements before the end of the array. - // The idea came from Rust's implementation. - pivot = (b - 1) - (pivot - a) - hint = increasingHint - } - - // The slice is likely already sorted. - if wasBalanced && wasPartitioned && hint == increasingHint { - if partialInsertionSortOrdered(data, a, b) { - return - } - } - - // Probably the slice contains many duplicate elements, partition the slice into - // elements equal to and elements greater than the pivot. - if a > 0 && !cmpLess(data[a-1], data[pivot]) { - mid := partitionEqualOrdered(data, a, b, pivot) - a = mid - continue - } - - mid, alreadyPartitioned := partitionOrdered(data, a, b, pivot) - wasPartitioned = alreadyPartitioned - - leftLen, rightLen := mid-a, b-mid - balanceThreshold := length / 8 - if leftLen < rightLen { - wasBalanced = leftLen >= balanceThreshold - pdqsortOrdered(data, a, mid, limit) - a = mid + 1 - } else { - wasBalanced = rightLen >= balanceThreshold - pdqsortOrdered(data, mid+1, b, limit) - b = mid - } - } -} - -// partitionOrdered does one quicksort partition. -// Let p = data[pivot] -// Moves elements in data[a:b] around, so that data[i]
<p and data[j]>=p for i<newpivot and j>newpivot.
-// On return, data[newpivot] = p
-func partitionOrdered[E constraints.Ordered](data []E, a, b, pivot int) (newpivot int, alreadyPartitioned bool) {
-	data[a], data[pivot] = data[pivot], data[a]
-	i, j := a+1, b-1 // i and j are inclusive of the elements remaining to be partitioned
-
-	for i <= j && cmpLess(data[i], data[a]) {
-		i++
-	}
-	for i <= j && !cmpLess(data[j], data[a]) {
-		j--
-	}
-	if i > j {
-		data[j], data[a] = data[a], data[j]
-		return j, true
-	}
-	data[i], data[j] = data[j], data[i]
-	i++
-	j--
-
-	for {
-		for i <= j && cmpLess(data[i], data[a]) {
-			i++
-		}
-		for i <= j && !cmpLess(data[j], data[a]) {
-			j--
-		}
-		if i > j {
-			break
-		}
-		data[i], data[j] = data[j], data[i]
-		i++
-		j--
-	}
-	data[j], data[a] = data[a], data[j]
-	return j, false
-}
-
-// partitionEqualOrdered partitions data[a:b] into elements equal to data[pivot] followed by elements greater than data[pivot].
-// It assumed that data[a:b] does not contain elements smaller than the data[pivot].
-func partitionEqualOrdered[E constraints.Ordered](data []E, a, b, pivot int) (newpivot int) {
-	data[a], data[pivot] = data[pivot], data[a]
-	i, j := a+1, b-1 // i and j are inclusive of the elements remaining to be partitioned
-
-	for {
-		for i <= j && !cmpLess(data[a], data[i]) {
-			i++
-		}
-		for i <= j && cmpLess(data[a], data[j]) {
-			j--
-		}
-		if i > j {
-			break
-		}
-		data[i], data[j] = data[j], data[i]
-		i++
-		j--
-	}
-	return i
-}
-
-// partialInsertionSortOrdered partially sorts a slice, returns true if the slice is sorted at the end.
-func partialInsertionSortOrdered[E constraints.Ordered](data []E, a, b int) bool {
-	const (
-		maxSteps         = 5  // maximum number of adjacent out-of-order pairs that will get shifted
-		shortestShifting = 50 // don't shift any elements on short arrays
-	)
-	i := a + 1
-	for j := 0; j < maxSteps; j++ {
-		for i < b && !cmpLess(data[i], data[i-1]) {
-			i++
-		}
-
-		if i == b {
-			return true
-		}
-
-		if b-a < shortestShifting {
-			return false
-		}
-
-		data[i], data[i-1] = data[i-1], data[i]
-
-		// Shift the smaller one to the left.
-		if i-a >= 2 {
-			for j := i - 1; j >= 1; j-- {
-				if !cmpLess(data[j], data[j-1]) {
-					break
-				}
-				data[j], data[j-1] = data[j-1], data[j]
-			}
-		}
-		// Shift the greater one to the right.
-		if b-i >= 2 {
-			for j := i + 1; j < b; j++ {
-				if !cmpLess(data[j], data[j-1]) {
-					break
-				}
-				data[j], data[j-1] = data[j-1], data[j]
-			}
-		}
-	}
-	return false
-}
-
-// breakPatternsOrdered scatters some elements around in an attempt to break some patterns
-// that might cause imbalanced partitions in quicksort.
-func breakPatternsOrdered[E constraints.Ordered](data []E, a, b int) {
-	length := b - a
-	if length >= 8 {
-		random := xorshift(length)
-		modulus := nextPowerOfTwo(length)
-
-		for idx := a + (length/4)*2 - 1; idx <= a+(length/4)*2+1; idx++ {
-			other := int(uint(random.Next()) & (modulus - 1))
-			if other >= length {
-				other -= length
-			}
-			data[idx], data[a+other] = data[a+other], data[idx]
-		}
-	}
-}
-
-// choosePivotOrdered chooses a pivot in data[a:b].
-//
-// [0,8): chooses a static pivot.
-// [8,shortestNinther): uses the simple median-of-three method.
-// [shortestNinther,∞): uses the Tukey ninther method.
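The pivot strategy described in the deleted comment above is easiest to see in isolation. Here is a minimal, standalone Go sketch of the median-of-three step only; `medianOfThree` and the sample `main` are hypothetical names, not part of the vendored code, which additionally counts swaps so it can detect already sorted or reversed input. The deleted implementation continues below.

```go
package main

import "fmt"

// medianOfThree returns the index of the median of data[a], data[b], data[c].
// It mirrors the order2/median helpers in the deleted file, minus the swap
// counter used there to recognise sorted and reversed inputs.
func medianOfThree(data []int, a, b, c int) int {
	if data[b] < data[a] {
		a, b = b, a // now data[a] <= data[b]
	}
	if data[c] < data[b] {
		b, c = c, b // now data[b] <= data[c]
		if data[b] < data[a] {
			a, b = b, a // restore data[a] <= data[b]
		}
	}
	return b // data[b] is the median of the three
}

func main() {
	data := []int{9, 1, 5, 7, 3, 8, 2, 6}
	l := len(data)
	// Sample the quarter points, as choosePivotOrdered does for l >= 8.
	i, j, k := l/4*1, l/4*2, l/4*3
	fmt.Println("pivot index:", medianOfThree(data, i, j, k)) // pivot index: 4
}
```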
-func choosePivotOrdered[E constraints.Ordered](data []E, a, b int) (pivot int, hint sortedHint) { - const ( - shortestNinther = 50 - maxSwaps = 4 * 3 - ) - - l := b - a - - var ( - swaps int - i = a + l/4*1 - j = a + l/4*2 - k = a + l/4*3 - ) - - if l >= 8 { - if l >= shortestNinther { - // Tukey ninther method, the idea came from Rust's implementation. - i = medianAdjacentOrdered(data, i, &swaps) - j = medianAdjacentOrdered(data, j, &swaps) - k = medianAdjacentOrdered(data, k, &swaps) - } - // Find the median among i, j, k and stores it into j. - j = medianOrdered(data, i, j, k, &swaps) - } - - switch swaps { - case 0: - return j, increasingHint - case maxSwaps: - return j, decreasingHint - default: - return j, unknownHint - } -} - -// order2Ordered returns x,y where data[x] <= data[y], where x,y=a,b or x,y=b,a. -func order2Ordered[E constraints.Ordered](data []E, a, b int, swaps *int) (int, int) { - if cmpLess(data[b], data[a]) { - *swaps++ - return b, a - } - return a, b -} - -// medianOrdered returns x where data[x] is the median of data[a],data[b],data[c], where x is a, b, or c. -func medianOrdered[E constraints.Ordered](data []E, a, b, c int, swaps *int) int { - a, b = order2Ordered(data, a, b, swaps) - b, c = order2Ordered(data, b, c, swaps) - a, b = order2Ordered(data, a, b, swaps) - return b -} - -// medianAdjacentOrdered finds the median of data[a - 1], data[a], data[a + 1] and stores the index into a. -func medianAdjacentOrdered[E constraints.Ordered](data []E, a int, swaps *int) int { - return medianOrdered(data, a-1, a, a+1, swaps) -} - -func reverseRangeOrdered[E constraints.Ordered](data []E, a, b int) { - i := a - j := b - 1 - for i < j { - data[i], data[j] = data[j], data[i] - i++ - j-- - } -} - -func swapRangeOrdered[E constraints.Ordered](data []E, a, b, n int) { - for i := 0; i < n; i++ { - data[a+i], data[b+i] = data[b+i], data[a+i] - } -} - -func stableOrdered[E constraints.Ordered](data []E, n int) { - blockSize := 20 // must be > 0 - a, b := 0, blockSize - for b <= n { - insertionSortOrdered(data, a, b) - a = b - b += blockSize - } - insertionSortOrdered(data, a, n) - - for blockSize < n { - a, b = 0, 2*blockSize - for b <= n { - symMergeOrdered(data, a, a+blockSize, b) - a = b - b += 2 * blockSize - } - if m := a + blockSize; m < n { - symMergeOrdered(data, a, m, n) - } - blockSize *= 2 - } -} - -// symMergeOrdered merges the two sorted subsequences data[a:m] and data[m:b] using -// the SymMerge algorithm from Pok-Son Kim and Arne Kutzner, "Stable Minimum -// Storage Merging by Symmetric Comparisons", in Susanne Albers and Tomasz -// Radzik, editors, Algorithms - ESA 2004, volume 3221 of Lecture Notes in -// Computer Science, pages 714-723. Springer, 2004. -// -// Let M = m-a and N = b-n. Wolog M < N. -// The recursion depth is bound by ceil(log(N+M)). -// The algorithm needs O(M*log(N/M + 1)) calls to data.Less. -// The algorithm needs O((M+N)*log(M)) calls to data.Swap. -// -// The paper gives O((M+N)*log(M)) as the number of assignments assuming a -// rotation algorithm which uses O(M+N+gcd(M+N)) assignments. The argumentation -// in the paper carries through for Swap operations, especially as the block -// swapping rotate uses only O(M+N) Swaps. -// -// symMerge assumes non-degenerate arguments: a < m && m < b. -// Having the caller check this condition eliminates many leaf recursion calls, -// which improves performance. 
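The fast path that the doc comment above credits for skipping leaf recursions is worth reading on its own: when the left block holds a single element, the merge degenerates to a binary search plus adjacent swaps. Below is a minimal standalone sketch of just that case; `insertOneStable` is a hypothetical name, and `sort.Search` stands in for the hand-rolled binary search. The deleted implementation follows.

```go
package main

import (
	"fmt"
	"sort"
)

// insertOneStable merges the one-element block data[a:a+1] into the sorted
// block data[a+1:b], i.e. the m-a == 1 special case of symMergeOrdered:
// binary-search for the first element not less than data[a], then bubble
// data[a] rightwards with adjacent swaps so equal elements keep their order.
func insertOneStable(data []int, a, b int) {
	m := a + 1
	i := m + sort.Search(b-m, func(k int) bool { return data[m+k] >= data[a] })
	for k := a; k < i-1; k++ {
		data[k], data[k+1] = data[k+1], data[k]
	}
}

func main() {
	data := []int{5, 1, 2, 6, 8}
	insertOneStable(data, 0, len(data))
	fmt.Println(data) // [1 2 5 6 8]
}
```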
-func symMergeOrdered[E constraints.Ordered](data []E, a, m, b int) { - // Avoid unnecessary recursions of symMerge - // by direct insertion of data[a] into data[m:b] - // if data[a:m] only contains one element. - if m-a == 1 { - // Use binary search to find the lowest index i - // such that data[i] >= data[a] for m <= i < b. - // Exit the search loop with i == b in case no such index exists. - i := m - j := b - for i < j { - h := int(uint(i+j) >> 1) - if cmpLess(data[h], data[a]) { - i = h + 1 - } else { - j = h - } - } - // Swap values until data[a] reaches the position before i. - for k := a; k < i-1; k++ { - data[k], data[k+1] = data[k+1], data[k] - } - return - } - - // Avoid unnecessary recursions of symMerge - // by direct insertion of data[m] into data[a:m] - // if data[m:b] only contains one element. - if b-m == 1 { - // Use binary search to find the lowest index i - // such that data[i] > data[m] for a <= i < m. - // Exit the search loop with i == m in case no such index exists. - i := a - j := m - for i < j { - h := int(uint(i+j) >> 1) - if !cmpLess(data[m], data[h]) { - i = h + 1 - } else { - j = h - } - } - // Swap values until data[m] reaches the position i. - for k := m; k > i; k-- { - data[k], data[k-1] = data[k-1], data[k] - } - return - } - - mid := int(uint(a+b) >> 1) - n := mid + m - var start, r int - if m > mid { - start = n - b - r = mid - } else { - start = a - r = m - } - p := n - 1 - - for start < r { - c := int(uint(start+r) >> 1) - if !cmpLess(data[p-c], data[c]) { - start = c + 1 - } else { - r = c - } - } - - end := n - start - if start < m && m < end { - rotateOrdered(data, start, m, end) - } - if a < start && start < mid { - symMergeOrdered(data, a, start, mid) - } - if mid < end && end < b { - symMergeOrdered(data, mid, end, b) - } -} - -// rotateOrdered rotates two consecutive blocks u = data[a:m] and v = data[m:b] in data: -// Data of the form 'x u v y' is changed to 'x v u y'. -// rotate performs at most b-a many calls to data.Swap, -// and it assumes non-degenerate arguments: a < m && m < b. -func rotateOrdered[E constraints.Ordered](data []E, a, m, b int) { - i := m - a - j := b - m - - for i != j { - if i > j { - swapRangeOrdered(data, m-i, m, j) - i -= j - } else { - swapRangeOrdered(data, m-i, m+j-i, i) - j -= i - } - } - // i == j - swapRangeOrdered(data, m-i, m, i) -} diff --git a/vendor/golang.org/x/net/http2/frame.go b/vendor/golang.org/x/net/http2/frame.go index 105c3b279c0..81faec7e75d 100644 --- a/vendor/golang.org/x/net/http2/frame.go +++ b/vendor/golang.org/x/net/http2/frame.go @@ -1490,7 +1490,7 @@ func (mh *MetaHeadersFrame) checkPseudos() error { pf := mh.PseudoFields() for i, hf := range pf { switch hf.Name { - case ":method", ":path", ":scheme", ":authority": + case ":method", ":path", ":scheme", ":authority", ":protocol": isRequest = true case ":status": isResponse = true @@ -1498,7 +1498,7 @@ func (mh *MetaHeadersFrame) checkPseudos() error { return pseudoHeaderError(hf.Name) } // Check for duplicates. - // This would be a bad algorithm, but N is 4. + // This would be a bad algorithm, but N is 5. // And this doesn't allocate. 
for _, hf2 := range pf[:i] { if hf.Name == hf2.Name { diff --git a/vendor/golang.org/x/net/http2/http2.go b/vendor/golang.org/x/net/http2/http2.go index 7688c356b7c..c7601c909ff 100644 --- a/vendor/golang.org/x/net/http2/http2.go +++ b/vendor/golang.org/x/net/http2/http2.go @@ -34,10 +34,11 @@ import ( ) var ( - VerboseLogs bool - logFrameWrites bool - logFrameReads bool - inTests bool + VerboseLogs bool + logFrameWrites bool + logFrameReads bool + inTests bool + disableExtendedConnectProtocol bool ) func init() { @@ -50,6 +51,9 @@ func init() { logFrameWrites = true logFrameReads = true } + if strings.Contains(e, "http2xconnect=0") { + disableExtendedConnectProtocol = true + } } const ( @@ -141,6 +145,10 @@ func (s Setting) Valid() error { if s.Val < 16384 || s.Val > 1<<24-1 { return ConnectionError(ErrCodeProtocol) } + case SettingEnableConnectProtocol: + if s.Val != 1 && s.Val != 0 { + return ConnectionError(ErrCodeProtocol) + } } return nil } @@ -150,21 +158,23 @@ func (s Setting) Valid() error { type SettingID uint16 const ( - SettingHeaderTableSize SettingID = 0x1 - SettingEnablePush SettingID = 0x2 - SettingMaxConcurrentStreams SettingID = 0x3 - SettingInitialWindowSize SettingID = 0x4 - SettingMaxFrameSize SettingID = 0x5 - SettingMaxHeaderListSize SettingID = 0x6 + SettingHeaderTableSize SettingID = 0x1 + SettingEnablePush SettingID = 0x2 + SettingMaxConcurrentStreams SettingID = 0x3 + SettingInitialWindowSize SettingID = 0x4 + SettingMaxFrameSize SettingID = 0x5 + SettingMaxHeaderListSize SettingID = 0x6 + SettingEnableConnectProtocol SettingID = 0x8 ) var settingName = map[SettingID]string{ - SettingHeaderTableSize: "HEADER_TABLE_SIZE", - SettingEnablePush: "ENABLE_PUSH", - SettingMaxConcurrentStreams: "MAX_CONCURRENT_STREAMS", - SettingInitialWindowSize: "INITIAL_WINDOW_SIZE", - SettingMaxFrameSize: "MAX_FRAME_SIZE", - SettingMaxHeaderListSize: "MAX_HEADER_LIST_SIZE", + SettingHeaderTableSize: "HEADER_TABLE_SIZE", + SettingEnablePush: "ENABLE_PUSH", + SettingMaxConcurrentStreams: "MAX_CONCURRENT_STREAMS", + SettingInitialWindowSize: "INITIAL_WINDOW_SIZE", + SettingMaxFrameSize: "MAX_FRAME_SIZE", + SettingMaxHeaderListSize: "MAX_HEADER_LIST_SIZE", + SettingEnableConnectProtocol: "ENABLE_CONNECT_PROTOCOL", } func (s SettingID) String() string { diff --git a/vendor/golang.org/x/net/http2/server.go b/vendor/golang.org/x/net/http2/server.go index 832414b450c..b55547aec64 100644 --- a/vendor/golang.org/x/net/http2/server.go +++ b/vendor/golang.org/x/net/http2/server.go @@ -932,14 +932,18 @@ func (sc *serverConn) serve(conf http2Config) { sc.vlogf("http2: server connection from %v on %p", sc.conn.RemoteAddr(), sc.hs) } + settings := writeSettings{ + {SettingMaxFrameSize, conf.MaxReadFrameSize}, + {SettingMaxConcurrentStreams, sc.advMaxStreams}, + {SettingMaxHeaderListSize, sc.maxHeaderListSize()}, + {SettingHeaderTableSize, conf.MaxDecoderHeaderTableSize}, + {SettingInitialWindowSize, uint32(sc.initialStreamRecvWindowSize)}, + } + if !disableExtendedConnectProtocol { + settings = append(settings, Setting{SettingEnableConnectProtocol, 1}) + } sc.writeFrame(FrameWriteRequest{ - write: writeSettings{ - {SettingMaxFrameSize, conf.MaxReadFrameSize}, - {SettingMaxConcurrentStreams, sc.advMaxStreams}, - {SettingMaxHeaderListSize, sc.maxHeaderListSize()}, - {SettingHeaderTableSize, conf.MaxDecoderHeaderTableSize}, - {SettingInitialWindowSize, uint32(sc.initialStreamRecvWindowSize)}, - }, + write: settings, }) sc.unackedSettings++ @@ -1801,6 +1805,9 @@ func (sc *serverConn) processSetting(s 
Setting) error { sc.maxFrameSize = int32(s.Val) // the maximum valid s.Val is < 2^31 case SettingMaxHeaderListSize: sc.peerMaxHeaderListSize = s.Val + case SettingEnableConnectProtocol: + // Receipt of this parameter by a server does not + // have any impact default: // Unknown setting: "An endpoint that receives a SETTINGS // frame with any unknown or unsupported identifier MUST @@ -2231,11 +2238,17 @@ func (sc *serverConn) newWriterAndRequest(st *stream, f *MetaHeadersFrame) (*res scheme: f.PseudoValue("scheme"), authority: f.PseudoValue("authority"), path: f.PseudoValue("path"), + protocol: f.PseudoValue("protocol"), + } + + // extended connect is disabled, so we should not see :protocol + if disableExtendedConnectProtocol && rp.protocol != "" { + return nil, nil, sc.countError("bad_connect", streamError(f.StreamID, ErrCodeProtocol)) } isConnect := rp.method == "CONNECT" if isConnect { - if rp.path != "" || rp.scheme != "" || rp.authority == "" { + if rp.protocol == "" && (rp.path != "" || rp.scheme != "" || rp.authority == "") { return nil, nil, sc.countError("bad_connect", streamError(f.StreamID, ErrCodeProtocol)) } } else if rp.method == "" || rp.path == "" || (rp.scheme != "https" && rp.scheme != "http") { @@ -2259,6 +2272,9 @@ func (sc *serverConn) newWriterAndRequest(st *stream, f *MetaHeadersFrame) (*res if rp.authority == "" { rp.authority = rp.header.Get("Host") } + if rp.protocol != "" { + rp.header.Set(":protocol", rp.protocol) + } rw, req, err := sc.newWriterAndRequestNoBody(st, rp) if err != nil { @@ -2285,6 +2301,7 @@ func (sc *serverConn) newWriterAndRequest(st *stream, f *MetaHeadersFrame) (*res type requestParam struct { method string scheme, authority, path string + protocol string header http.Header } @@ -2326,7 +2343,7 @@ func (sc *serverConn) newWriterAndRequestNoBody(st *stream, rp requestParam) (*r var url_ *url.URL var requestURI string - if rp.method == "CONNECT" { + if rp.method == "CONNECT" && rp.protocol == "" { url_ = &url.URL{Host: rp.authority} requestURI = rp.authority // mimic HTTP/1 server behavior } else { diff --git a/vendor/golang.org/x/net/http2/transport.go b/vendor/golang.org/x/net/http2/transport.go index f5968f44071..090d0e1bdb5 100644 --- a/vendor/golang.org/x/net/http2/transport.go +++ b/vendor/golang.org/x/net/http2/transport.go @@ -368,25 +368,26 @@ type ClientConn struct { idleTimeout time.Duration // or 0 for never idleTimer timer - mu sync.Mutex // guards following - cond *sync.Cond // hold mu; broadcast on flow/closed changes - flow outflow // our conn-level flow control quota (cs.outflow is per stream) - inflow inflow // peer's conn-level flow control - doNotReuse bool // whether conn is marked to not be reused for any future requests - closing bool - closed bool - seenSettings bool // true if we've seen a settings frame, false otherwise - wantSettingsAck bool // we sent a SETTINGS frame and haven't heard back - goAway *GoAwayFrame // if non-nil, the GoAwayFrame we received - goAwayDebug string // goAway frame's debug data, retained as a string - streams map[uint32]*clientStream // client-initiated - streamsReserved int // incr by ReserveNewRequest; decr on RoundTrip - nextStreamID uint32 - pendingRequests int // requests blocked and waiting to be sent because len(streams) == maxConcurrentStreams - pings map[[8]byte]chan struct{} // in flight ping data to notification channel - br *bufio.Reader - lastActive time.Time - lastIdle time.Time // time last idle + mu sync.Mutex // guards following + cond *sync.Cond // hold mu; broadcast on 
flow/closed changes + flow outflow // our conn-level flow control quota (cs.outflow is per stream) + inflow inflow // peer's conn-level flow control + doNotReuse bool // whether conn is marked to not be reused for any future requests + closing bool + closed bool + seenSettings bool // true if we've seen a settings frame, false otherwise + seenSettingsChan chan struct{} // closed when seenSettings is true or frame reading fails + wantSettingsAck bool // we sent a SETTINGS frame and haven't heard back + goAway *GoAwayFrame // if non-nil, the GoAwayFrame we received + goAwayDebug string // goAway frame's debug data, retained as a string + streams map[uint32]*clientStream // client-initiated + streamsReserved int // incr by ReserveNewRequest; decr on RoundTrip + nextStreamID uint32 + pendingRequests int // requests blocked and waiting to be sent because len(streams) == maxConcurrentStreams + pings map[[8]byte]chan struct{} // in flight ping data to notification channel + br *bufio.Reader + lastActive time.Time + lastIdle time.Time // time last idle // Settings from peer: (also guarded by wmu) maxFrameSize uint32 maxConcurrentStreams uint32 @@ -396,6 +397,17 @@ type ClientConn struct { initialStreamRecvWindowSize int32 readIdleTimeout time.Duration pingTimeout time.Duration + extendedConnectAllowed bool + + // rstStreamPingsBlocked works around an unfortunate gRPC behavior. + // gRPC strictly limits the number of PING frames that it will receive. + // The default is two pings per two hours, but the limit resets every time + // the gRPC endpoint sends a HEADERS or DATA frame. See golang/go#70575. + // + // rstStreamPingsBlocked is set after receiving a response to a PING frame + // bundled with an RST_STREAM (see pendingResets below), and cleared after + // receiving a HEADERS or DATA frame. + rstStreamPingsBlocked bool // pendingResets is the number of RST_STREAM frames we have sent to the peer, // without confirming that the peer has received them. When we send a RST_STREAM, @@ -819,6 +831,7 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro peerMaxHeaderListSize: 0xffffffffffffffff, // "infinite", per spec. Use 2^64-1 instead. streams: make(map[uint32]*clientStream), singleUse: singleUse, + seenSettingsChan: make(chan struct{}), wantSettingsAck: true, readIdleTimeout: conf.SendPingTimeout, pingTimeout: conf.PingTimeout, @@ -1466,6 +1479,8 @@ func (cs *clientStream) doRequest(req *http.Request, streamf func(*clientStream) cs.cleanupWriteRequest(err) } +var errExtendedConnectNotSupported = errors.New("net/http: extended connect not supported by peer") + // writeRequest sends a request. // // It returns nil after the request is written, the response read, @@ -1481,12 +1496,31 @@ func (cs *clientStream) writeRequest(req *http.Request, streamf func(*clientStre return err } + // wait for setting frames to be received, a server can change this value later, + // but we just wait for the first settings frame + var isExtendedConnect bool + if req.Method == "CONNECT" && req.Header.Get(":protocol") != "" { + isExtendedConnect = true + } + // Acquire the new-request lock by writing to reqHeaderMu. // This lock guards the critical section covering allocating a new stream ID // (requires mu) and creating the stream (requires wmu). 
if cc.reqHeaderMu == nil { panic("RoundTrip on uninitialized ClientConn") // for tests } + if isExtendedConnect { + select { + case <-cs.reqCancel: + return errRequestCanceled + case <-ctx.Done(): + return ctx.Err() + case <-cc.seenSettingsChan: + if !cc.extendedConnectAllowed { + return errExtendedConnectNotSupported + } + } + } select { case cc.reqHeaderMu <- struct{}{}: case <-cs.reqCancel: @@ -1714,10 +1748,14 @@ func (cs *clientStream) cleanupWriteRequest(err error) { ping := false if !closeOnIdle { cc.mu.Lock() - if cc.pendingResets == 0 { - ping = true + // rstStreamPingsBlocked works around a gRPC behavior: + // see comment on the field for details. + if !cc.rstStreamPingsBlocked { + if cc.pendingResets == 0 { + ping = true + } + cc.pendingResets++ } - cc.pendingResets++ cc.mu.Unlock() } cc.writeStreamReset(cs.ID, ErrCodeCancel, ping, err) @@ -2030,7 +2068,7 @@ func (cs *clientStream) awaitFlowControl(maxBytes int) (taken int32, err error) func validateHeaders(hdrs http.Header) string { for k, vv := range hdrs { - if !httpguts.ValidHeaderFieldName(k) { + if !httpguts.ValidHeaderFieldName(k) && k != ":protocol" { return fmt.Sprintf("name %q", k) } for _, v := range vv { @@ -2046,6 +2084,10 @@ func validateHeaders(hdrs http.Header) string { var errNilRequestURL = errors.New("http2: Request.URI is nil") +func isNormalConnect(req *http.Request) bool { + return req.Method == "CONNECT" && req.Header.Get(":protocol") == "" +} + // requires cc.wmu be held. func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trailers string, contentLength int64) ([]byte, error) { cc.hbuf.Reset() @@ -2066,7 +2108,7 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail } var path string - if req.Method != "CONNECT" { + if !isNormalConnect(req) { path = req.URL.RequestURI() if !validPseudoPath(path) { orig := path @@ -2103,7 +2145,7 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail m = http.MethodGet } f(":method", m) - if req.Method != "CONNECT" { + if !isNormalConnect(req) { f(":path", path) f(":scheme", req.URL.Scheme) } @@ -2461,7 +2503,7 @@ func (rl *clientConnReadLoop) run() error { cc.vlogf("http2: Transport readFrame error on conn %p: (%T) %v", cc, err, err) } if se, ok := err.(StreamError); ok { - if cs := rl.streamByID(se.StreamID); cs != nil { + if cs := rl.streamByID(se.StreamID, notHeaderOrDataFrame); cs != nil { if se.Cause == nil { se.Cause = cc.fr.errDetail } @@ -2507,13 +2549,16 @@ func (rl *clientConnReadLoop) run() error { if VerboseLogs { cc.vlogf("http2: Transport conn %p received error from processing frame %v: %v", cc, summarizeFrame(f), err) } + if !cc.seenSettings { + close(cc.seenSettingsChan) + } return err } } } func (rl *clientConnReadLoop) processHeaders(f *MetaHeadersFrame) error { - cs := rl.streamByID(f.StreamID) + cs := rl.streamByID(f.StreamID, headerOrDataFrame) if cs == nil { // We'd get here if we canceled a request while the // server had its response still in flight. 
So if this @@ -2842,7 +2887,7 @@ func (b transportResponseBody) Close() error { func (rl *clientConnReadLoop) processData(f *DataFrame) error { cc := rl.cc - cs := rl.streamByID(f.StreamID) + cs := rl.streamByID(f.StreamID, headerOrDataFrame) data := f.Data() if cs == nil { cc.mu.Lock() @@ -2977,9 +3022,22 @@ func (rl *clientConnReadLoop) endStreamError(cs *clientStream, err error) { cs.abortStream(err) } -func (rl *clientConnReadLoop) streamByID(id uint32) *clientStream { +// Constants passed to streamByID for documentation purposes. +const ( + headerOrDataFrame = true + notHeaderOrDataFrame = false +) + +// streamByID returns the stream with the given id, or nil if no stream has that id. +// If headerOrData is true, it clears rst.StreamPingsBlocked. +func (rl *clientConnReadLoop) streamByID(id uint32, headerOrData bool) *clientStream { rl.cc.mu.Lock() defer rl.cc.mu.Unlock() + if headerOrData { + // Work around an unfortunate gRPC behavior. + // See comment on ClientConn.rstStreamPingsBlocked for details. + rl.cc.rstStreamPingsBlocked = false + } cs := rl.cc.streams[id] if cs != nil && !cs.readAborted { return cs @@ -3073,6 +3131,21 @@ func (rl *clientConnReadLoop) processSettingsNoWrite(f *SettingsFrame) error { case SettingHeaderTableSize: cc.henc.SetMaxDynamicTableSize(s.Val) cc.peerMaxHeaderTableSize = s.Val + case SettingEnableConnectProtocol: + if err := s.Valid(); err != nil { + return err + } + // If the peer wants to send us SETTINGS_ENABLE_CONNECT_PROTOCOL, + // we require that it do so in the first SETTINGS frame. + // + // When we attempt to use extended CONNECT, we wait for the first + // SETTINGS frame to see if the server supports it. If we let the + // server enable the feature with a later SETTINGS frame, then + // users will see inconsistent results depending on whether we've + // seen that frame or not. + if !cc.seenSettings { + cc.extendedConnectAllowed = s.Val == 1 + } default: cc.vlogf("Unhandled Setting: %v", s) } @@ -3090,6 +3163,7 @@ func (rl *clientConnReadLoop) processSettingsNoWrite(f *SettingsFrame) error { // connection can establish to our default. cc.maxConcurrentStreams = defaultMaxConcurrentStreams } + close(cc.seenSettingsChan) cc.seenSettings = true } @@ -3098,7 +3172,7 @@ func (rl *clientConnReadLoop) processSettingsNoWrite(f *SettingsFrame) error { func (rl *clientConnReadLoop) processWindowUpdate(f *WindowUpdateFrame) error { cc := rl.cc - cs := rl.streamByID(f.StreamID) + cs := rl.streamByID(f.StreamID, notHeaderOrDataFrame) if f.StreamID != 0 && cs == nil { return nil } @@ -3127,7 +3201,7 @@ func (rl *clientConnReadLoop) processWindowUpdate(f *WindowUpdateFrame) error { } func (rl *clientConnReadLoop) processResetStream(f *RSTStreamFrame) error { - cs := rl.streamByID(f.StreamID) + cs := rl.streamByID(f.StreamID, notHeaderOrDataFrame) if cs == nil { // TODO: return error if server tries to RST_STREAM an idle stream return nil @@ -3205,6 +3279,7 @@ func (rl *clientConnReadLoop) processPing(f *PingFrame) error { if cc.pendingResets > 0 { // See clientStream.cleanupWriteRequest. 
cc.pendingResets = 0 + cc.rstStreamPingsBlocked = true cc.cond.Broadcast() } return nil diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux.go b/vendor/golang.org/x/sys/unix/zerrors_linux.go index ccba391c9fb..6ebc48b3fec 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux.go @@ -321,6 +321,9 @@ const ( AUDIT_INTEGRITY_STATUS = 0x70a AUDIT_IPC = 0x517 AUDIT_IPC_SET_PERM = 0x51f + AUDIT_IPE_ACCESS = 0x58c + AUDIT_IPE_CONFIG_CHANGE = 0x58d + AUDIT_IPE_POLICY_LOAD = 0x58e AUDIT_KERNEL = 0x7d0 AUDIT_KERNEL_OTHER = 0x524 AUDIT_KERN_MODULE = 0x532 @@ -489,6 +492,7 @@ const ( BPF_F_ID = 0x20 BPF_F_NETFILTER_IP_DEFRAG = 0x1 BPF_F_QUERY_EFFECTIVE = 0x1 + BPF_F_REDIRECT_FLAGS = 0x19 BPF_F_REPLACE = 0x4 BPF_F_SLEEPABLE = 0x10 BPF_F_STRICT_ALIGNMENT = 0x1 @@ -1166,6 +1170,7 @@ const ( EXTA = 0xe EXTB = 0xf F2FS_SUPER_MAGIC = 0xf2f52010 + FALLOC_FL_ALLOCATE_RANGE = 0x0 FALLOC_FL_COLLAPSE_RANGE = 0x8 FALLOC_FL_INSERT_RANGE = 0x20 FALLOC_FL_KEEP_SIZE = 0x1 @@ -1799,6 +1804,8 @@ const ( LANDLOCK_ACCESS_NET_BIND_TCP = 0x1 LANDLOCK_ACCESS_NET_CONNECT_TCP = 0x2 LANDLOCK_CREATE_RULESET_VERSION = 0x1 + LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET = 0x1 + LANDLOCK_SCOPE_SIGNAL = 0x2 LINUX_REBOOT_CMD_CAD_OFF = 0x0 LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef LINUX_REBOOT_CMD_HALT = 0xcdef0123 @@ -1924,6 +1931,7 @@ const ( MNT_FORCE = 0x1 MNT_ID_REQ_SIZE_VER0 = 0x18 MNT_ID_REQ_SIZE_VER1 = 0x20 + MNT_NS_INFO_SIZE_VER0 = 0x10 MODULE_INIT_COMPRESSED_FILE = 0x4 MODULE_INIT_IGNORE_MODVERSIONS = 0x1 MODULE_INIT_IGNORE_VERMAGIC = 0x2 @@ -2970,6 +2978,7 @@ const ( RWF_WRITE_LIFE_NOT_SET = 0x0 SCHED_BATCH = 0x3 SCHED_DEADLINE = 0x6 + SCHED_EXT = 0x7 SCHED_FIFO = 0x1 SCHED_FLAG_ALL = 0x7f SCHED_FLAG_DL_OVERRUN = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go index 0c00cb3f3af..c0d45e32050 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go @@ -109,6 +109,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -297,6 +298,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -335,6 +338,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go index dfb364554dd..c731d24f025 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go @@ -109,6 +109,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -298,6 +299,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -336,6 +339,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e 
SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go index d46dcf78abc..680018a4a7c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -303,6 +304,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -341,6 +344,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go index 3af3248a7f2..a63909f308d 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go @@ -112,6 +112,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -205,6 +206,7 @@ const ( PERF_EVENT_IOC_SET_BPF = 0x40042408 PERF_EVENT_IOC_SET_FILTER = 0x40082406 PERF_EVENT_IOC_SET_OUTPUT = 0x2405 + POE_MAGIC = 0x504f4530 PPPIOCATTACH = 0x4004743d PPPIOCATTCHAN = 0x40047438 PPPIOCBRIDGECHAN = 0x40047435 @@ -294,6 +296,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -332,6 +336,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go index 292bcf0283d..9b0a2573fe3 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go @@ -109,6 +109,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -290,6 +291,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -328,6 +331,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go index 782b7110fa1..958e6e0645a 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x100 @@ -296,6 +297,8 @@ 
const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -334,6 +337,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x1029 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go index 84973fd9271..50c7f25bd16 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x100 @@ -296,6 +297,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -334,6 +337,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x1029 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go index 6d9cbc3b274..ced21d66d95 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x100 @@ -296,6 +297,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -334,6 +337,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x1029 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go index 5f9fedbce02..226c0441902 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x100 @@ -296,6 +297,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -334,6 +337,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x1029 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go index bb0026ee0c4..3122737cd46 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go +++ 
b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x4000 ICANON = 0x100 IEXTEN = 0x400 @@ -351,6 +352,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -389,6 +392,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go index 46120db5c9a..eb5d3467edf 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x4000 ICANON = 0x100 IEXTEN = 0x400 @@ -355,6 +356,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -393,6 +396,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go index 5c951634fbe..e921ebc60b7 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x4000 ICANON = 0x100 IEXTEN = 0x400 @@ -355,6 +356,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -393,6 +396,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go index 11a84d5af20..38ba81c55c1 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -287,6 +288,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -325,6 +328,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git 
a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go index f78c4617cac..71f0400977b 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go @@ -108,6 +108,7 @@ const ( HIDIOCGRAWINFO = 0x80084803 HIDIOCGRDESC = 0x90044802 HIDIOCGRDESCSIZE = 0x80044801 + HIDIOCREVOKE = 0x4004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -359,6 +360,8 @@ const ( RTC_WIE_ON = 0x700f RTC_WKALM_RD = 0x80287010 RTC_WKALM_SET = 0x4028700f + SCM_DEVMEM_DMABUF = 0x4f + SCM_DEVMEM_LINEAR = 0x4e SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 SCM_TIMESTAMPING_PKTINFO = 0x3a @@ -397,6 +400,9 @@ const ( SO_CNX_ADVICE = 0x35 SO_COOKIE = 0x39 SO_DETACH_REUSEPORT_BPF = 0x44 + SO_DEVMEM_DMABUF = 0x4f + SO_DEVMEM_DONTNEED = 0x50 + SO_DEVMEM_LINEAR = 0x4e SO_DOMAIN = 0x27 SO_DONTROUTE = 0x5 SO_ERROR = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go index aeb777c3442..c44a313322c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go @@ -112,6 +112,7 @@ const ( HIDIOCGRAWINFO = 0x40084803 HIDIOCGRDESC = 0x50044802 HIDIOCGRDESCSIZE = 0x40044801 + HIDIOCREVOKE = 0x8004480d HUPCL = 0x400 ICANON = 0x2 IEXTEN = 0x8000 @@ -350,6 +351,8 @@ const ( RTC_WIE_ON = 0x2000700f RTC_WKALM_RD = 0x40287010 RTC_WKALM_SET = 0x8028700f + SCM_DEVMEM_DMABUF = 0x58 + SCM_DEVMEM_LINEAR = 0x57 SCM_TIMESTAMPING = 0x23 SCM_TIMESTAMPING_OPT_STATS = 0x38 SCM_TIMESTAMPING_PKTINFO = 0x3c @@ -436,6 +439,9 @@ const ( SO_CNX_ADVICE = 0x37 SO_COOKIE = 0x3b SO_DETACH_REUSEPORT_BPF = 0x47 + SO_DEVMEM_DMABUF = 0x58 + SO_DEVMEM_DONTNEED = 0x59 + SO_DEVMEM_LINEAR = 0x57 SO_DOMAIN = 0x1029 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go index d003c3d4378..17c53bd9b33 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go @@ -462,11 +462,14 @@ type FdSet struct { const ( SizeofIfMsghdr = 0x70 + SizeofIfMsghdr2 = 0xa0 SizeofIfData = 0x60 + SizeofIfData64 = 0x80 SizeofIfaMsghdr = 0x14 SizeofIfmaMsghdr = 0x10 SizeofIfmaMsghdr2 = 0x14 SizeofRtMsghdr = 0x5c + SizeofRtMsghdr2 = 0x5c SizeofRtMetrics = 0x38 ) @@ -480,6 +483,20 @@ type IfMsghdr struct { Data IfData } +type IfMsghdr2 struct { + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + Snd_len int32 + Snd_maxlen int32 + Snd_drops int32 + Timer int32 + Data IfData64 +} + type IfData struct { Type uint8 Typelen uint8 @@ -512,6 +529,34 @@ type IfData struct { Reserved2 uint32 } +type IfData64 struct { + Type uint8 + Typelen uint8 + Physical uint8 + Addrlen uint8 + Hdrlen uint8 + Recvquota uint8 + Xmitquota uint8 + Unused1 uint8 + Mtu uint32 + Metric uint32 + Baudrate uint64 + Ipackets uint64 + Ierrors uint64 + Opackets uint64 + Oerrors uint64 + Collisions uint64 + Ibytes uint64 + Obytes uint64 + Imcasts uint64 + Omcasts uint64 + Iqdrops uint64 + Noproto uint64 + Recvtiming uint32 + Xmittiming uint32 + Lastchange Timeval32 +} + type IfaMsghdr struct { Msglen uint16 Version uint8 @@ -557,6 +602,21 @@ type RtMsghdr struct { Rmx RtMetrics } +type RtMsghdr2 struct { + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + Flags int32 + Addrs int32 + Refcnt int32 + Parentflags int32 + Reserved int32 + Use int32 + Inits uint32 + 
Rmx RtMetrics +} + type RtMetrics struct { Locks uint32 Mtu uint32 diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go index 0d45a941aae..2392226a743 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go @@ -462,11 +462,14 @@ type FdSet struct { const ( SizeofIfMsghdr = 0x70 + SizeofIfMsghdr2 = 0xa0 SizeofIfData = 0x60 + SizeofIfData64 = 0x80 SizeofIfaMsghdr = 0x14 SizeofIfmaMsghdr = 0x10 SizeofIfmaMsghdr2 = 0x14 SizeofRtMsghdr = 0x5c + SizeofRtMsghdr2 = 0x5c SizeofRtMetrics = 0x38 ) @@ -480,6 +483,20 @@ type IfMsghdr struct { Data IfData } +type IfMsghdr2 struct { + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + Snd_len int32 + Snd_maxlen int32 + Snd_drops int32 + Timer int32 + Data IfData64 +} + type IfData struct { Type uint8 Typelen uint8 @@ -512,6 +529,34 @@ type IfData struct { Reserved2 uint32 } +type IfData64 struct { + Type uint8 + Typelen uint8 + Physical uint8 + Addrlen uint8 + Hdrlen uint8 + Recvquota uint8 + Xmitquota uint8 + Unused1 uint8 + Mtu uint32 + Metric uint32 + Baudrate uint64 + Ipackets uint64 + Ierrors uint64 + Opackets uint64 + Oerrors uint64 + Collisions uint64 + Ibytes uint64 + Obytes uint64 + Imcasts uint64 + Omcasts uint64 + Iqdrops uint64 + Noproto uint64 + Recvtiming uint32 + Xmittiming uint32 + Lastchange Timeval32 +} + type IfaMsghdr struct { Msglen uint16 Version uint8 @@ -557,6 +602,21 @@ type RtMsghdr struct { Rmx RtMetrics } +type RtMsghdr2 struct { + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + Flags int32 + Addrs int32 + Refcnt int32 + Parentflags int32 + Reserved int32 + Use int32 + Inits uint32 + Rmx RtMetrics +} + type RtMetrics struct { Locks uint32 Mtu uint32 diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux.go b/vendor/golang.org/x/sys/unix/ztypes_linux.go index 8daaf3faf4c..5537148dcbb 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux.go @@ -2594,8 +2594,8 @@ const ( SOF_TIMESTAMPING_BIND_PHC = 0x8000 SOF_TIMESTAMPING_OPT_ID_TCP = 0x10000 - SOF_TIMESTAMPING_LAST = 0x10000 - SOF_TIMESTAMPING_MASK = 0x1ffff + SOF_TIMESTAMPING_LAST = 0x20000 + SOF_TIMESTAMPING_MASK = 0x3ffff SCM_TSTAMP_SND = 0x0 SCM_TSTAMP_SCHED = 0x1 @@ -3541,7 +3541,7 @@ type Nhmsg struct { type NexthopGrp struct { Id uint32 Weight uint8 - Resvd1 uint8 + High uint8 Resvd2 uint16 } @@ -3802,7 +3802,7 @@ const ( ETHTOOL_MSG_PSE_GET = 0x24 ETHTOOL_MSG_PSE_SET = 0x25 ETHTOOL_MSG_RSS_GET = 0x26 - ETHTOOL_MSG_USER_MAX = 0x2c + ETHTOOL_MSG_USER_MAX = 0x2d ETHTOOL_MSG_KERNEL_NONE = 0x0 ETHTOOL_MSG_STRSET_GET_REPLY = 0x1 ETHTOOL_MSG_LINKINFO_GET_REPLY = 0x2 @@ -3842,7 +3842,7 @@ const ( ETHTOOL_MSG_MODULE_NTF = 0x24 ETHTOOL_MSG_PSE_GET_REPLY = 0x25 ETHTOOL_MSG_RSS_GET_REPLY = 0x26 - ETHTOOL_MSG_KERNEL_MAX = 0x2c + ETHTOOL_MSG_KERNEL_MAX = 0x2e ETHTOOL_FLAG_COMPACT_BITSETS = 0x1 ETHTOOL_FLAG_OMIT_REPLY = 0x2 ETHTOOL_FLAG_STATS = 0x4 @@ -3850,7 +3850,7 @@ const ( ETHTOOL_A_HEADER_DEV_INDEX = 0x1 ETHTOOL_A_HEADER_DEV_NAME = 0x2 ETHTOOL_A_HEADER_FLAGS = 0x3 - ETHTOOL_A_HEADER_MAX = 0x3 + ETHTOOL_A_HEADER_MAX = 0x4 ETHTOOL_A_BITSET_BIT_UNSPEC = 0x0 ETHTOOL_A_BITSET_BIT_INDEX = 0x1 ETHTOOL_A_BITSET_BIT_NAME = 0x2 @@ -4031,11 +4031,11 @@ const ( ETHTOOL_A_CABLE_RESULT_UNSPEC = 0x0 ETHTOOL_A_CABLE_RESULT_PAIR = 0x1 ETHTOOL_A_CABLE_RESULT_CODE = 0x2 - ETHTOOL_A_CABLE_RESULT_MAX = 0x2 + ETHTOOL_A_CABLE_RESULT_MAX = 0x3 
ETHTOOL_A_CABLE_FAULT_LENGTH_UNSPEC = 0x0 ETHTOOL_A_CABLE_FAULT_LENGTH_PAIR = 0x1 ETHTOOL_A_CABLE_FAULT_LENGTH_CM = 0x2 - ETHTOOL_A_CABLE_FAULT_LENGTH_MAX = 0x2 + ETHTOOL_A_CABLE_FAULT_LENGTH_MAX = 0x3 ETHTOOL_A_CABLE_TEST_NTF_STATUS_UNSPEC = 0x0 ETHTOOL_A_CABLE_TEST_NTF_STATUS_STARTED = 0x1 ETHTOOL_A_CABLE_TEST_NTF_STATUS_COMPLETED = 0x2 @@ -4200,7 +4200,8 @@ type ( } PtpSysOffsetExtended struct { Samples uint32 - Rsv [3]uint32 + Clockid int32 + Rsv [2]uint32 Ts [25][3]PtpClockTime } PtpSysOffsetPrecise struct { @@ -4399,6 +4400,7 @@ const ( type LandlockRulesetAttr struct { Access_fs uint64 Access_net uint64 + Scoped uint64 } type LandlockPathBeneathAttr struct { diff --git a/vendor/golang.org/x/sys/windows/syscall_windows.go b/vendor/golang.org/x/sys/windows/syscall_windows.go index 4510bfc3f5c..4a325438685 100644 --- a/vendor/golang.org/x/sys/windows/syscall_windows.go +++ b/vendor/golang.org/x/sys/windows/syscall_windows.go @@ -168,6 +168,8 @@ func NewCallbackCDecl(fn interface{}) uintptr { //sys CreateNamedPipe(name *uint16, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *SecurityAttributes) (handle Handle, err error) [failretval==InvalidHandle] = CreateNamedPipeW //sys ConnectNamedPipe(pipe Handle, overlapped *Overlapped) (err error) //sys DisconnectNamedPipe(pipe Handle) (err error) +//sys GetNamedPipeClientProcessId(pipe Handle, clientProcessID *uint32) (err error) +//sys GetNamedPipeServerProcessId(pipe Handle, serverProcessID *uint32) (err error) //sys GetNamedPipeInfo(pipe Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) //sys GetNamedPipeHandleState(pipe Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW //sys SetNamedPipeHandleState(pipe Handle, state *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32) (err error) = SetNamedPipeHandleState diff --git a/vendor/golang.org/x/sys/windows/types_windows.go b/vendor/golang.org/x/sys/windows/types_windows.go index 51311e205ff..9d138de5fed 100644 --- a/vendor/golang.org/x/sys/windows/types_windows.go +++ b/vendor/golang.org/x/sys/windows/types_windows.go @@ -176,6 +176,7 @@ const ( WAIT_FAILED = 0xFFFFFFFF // Access rights for process. 
+ PROCESS_ALL_ACCESS = 0xFFFF PROCESS_CREATE_PROCESS = 0x0080 PROCESS_CREATE_THREAD = 0x0002 PROCESS_DUP_HANDLE = 0x0040 diff --git a/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/vendor/golang.org/x/sys/windows/zsyscall_windows.go index 6f5252880ce..01c0716c2c4 100644 --- a/vendor/golang.org/x/sys/windows/zsyscall_windows.go +++ b/vendor/golang.org/x/sys/windows/zsyscall_windows.go @@ -280,8 +280,10 @@ var ( procGetMaximumProcessorCount = modkernel32.NewProc("GetMaximumProcessorCount") procGetModuleFileNameW = modkernel32.NewProc("GetModuleFileNameW") procGetModuleHandleExW = modkernel32.NewProc("GetModuleHandleExW") + procGetNamedPipeClientProcessId = modkernel32.NewProc("GetNamedPipeClientProcessId") procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW") procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo") + procGetNamedPipeServerProcessId = modkernel32.NewProc("GetNamedPipeServerProcessId") procGetOverlappedResult = modkernel32.NewProc("GetOverlappedResult") procGetPriorityClass = modkernel32.NewProc("GetPriorityClass") procGetProcAddress = modkernel32.NewProc("GetProcAddress") @@ -1612,7 +1614,7 @@ func DwmSetWindowAttribute(hwnd HWND, attribute uint32, value unsafe.Pointer, si } func CancelMibChangeNotify2(notificationHandle Handle) (errcode error) { - r0, _, _ := syscall.SyscallN(procCancelMibChangeNotify2.Addr(), uintptr(notificationHandle)) + r0, _, _ := syscall.Syscall(procCancelMibChangeNotify2.Addr(), 1, uintptr(notificationHandle), 0, 0) if r0 != 0 { errcode = syscall.Errno(r0) } @@ -1652,7 +1654,7 @@ func GetIfEntry(pIfRow *MibIfRow) (errcode error) { } func GetIfEntry2Ex(level uint32, row *MibIfRow2) (errcode error) { - r0, _, _ := syscall.SyscallN(procGetIfEntry2Ex.Addr(), uintptr(level), uintptr(unsafe.Pointer(row))) + r0, _, _ := syscall.Syscall(procGetIfEntry2Ex.Addr(), 2, uintptr(level), uintptr(unsafe.Pointer(row)), 0) if r0 != 0 { errcode = syscall.Errno(r0) } @@ -1660,7 +1662,7 @@ func GetIfEntry2Ex(level uint32, row *MibIfRow2) (errcode error) { } func GetUnicastIpAddressEntry(row *MibUnicastIpAddressRow) (errcode error) { - r0, _, _ := syscall.SyscallN(procGetUnicastIpAddressEntry.Addr(), uintptr(unsafe.Pointer(row))) + r0, _, _ := syscall.Syscall(procGetUnicastIpAddressEntry.Addr(), 1, uintptr(unsafe.Pointer(row)), 0, 0) if r0 != 0 { errcode = syscall.Errno(r0) } @@ -1672,7 +1674,7 @@ func NotifyIpInterfaceChange(family uint16, callback uintptr, callerContext unsa if initialNotification { _p0 = 1 } - r0, _, _ := syscall.SyscallN(procNotifyIpInterfaceChange.Addr(), uintptr(family), uintptr(callback), uintptr(callerContext), uintptr(_p0), uintptr(unsafe.Pointer(notificationHandle))) + r0, _, _ := syscall.Syscall6(procNotifyIpInterfaceChange.Addr(), 5, uintptr(family), uintptr(callback), uintptr(callerContext), uintptr(_p0), uintptr(unsafe.Pointer(notificationHandle)), 0) if r0 != 0 { errcode = syscall.Errno(r0) } @@ -1684,7 +1686,7 @@ func NotifyUnicastIpAddressChange(family uint16, callback uintptr, callerContext if initialNotification { _p0 = 1 } - r0, _, _ := syscall.SyscallN(procNotifyUnicastIpAddressChange.Addr(), uintptr(family), uintptr(callback), uintptr(callerContext), uintptr(_p0), uintptr(unsafe.Pointer(notificationHandle))) + r0, _, _ := syscall.Syscall6(procNotifyUnicastIpAddressChange.Addr(), 5, uintptr(family), uintptr(callback), uintptr(callerContext), uintptr(_p0), uintptr(unsafe.Pointer(notificationHandle)), 0) if r0 != 0 { errcode = syscall.Errno(r0) } @@ -2446,6 +2448,14 @@ func GetModuleHandleEx(flags 
uint32, moduleName *uint16, module *Handle) (err er return } +func GetNamedPipeClientProcessId(pipe Handle, clientProcessID *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetNamedPipeClientProcessId.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(clientProcessID)), 0) + if r1 == 0 { + err = errnoErr(e1) + } + return +} + func GetNamedPipeHandleState(pipe Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) { r1, _, e1 := syscall.Syscall9(procGetNamedPipeHandleStateW.Addr(), 7, uintptr(pipe), uintptr(unsafe.Pointer(state)), uintptr(unsafe.Pointer(curInstances)), uintptr(unsafe.Pointer(maxCollectionCount)), uintptr(unsafe.Pointer(collectDataTimeout)), uintptr(unsafe.Pointer(userName)), uintptr(maxUserNameSize), 0, 0) if r1 == 0 { @@ -2462,6 +2472,14 @@ func GetNamedPipeInfo(pipe Handle, flags *uint32, outSize *uint32, inSize *uint3 return } +func GetNamedPipeServerProcessId(pipe Handle, serverProcessID *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetNamedPipeServerProcessId.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(serverProcessID)), 0) + if r1 == 0 { + err = errnoErr(e1) + } + return +} + func GetOverlappedResult(handle Handle, overlapped *Overlapped, done *uint32, wait bool) (err error) { var _p0 uint32 if wait { diff --git a/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go index f3ab0a2e126..65fe2628e90 100644 --- a/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go +++ b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go @@ -106,24 +106,18 @@ func Find(importPath, srcDir string) (filename, path string) { // additional trailing data beyond the end of the export data. func NewReader(r io.Reader) (io.Reader, error) { buf := bufio.NewReader(r) - _, size, err := gcimporter.FindExportData(buf) + size, err := gcimporter.FindExportData(buf) if err != nil { return nil, err } - if size >= 0 { - // We were given an archive and found the __.PKGDEF in it. - // This tells us the size of the export data, and we don't - // need to return the entire file. - return &io.LimitedReader{ - R: buf, - N: size, - }, nil - } else { - // We were given an object file. As such, we don't know how large - // the export data is and must return the entire file. - return buf, nil - } + // We were given an archive and found the __.PKGDEF in it. + // This tells us the size of the export data, and we don't + // need to return the entire file. + return &io.LimitedReader{ + R: buf, + N: size, + }, nil } // readAll works the same way as io.ReadAll, but avoids allocations and copies diff --git a/vendor/golang.org/x/tools/go/packages/external.go b/vendor/golang.org/x/tools/go/packages/external.go index 96db9daf314..91bd62e83b1 100644 --- a/vendor/golang.org/x/tools/go/packages/external.go +++ b/vendor/golang.org/x/tools/go/packages/external.go @@ -13,6 +13,7 @@ import ( "fmt" "os" "os/exec" + "slices" "strings" ) @@ -131,7 +132,7 @@ func findExternalDriver(cfg *Config) driver { // command. 
// // (See similar trick in Invocation.run in ../../internal/gocommand/invoke.go) - cmd.Env = append(slicesClip(cfg.Env), "PWD="+cfg.Dir) + cmd.Env = append(slices.Clip(cfg.Env), "PWD="+cfg.Dir) cmd.Stdin = bytes.NewReader(req) cmd.Stdout = buf cmd.Stderr = stderr @@ -150,7 +151,3 @@ func findExternalDriver(cfg *Config) driver { return &response, nil } } - -// slicesClip removes unused capacity from the slice, returning s[:len(s):len(s)]. -// TODO(adonovan): use go1.21 slices.Clip. -func slicesClip[S ~[]E, E any](s S) S { return s[:len(s):len(s)] } diff --git a/vendor/golang.org/x/tools/go/packages/golist.go b/vendor/golang.org/x/tools/go/packages/golist.go index 76f910ecec9..870271ed51f 100644 --- a/vendor/golang.org/x/tools/go/packages/golist.go +++ b/vendor/golang.org/x/tools/go/packages/golist.go @@ -505,13 +505,14 @@ func (state *golistState) createDriverResponse(words ...string) (*DriverResponse pkg := &Package{ Name: p.Name, ID: p.ImportPath, + Dir: p.Dir, GoFiles: absJoin(p.Dir, p.GoFiles, p.CgoFiles), CompiledGoFiles: absJoin(p.Dir, p.CompiledGoFiles), OtherFiles: absJoin(p.Dir, otherFiles(p)...), EmbedFiles: absJoin(p.Dir, p.EmbedFiles), EmbedPatterns: absJoin(p.Dir, p.EmbedPatterns), IgnoredFiles: absJoin(p.Dir, p.IgnoredGoFiles, p.IgnoredOtherFiles), - forTest: p.ForTest, + ForTest: p.ForTest, depsErrors: p.DepsErrors, Module: p.Module, } @@ -795,7 +796,7 @@ func jsonFlag(cfg *Config, goVersion int) string { // Request Dir in the unlikely case Export is not absolute. addFields("Dir", "Export") } - if cfg.Mode&needInternalForTest != 0 { + if cfg.Mode&NeedForTest != 0 { addFields("ForTest") } if cfg.Mode&needInternalDepsErrors != 0 { diff --git a/vendor/golang.org/x/tools/go/packages/loadmode_string.go b/vendor/golang.org/x/tools/go/packages/loadmode_string.go index 5fcad6ea6db..969da4c263c 100644 --- a/vendor/golang.org/x/tools/go/packages/loadmode_string.go +++ b/vendor/golang.org/x/tools/go/packages/loadmode_string.go @@ -23,6 +23,7 @@ var modes = [...]struct { {NeedSyntax, "NeedSyntax"}, {NeedTypesInfo, "NeedTypesInfo"}, {NeedTypesSizes, "NeedTypesSizes"}, + {NeedForTest, "NeedForTest"}, {NeedModule, "NeedModule"}, {NeedEmbedFiles, "NeedEmbedFiles"}, {NeedEmbedPatterns, "NeedEmbedPatterns"}, diff --git a/vendor/golang.org/x/tools/go/packages/packages.go b/vendor/golang.org/x/tools/go/packages/packages.go index 2ecc64238e8..9dedf9777dc 100644 --- a/vendor/golang.org/x/tools/go/packages/packages.go +++ b/vendor/golang.org/x/tools/go/packages/packages.go @@ -43,6 +43,20 @@ import ( // ID and Errors (if present) will always be filled. // [Load] may return more information than requested. // +// The Mode flag is a union of several bits named NeedName, +// NeedFiles, and so on, each of which determines whether +// a given field of Package (Name, Files, etc) should be +// populated. +// +// For convenience, we provide named constants for the most +// common combinations of Need flags: +// +// [LoadFiles] lists of files in each package +// [LoadImports] ... plus imports +// [LoadTypes] ... plus type information +// [LoadSyntax] ... plus type-annotated syntax +// [LoadAllSyntax] ... for all dependencies +// // Unfortunately there are a number of open bugs related to // interactions among the LoadMode bits: // - https://github.com/golang/go/issues/56633 @@ -55,7 +69,7 @@ const ( // NeedName adds Name and PkgPath. 
NeedName LoadMode = 1 << iota - // NeedFiles adds GoFiles, OtherFiles, and IgnoredFiles + // NeedFiles adds Dir, GoFiles, OtherFiles, and IgnoredFiles NeedFiles // NeedCompiledGoFiles adds CompiledGoFiles. @@ -86,9 +100,10 @@ const ( // needInternalDepsErrors adds the internal deps errors field for use by gopls. needInternalDepsErrors - // needInternalForTest adds the internal forTest field. + // NeedForTest adds ForTest. + // // Tests must also be set on the context for this field to be populated. - needInternalForTest + NeedForTest // typecheckCgo enables full support for type checking cgo. Requires Go 1.15+. // Modifies CompiledGoFiles and Types, and has no effect on its own. @@ -108,33 +123,18 @@ const ( const ( // LoadFiles loads the name and file names for the initial packages. - // - // Deprecated: LoadFiles exists for historical compatibility - // and should not be used. Please directly specify the needed fields using the Need values. LoadFiles = NeedName | NeedFiles | NeedCompiledGoFiles // LoadImports loads the name, file names, and import mapping for the initial packages. - // - // Deprecated: LoadImports exists for historical compatibility - // and should not be used. Please directly specify the needed fields using the Need values. LoadImports = LoadFiles | NeedImports // LoadTypes loads exported type information for the initial packages. - // - // Deprecated: LoadTypes exists for historical compatibility - // and should not be used. Please directly specify the needed fields using the Need values. LoadTypes = LoadImports | NeedTypes | NeedTypesSizes // LoadSyntax loads typed syntax for the initial packages. - // - // Deprecated: LoadSyntax exists for historical compatibility - // and should not be used. Please directly specify the needed fields using the Need values. LoadSyntax = LoadTypes | NeedSyntax | NeedTypesInfo // LoadAllSyntax loads typed syntax for the initial packages and all dependencies. - // - // Deprecated: LoadAllSyntax exists for historical compatibility - // and should not be used. Please directly specify the needed fields using the Need values. LoadAllSyntax = LoadSyntax | NeedDeps // Deprecated: NeedExportsFile is a historical misspelling of NeedExportFile. @@ -434,6 +434,12 @@ type Package struct { // PkgPath is the package path as used by the go/types package. PkgPath string + // Dir is the directory associated with the package, if it exists. + // + // For packages listed by the go command, this is the directory containing + // the package files. + Dir string + // Errors contains any errors encountered querying the metadata // of the package, or while parsing or type-checking its files. Errors []Error @@ -521,8 +527,8 @@ type Package struct { // -- internal -- - // forTest is the package under test, if any. - forTest string + // ForTest is the package under test, if any. + ForTest string // depsErrors is the DepsErrors field from the go list response, if any. depsErrors []*packagesinternal.PackageError @@ -551,9 +557,6 @@ type ModuleError struct { } func init() { - packagesinternal.GetForTest = func(p interface{}) string { - return p.(*Package).forTest - } packagesinternal.GetDepsErrors = func(p interface{}) []*packagesinternal.PackageError { return p.(*Package).depsErrors } @@ -565,7 +568,6 @@ func init() { } packagesinternal.TypecheckCgo = int(typecheckCgo) packagesinternal.DepsErrors = int(needInternalDepsErrors) - packagesinternal.ForTest = int(needInternalForTest) } // An Error describes a problem with a package's metadata, syntax, or types. 
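The go/packages hunks above export two previously internal pieces of metadata: Dir (now populated by NeedFiles) and ForTest (populated by the new NeedForTest bit, which replaces the internal needInternalForTest). Below is a minimal, hypothetical client sketching the updated API; the "./..." pattern and the standalone main package are illustrative assumptions, not part of this change.

package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
)

func main() {
	cfg := &packages.Config{
		// NeedFiles now also populates Dir; NeedForTest is the exported
		// replacement for the former internal needInternalForTest bit.
		Mode: packages.NeedName | packages.NeedFiles | packages.NeedForTest,
		// Per the doc comment above, ForTest is only populated when Tests is set.
		Tests: true,
	}
	pkgs, err := packages.Load(cfg, "./...")
	if err != nil {
		log.Fatal(err)
	}
	for _, pkg := range pkgs {
		// ForTest names the package under test, if any; Dir is the
		// directory containing the package files.
		fmt.Println(pkg.ID, pkg.Dir, pkg.ForTest)
	}
}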
diff --git a/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go b/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go index f6437feb1cf..6f5d8a21391 100644 --- a/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go +++ b/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go @@ -39,12 +39,15 @@ func readGopackHeader(r *bufio.Reader) (name string, size int64, err error) { } // FindExportData positions the reader r at the beginning of the -// export data section of an underlying GC-created object/archive +// export data section of an underlying cmd/compile created archive // file by reading from it. The reader must be positioned at the -// start of the file before calling this function. The hdr result -// is the string before the export data, either "$$" or "$$B". -// The size result is the length of the export data in bytes, or -1 if not known. -func FindExportData(r *bufio.Reader) (hdr string, size int64, err error) { +// start of the file before calling this function. +// The size result is the length of the export data in bytes. +// +// This function is needed by [gcexportdata.Read], which must +// accept inputs produced by the last two releases of cmd/compile, +// plus tip. +func FindExportData(r *bufio.Reader) (size int64, err error) { // Read first line to make sure this is an object file. line, err := r.ReadSlice('\n') if err != nil { @@ -52,27 +55,32 @@ func FindExportData(r *bufio.Reader) (hdr string, size int64, err error) { return } - if string(line) == "!\n" { - // Archive file. Scan to __.PKGDEF. - var name string - if name, size, err = readGopackHeader(r); err != nil { - return - } + // Is the first line an archive file signature? + if string(line) != "!\n" { + err = fmt.Errorf("not the start of an archive file (%q)", line) + return + } - // First entry should be __.PKGDEF. - if name != "__.PKGDEF" { - err = fmt.Errorf("go archive is missing __.PKGDEF") - return - } + // Archive file. Scan to __.PKGDEF. + var name string + if name, size, err = readGopackHeader(r); err != nil { + return + } + arsize := size - // Read first line of __.PKGDEF data, so that line - // is once again the first line of the input. - if line, err = r.ReadSlice('\n'); err != nil { - err = fmt.Errorf("can't find export data (%v)", err) - return - } - size -= int64(len(line)) + // First entry should be __.PKGDEF. + if name != "__.PKGDEF" { + err = fmt.Errorf("go archive is missing __.PKGDEF") + return + } + + // Read first line of __.PKGDEF data, so that line + // is once again the first line of the input. + if line, err = r.ReadSlice('\n'); err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return } + size -= int64(len(line)) // Now at __.PKGDEF in archive or still at beginning of file. // Either way, line should begin with "go object ". @@ -81,8 +89,8 @@ func FindExportData(r *bufio.Reader) (hdr string, size int64, err error) { return } - // Skip over object header to export data. - // Begins after first line starting with $$. + // Skip over object headers to get to the export data section header "$$B\n". + // Object headers are lines that do not start with '$'. for line[0] != '$' { if line, err = r.ReadSlice('\n'); err != nil { err = fmt.Errorf("can't find export data (%v)", err) @@ -90,9 +98,18 @@ func FindExportData(r *bufio.Reader) (hdr string, size int64, err error) { } size -= int64(len(line)) } - hdr = string(line) + + // Check for the binary export data section header "$$B\n". 
+ hdr := string(line) + if hdr != "$$B\n" { + err = fmt.Errorf("unknown export data header: %q", hdr) + return + } + // TODO(taking): Remove end-of-section marker "\n$$\n" from size. + if size < 0 { - size = -1 + err = fmt.Errorf("invalid size (%d) in the archive file: %d bytes remain without section headers (recompile package)", arsize, size) + return } return diff --git a/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go b/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go index e6c5d51f8e5..dbbca860432 100644 --- a/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go +++ b/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go @@ -161,6 +161,8 @@ func FindPkg(path, srcDir string) (filename, id string) { // Import imports a gc-generated package given its import path and srcDir, adds // the corresponding package object to the packages map, and returns the object. // The packages map must contain all packages already imported. +// +// TODO(taking): Import is only used in tests. Move to gcimporter_test. func Import(packages map[string]*types.Package, path, srcDir string, lookup func(path string) (io.ReadCloser, error)) (pkg *types.Package, err error) { var rc io.ReadCloser var filename, id string @@ -210,58 +212,50 @@ func Import(packages map[string]*types.Package, path, srcDir string, lookup func } defer rc.Close() - var hdr string var size int64 buf := bufio.NewReader(rc) - if hdr, size, err = FindExportData(buf); err != nil { + if size, err = FindExportData(buf); err != nil { return } - switch hdr { - case "$$B\n": - var data []byte - data, err = io.ReadAll(buf) - if err != nil { - break - } + var data []byte + data, err = io.ReadAll(buf) + if err != nil { + return + } + if len(data) == 0 { + return nil, fmt.Errorf("no data to load a package from for path %s", id) + } - // TODO(gri): allow clients of go/importer to provide a FileSet. - // Or, define a new standard go/types/gcexportdata package. - fset := token.NewFileSet() - - // Select appropriate importer. - if len(data) > 0 { - switch data[0] { - case 'v', 'c', 'd': - // binary: emitted by cmd/compile till go1.10; obsolete. - return nil, fmt.Errorf("binary (%c) import format is no longer supported", data[0]) - - case 'i': - // indexed: emitted by cmd/compile till go1.19; - // now used only for serializing go/types. - // See https://github.com/golang/go/issues/69491. - _, pkg, err := IImportData(fset, packages, data[1:], id) - return pkg, err - - case 'u': - // unified: emitted by cmd/compile since go1.20. - _, pkg, err := UImportData(fset, packages, data[1:size], id) - return pkg, err - - default: - l := len(data) - if l > 10 { - l = 10 - } - return nil, fmt.Errorf("unexpected export data with prefix %q for path %s", string(data[:l]), id) - } - } + // TODO(gri): allow clients of go/importer to provide a FileSet. + // Or, define a new standard go/types/gcexportdata package. + fset := token.NewFileSet() + + // Select appropriate importer. + switch data[0] { + case 'v', 'c', 'd': + // binary: emitted by cmd/compile till go1.10; obsolete. + return nil, fmt.Errorf("binary (%c) import format is no longer supported", data[0]) + + case 'i': + // indexed: emitted by cmd/compile till go1.19; + // now used only for serializing go/types. + // See https://github.com/golang/go/issues/69491. + _, pkg, err := IImportData(fset, packages, data[1:], id) + return pkg, err + + case 'u': + // unified: emitted by cmd/compile since go1.20. 
+ _, pkg, err := UImportData(fset, packages, data[1:size], id) + return pkg, err default: - err = fmt.Errorf("unknown export data header: %q", hdr) + l := len(data) + if l > 10 { + l = 10 + } + return nil, fmt.Errorf("unexpected export data with prefix %q for path %s", string(data[:l]), id) } - - return } type byPath []*types.Package diff --git a/vendor/golang.org/x/tools/internal/packagesinternal/packages.go b/vendor/golang.org/x/tools/internal/packagesinternal/packages.go index 44719de173b..66e69b4389d 100644 --- a/vendor/golang.org/x/tools/internal/packagesinternal/packages.go +++ b/vendor/golang.org/x/tools/internal/packagesinternal/packages.go @@ -5,7 +5,6 @@ // Package packagesinternal exposes internal-only fields from go/packages. package packagesinternal -var GetForTest = func(p interface{}) string { return "" } var GetDepsErrors = func(p interface{}) []*PackageError { return nil } type PackageError struct { @@ -16,7 +15,6 @@ type PackageError struct { var TypecheckCgo int var DepsErrors int // must be set as a LoadMode to call GetDepsErrors -var ForTest int // must be set as a LoadMode to call GetForTest var SetModFlag = func(config interface{}, value string) {} var SetModFile = func(config interface{}, value string) {} diff --git a/vendor/golang.org/x/tools/internal/typesinternal/zerovalue.go b/vendor/golang.org/x/tools/internal/typesinternal/zerovalue.go new file mode 100644 index 00000000000..1066980649e --- /dev/null +++ b/vendor/golang.org/x/tools/internal/typesinternal/zerovalue.go @@ -0,0 +1,282 @@ +// Copyright 2024 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package typesinternal + +import ( + "fmt" + "go/ast" + "go/token" + "go/types" + "strconv" + "strings" +) + +// ZeroString returns the string representation of the "zero" value of the type t. +// This string can be used on the right-hand side of an assignment where the +// left-hand side has that explicit type. +// Exception: This does not apply to tuples. Their string representation is +// informational only and cannot be used in an assignment. +// When assigning to a wider type (such as 'any'), it's the caller's +// responsibility to handle any necessary type conversions. +// See [ZeroExpr] for a variant that returns an [ast.Expr]. +func ZeroString(t types.Type, qf types.Qualifier) string { + switch t := t.(type) { + case *types.Basic: + switch { + case t.Info()&types.IsBoolean != 0: + return "false" + case t.Info()&types.IsNumeric != 0: + return "0" + case t.Info()&types.IsString != 0: + return `""` + case t.Kind() == types.UnsafePointer: + fallthrough + case t.Kind() == types.UntypedNil: + return "nil" + default: + panic(fmt.Sprint("ZeroString for unexpected type:", t)) + } + + case *types.Pointer, *types.Slice, *types.Interface, *types.Chan, *types.Map, *types.Signature: + return "nil" + + case *types.Named, *types.Alias: + switch under := t.Underlying().(type) { + case *types.Struct, *types.Array: + return types.TypeString(t, qf) + "{}" + default: + return ZeroString(under, qf) + } + + case *types.Array, *types.Struct: + return types.TypeString(t, qf) + "{}" + + case *types.TypeParam: + // Assumes func new is not shadowed. + return "*new(" + types.TypeString(t, qf) + ")" + + case *types.Tuple: + // Tuples are not normal values. + // We are currently format as "(t[0], ..., t[n])". Could be something else. 
+ components := make([]string, t.Len()) + for i := 0; i < t.Len(); i++ { + components[i] = ZeroString(t.At(i).Type(), qf) + } + return "(" + strings.Join(components, ", ") + ")" + + case *types.Union: + // Variables of these types cannot be created, so it makes + // no sense to ask for their zero value. + panic(fmt.Sprintf("invalid type for a variable: %v", t)) + + default: + panic(t) // unreachable. + } +} + +// ZeroExpr returns the ast.Expr representation of the "zero" value of the type t. +// ZeroExpr is defined for types that are suitable for variables. +// It may panic for other types such as Tuple or Union. +// See [ZeroString] for a variant that returns a string. +func ZeroExpr(f *ast.File, pkg *types.Package, typ types.Type) ast.Expr { + switch t := typ.(type) { + case *types.Basic: + switch { + case t.Info()&types.IsBoolean != 0: + return &ast.Ident{Name: "false"} + case t.Info()&types.IsNumeric != 0: + return &ast.BasicLit{Kind: token.INT, Value: "0"} + case t.Info()&types.IsString != 0: + return &ast.BasicLit{Kind: token.STRING, Value: `""`} + case t.Kind() == types.UnsafePointer: + fallthrough + case t.Kind() == types.UntypedNil: + return ast.NewIdent("nil") + default: + panic(fmt.Sprint("ZeroExpr for unexpected type:", t)) + } + + case *types.Pointer, *types.Slice, *types.Interface, *types.Chan, *types.Map, *types.Signature: + return ast.NewIdent("nil") + + case *types.Named, *types.Alias: + switch under := t.Underlying().(type) { + case *types.Struct, *types.Array: + return &ast.CompositeLit{ + Type: TypeExpr(f, pkg, typ), + } + default: + return ZeroExpr(f, pkg, under) + } + + case *types.Array, *types.Struct: + return &ast.CompositeLit{ + Type: TypeExpr(f, pkg, typ), + } + + case *types.TypeParam: + return &ast.StarExpr{ // *new(T) + X: &ast.CallExpr{ + // Assumes func new is not shadowed. + Fun: ast.NewIdent("new"), + Args: []ast.Expr{ + ast.NewIdent(t.Obj().Name()), + }, + }, + } + + case *types.Tuple: + // Unlike ZeroString, there is no ast.Expr can express tuple by + // "(t[0], ..., t[n])". + panic(fmt.Sprintf("invalid type for a variable: %v", t)) + + case *types.Union: + // Variables of these types cannot be created, so it makes + // no sense to ask for their zero value. + panic(fmt.Sprintf("invalid type for a variable: %v", t)) + + default: + panic(t) // unreachable. + } +} + +// IsZeroExpr uses simple syntactic heuristics to report whether expr +// is a obvious zero value, such as 0, "", nil, or false. +// It cannot do better without type information. +func IsZeroExpr(expr ast.Expr) bool { + switch e := expr.(type) { + case *ast.BasicLit: + return e.Value == "0" || e.Value == `""` + case *ast.Ident: + return e.Name == "nil" || e.Name == "false" + default: + return false + } +} + +// TypeExpr returns syntax for the specified type. References to named types +// from packages other than pkg are qualified by an appropriate package name, as +// defined by the import environment of file. +// It may panic for types such as Tuple or Union. +func TypeExpr(f *ast.File, pkg *types.Package, typ types.Type) ast.Expr { + switch t := typ.(type) { + case *types.Basic: + switch t.Kind() { + case types.UnsafePointer: + // TODO(hxjiang): replace the implementation with types.Qualifier. 
+ return &ast.SelectorExpr{X: ast.NewIdent("unsafe"), Sel: ast.NewIdent("Pointer")} + default: + return ast.NewIdent(t.Name()) + } + + case *types.Pointer: + return &ast.UnaryExpr{ + Op: token.MUL, + X: TypeExpr(f, pkg, t.Elem()), + } + + case *types.Array: + return &ast.ArrayType{ + Len: &ast.BasicLit{ + Kind: token.INT, + Value: fmt.Sprintf("%d", t.Len()), + }, + Elt: TypeExpr(f, pkg, t.Elem()), + } + + case *types.Slice: + return &ast.ArrayType{ + Elt: TypeExpr(f, pkg, t.Elem()), + } + + case *types.Map: + return &ast.MapType{ + Key: TypeExpr(f, pkg, t.Key()), + Value: TypeExpr(f, pkg, t.Elem()), + } + + case *types.Chan: + dir := ast.ChanDir(t.Dir()) + if t.Dir() == types.SendRecv { + dir = ast.SEND | ast.RECV + } + return &ast.ChanType{ + Dir: dir, + Value: TypeExpr(f, pkg, t.Elem()), + } + + case *types.Signature: + var params []*ast.Field + for i := 0; i < t.Params().Len(); i++ { + params = append(params, &ast.Field{ + Type: TypeExpr(f, pkg, t.Params().At(i).Type()), + Names: []*ast.Ident{ + { + Name: t.Params().At(i).Name(), + }, + }, + }) + } + if t.Variadic() { + last := params[len(params)-1] + last.Type = &ast.Ellipsis{Elt: last.Type.(*ast.ArrayType).Elt} + } + var returns []*ast.Field + for i := 0; i < t.Results().Len(); i++ { + returns = append(returns, &ast.Field{ + Type: TypeExpr(f, pkg, t.Results().At(i).Type()), + }) + } + return &ast.FuncType{ + Params: &ast.FieldList{ + List: params, + }, + Results: &ast.FieldList{ + List: returns, + }, + } + + case interface{ Obj() *types.TypeName }: // *types.{Alias,Named,TypeParam} + switch t.Obj().Pkg() { + case pkg, nil: + return ast.NewIdent(t.Obj().Name()) + } + pkgName := t.Obj().Pkg().Name() + + // TODO(hxjiang): replace the implementation with types.Qualifier. + // If the file already imports the package under another name, use that. + for _, cand := range f.Imports { + if path, _ := strconv.Unquote(cand.Path.Value); path == t.Obj().Pkg().Path() { + if cand.Name != nil && cand.Name.Name != "" { + pkgName = cand.Name.Name + } + } + } + if pkgName == "." { + return ast.NewIdent(t.Obj().Name()) + } + return &ast.SelectorExpr{ + X: ast.NewIdent(pkgName), + Sel: ast.NewIdent(t.Obj().Name()), + } + + case *types.Struct: + return ast.NewIdent(t.String()) + + case *types.Interface: + return ast.NewIdent(t.String()) + + case *types.Union: + // TODO(hxjiang): handle the union through syntax (~A | ... | ~Z). + // Remove nil check when calling typesinternal.TypeExpr. + return nil + + case *types.Tuple: + panic("invalid input type types.Tuple") + + default: + panic("unreachable") + } +} diff --git a/vendor/golang.org/x/tools/internal/versions/constraint.go b/vendor/golang.org/x/tools/internal/versions/constraint.go deleted file mode 100644 index 179063d4848..00000000000 --- a/vendor/golang.org/x/tools/internal/versions/constraint.go +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2024 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package versions - -import "go/build/constraint" - -// ConstraintGoVersion is constraint.GoVersion (if built with go1.21+). -// Otherwise nil. -// -// Deprecate once x/tools is after go1.21. 
-var ConstraintGoVersion func(x constraint.Expr) string diff --git a/vendor/golang.org/x/tools/internal/versions/constraint_go121.go b/vendor/golang.org/x/tools/internal/versions/constraint_go121.go deleted file mode 100644 index 38011407d5f..00000000000 --- a/vendor/golang.org/x/tools/internal/versions/constraint_go121.go +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright 2024 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build go1.21 -// +build go1.21 - -package versions - -import "go/build/constraint" - -func init() { - ConstraintGoVersion = constraint.GoVersion -} diff --git a/vendor/modules.txt b/vendor/modules.txt index 005946eaf2e..19a42f9f934 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -336,7 +336,7 @@ github.com/ebitengine/purego/internal/strings # github.com/edsrzf/mmap-go v1.2.0 ## explicit; go 1.17 github.com/edsrzf/mmap-go -# github.com/efficientgo/core v1.0.0-rc.0.0.20221201130417-ba593f67d2a4 +# github.com/efficientgo/core v1.0.0-rc.3 ## explicit; go 1.17 github.com/efficientgo/core/errcapture github.com/efficientgo/core/errors @@ -620,7 +620,7 @@ github.com/grafana/alerting/receivers/webex github.com/grafana/alerting/receivers/webhook github.com/grafana/alerting/receivers/wecom github.com/grafana/alerting/templates -# github.com/grafana/dskit v0.0.0-20241125123840-77bb9ddddb0c +# github.com/grafana/dskit v0.0.0-20241212153328-e27df29220ea ## explicit; go 1.21 github.com/grafana/dskit/backoff github.com/grafana/dskit/ballast @@ -837,7 +837,7 @@ github.com/miekg/dns # github.com/minio/md5-simd v1.1.2 ## explicit; go 1.14 github.com/minio/md5-simd -# github.com/minio/minio-go/v7 v7.0.81 +# github.com/minio/minio-go/v7 v7.0.82 ## explicit; go 1.22 github.com/minio/minio-go/v7 github.com/minio/minio-go/v7/pkg/cors @@ -1017,7 +1017,7 @@ github.com/prometheus/exporter-toolkit/web github.com/prometheus/procfs github.com/prometheus/procfs/internal/fs github.com/prometheus/procfs/internal/util -# github.com/prometheus/prometheus v1.99.0 => github.com/grafana/mimir-prometheus v0.0.0-20241204081021-cb322705289f +# github.com/prometheus/prometheus v1.99.0 => github.com/grafana/mimir-prometheus v0.0.0-20241210170917-0a0a41616520 ## explicit; go 1.22.0 github.com/prometheus/prometheus/config github.com/prometheus/prometheus/discovery @@ -1078,6 +1078,9 @@ github.com/prometheus/prometheus/util/teststorage github.com/prometheus/prometheus/util/testutil github.com/prometheus/prometheus/util/zeropool github.com/prometheus/prometheus/web/api/v1 +# github.com/prometheus/sigv4 v0.1.0 +## explicit; go 1.21 +github.com/prometheus/sigv4 # github.com/rainycape/unidecode v0.0.0-20150907023854-cb7f23ec59be ## explicit github.com/rainycape/unidecode @@ -1177,7 +1180,7 @@ github.com/twmb/franz-go/pkg/sasl/plain # github.com/twmb/franz-go/pkg/kadm v1.14.0 ## explicit; go 1.21 github.com/twmb/franz-go/pkg/kadm -# github.com/twmb/franz-go/pkg/kfake v0.0.0-20241128200620-605b994f2744 +# github.com/twmb/franz-go/pkg/kfake v0.0.0-20241202133023-293b7c4c56bb ## explicit; go 1.21 github.com/twmb/franz-go/pkg/kfake # github.com/twmb/franz-go/pkg/kmsg v1.9.0 @@ -1299,11 +1302,11 @@ go.opentelemetry.io/collector/semconv/v1.6.1 ## explicit; go 1.21 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/internal -# go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.56.0 +# 
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.57.0 ## explicit; go 1.22 go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace/internal/semconvutil -# go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0 +# go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.57.0 ## explicit; go 1.22 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request @@ -1355,7 +1358,7 @@ go.uber.org/zap/internal/color go.uber.org/zap/internal/exit go.uber.org/zap/zapcore go.uber.org/zap/zapgrpc -# golang.org/x/crypto v0.29.0 +# golang.org/x/crypto v0.31.0 ## explicit; go 1.20 golang.org/x/crypto/argon2 golang.org/x/crypto/bcrypt @@ -1372,14 +1375,14 @@ golang.org/x/crypto/pbkdf2 golang.org/x/crypto/pkcs12 golang.org/x/crypto/pkcs12/internal/rc2 golang.org/x/crypto/scrypt -# golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f +# golang.org/x/exp v0.0.0-20241215155358-4a5509556b9e ## explicit; go 1.22.0 golang.org/x/exp/constraints golang.org/x/exp/slices # golang.org/x/mod v0.22.0 ## explicit; go 1.22.0 golang.org/x/mod/semver -# golang.org/x/net v0.31.0 +# golang.org/x/net v0.32.0 ## explicit; go 1.18 golang.org/x/net/bpf golang.org/x/net/context/ctxhttp @@ -1410,22 +1413,22 @@ golang.org/x/oauth2/google/internal/stsexchange golang.org/x/oauth2/internal golang.org/x/oauth2/jws golang.org/x/oauth2/jwt -# golang.org/x/sync v0.9.0 +# golang.org/x/sync v0.10.0 ## explicit; go 1.18 golang.org/x/sync/errgroup golang.org/x/sync/semaphore golang.org/x/sync/singleflight -# golang.org/x/sys v0.27.0 +# golang.org/x/sys v0.28.0 ## explicit; go 1.18 golang.org/x/sys/cpu golang.org/x/sys/plan9 golang.org/x/sys/unix golang.org/x/sys/windows golang.org/x/sys/windows/registry -# golang.org/x/term v0.26.0 +# golang.org/x/term v0.27.0 ## explicit; go 1.18 golang.org/x/term -# golang.org/x/text v0.20.0 +# golang.org/x/text v0.21.0 ## explicit; go 1.18 golang.org/x/text/cases golang.org/x/text/internal @@ -1441,7 +1444,7 @@ golang.org/x/text/unicode/norm # golang.org/x/time v0.8.0 ## explicit; go 1.18 golang.org/x/time/rate -# golang.org/x/tools v0.27.0 +# golang.org/x/tools v0.28.0 ## explicit; go 1.22.0 golang.org/x/tools/go/gcexportdata golang.org/x/tools/go/packages @@ -1486,7 +1489,7 @@ google.golang.org/genproto/googleapis/type/expr ## explicit; go 1.21 google.golang.org/genproto/googleapis/api google.golang.org/genproto/googleapis/api/annotations -# google.golang.org/genproto/googleapis/rpc v0.0.0-20241118233622-e639e219e697 +# google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 ## explicit; go 1.21 google.golang.org/genproto/googleapis/rpc/code google.golang.org/genproto/googleapis/rpc/errdetails @@ -1617,10 +1620,10 @@ gopkg.in/yaml.v2 # gopkg.in/yaml.v3 v3.0.1 => github.com/colega/go-yaml-yaml v0.0.0-20220720105220-255a8d16d094 ## explicit gopkg.in/yaml.v3 -# k8s.io/apimachinery v0.31.1 +# k8s.io/apimachinery v0.31.3 ## explicit; go 1.22.0 k8s.io/apimachinery/pkg/util/runtime -# k8s.io/client-go v0.31.1 +# k8s.io/client-go v0.31.3 ## explicit; go 1.22.0 k8s.io/client-go/tools/metrics k8s.io/client-go/util/workqueue @@ -1685,7 +1688,7 @@ sigs.k8s.io/kustomize/kyaml/yaml/walk sigs.k8s.io/yaml sigs.k8s.io/yaml/goyaml.v2 sigs.k8s.io/yaml/goyaml.v3 -# github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20241204081021-cb322705289f +# 
github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20241210170917-0a0a41616520 # github.com/hashicorp/memberlist => github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe # gopkg.in/yaml.v3 => github.com/colega/go-yaml-yaml v0.0.0-20220720105220-255a8d16d094 # github.com/grafana/regexp => github.com/grafana/regexp v0.0.0-20240531075221-3685f1377d7b
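For context on the gcexportdata and FindExportData changes earlier in this diff: NewReader now handles only cmd/compile-created archives whose __.PKGDEF begins with the binary export data header "$$B\n", returning an io.LimitedReader sized to the export data, and it no longer falls back to treating the input as a bare object file. A minimal sketch of that read path follows, under stated assumptions: the "fmt" import path and the GOPATH-style lookup via Find are illustrative only and may find nothing in a module build.

package main

import (
	"fmt"
	"go/token"
	"go/types"
	"log"
	"os"

	"golang.org/x/tools/go/gcexportdata"
)

func main() {
	// Find locates the compiler-generated archive (.a) for an import path.
	// It returns "" when no export data can be found.
	filename, path := gcexportdata.Find("fmt", "")
	if filename == "" {
		log.Fatal("no export data found for fmt")
	}
	f, err := os.Open(filename)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// With this change, NewReader fails unless f is an archive whose
	// __.PKGDEF starts with "$$B\n"; on success it returns a reader
	// limited to exactly the export data section.
	r, err := gcexportdata.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}

	imports := make(map[string]*types.Package)
	pkg, err := gcexportdata.Read(r, token.NewFileSet(), imports, path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded package:", pkg.Path())
}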