Is your feature request related to a problem? Please describe.
Complex Prometheus deployments often include metrics sourced from different systems, including high-cardinality metrics producers such as Istio. In these environments, it is often desirable to exclude specific ServiceMonitors and PodMonitors based on the presence of labels, e.g. `app: istio`. In my specific use case, we collect Istio metrics with a dedicated Prometheus, aggregate away high-cardinality labels via recording rules, and make these derived metrics available in other backends while dropping the high-cardinality series. This is accomplished with the `matchExpressions` functionality of label selectors in Prometheus Operator: we tell Prometheus to scrape everything except Istio.
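For illustration, here is a sketch of the exclusion described above on the Prometheus Operator side. The `serviceMonitorSelector` field on the Prometheus CR is a standard Kubernetes label selector, so `matchExpressions` with the `NotIn` operator can express "everything except Istio" (resource names here are placeholders):

```yaml
# Prometheus CR sketch: select all ServiceMonitors except those
# labeled app: istio, via a standard Kubernetes LabelSelector.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: main          # placeholder name
spec:
  serviceMonitorSelector:
    matchExpressions:
      - key: app
        operator: NotIn
        values: ["istio"]
```

An empty selector (`{}`) matches everything, so `NotIn` carves out only the excluded subset rather than forcing an allowlist.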
This functionality is not available in the existing Target Allocator label selector. The Target Allocator uses a naive selector that either requires the user to enumerate every ServiceMonitor/PodMonitor via exact labels or consumes all ServiceMonitors and PodMonitors present in the cluster. It provides no way to exclude a subset of ServiceMonitors and PodMonitors.
Describe the solution you'd like
As a user, I would like to leverage the same flexible label selector syntax available in Prometheus Operator. This would facilitate migrations from complex Prometheus deployments to Target Allocator by providing feature parity with resource selection.
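A hypothetical sketch of what this could look like on the OpenTelemetryCollector CR, if the Target Allocator's selectors accepted the same `LabelSelector` shape as Prometheus Operator (this syntax is the proposal, not an existing field shape):

```yaml
# Hypothetical: requested selector syntax on the OpenTelemetryCollector CR,
# mirroring Prometheus Operator's LabelSelector support.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector     # placeholder name
spec:
  targetAllocator:
    enabled: true
    prometheusCR:
      enabled: true
      serviceMonitorSelector:
        matchExpressions:
          - key: app
            operator: NotIn
            values: ["istio"]
```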
Describe alternatives you've considered
An alternative approach would be to declare every `label: value` combination you would like Target Allocator to watch. This does not scale for complex platforms with many users deploying ServiceMonitors and PodMonitors with varied labels.
Another alternative is to drop all metrics originating from the ServiceMonitors or PodMonitors you would like to exclude. This is suboptimal because the targets are still scraped, consuming resources unnecessarily.
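For completeness, one way the drop-after-scrape workaround could be expressed, assuming `metricRelabelings` on a ServiceMonitor endpoint (the metric-name regex is an assumption for illustration):

```yaml
# Suboptimal workaround sketch: targets are still scraped, then
# series are discarded via metric relabeling on the endpoint.
metricRelabelings:
  - sourceLabels: [__name__]
    regex: istio_.*
    action: drop
```

This still pays the scrape and relabeling cost for every dropped series, which is the resource waste noted above.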
Additional context
No response
Component(s)
target allocator