Add encode time to query stats #9062

Merged · 2 commits · Aug 28, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -74,6 +74,7 @@
* [ENHANCEMENT] Query-scheduler: Add `query-scheduler.prioritize-query-components` which, when enabled, will primarily prioritize dequeuing fairly across queue components, and secondarily prioritize dequeuing fairly across tenants. When disabled, tenant fairness is primarily prioritized. `query-scheduler.use-multi-algorithm-query-queue` must be enabled in order to use this flag. #9016 #9071
* [ENHANCEMENT] Update runtime configuration to read gzip-compressed files with `.gz` extension. #9074
* [ENHANCEMENT] Ingester: add `cortex_lifecycler_read_only` metric which is set to 1 when ingester's lifecycler is set to read-only mode. #9095
* [ENHANCEMENT] Add a new field, `encode_time_seconds`, to query stats log messages, to record the amount of time it takes the query-frontend to encode a response. This does not include any serialization time for downstream components. #9062
* [BUGFIX] Ruler: add support for draining any outstanding alert notifications before shutting down. This can be enabled with the `-ruler.drain-notification-queue-on-shutdown=true` CLI flag. #8346
* [BUGFIX] Query-frontend: fix `-querier.max-query-lookback` enforcement when `-compactor.blocks-retention-period` is not set, and vice versa. #8388
* [BUGFIX] Ingester: fix sporadic `not found` error causing an internal server error if label names are queried with matchers during head compaction. #8391
7 changes: 6 additions & 1 deletion pkg/frontend/querymiddleware/codec.go
@@ -35,6 +35,7 @@ import (
apierror "github.com/grafana/mimir/pkg/api/error"
"github.com/grafana/mimir/pkg/mimirpb"
"github.com/grafana/mimir/pkg/querier/api"
"github.com/grafana/mimir/pkg/querier/stats"
"github.com/grafana/mimir/pkg/streamingpromql/compat"
"github.com/grafana/mimir/pkg/util"
"github.com/grafana/mimir/pkg/util/chunkinfologger"
@@ -787,10 +788,14 @@ func (c prometheusCodec) EncodeResponse(ctx context.Context, req *http.Request,
return nil, apierror.Newf(apierror.TypeInternal, "error encoding response: %v", err)
}

c.metrics.duration.WithLabelValues(operationEncode, formatter.Name()).Observe(time.Since(start).Seconds())
encodeDuration := time.Since(start)
c.metrics.duration.WithLabelValues(operationEncode, formatter.Name()).Observe(encodeDuration.Seconds())
c.metrics.size.WithLabelValues(operationEncode, formatter.Name()).Observe(float64(len(b)))
sp.LogFields(otlog.Int("bytes", len(b)))

queryStats := stats.FromContext(ctx)
queryStats.AddEncodeTime(encodeDuration)

resp := http.Response{
Header: http.Header{
"Content-Type": []string{selectedContentType},
1 change: 1 addition & 0 deletions pkg/frontend/transport/handler.go
@@ -331,6 +331,7 @@ func (f *Handler) reportQueryStats(
"split_queries", stats.LoadSplitQueries(),
"estimated_series_count", stats.GetEstimatedSeriesCount(),
"queue_time_seconds", stats.LoadQueueTime().Seconds(),
"encode_time_seconds", stats.LoadEncodeTime().Seconds(),
}, formatQueryString(details, queryString)...)

if details != nil {
4 changes: 2 additions & 2 deletions pkg/frontend/v1/frontend.go
@@ -263,15 +263,15 @@ func (f *Frontend) Process(server frontendv1pb.Frontend_ProcessServer) error {
n_active_tenants * n_expired_requests_at_front_of_queue requests being processed
before an active request was handled for the tenant in question.
If this tenant meanwhile continued to queue requests,
it's possible that it's own queue would perpetually contain only expired requests.
it's possible that its own queue would perpetually contain only expired requests.
*/
if req.originalCtx.Err() != nil {
lastTenantIndex = lastTenantIndex.ReuseLastTenant()
continue
}

// Handle the stream sending & receiving on a goroutine so we can
// monitoring the contexts in a select and cancel things appropriately.
// monitor the contexts in a select and cancel things appropriately.
resps := make(chan *frontendv1pb.ClientToFrontend, 1)
errs := make(chan error, 1)
go func() {
17 changes: 17 additions & 0 deletions pkg/querier/stats/stats.go
@@ -187,6 +187,22 @@ func (s *Stats) LoadQueueTime() time.Duration {
return time.Duration(atomic.LoadInt64((*int64)(&s.QueueTime)))
}

func (s *Stats) AddEncodeTime(t time.Duration) {
if s == nil {
return
}

atomic.AddInt64((*int64)(&s.EncodeTime), int64(t))
}

func (s *Stats) LoadEncodeTime() time.Duration {
if s == nil {
return 0
}

return time.Duration(atomic.LoadInt64((*int64)(&s.EncodeTime)))
}

// Merge the provided Stats into this one.
func (s *Stats) Merge(other *Stats) {
if s == nil || other == nil {
@@ -202,6 +218,7 @@ func (s *Stats) Merge(other *Stats) {
s.AddFetchedIndexBytes(other.LoadFetchedIndexBytes())
s.AddEstimatedSeriesCount(other.LoadEstimatedSeriesCount())
s.AddQueueTime(other.LoadQueueTime())
s.AddEncodeTime(other.LoadEncodeTime())
}

// Copy returns a copy of the stats. Use this rather than regular struct assignment
122 changes: 90 additions & 32 deletions pkg/querier/stats/stats.pb.go


2 changes: 2 additions & 0 deletions pkg/querier/stats/stats.proto
@@ -34,4 +34,6 @@ message Stats {
uint64 estimated_series_count = 8;
// The sum of durations that the query spent in the queue, before it was handled by querier.
google.protobuf.Duration queue_time = 9 [(gogoproto.stdduration) = true, (gogoproto.nullable) = false];
// The time spent at the frontend encoding the query's final results. Does not include time spent serializing results at the querier.
google.protobuf.Duration encode_time = 10 [(gogoproto.stdduration) = true, (gogoproto.nullable) = false];
}