Saner defaults for configs (cortexproject#2344)
* Saner defaults for configs

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Set spread_flushes to true in the code.

Signed-off-by: Tom Wilkie <[email protected]>

* Make mockIngester implement HealthCheck and io.Closer so tests pass.

Signed-off-by: Tom Wilkie <[email protected]>

* Address feedback

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Verify that querier sees correct ring before using it.

Signed-off-by: Peter Štibraný <[email protected]>

* Address feedback

Signed-off-by: Goutham Veeramachaneni <[email protected]>

Co-authored-by: Tom Wilkie <[email protected]>
Co-authored-by: Peter Štibraný <[email protected]>
3 people authored Mar 30, 2020
1 parent f0f0bbe commit ec3fd30
Showing 12 changed files with 48 additions and 29 deletions.
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -53,6 +53,17 @@
* [CHANGE] Frontend worker in querier now starts after all Querier module dependencies are started. This fixes issue where frontend worker started to send queries to querier before it was ready to serve them (mostly visible when using experimental blocks storage). #2246
* [CHANGE] Lifecycler component now enters Failed state on errors, and doesn't exit the process. (Important if you're vendoring Cortex and use Lifecycler) #2251
* [CHANGE] `/ready` handler now returns 200 instead of 204. #2330
* [CHANGE] Better defaults for the following options (see the flag sketch after this file's diff for restoring the previous values): #2344
- `-<prefix>.consul.consistent-reads`: Old default: `true`, new default: `false`. This reduces the load on Consul.
- `-<prefix>.consul.watch-rate-limit`: Old default: 0, new default: 1. This rate-limits reads to 1 per second, which is sufficient for ring watches.
- `-distributor.health-check-ingesters`: Old default: `false`, new default: `true`.
- `-ingester.max-stale-chunk-idle`: Old default: 0, new default: 2m. This lets us flush series that are known to be stale earlier.
- `-ingester.spread-flushes`: Old default: `false`, new default: `true`. This allows better de-duplication of data and uses less space.
- `-ingester.chunk-age-jitter`: Old default: 20m, new default: 0. Jitter is no longer needed now that `-ingester.spread-flushes` is enabled.
- `-<prefix>.memcached.batchsize`: Old default: 0, new default: 1024. This enables batching of requests and keeps the number of concurrent requests low.
- `-<prefix>.memcached.consistent-hash`: Old default: `false`, new default: `true`. This allows for better cache hits when memcached servers are scaled up and down.
- `-querier.batch-iterators`: Old default: false, new default: true.
- `-querier.ingester-streaming`: Old default: false, new default: true.
* [FEATURE] Added experimental storage API to the ruler service that is enabled when the `-experimental.ruler.enable-api` is set to true #2269
* `-ruler.storage.type` flag now allows `s3`,`gcs`, and `azure` values
* `-ruler.storage.(s3|gcs|azure)` flags exist to allow the configuration of object clients set for rule storage
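For clusters that depended on the previous behaviour, the following is a minimal sketch of command-line overrides that pin the old defaults. Flag names and values are taken verbatim from the changelog entry above; replace `<prefix>` with the component prefix used in your deployment.

```
# Previous defaults, per the changelog entry above
-distributor.health-check-ingesters=false
-ingester.max-stale-chunk-idle=0
-ingester.spread-flushes=false
-ingester.chunk-age-jitter=20m
-querier.batch-iterators=false
-querier.ingester-streaming=false
-<prefix>.consul.consistent-reads=true
-<prefix>.consul.watch-rate-limit=0
-<prefix>.memcached.batchsize=0
-<prefix>.memcached.consistent-hash=false
```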
2 changes: 1 addition & 1 deletion docs/configuration/arguments.md
@@ -284,7 +284,7 @@ It also talks to a KVStore and has it's own copies of the same flags used by the

- `-ingester.chunk-age-jitter`

To reduce load on the database exactly 12 hours after starting, the age limit is reduced by a varying amount up to this. (default 20m)
To reduce load on the database exactly 12 hours after starting, the age limit is reduced by a varying amount up to this. Don't enable this along with `-ingester.spread-flushes` (default 0m)

- `-ingester.spread-flushes`

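The two ingester flags documented above interact: with this commit, the jitter defaults to 0 and flush spreading is on, which is the combination previously recommended in the running guide. A minimal sketch of setting it explicitly:

```
-ingester.spread-flushes=true
-ingester.chunk-age-jitter=0
```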
18 changes: 9 additions & 9 deletions docs/configuration/config-file-reference.md
@@ -312,7 +312,7 @@ pool:
# Run a health check on each ingester client during periodic cleanup.
# CLI flag: -distributor.health-check-ingesters
[health_check_ingesters: <boolean> | default = false]
[health_check_ingesters: <boolean> | default = true]
ha_tracker:
# Enable the distributors HA tracker so that it can accept samples from
@@ -565,7 +565,7 @@ lifecycler:
# flushing. 0 disables it and a stale series is not flushed until the
# max-chunk-idle timeout is reached.
# CLI flag: -ingester.max-stale-chunk-idle
[max_stale_chunk_idle_time: <duration> | default = 0s]
[max_stale_chunk_idle_time: <duration> | default = 2m0s]
# Timeout for individual flush operations.
# CLI flag: -ingester.flush-op-timeout
@@ -586,7 +586,7 @@ lifecycler:
# If true, spread series flushes across the whole period of
# -ingester.max-chunk-age.
# CLI flag: -ingester.spread-flushes
[spread_flushes: <boolean> | default = false]
[spread_flushes: <boolean> | default = true]
# Period with which to update the per-user ingestion rates.
# CLI flag: -ingester.rate-update-period
@@ -614,11 +614,11 @@ The `querier_config` configures the Cortex querier.
# Use batch iterators to execute query, as opposed to fully materialising the
# series in memory. Takes precedent over the -querier.iterators flag.
# CLI flag: -querier.batch-iterators
[batch_iterators: <boolean> | default = false]
[batch_iterators: <boolean> | default = true]
# Use streaming RPCs to query ingester.
# CLI flag: -querier.ingester-streaming
[ingester_streaming: <boolean> | default = false]
[ingester_streaming: <boolean> | default = true]
# Maximum number of samples a single query can load into memory.
# CLI flag: -querier.max-samples
@@ -1928,12 +1928,12 @@ The `consul_config` configures the consul client. The supported CLI flags `<pref
# Enable consistent reads to Consul.
# CLI flag: -<prefix>.consul.consistent-reads
[consistent_reads: <boolean> | default = true]
[consistent_reads: <boolean> | default = false]
# Rate limit when watching key or prefix in Consul, in requests per second. 0
# disables the rate limit.
# CLI flag: -<prefix>.consul.watch-rate-limit
[watch_rate_limit: <float> | default = 0]
[watch_rate_limit: <float> | default = 1]
# Burst size used in rate limit. Values less than 1 are treated as 1.
# CLI flag: -<prefix>.consul.watch-burst-size
@@ -2205,7 +2205,7 @@ The `memcached_config` block configures how data is stored in Memcached (ie. exp
# How many keys to fetch in each batch.
# CLI flag: -<prefix>.memcached.batchsize
[batch_size: <int> | default = 0]
[batch_size: <int> | default = 1024]
# Maximum active requests to memcache.
# CLI flag: -<prefix>.memcached.parallelism
@@ -2252,7 +2252,7 @@ The `memcached_client_config` configures the client used to connect to Memcached
# Use consistent hashing to distribute to memcache servers.
# CLI flag: -<prefix>.memcached.consistent-hash
[consistent_hash: <boolean> | default = false]
[consistent_hash: <boolean> | default = true]
```

### `fifo_cache_config`
10 changes: 0 additions & 10 deletions docs/guides/running.md
@@ -241,14 +241,4 @@ time() - sum by (statefulset_kubernetes_io_pod_name) (prometheus_remote_storage_

### Optimising Storage

These ingester options reduce the chance of storing multiple copies of
the same data:

-ingester.spread-flushes=true
-ingester.chunk-age-jitter=0

Add a chunk cache via `-store.chunks-cache.memcached.hostname` to allow writes to be de-duplicated.

As recommended under [Chunk encoding](#chunk-encoding), use Bigchunk:

-ingester.chunk-encoding=3 # bigchunk
10 changes: 10 additions & 0 deletions integration/ingester_hand_over_test.go
@@ -86,10 +86,20 @@ func runIngesterHandOverTest(t *testing.T, flags map[string]string, setup func(t
}), "")
require.NoError(t, s.Start(ingester2))

// Wait a bit to make sure that querier is caught up. Otherwise, we may be querying for data,
// while querier still knows about old ingester only.
require.NoError(t, querier.WaitForMetricWithLabels(e2e.EqualsSingle(1), "cortex_ring_members", map[string]string{"name": "ingester", "state": "ACTIVE"}))
require.NoError(t, querier.WaitForMetricWithLabels(e2e.EqualsSingle(1), "cortex_ring_members", map[string]string{"name": "ingester", "state": "PENDING"}))

// Stop ingester-1. This function will return once the ingester-1 is successfully
// stopped, which means the transfer to ingester-2 is completed.
require.NoError(t, s.Stop(ingester1))

// Make sure querier now sees only new ingester. We check that by verifying that there is only one ACTIVE, but no PENDING or JOINING ingester.
require.NoError(t, querier.WaitForMetricWithLabels(e2e.EqualsSingle(1), "cortex_ring_members", map[string]string{"name": "ingester", "state": "ACTIVE"}))
require.NoError(t, querier.WaitForMetricWithLabels(e2e.EqualsSingle(0), "cortex_ring_members", map[string]string{"name": "ingester", "state": "JOINING"}))
require.NoError(t, querier.WaitForMetricWithLabels(e2e.EqualsSingle(0), "cortex_ring_members", map[string]string{"name": "ingester", "state": "PENDING"}))

// Query the series again.
result, err = c.Query("series_1", now)
require.NoError(t, err)
2 changes: 1 addition & 1 deletion pkg/chunk/cache/memcached.go
@@ -50,7 +50,7 @@ type MemcachedConfig struct {
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
func (cfg *MemcachedConfig) RegisterFlagsWithPrefix(prefix, description string, f *flag.FlagSet) {
f.DurationVar(&cfg.Expiration, prefix+"memcached.expiration", 0, description+"How long keys stay in the memcache.")
f.IntVar(&cfg.BatchSize, prefix+"memcached.batchsize", 0, description+"How many keys to fetch in each batch.")
f.IntVar(&cfg.BatchSize, prefix+"memcached.batchsize", 1024, description+"How many keys to fetch in each batch.")
f.IntVar(&cfg.Parallelism, prefix+"memcached.parallelism", 100, description+"Maximum active requests to memcache.")
}

2 changes: 1 addition & 1 deletion pkg/chunk/cache/memcached_client.go
@@ -75,7 +75,7 @@ func (cfg *MemcachedClientConfig) RegisterFlagsWithPrefix(prefix, description st
f.IntVar(&cfg.MaxIdleConns, prefix+"memcached.max-idle-conns", 16, description+"Maximum number of idle connections in pool.")
f.DurationVar(&cfg.Timeout, prefix+"memcached.timeout", 100*time.Millisecond, description+"Maximum time to wait before giving up on memcached requests.")
f.DurationVar(&cfg.UpdateInterval, prefix+"memcached.update-interval", 1*time.Minute, description+"Period with which to poll DNS for memcache servers.")
f.BoolVar(&cfg.ConsistentHash, prefix+"memcached.consistent-hash", false, description+"Use consistent hashing to distribute to memcache servers.")
f.BoolVar(&cfg.ConsistentHash, prefix+"memcached.consistent-hash", true, description+"Use consistent hashing to distribute to memcache servers.")
}

// NewMemcachedClient creates a new MemcacheClient that gets its server list
8 changes: 8 additions & 0 deletions pkg/distributor/distributor_test.go
@@ -897,6 +897,14 @@ type mockIngester struct {
queryDelay time.Duration
}

func (i *mockIngester) Check(ctx context.Context, in *grpc_health_v1.HealthCheckRequest, opts ...grpc.CallOption) (*grpc_health_v1.HealthCheckResponse, error) {
return &grpc_health_v1.HealthCheckResponse{}, nil
}

func (i *mockIngester) Close() error {
return nil
}

func (i *mockIngester) Push(ctx context.Context, req *client.WriteRequest, opts ...grpc.CallOption) (*client.WriteResponse, error) {
i.Lock()
defer i.Unlock()
2 changes: 1 addition & 1 deletion pkg/ingester/client/pool.go
@@ -39,7 +39,7 @@ type PoolConfig struct {
// RegisterFlags adds the flags required to config this to the given FlagSet.
func (cfg *PoolConfig) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.ClientCleanupPeriod, "distributor.client-cleanup-period", 15*time.Second, "How frequently to clean up clients for ingesters that have gone away.")
f.BoolVar(&cfg.HealthCheckIngesters, "distributor.health-check-ingesters", false, "Run a health check on each ingester client during periodic cleanup.")
f.BoolVar(&cfg.HealthCheckIngesters, "distributor.health-check-ingesters", true, "Run a health check on each ingester client during periodic cleanup.")
}

// Pool holds a cache of grpc_health_v1 clients.
4 changes: 2 additions & 2 deletions pkg/ingester/ingester.go
@@ -82,10 +82,10 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.RetainPeriod, "ingester.retain-period", 5*time.Minute, "Period chunks will remain in memory after flushing.")
f.DurationVar(&cfg.FlushOpTimeout, "ingester.flush-op-timeout", 1*time.Minute, "Timeout for individual flush operations.")
f.DurationVar(&cfg.MaxChunkIdle, "ingester.max-chunk-idle", 5*time.Minute, "Maximum chunk idle time before flushing.")
f.DurationVar(&cfg.MaxStaleChunkIdle, "ingester.max-stale-chunk-idle", 0, "Maximum chunk idle time for chunks terminating in stale markers before flushing. 0 disables it and a stale series is not flushed until the max-chunk-idle timeout is reached.")
f.DurationVar(&cfg.MaxStaleChunkIdle, "ingester.max-stale-chunk-idle", 2*time.Minute, "Maximum chunk idle time for chunks terminating in stale markers before flushing. 0 disables it and a stale series is not flushed until the max-chunk-idle timeout is reached.")
f.DurationVar(&cfg.MaxChunkAge, "ingester.max-chunk-age", 12*time.Hour, "Maximum chunk age before flushing.")
f.DurationVar(&cfg.ChunkAgeJitter, "ingester.chunk-age-jitter", 20*time.Minute, "Range of time to subtract from -ingester.max-chunk-age to spread out flushes")
f.BoolVar(&cfg.SpreadFlushes, "ingester.spread-flushes", false, "If true, spread series flushes across the whole period of -ingester.max-chunk-age.")
f.BoolVar(&cfg.SpreadFlushes, "ingester.spread-flushes", true, "If true, spread series flushes across the whole period of -ingester.max-chunk-age.")
f.IntVar(&cfg.ConcurrentFlushes, "ingester.concurrent-flushes", 50, "Number of concurrent goroutines flushing to dynamodb.")
f.DurationVar(&cfg.RateUpdatePeriod, "ingester.rate-update-period", 15*time.Second, "Period with which to update the per-user ingestion rates.")
}
4 changes: 2 additions & 2 deletions pkg/querier/querier.go
@@ -62,8 +62,8 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&promql.LookbackDelta, "promql.lookback-delta", promql.LookbackDelta, "Time since the last sample after which a time series is considered stale and ignored by expression evaluations.")
}
f.BoolVar(&cfg.Iterators, "querier.iterators", false, "Use iterators to execute query, as opposed to fully materialising the series in memory.")
f.BoolVar(&cfg.BatchIterators, "querier.batch-iterators", false, "Use batch iterators to execute query, as opposed to fully materialising the series in memory. Takes precedent over the -querier.iterators flag.")
f.BoolVar(&cfg.IngesterStreaming, "querier.ingester-streaming", false, "Use streaming RPCs to query ingester.")
f.BoolVar(&cfg.BatchIterators, "querier.batch-iterators", true, "Use batch iterators to execute query, as opposed to fully materialising the series in memory. Takes precedent over the -querier.iterators flag.")
f.BoolVar(&cfg.IngesterStreaming, "querier.ingester-streaming", true, "Use streaming RPCs to query ingester.")
f.IntVar(&cfg.MaxSamples, "querier.max-samples", 50e6, "Maximum number of samples a single query can load into memory.")
f.DurationVar(&cfg.QueryIngestersWithin, "querier.query-ingesters-within", 0, "Maximum lookback beyond which queries are not sent to ingester. 0 means all queries are sent to ingester.")
f.DurationVar(&cfg.MaxQueryIntoFuture, "querier.max-query-into-future", 10*time.Minute, "Maximum duration into the future you can query. 0 to disable.")
4 changes: 2 additions & 2 deletions pkg/ring/kv/consul/client.go
@@ -63,8 +63,8 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet, prefix string) {
f.StringVar(&cfg.Host, prefix+"consul.hostname", "localhost:8500", "Hostname and port of Consul.")
f.StringVar(&cfg.ACLToken, prefix+"consul.acl-token", "", "ACL Token used to interact with Consul.")
f.DurationVar(&cfg.HTTPClientTimeout, prefix+"consul.client-timeout", 2*longPollDuration, "HTTP timeout when talking to Consul")
f.BoolVar(&cfg.ConsistentReads, prefix+"consul.consistent-reads", true, "Enable consistent reads to Consul.")
f.Float64Var(&cfg.WatchKeyRateLimit, prefix+"consul.watch-rate-limit", 0, "Rate limit when watching key or prefix in Consul, in requests per second. 0 disables the rate limit.")
f.BoolVar(&cfg.ConsistentReads, prefix+"consul.consistent-reads", false, "Enable consistent reads to Consul.")
f.Float64Var(&cfg.WatchKeyRateLimit, prefix+"consul.watch-rate-limit", 1, "Rate limit when watching key or prefix in Consul, in requests per second. 0 disables the rate limit.")
f.IntVar(&cfg.WatchKeyBurstSize, prefix+"consul.watch-burst-size", 1, "Burst size used in rate limit. Values less than 1 are treated as 1.")
}
