Prototype of a rate-limiter intended to favor workflows getting history over polling for new workflows. #1221

Draft: wants to merge 1 commit into master
6 changes: 5 additions & 1 deletion internal/internal_worker_base.go
@@ -37,6 +37,7 @@ import (
"go.uber.org/cadence/internal/common/backoff"
"go.uber.org/cadence/internal/common/metrics"
"go.uber.org/cadence/internal/common/util"
"go.uber.org/cadence/internal/pahlimiter"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"golang.org/x/time/rate"
@@ -137,6 +138,8 @@ type (
pollerAutoScaler *pollerAutoScaler
taskQueueCh chan interface{}
sessionTokenBucket *sessionTokenBucket

pollAndHistoryLimiter pahlimiter.PollAndHistoryLimiter
}

polledTask struct {
@@ -288,8 +291,9 @@ func (bw *baseWorker) pollTask() {
}
}

bw.retrier.Throttle()
bw.retrier.Throttle() // sleeps if retry policy determines it should sleep after failures
if bw.pollLimiter == nil || bw.pollLimiter.Wait(bw.limiterContext) == nil {
// TODO: block here on a shared semaphore with history-loading?
@Groxx (Member, Author) commented on Mar 1, 2023:

This is where I'm planning on putting it. I just need to wire it up to config, construct it in the worker init, and do something to get it into the history-iterator loop; probably not too hard.

At this point the error-throttler has already delayed things, and we're inside both the auto-scaler and the per-second limiter, so an immediate poll is correct, and a delayed poll is... potentially exceeding the per-second limit, but not exceeding the concurrency limit.

This is not the most hygienic location; I think this whole thing might be worth revamping. But I don't believe it will make things behave worse in the meantime.
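
For concreteness, a minimal, hedged sketch of how that wiring could look at the TODO above. baseWorkerSketch, fakeTaskWorker, and the weights are illustrative stand-ins rather than the real cadence internals, and the import assumes the code lives inside the cadence module, since pahlimiter is an internal package:

// Hedged sketch only: simplified stand-ins for the real baseWorker wiring.
package main

import (
	"context"
	"fmt"

	"go.uber.org/cadence/internal/pahlimiter"
)

type fakeTaskWorker struct{}

func (fakeTaskWorker) PollTask() (interface{}, error) { return struct{}{}, nil }

type baseWorkerSketch struct {
	limiterContext        context.Context
	pollAndHistoryLimiter pahlimiter.PollAndHistoryLimiter
	taskWorker            fakeTaskWorker
}

func (bw *baseWorkerSketch) pollTask() {
	// Block on the shared poll/history resource before issuing the long poll,
	// and release it once the poll returns.
	ok, done := bw.pollAndHistoryLimiter.Poll(bw.limiterContext)
	if !ok {
		return // context canceled or limiter shut down
	}
	defer done()

	task, err := bw.taskWorker.PollTask()
	fmt.Println(task, err)
}

func main() {
	limiter, _ := pahlimiter.NewWeighted(10, 3, 30) // illustrative weights
	defer limiter.Close()
	bw := &baseWorkerSketch{
		limiterContext:        context.Background(),
		pollAndHistoryLimiter: limiter,
		taskWorker:            fakeTaskWorker{},
	}
	bw.pollTask()
}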

task, err = bw.options.taskWorker.PollTask()
if err != nil && enableVerboseLogging {
bw.logger.Debug("Failed to poll for task.", zap.Error(err))
305 changes: 305 additions & 0 deletions internal/pahlimiter/limiter.go
@@ -0,0 +1,305 @@
// Package pahlimiter contains a PollAndHistoryLimiter, used to share resources between polls and history loading,
// to prevent flooding the server with history requests that will not complete in a reasonable time.
package pahlimiter
A reviewer commented:

Can we generalize this a little more? Looks like you have some good building blocks here (a coordinator that manages a shared resource pool, a set of resource consumers, configs to tune the algorithm, etc.) that can be defined a little more abstractly to reduce redundancy.

Imagine adding a third resource consumer here: it would lead to changing pretty much everything in this package, including the package name. Although polls and history fetches are the main calls we worry about, there are also other calls (heartbeats, respond*, resetsticky, etc.) that we might consider capturing later on. I don't want to force you to over-engineer this, but it would be good if we can at least make the user-facing API of the package more general. We should also be able to remove some redundancy in the init() function, as well as across the Poll() and GetHistory() functions, which are nearly identical.

@Groxx (Member, Author) replied:

Making selects flexible in quantity involves some rather annoying reflection (or codegen)... but yeah, I think this is relatively generalizable. I'll think about it anyway; it may take some fiddling to make something "good".
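
To make the generalization idea concrete, one possible shape for a consumer-keyed API is sketched below. The resourcelimiter package name, Kind type, and Config struct are hypothetical and not anything proposed in this PR:

package resourcelimiter // hypothetical package, not part of this PR

import "context"

// Kind identifies a resource consumer, e.g. polls, history fetches,
// or (later) heartbeats and respond* calls.
type Kind string

const (
	KindPoll    Kind = "poll"
	KindHistory Kind = "history"
)

// Limiter is a generalized PollAndHistoryLimiter: callers acquire by Kind,
// and each Kind has a configured weight against one shared pool.
type Limiter interface {
	// Acquire blocks until the Kind's weight fits in the shared pool,
	// the context is canceled, or the limiter is closed.
	Acquire(ctx context.Context, kind Kind) (ok bool, release func())
	Close()
}

// Config maps each Kind to its weight; MaxResources caps the shared pool.
type Config struct {
	Weights      map[Kind]int
	MaxResources int
}

An implementation along these lines could avoid the reflection concern by replacing the per-kind channels with a mutex and a sync.Cond guarding the available weight, at the cost of the single-actor reasoning the current code relies on.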


import (
"context"
"errors"
"sync"
)

type (
// PollAndHistoryLimiter defines an interface used to share request resources between pollers and history iterator
// funcs, to prevent unsustainable growth of history-loading requests.
//
// this is intended to be used with other poller limiters and retry backoffs, not on its own.
//
// implementations include:
// - NewUnlimited (historical behavior, a noop)
// - NewHistoryLimited (limits history requests, does not limit polls)
// - NewWeighted (history requests "consume" poll requests, and can reduce or stop polls)
PollAndHistoryLimiter interface {
// Poll will try to acquire a poll resource,
// blocking until it succeeds or the context is canceled.
//
// The done func will release the resource - it will always be returned and can be called multiple times,
// only the first will have an effect.
// TODO: see if this is necessary... but it's easy and safe.
Poll(context.Context) (ok bool, done func())
// GetHistory will try to acquire a history-downloading resource,
// blocking until it succeeds or the context is canceled.
//
// The done func will release the resource - it will always be returned and can be called multiple times,
// only the first will have an effect.
// TODO: see if this is necessary... but it's easy and safe.
GetHistory(context.Context) (ok bool, done func())
A reviewer commented:

Where exactly are you planning to call this one?

@Groxx (Member, Author) replied on Mar 2, 2023:

Not entirely decided, tbh.

  • A trivial answer is "inside the get-history backoff callback", but that leaves substantial gaps for polls to occur.
  • Around the whole backoff callback allows us to consume this resource during retries, which seems reasonable, though it might lead to confusing metrics (pollers dropping without apparent cause, etc.).
    • This is what I'm leaning towards, though (a rough sketch of this option follows below).
  • Around the whole "process a decision task" is potentially difficult, but would be the most complete protection... but I'm not confident that would be safe, as we leak some workflow goroutines in unknown scenarios (I suspect corrupt histories or panics), and that could lead to permanently consumed resources. Probably desirable, but IMO too risky without some kind of detection.
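
A minimal, hedged sketch of the "around the whole backoff callback" option: fetchPage and getHistoryWithRetry are illustrative names standing in for the real history-iterator and retry machinery, and the pahlimiter import assumes the code lives inside the cadence module:

package main

import (
	"context"
	"errors"
	"time"

	"go.uber.org/cadence/internal/pahlimiter"
)

// fetchPage is a stand-in for one GetWorkflowExecutionHistory page request.
func fetchPage(ctx context.Context) error { return nil }

// getHistoryWithRetry acquires the shared history resource around the whole
// retry loop, so retries also consume it, as discussed above.
func getHistoryWithRetry(ctx context.Context, limiter pahlimiter.PollAndHistoryLimiter) error {
	ok, done := limiter.GetHistory(ctx)
	if !ok {
		return errors.New("canceled while waiting for history resource")
	}
	defer done() // released once the page, including retries, is finished

	var err error
	for attempt := 0; attempt < 3; attempt++ { // stand-in for the real backoff policy
		if err = fetchPage(ctx); err == nil {
			return nil
		}
		time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond)
	}
	return err
}

func main() {
	limiter, _ := pahlimiter.NewHistoryLimited(5)
	defer limiter.Close()
	_ = getHistoryWithRetry(context.Background(), limiter)
}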


// Close will clean up any resources, call at worker shutdown.
// This blocks until they are cleaned up.
Close()
}
unlimited struct{}
history struct {
tokens chan struct{} // sized at startup
}
weighted struct {
stopOnce sync.Once

// close to clean up resources
stop chan struct{}
// closed when cleaned up
stopped chan struct{}

// used to signal history requests starting and stopping
historyStart, historyDone chan struct{}
// used to signal poll requests starting and stopping
pollStart, pollDone chan struct{}
}
)

var _ PollAndHistoryLimiter = (*unlimited)(nil)
var _ PollAndHistoryLimiter = (*history)(nil)
var _ PollAndHistoryLimiter = (*weighted)(nil)

// NewUnlimited creates a new "unlimited" poll-and-history limiter, which does not constrain either operation.
// This is the default, historical behavior.
func NewUnlimited() (PollAndHistoryLimiter, error) {
return (*unlimited)(nil), nil
}

func (*unlimited) Poll(_ context.Context) (ok bool, done func()) { return true, func() {} }
func (*unlimited) GetHistory(_ context.Context) (ok bool, done func()) { return true, func() {} }
func (*unlimited) Close() {}

// NewHistoryLimited creates a simple limiter, which allows a specified number of concurrent history requests,
// and does not limit polls at all.
//
// This implementation is NOT expected to be used widely, but it exists as a trivially-safe fallback implementation
// that will still behave better than the historical default.
//
// This is very simple and should be sufficient to stop request floods during rate-limiting with many pending decision
// tasks, but seems likely to allow too many workflows to *attempt* to make progress on a host, starving progress
// when the sticky cache is higher than this size and leading to interrupted or timed out decision tasks.
func NewHistoryLimited(concurrentHistoryRequests int) (PollAndHistoryLimiter, error) {
l := &history{
tokens: make(chan struct{}, concurrentHistoryRequests),
}
// fill the token buffer
for i := 0; i < concurrentHistoryRequests; i++ {
l.tokens <- struct{}{}
}
return l, nil
}

func (p *history) Poll(_ context.Context) (ok bool, done func()) { return true, func() {} }
func (p *history) Close() {}
func (p *history) GetHistory(ctx context.Context) (ok bool, done func()) {
select {
case <-p.tokens:
var once sync.Once
return true, func() {
once.Do(func() {
p.tokens <- struct{}{}
})
}
case <-ctx.Done():
return false, func() {} // canceled, nothing to release
}
}

// NewWeighted creates a new "weighted" poll-and-handler limiter, which shares resources between history requests
// and polls.
//
// Each running poll or history request consumes its weight in total available (capped at max) resources, and one
// request type is allowed to reduce resources for or starve the other completely.
//
// Since this runs "inside" other poller limiting, having equal or lesser poll-resources than the poller limiter
// will allow history requests to block polls... and if history weights are lower, they can perpetually starve polls
// by not releasing enough resources.
//
// **This is intended behavior**, as it can be used to cause a heavily-history-loading worker to stop pulling more
// workflows that may also need their history loaded, until some resources free up.
//
// ---
//
// The reverse situation, where history resources cannot prevent polls, may lead to some undesirable behavior.
// Continually adding workflows while not allowing them to pull history degrades to NewHistoryLimited behavior:
// it is easily possible to have hundreds or thousands of workflows trying to load history, but few or none of them
// are allowed through this limiter to actually perform that request.
//
// In this situation it will still limit the number of actual concurrent requests to load history, but with a very
// large increase in complexity. If you want this, strongly consider just using NewHistoryLimited.
//
// ---
//
// All that said: this is NOT built to be a reliable blocker of polls for at least two reasons:
// - History iterators do not hold their resources between loading (and executing) pages of history, causing a gap
// where a poller could claim resources despite the service being "too busy" loading history from a human's view.
A reviewer commented:

When this happens, it'll lead to an increased number of decision tasks that time out, because the bandwidth will be split across many workflows that are all trying to fetch history concurrently. Given fixed bandwidth, as the number of actors increases, the average latency increases. Once latency is high enough for tasks to time out before even the replay is done, worker throughput will drop to zero, since the Respond() calls to the cadence server will be rejected due to task timeout.

To prevent that, we need to guarantee that a decision task that has started fetching history is prioritized over one that hasn't. I think that means the replayer should hold the resources until it has fetched all existing history pages, rather than releasing and re-acquiring for each page.
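
A hedged sketch of that suggestion, acquiring once and holding the resource until every page is fetched. historyIterator and loadFullHistory are simplified placeholders, not the real replayer types, and the import again assumes cadence-module-internal code:

package main

import (
	"context"
	"errors"

	"go.uber.org/cadence/internal/pahlimiter"
)

// historyIterator is a placeholder for the real history iterator.
type historyIterator struct{ pagesLeft int }

func (it *historyIterator) HasNext() bool { return it.pagesLeft > 0 }

func (it *historyIterator) NextPage(ctx context.Context) error {
	it.pagesLeft--
	return nil
}

// loadFullHistory acquires the shared resource once and holds it across all
// pages, so a workflow that has started replay is prioritized over polling
// for new workflows.
func loadFullHistory(ctx context.Context, it *historyIterator, limiter pahlimiter.PollAndHistoryLimiter) error {
	ok, done := limiter.GetHistory(ctx)
	if !ok {
		return errors.New("canceled while waiting for history resource")
	}
	defer done() // held until the entire history is loaded

	for it.HasNext() {
		if err := it.NextPage(ctx); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	limiter, _ := pahlimiter.NewWeighted(10, 3, 30)
	defer limiter.Close()
	_ = loadFullHistory(context.Background(), &historyIterator{pagesLeft: 4}, limiter)
}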

// - History iterators race with polls. If enough resources are available and both possibilities can be satisfied,
// Go chooses fairly between them.
//
// To reduce the chance of this happening, keep history weights relatively small compared to polls, so many concurrent
// workflows loading history will be unlikely to free up enough resources for a poll to occur.
func NewWeighted(pollRequestWeight, historyRequestWeight, maxResources int) (PollAndHistoryLimiter, error) {
if historyRequestWeight > maxResources || pollRequestWeight > maxResources {
return nil, errors.New("weights must not exceed max resources, or those requests can never be sent")
}

l := &weighted{
stopOnce: sync.Once{},
stop: make(chan struct{}),
stopped: make(chan struct{}),
historyStart: make(chan struct{}),
historyDone: make(chan struct{}),
pollStart: make(chan struct{}),
pollDone: make(chan struct{}),
}
l.init(pollRequestWeight, historyRequestWeight, maxResources)
return l, nil
}

func (p *weighted) init(pollRequestWeight, historyRequestWeight, maxResources int) {
// mutated only by the actor goroutine
available := maxResources

// start an actor-goroutine to simplify concurrency logic with many possibilities at any time.
// all logic is decided single-threaded, run by this goroutine, and every operation (except stop) is blocking.
//
// this actor only sends to history/poll channels.
// modifying functions only read from them.
// both read from "stop" and "stopped".
//
// - by reading from a channel, the caller has successfully acquired or released resources, and it can immediately proceed.
// - by sending on a channel, this actor has observed that resources are changed, and it must update its state.
// - by closing `p.stop`, this limiter will stop reading from channels.
// - ALL channel operations (except stop) will block forever.
// - this means "xDone" resource-releasing must also read from `p.stop`.
// - because `p.stop` races with other channel operations, stop DOES NOT guarantee no further polls will start,
// even on the same goroutine, until `Close()` returns.
// - this is one reason why `Close()` waits for the actor to exit. without it, you do not have sequential
// logic guarantees.
// - you can `Close()` any number of times from any goroutines, all calls will wait for the actor to stop.
//
// all operations are "fast", and it must remain this way.
// callers block while waiting on this actor, including when releasing resources.
go func() {
defer func() { close(p.stopped) }()
for {
// every branch must:
// 1. read from `p.stop`, so this limiter can be stopped.
// 2. write to "done" chans, so resources can be freed.
// 3. optionally write to "start" chans, so resources can be acquired
//
// doing otherwise for any reason risks deadlocks or invalid resource values.

if available >= pollRequestWeight && available >= historyRequestWeight {
// resources available for either == wait for either
select {
case <-p.stop:
return

case p.historyStart <- struct{}{}:
available -= historyRequestWeight
case p.pollStart <- struct{}{}:
available -= pollRequestWeight

case p.historyDone <- struct{}{}:
available += historyRequestWeight
case p.pollDone <- struct{}{}:
available += pollRequestWeight
}
} else if available >= pollRequestWeight && available < historyRequestWeight {
// only poll resources available
select {
case <-p.stop:
return

// case p.historyStart <- struct{}{}: // insufficient resources
case p.pollStart <- struct{}{}:
available -= pollRequestWeight

case p.historyDone <- struct{}{}:
available += historyRequestWeight
case p.pollDone <- struct{}{}:
available += pollRequestWeight
}
} else if available < pollRequestWeight && available >= historyRequestWeight {
// only history resources available
select {
case <-p.stop:
return

case p.historyStart <- struct{}{}:
available -= historyRequestWeight
// case p.pollStart <- struct{}{}: // insufficient resources

case p.historyDone <- struct{}{}:
available += historyRequestWeight
case p.pollDone <- struct{}{}:
available += pollRequestWeight
}
} else {
// no resources for either, wait for something to finish
select {
case <-p.stop:
return

// case p.historyStart <- struct{}{}: // insufficient resources
// case p.pollStart <- struct{}{}: // insufficient resources

case p.historyDone <- struct{}{}:
available += historyRequestWeight
case p.pollDone <- struct{}{}:
available += pollRequestWeight
}
}
}
}()
}

func (p *weighted) Close() {
p.stopOnce.Do(func() {
close(p.stop)
})
<-p.stopped
}

func (p *weighted) Poll(ctx context.Context) (ok bool, done func()) {
select {
case <-ctx.Done():
return false, func() {} // canceled
case <-p.stop:
return false, func() {} // shutting down
case <-p.pollStart:
// resource acquired
var once sync.Once
return true, func() {
once.Do(func() {
select {
case <-p.pollDone: // released
case <-p.stop: // shutting down
}
})
}
}
}

func (p *weighted) GetHistory(ctx context.Context) (ok bool, done func()) {
select {
case <-ctx.Done():
return false, func() {} // canceled
case <-p.stop:
return false, func() {} // shutting down
case <-p.historyStart:
// resource acquired
var once sync.Once
return true, func() {
once.Do(func() {
select {
case <-p.historyDone: // released
case <-p.stop: // shutting down
}
})
}
}
}
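
For reference, a short hedged usage sketch of the weighted limiter as defined in this diff. The specific weights (poll=10, history=3, max=30) are illustrative only: they allow at most 3 concurrent polls, 10 concurrent history fetches, or any mix whose total weight stays at or below 30. As above, the import assumes code inside the cadence module:

package main

import (
	"context"
	"fmt"

	"go.uber.org/cadence/internal/pahlimiter"
)

func main() {
	// Illustrative weights: polls cost 10, history fetches cost 3, pool of 30.
	limiter, err := pahlimiter.NewWeighted(10, 3, 30)
	if err != nil {
		panic(err)
	}
	defer limiter.Close()

	ctx := context.Background()

	// A poller would acquire before the long poll and release afterwards.
	if ok, done := limiter.Poll(ctx); ok {
		fmt.Println("poll resource acquired")
		done()
	}

	// A history loader would do the same around its fetch.
	if ok, done := limiter.GetHistory(ctx); ok {
		fmt.Println("history resource acquired")
		done()
	}
}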