Early-return sample #366
Conversation
Would recommend tests when this becomes a non-draft PR
early-return/workflow.go (outdated)
UpdateName,
func(ctx workflow.Context) error {
	condition := func() bool { return initDone }
	if completed, err := workflow.AwaitWithTimeout(ctx, earlyReturnTimeout, condition); err != nil {
I think you should leave how long a caller is willing to wait for the initial update up to them unless it's really important to differentiate start-to-update timeout from schedule-to-update timeout.
Okay; I thought there was a risk that the caller might accidentally wait indefinitely if they don't specify a deadline on their end. But I just turned off the worker, and the ExecuteWorkflow request times out after 10s, even though I'm using context.Background(). I wasn't aware that timeout existed.
I'm learning a lot about writing workflows right now.
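To illustrate the caller-side deadline being discussed, here is a minimal sketch of a starter that bounds its own wait; the workflow name and client setup are placeholders, not the sample's actual starter code:

package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create client:", err)
	}
	defer c.Close()

	// The caller decides how long it is willing to wait by bounding the request
	// with its own context deadline, rather than relying on a workflow-side timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	run, err := c.ExecuteWorkflow(ctx, client.StartWorkflowOptions{
		TaskQueue: "early-return-tq",
	}, "TransactionWorkflow")
	if err != nil {
		log.Fatalln("request failed or timed out:", err)
	}
	log.Println("started workflow", run.GetID())
}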
	return err
}

// Phase 1: Initialize the transaction synchronously.
Arguably this logic could be flipped, and users may prefer that in many scenarios: the update does the init, and the primary workflow waits for an init update before continuing. There are tradeoffs to both.
In all my examples until now, I've actually had it flipped. Drew convinced me to do it the other way around, but I'm not quite sure anymore why. What makes you say that the other way might be more preferable?
Since updates are not durable while only admitted, that is why we did it this way; this is the only safe way to write this type of workflow, if I remember correctly.
I don't necessarily think it's preferable; there are just tradeoffs. The main tradeoff is probably what you want the workflow to do when it's not called via update-with-start. If you want it to function normally, no problem; if you want it to wait for an update to get it moving, you probably want the logic flipped.
It isn't that it's preferable; this is the only safe way to write it, since update-with-start is not transactional.
My main reasoning is what Chad said: you want the workflow to function properly when the client doesn't call it with an update.
Quinn's reasoning makes sense too.
It isn't that it's preferable; this is the only safe way to write it, since update-with-start is not transactional.
Do we at least guarantee the update and the start are in the same task? If we don't, all latency bets are off anyway. But whether the primary workflow waits on init from the update, or the update waits on init from the primary workflow, is immaterial I'd think (except if the update can come in a separate task, which would be a concern).
We do guarantee it for Update-with-Start.
👍 Then yeah I think it's probably just semantics on which coroutine waits on the other and probably doesn't matter
Another strong reason to do it this way is that, if the update did the init, then the workflow author has to make sure the workflow is correct in the face of multiple calls to the update handler, i.e. normal updates being sent subsequent to the update with start. But with all steps in the main workflow, multiple calls to the update handler are automatically correct.
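Putting the above together, the non-flipped arrangement being defended here looks roughly like this, a simplified sketch assuming the usual go.temporal.io/sdk/workflow and time imports; InitTransaction and the option values are illustrative, not a quote of the PR's diff:

func Workflow(ctx workflow.Context, tx Transaction) error {
	var initDone bool
	var initErr error

	// The update handler only waits for the main coroutine's init and reports
	// its result, so repeated update calls are harmless.
	if err := workflow.SetUpdateHandler(ctx, UpdateName, func(ctx workflow.Context) error {
		if err := workflow.Await(ctx, func() bool { return initDone }); err != nil {
			return err
		}
		return initErr
	}); err != nil {
		return err
	}

	// Phase 1: the main workflow coroutine performs the init exactly once.
	localCtx := workflow.WithLocalActivityOptions(ctx, workflow.LocalActivityOptions{
		StartToCloseTimeout: 5 * time.Second,
	})
	initErr = workflow.ExecuteLocalActivity(localCtx, InitTransaction, tx).Get(ctx, nil)
	initDone = true

	// Phase 2 (complete or cancel asynchronously) would follow here.
	return initErr
}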
early-return/workflow.go (outdated)
logger.Info("cancelling transaction due to error: %v", initErr)

// Transaction failed to be initialized or not quickly enough; cancel the transaction.
return workflow.ExecuteActivity(activityCtx, CancelTransaction, tx).Get(ctx, nil)
It's not usually common practice to swallow an error into an info-level log and possibly return success. Usually you would want to mark the workflow failed, for various observability reasons.
I see your point. My only worry is that users wouldn't be able to distinguish between "failed to init" and "failed to cancel/complete" - which might require very different actions. At the same time, they would probably have some kind of monitoring themselves?
What I mean here is that if CancelTransaction succeeds, operators have no observability into the failure, because you have logged the failure as info and did not fail the workflow. Usually with compensating actions, you want to propagate the original failure, not log-and-swallow. Of course it's up to user preference whether they never want to fail the workflow on a failed transaction, but I think most do want to.
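In code, the propagate-after-compensating pattern described above might look roughly like this; the sketch reuses the names from the quoted diff, and the exact error wrapping is illustrative:

if initErr != nil {
	logger.Error("cancelling transaction due to init error", "error", initErr)
	// Compensate first...
	if err := workflow.ExecuteActivity(activityCtx, CancelTransaction, tx).Get(ctx, nil); err != nil {
		return fmt.Errorf("failed to cancel transaction after init error %v: %w", initErr, err)
	}
	// ...then still fail the workflow with the original error so it stays observable.
	return fmt.Errorf("transaction initialization failed: %w", initErr)
}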
early-return/workflow.go (outdated)
var initErr error
var initDone bool
logger := workflow.GetLogger(ctx)

if err := workflow.SetUpdateHandler(
	ctx,
	UpdateName,
	func(ctx workflow.Context) error {
Feel free to make a struct holding your state, with your run function and your update handler as methods, instead of all in one function. What is here is fine of course, but usually when workflows branch out to handlers and many anonymous functions, it is clearer to use traditional structs with method declarations.
Good idea 👍 Only part I don't quite follow is the "run as a method". Is that possible in the Go SDK? I saw an error when trying to register the workflow method of the struct and couldn't find any example doing that (only for activities).
Only part I don't quite follow is the "run as a method". Is that possible in the Go SDK?
You'd do the wrapping w/ a one-liner, so something like:
type myWorkflow struct { SomeState }

func MyWorkflow(ctx workflow.Context, someState SomeState) (*SomeResult, error) {
	// Take the address so the pointer-receiver methods below can be called.
	return (&myWorkflow{someState}).run(ctx)
}

func (m *myWorkflow) run(ctx workflow.Context) (*SomeResult, error) {
	// This kind of setup could go into a newMyWorkflow(...) call instead of in here
	if err := workflow.SetUpdateHandler(ctx, "myUpdate", m.myUpdate); err != nil {
		return nil, err
	}
	panic("TODO")
}

func (m *myWorkflow) myUpdate(ctx workflow.Context, someParam SomeParam) (*SomeUpdateResult, error) {
	panic("TODO")
}
👍 I thought I might have missed a trick to do it with a single method
early-return/workflow.go (outdated)
)

var (
	activityTimeout = 2 * time.Second
I understand we want to demonstrate low latency, but this is a pretty aggressive timeout. It's probably ok, though.
Yeah, the idea for the sample is to push the low latency story; and this is actually on the upper end of what I'm hearing for customer use cases.
If that's the case, you may not want a default retry policy with an initial interval of a second. You may actually want max attempts set to 1.
I would be as lax as you can be on these timeouts. You don't want to unnecessarily fail requests that would have otherwise succeeded. So if the overall workflow task timeout is 10s, maybe give it 9 or 10s?
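Concretely, the two suggestions above might combine into something like this; the values are illustrative (not the sample's final settings) and the usual go.temporal.io/sdk/workflow and go.temporal.io/sdk/temporal imports are assumed:

activityOptions := workflow.WithLocalActivityOptions(ctx, workflow.LocalActivityOptions{
	// Stay just inside the ~10s workflow task timeout rather than 2s.
	StartToCloseTimeout: 9 * time.Second,
	// Disable retries so the default 1s initial interval can't eat the budget.
	RetryPolicy: &temporal.RetryPolicy{MaximumAttempts: 1},
})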
early-return/workflow.go (outdated)
ID          string
FromAccount string
ToAccount   string
Amount      float64
It's usually considered a sin these days to use floats when dealing with money.
I had the same thought at first, but I figured it's more intuitive this way in the context of a sample? If this has shifted and I didn't get the memo, I'm happy to change it. I didn't want to add extra complexity/confusion.
I think int would be better; I doubt it adds complexity. We do this in our tutorial too: https://github.com/temporalio/money-transfer-project-template-go/blob/2bb1672af07cb76d449f14beb046e412f44a7afb/shared.go#L12
I see 👍 I'll change it. I thought this would go into the idea of representing cents, too, but the linked example just uses "250" without any denomination.
I'd be ok if you documented that it was in cents, or added Cents to the field name or something.
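For example, that suggestion could look like this; the exact field name is just one possibility:

type Transaction struct {
	ID          string
	FromAccount string
	ToAccount   string
	AmountCents int64 // amount in cents; avoids float rounding issues
}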
early-return/workflow.go (outdated)

// Phase 2: Complete or cancel the transaction asynchronously.
activityCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
	StartToCloseTimeout: 10 * time.Second,
It is confusing that the activityTimeout global is only used once and is for the local activity timeout, while this one isn't even a global and is actually an activity timeout. Arguably there is no need for these single-use globals instead of just inlining, but if there is, consider consistently using globals for these.
Right; I've just made them global to make it easier to see at a glance what the timeouts are without reading line-by-line.
But you've only made some of them global, and the global name is ambiguous: it's not a general activity timeout (that's hardcoded right here), it's the init transaction timeout.
nit: can you use a different, longer timeout here? since this is the async part, and I think 10s was used elsewhere? I think 30s is a fairly standard timeout for a rando activity.
// By using a local activity, an additional server roundtrip is avoided.
// See https://docs.temporal.io/activities#local-activity for more details.

activityOptions := workflow.WithLocalActivityOptions(ctx, workflow.LocalActivityOptions{
I doubt you'll want the default retry options with such an aggressive schedule-to-close of 2s.
early-return/workflow.go (outdated)
UpdateName         = "early-return"
TaskQueueName      = "early-return-tq"
activityTimeout    = 2 * time.Second
earlyReturnTimeout = 5 * time.Second
This is aggressive. Our usual advice is to have more generous timeouts but to monitor latencies and keep them low.
Think about sitting at a computer waiting for an operation to complete. If it was going to take more than 5 seconds, would you want it to fail saying "maybe that worked"? Or would you rather wait longer? You'd probably be willing to wait a little longer.
Many client-facing RPC servers time out after around 30 seconds, and so these sorts of timeouts can be calibrated to be shorter than that.
Here's a draft of a comment:
One common heuristic: calibrate this number relative to your overall remaining client timeout. So, if your client will time out in 29 more seconds, you might choose 28s to give time to return and report the correct error.
In general, err on the generous side so as not to fail operations that would have succeeded.
Curious if @cretz's advice would be similar.
Completely situational I think. No strong opinion. Arguably the caller should determine how long they're willing to wait. A timer inside a workflow does not account for, say, the workflow being slow to start.
I've removed the "Await" timeout (earlyReturnTimeout
) (see other convo).
And I've bumped the local activity timeout to 5s now; and the async activity to 30s.
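Roughly, those adjustments would look like this; a sketch based on this comment, not a quote of the updated diff:

// Synchronous init phase: local activity, bumped from 2s to 5s.
localCtx := workflow.WithLocalActivityOptions(ctx, workflow.LocalActivityOptions{
	StartToCloseTimeout: 5 * time.Second,
})

// Asynchronous complete/cancel phase: regular activity, bumped to 30s.
activityCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
	StartToCloseTimeout: 30 * time.Second,
})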
func (tx *Transaction) ReturnInitResult(ctx workflow.Context) error {
	if err := workflow.Await(ctx, func() bool { return tx.initDone }); err != nil {
		return fmt.Errorf("transaction init cancelled: %w", err)
AFAICT, this is the only untested line of the workflow. I'm not quite sure how to use the Go SDK testing env to trigger this case.
You would need to test cancellation using https://pkg.go.dev/go.temporal.io/sdk/internal#TestWorkflowEnvironment.CancelWorkflow, no? Not saying you need to for this sample though.
Ah, that's the missing piece! Yeah, I agree, it's prob fine without.
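If someone did want to cover that line, the shape might be roughly this; an untested sketch that assumes the test's existing env and uc setup, placeholder workflow/update names, and that init is mocked or slow enough for the cancellation to land while the handler is still awaiting:

// Deliver the update in the first workflow task, as the existing test does.
env.RegisterDelayedCallback(func() {
	env.UpdateWorkflow(UpdateName, "update-id", uc)
}, 0)

// Cancel shortly afterwards, before initDone can become true, so the
// workflow.Await inside ReturnInitResult returns a cancellation error.
env.RegisterDelayedCallback(func() {
	env.CancelWorkflow()
}, time.Millisecond)

env.ExecuteWorkflow(Workflow, tx)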
We should link this sample from the readme as we do for all samples: https://github.com/temporalio/samples-go/blob/main/README.md
env.RegisterActivity(tx.CompleteTransaction)

uc := &updateCallback{}
env.RegisterDelayedCallback(func() {
nit: would add a comment explaining this will guarantee the update is sent in the first WFT.
FWIW I'm confused even after Quinn's comment. Why is this test seemingly not using the normal interface for UwS?
I assume you have a good reason related to the Go test library and that there's nothing fixable in this PR. But I'm wondering if there's something we can improve as part of UwS public preview or GA.
Is it not backed by the Java test service?
Correct; the Go SDK has its own time-skipping test server, and it has its own APIs. We haven't added a UwS API since this approach here works, too. But I agree, it's not immediately obvious (at least it wasn't for me, I had to ask Quinn).
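For anyone else who lands here with the same question, the trick in the test boils down to this; a sketch following the test hunk quoted above, where the update ID is arbitrary and updateCallback is the test's own UpdateCallbacks implementation:

uc := &updateCallback{}
// A zero-delay callback runs as part of the first workflow task, so the update
// is delivered together with the start, approximating update-with-start.
env.RegisterDelayedCallback(func() {
	env.UpdateWorkflow(UpdateName, "update-id", uc)
}, 0)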