Early-return sample #366
I understand we want to demonstrate low latency, but this is a pretty aggressive timeout. It's probably ok, though.
Yeah, the idea for the sample is to push the low-latency story, and this is actually on the upper end of what I'm hearing for customer use cases.
If that's the case, you may not want a default retry policy with an initial interval of a second. You may actually want max attempts set to 1.
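For example, a minimal sketch in the Go SDK (assuming the usual `workflow` and `temporal` SDK packages; the timeout value is just illustrative, not the one in the PR):

```go
lao := workflow.LocalActivityOptions{
	ScheduleToCloseTimeout: 2 * time.Second, // whatever tight budget the sample settles on
	RetryPolicy: &temporal.RetryPolicy{
		MaximumAttempts: 1, // don't retry; fail fast within the tight budget
	},
}
ctx = workflow.WithLocalActivityOptions(ctx, lao)
```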
I would be as lax as you can on these timeouts. Don't want to unnecessarily fail requests that would have otherwise succeeded. So if the overall workflow task timeout is 10s, maybe give it 9 or 10s?
This is aggressive. Our usual advice is to have more generous timeouts but to monitor latencies and keep them low.
Think about sitting at a computer waiting for an operation to complete. If it was going to take more than 5 seconds, would you want it to fail saying "maybe that worked"? Or would you rather wait longer? You'd probably be willing to wait a little longer.
Many client-facing RPC servers time out after around 30 seconds, and so these sorts of timeouts can be calibrated to be shorter than that.
Here's a draft of a comment:
One common heuristic: Calibrate this number relative to your overall remaining client timeout. So, if your client will timeout in 29 more seconds, you might choose 28s to give time to return and report the correct error.
In general, err on the generous side so as not to fail operations that would have succeeded.
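As a rough client-side sketch of that heuristic (purely illustrative; assumes the caller's `ctx` is a `context.Context` that carries its deadline):

```go
// Fall back to a generous default if the caller set no deadline.
timeout := 30 * time.Second
if deadline, ok := ctx.Deadline(); ok {
	// Leave ~1s of headroom so we can still return and report the real error.
	timeout = time.Until(deadline) - time.Second
}
```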
Curious if @cretz's advice would be similar.
Completely situational I think. No strong opinion. Arguably the caller should determine how long they're willing to wait. A timer inside a workflow does not account for, say, the workflow being slow to start.
I've removed the "Await" timeout (`earlyReturnTimeout`) (see other convo). And I've bumped the local activity timeout to 5s now, and the async activity to 30s.
Usually a sin these days to use floats when talking about money.
I had the same thought at first, but I figured it's more intuitive this way in the context of a sample? If this has shifted and I didn't get the memo, I'm happy to change it. I didn't want to add extra complexity/confusion.
I think int would be better; I doubt it adds complexity. We do this in our tutorial too at https://github.com/temporalio/money-transfer-project-template-go/blob/2bb1672af07cb76d449f14beb046e412f44a7afb/shared.go#L12
I see 👍 I'll change it. I thought this would go into the idea of representing cents, too, but the linked example just uses "250" without any denomination.
I'd be ok if you documented that it was in cents or added `Cents` to the field or something.
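Something like this is all the suggestion amounts to (field names are illustrative, not from the PR):

```go
// Represent money as integer cents to avoid float rounding issues.
type Transaction struct {
	SourceAccount string
	TargetAccount string
	AmountCents   int // e.g. 250 means $2.50
}
```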
Feel free to make a struct with your state, your run as a method, and your update handler as a method, instead of all in one function. What is here is fine of course, but usually when workflows branch out to handlers and many anonymous functions, it is clearer to use traditional structs with method declarations.
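A rough sketch of that shape, with hypothetical names (not the PR's actual code):

```go
// TransactionWorkflow holds the workflow state; its methods replace the
// anonymous handler functions in the single workflow function.
type TransactionWorkflow struct {
	tx        Transaction
	initDone  bool
	initError error
}

// Run is the main workflow body.
func (t *TransactionWorkflow) Run(ctx workflow.Context, tx Transaction) error {
	t.tx = tx
	if err := workflow.SetUpdateHandler(ctx, "early-return", t.WaitForInit); err != nil {
		return err
	}
	// ... init the transaction, then complete or cancel it ...
	return nil
}

// WaitForInit is the update handler: it returns once init has finished.
func (t *TransactionWorkflow) WaitForInit(ctx workflow.Context) error {
	if err := workflow.Await(ctx, func() bool { return t.initDone }); err != nil {
		return err
	}
	return t.initError
}
```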
Good idea 👍 Only part I don't quite follow is the "run as a method". Is that possible in the Go SDK? I saw an error when trying to register the workflow method of the struct and couldn't find any example doing that (only for activities).
You'd do the wrapping w/ a one-liner, so something like:
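Roughly, using the hypothetical struct sketched above:

```go
// wrk is the worker.Worker; the one-liner wraps the struct's Run method in a
// plain workflow function and registers it under a stable name.
wrk.RegisterWorkflowWithOptions(
	func(ctx workflow.Context, tx Transaction) error {
		return (&TransactionWorkflow{}).Run(ctx, tx)
	},
	workflow.RegisterOptions{Name: "TransactionWorkflow"},
)
```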
👍 I thought I might have missed a trick to do it with a single method
I think you should leave how long a caller is willing to wait for the initial update up to them unless it's really important to differentiate start-to-update timeout from schedule-to-update timeout.
Okay; I thought there was a risk that the caller might accidentally wait indefinitely if they don't specify a deadline on their end. But I just turned off the worker, and the ExecuteWorkflow request times out after 10s, even though I'm using `context.Background()`. I wasn't aware that timeout existed. I'm learning a lot about writing workflows right now.
Arguably this logic could be flipped, and users may prefer that in many scenarios. You can flip it so that the update does the init and the primary workflow waits for an init update before continuing. There are tradeoffs to both.
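A hypothetical sketch of the flipped shape (names are illustrative):

```go
// The update handler performs the init; the main workflow blocks until an
// init update has been handled before doing anything else.
var initialized bool
err := workflow.SetUpdateHandler(ctx, "init", func(ctx workflow.Context, tx Transaction) error {
	// ... run the init / early-return logic here ...
	initialized = true
	return nil
})
if err != nil {
	return err
}
if err := workflow.Await(ctx, func() bool { return initialized }); err != nil {
	return err
}
// ... continue with the rest of the workflow ...
```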
In all my examples until now, I've actually had it flipped. Drew convinced me to do it the other way around, but I'm not quite sure anymore why. What makes you say that the other way might be more preferable?
Since updates are not durable when they're only admitted, that is why we did it this way; this is the only safe way to write this type of workflow, if I remember correctly.
I don't necessarily think it's more preferable; there are just tradeoffs. The main tradeoff is probably what you want the workflow to do when it's not called via update-with-start. If you want it to function normally, no problem; if you want it to wait for an update to get it moving, you probably want the logic flipped.
It isn't more preferable; this is the only safe way to write it, since update-with-start is not transactional.
My main reasoning is what Chad said: you want the workflow to function properly when the client doesn't call it with an update.
Quinn's reasoning makes sense too.
Do we at least guarantee the update and the start are in the same task? If we don't, all latency bets are off anyways. But whether primary workflow waits on init from update or update waits on init from primary workflow is immaterial I'd think (except if the update can come in a separate task which would be a concern).
We do guarantee it for Update-with-Start.
👍 Then yeah I think it's probably just semantics on which coroutine waits on the other and probably doesn't matter
Another strong reason to do it this way is that, if the update did the init, then the workflow author has to make sure the workflow is correct in the face of multiple calls to the update handler, i.e. normal updates being sent subsequent to the update with start. But with all steps in the main workflow, multiple calls to the update handler are automatically correct.
I doubt you'll want the default retry options with such an aggressive schedule-to-close of 2s.
It is confusing that the `activityTimeout` global is only used once and is for a local activity timeout, while this one isn't even a global and is actually an activity timeout. Arguably there is no need for these single-use globals instead of just inlining, but if there is, consider consistently using globals for these.
Right; I've just made them global to make it easier to see at a glance what the timeouts are without reading line-by-line.
But you've only made some of them global, and the global name is ambiguous because it's not a general activity timeout (that's hardcoded right here); it's the init transaction timeout.
nit: can you use a different, longer timeout here, since this is the async part and I think 10s was used elsewhere? I think 30s is a fairly standard timeout for a rando activity.
It's not usually common practice to swallow an error as an info-level log and possibly return success. Usually you would want to mark the workflow failed, for various observability reasons.
I see your point. My only worry is that users wouldn't be able to distinguish between "failed to init" and "failed to cancel/complete" - which might require very different actions. At the same time, they would probably have some kind of monitoring themselves?
What I mean here is that if `CancelTransaction` succeeds, operators have no observability into the failure, because you have logged the failure as info and did not fail the workflow. Usually with compensating actions, you want to propagate the original failure, not log-and-swallow. Of course it's up to user preference whether they ever want to fail the workflow on a failed transaction, but I think most do.
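In other words, roughly this shape (`CompleteTransaction` is an assumed name for the failing step; only `CancelTransaction` comes from the sample):

```go
// If the main step fails, run the compensating CancelTransaction activity,
// then still fail the workflow with the original error so operators keep
// observability into what went wrong.
if err := workflow.ExecuteActivity(ctx, CompleteTransaction, tx).Get(ctx, nil); err != nil {
	if cancelErr := workflow.ExecuteActivity(ctx, CancelTransaction, tx).Get(ctx, nil); cancelErr != nil {
		return fmt.Errorf("cancel failed (%v) after original failure: %w", cancelErr, err)
	}
	return err // propagate the original failure instead of logging it at info
}
```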