Asynchronous function invocation support #368
base: master
Conversation
While this PR does not specifically attempt to address #356, it is possible to use the …
FYI @mswest46
Per our conversation:
It looks like there are a few separate features here which can largely be used independently:
- a result cache which can be used by expensive functions;
- parallel execution for high-latency calls;
- a potentially arbitrary argument passed in at the top level and propagated to specially-declared extension functions.
  - Due to strange golang magic, restricted to be just `context.Context`, but hey, as long as you're digging one tunnel, why not build a parallel shaft and plumb through an `interface{}`? Consider it.
  - Even though `context.Context` allows arbitrary key/value pairs to piggyback on it, use of this as a general-purpose optional argument is contra-indicated in its docs.
At a minimum, consider allowing these functions to be used in the API independently.
Consider also having the result cache be implemented as a wrapper layer exploiting the plumbed-through `interface{}` for the cache. Yes, for peak efficiency you'd want this a little more integrated when you're using it with parallelism, so you can avoid creating a new goroutine when there'd be a cache hit, but in cases where the cache is useful this overhead is probably negligible, so it might be worthwhile for the cleaner stratification of features. OTOH, the deeper integration might not be that much more complex.
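A minimal sketch of that wrapper idea, with every name hypothetical (nothing here is cel-go API or this PR's code); the cache lives in a closure for brevity, though it could just as well ride on the plumbed-through per-evaluation value:

```go
// Hypothetical sketch only: AsyncFn and WithResultCache are illustrative
// names, not part of cel-go or of this PR.
package cache

import (
	"context"
	"fmt"
	"sync"
)

// AsyncFn stands in for whatever signature the expensive extension functions
// end up with: the plumbed-through context plus already-evaluated arguments.
type AsyncFn func(ctx context.Context, args ...interface{}) (interface{}, error)

// WithResultCache wraps fn so repeated calls with the same arguments reuse the
// first successful result. Keys combine the overload id with the arguments.
func WithResultCache(overloadID string, fn AsyncFn) AsyncFn {
	var mu sync.Mutex
	memo := map[string]interface{}{}
	return func(ctx context.Context, args ...interface{}) (interface{}, error) {
		key := fmt.Sprintf("%s(%v)", overloadID, args)
		mu.Lock()
		if v, ok := memo[key]; ok {
			mu.Unlock()
			return v, nil // cache hit: no underlying call needed
		}
		mu.Unlock()
		v, err := fn(ctx, args...)
		if err != nil {
			return nil, err
		}
		mu.Lock()
		memo[key] = v
		mu.Unlock()
		return v, nil
	}
}
```

Since the wrapper consults the cache before delegating, a cache hit never reaches whatever goroutine-spawning path a parallel-execution layer underneath it might have.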
Consider having these new features just be enabled/disabled at each extension function, maintaining just one kind of `Program`.
Lastly, these all need better names and documentation. It's not an async function
since you're never returning from the call stack. The docs need to make it clear what's being implemented here.
optSet = mergedOpts
}
return newProgram(e, ast, optSet)
return e.newProgram(ast, opts /* async= */, false)
It's odd to have the comma between the comment and the field it documents, here and below.
}

// AsyncProgram generates an evaluable instance of the Ast with support for asynchronous extension
// functions.
It's unclear what an "asynchronous extension function" is.
As a workaround for CEL-Go currently not supporting threading the user context through expression and function evaluation, this patch adds support for the caller to thread request deadline and trace context through to the CEL functions, so that downstream gRPC calls can inherit the request deadline and trace context. This might all be removable if/when CEL attains built-in support for "async functions", as per google/cel-go#368
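The closure-capture pattern that workaround amounts to, sketched with hypothetical names (`lookupClient`, `unaryFn`, and `bindLookup` are placeholders, not cel-go API): the per-request context is baked into the function binding when the program is set up for that request, so the downstream call inherits the deadline and trace context.

```go
// Sketch only; every identifier here is a placeholder.
package eval

import "context"

// lookupClient stands in for a generated gRPC client used by a CEL function.
type lookupClient interface {
	Lookup(ctx context.Context, key string) (string, error)
}

// unaryFn stands in for however the runtime expects a unary extension
// function to be bound.
type unaryFn func(arg string) (string, error)

// bindLookup builds a per-request binding that carries the request's context,
// so the downstream RPC inherits its deadline and trace metadata.
func bindLookup(ctx context.Context, c lookupClient) unaryFn {
	return func(arg string) (string, error) {
		return c.Lookup(ctx, arg)
	}
}
```

The cost is rebuilding the bindings (and the program options that carry them) on every request, which is the overhead that built-in support for passing a context at evaluation time would remove.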
The PR introduces support for asynchronous functions as CEL extensions.
The general algorithm for evaluation relies on the `types.Unknown` value to indicate when
async calls are relevant to the outcome of an expression. Async call arguments are tracked,
deduped, and memoized based on the overload id associated with the implementation.
It is possible to use partial state evaluation with async evaluation, though async calls will
be resolved before any unknown attribute patterns. Using both features together may be
inefficient; however, there is a general expectation that async function implementations may
implement their own caching logic in order to handle repeated invocations of the same
async function with the same arguments.
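A rough sketch of that evaluate-and-resolve loop, using stand-in types rather than anything from this PR or cel-go's internals; `evalOnce` plays the role of a pass that reports which async results are still marked `types.Unknown`, and the memo map is keyed by overload id plus arguments:

```go
// Illustrative sketch only; not the PR's implementation.
package asynceval

import "fmt"

// asyncCall identifies one pending call: the overload id plus its
// already-evaluated arguments.
type asyncCall struct {
	OverloadID string
	Args       []interface{}
}

func (c asyncCall) key() string { return fmt.Sprintf("%s(%v)", c.OverloadID, c.Args) }

// evalOnce stands in for one pass over the expression: it either produces a
// final value or the set of async calls whose results are still unknown.
type evalOnce func(memo map[string]interface{}) (result interface{}, pending []asyncCall)

// invoke stands in for dispatching one async call (e.g. an RPC).
type invoke func(c asyncCall) (interface{}, error)

// run re-evaluates until no async results are missing. Calls are deduped by
// overload id + args and memoized across iterations.
func run(eval evalOnce, call invoke) (interface{}, error) {
	memo := map[string]interface{}{}
	for {
		result, pending := eval(memo)
		if len(pending) == 0 {
			return result, nil
		}
		resolvedNew := false
		for _, c := range pending {
			k := c.key()
			if _, done := memo[k]; done {
				continue // deduped: same overload id + args already resolved
			}
			v, err := call(c)
			if err != nil {
				return nil, err
			}
			memo[k] = v
			resolvedNew = true
		}
		if !resolvedNew {
			return nil, fmt.Errorf("no progress: %d async calls still pending", len(pending))
		}
	}
}
```

The pending batch could just as well be dispatched in parallel before re-evaluating; the dedupe-by-key step is what keeps repeated invocations of the same function with the same arguments from being issued more than once.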