Add support for client and server message buffering #28
Open
jvican wants to merge 5 commits into master from topic/client-buffering
Commits on Jun 4, 2021
- fcd33b6
- 359e5c9
- 5d89d68: Remove unnecessary `CallOptionsMethods`

  It has been renamed to `ClientCallOptionsApi`, so it is no longer needed.
- 9f04101: Implement client buffering with `asyncBoundary`

  One of the current problems with our implementation is that `request` is only called when the downstream consumer finishes processing a message received from the server. That is, if the server sends 10 messages on the same pipe and the client consumer is busy processing the first message, the other nine messages are not populated in the observable and might not even reach the client's gRPC Netty buffer. Only when the client finishes processing the first message does it call `request` and process another message, and the same pattern continues: every message is pulled right before it is processed, not earlier. This obviously carries a performance penalty, and such behavior can change the way the consumer processes messages that could be sent or processed in parallel, resulting in unexpected user semantics.

  This commit uses `asyncBoundary` to fix this problem and have monix-grpc populate an internal buffer with as many messages as fit, while still allowing consumers to process these messages individually without waiting for the buffer to fill completely. If the server sends more messages than fit in the buffer, `asyncBoundary` stops consuming messages from the gRPC source and consequently stops issuing `request` calls, so the gRPC runtime doesn't call `onMessage` anymore until the messages in the buffer are processed downstream. This behavior is great because the messages processed by `onMessage` are added to a hot observable, and `asyncBoundary` gives us all the benefits of a backpressured source without actually using that source. As a result, our hot observable will never grow larger than our buffer, because `onMessage` is only called when `request` is called. So this simple solution actually gives us the semantics we want (a rough sketch of the idea follows this commit entry).

  However, there are still some things to improve, namely:

  - Users might want to specify a "low tide" for this buffer, that is, a level below which we always rehydrate the buffer until it is full, but above which we do not rehydrate it. This argument gives the client the flexibility to optimize for memory if it doesn't want the buffer filled up to its limit.
  - We might want to `request` several messages at once instead of one at a time when repopulating the buffer. It's not clear whether this actually gives us any benefit, as gRPC's internal engine is likely already buffering things in Netty byte buffers, and calling `request` for each message might not pose any performance penalty.
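The following is only an illustrative sketch of the `asyncBoundary` technique described in the commit message above, not the actual monix-grpc internals: a bounded, back-pressured buffer sits between a fast upstream (standing in for the hot observable fed by `onMessage`) and a slow consumer, so the upstream is only pulled when the buffer has room. The producer, buffer size, and processing delay are made up for the example.

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import monix.reactive.{Observable, OverflowStrategy}
import scala.concurrent.duration._

object BufferingSketch extends App {
  // Stand-in for the hot observable fed by gRPC's onMessage; in monix-grpc,
  // each upstream pull would correspond to a `request(1)` call on the gRPC call.
  val upstream: Observable[Long] =
    Observable
      .range(0, 100)
      .doOnNext(n => Task(println(s"pulled $n from upstream")))

  // asyncBoundary detaches the consumer from the producer and keeps at most
  // 16 elements in an internal buffer; once the buffer is full it stops
  // pulling from upstream until the slow consumer catches up.
  val buffered: Task[Unit] =
    upstream
      .asyncBoundary(OverflowStrategy.BackPressure(16))
      .mapEval(n => Task(println(s"processed $n")).delayExecution(10.millis))
      .completedL

  buffered.runSyncUnsafe()
}
```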
Commits on Jun 14, 2021
- 104b3b1: Add server call options API + support server-side buffering

  A similar adjustment to the one made for the client call options API. I've taken the liberty to redesign how server call options are passed, regardless of binary compatibility, since we don't promise backwards compatibility until we make a stable release >= 1.0. I have also added some more documentation to the server call to make it clear that we are trying to replicate the API of the underlying grpc-java library.
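The PR text does not show the new server call options surface here, so the snippet below is purely a hypothetical sketch of how a server handler could apply the same `asyncBoundary` buffering to its inbound request stream; the `ServerCallOptions` case class and its `bufferSize` field are invented for illustration and are not the actual monix-grpc API.

```scala
import monix.eval.Task
import monix.reactive.{Observable, OverflowStrategy}

// Hypothetical option carrier mirroring the idea of a server call options API;
// the real monix-grpc types may differ.
final case class ServerCallOptions(bufferSize: Int = 32)

// A server-side handler could buffer inbound requests with the same
// asyncBoundary trick, so slow request processing does not force the
// transport to deliver messages strictly one at a time.
def handleRequests(requests: Observable[String], options: ServerCallOptions): Task[Unit] =
  requests
    .asyncBoundary(OverflowStrategy.BackPressure(options.bufferSize))
    .mapEval(msg => Task(println(s"handling $msg")))
    .completedL
```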