Upgrade the task scheduler to stable Tokio #75
Comments
Maybe switch to smol?
@rohitjoshi interesting suggestion. I really like the simplicity of smol.
Have you guys figured out a way you would like to move forward with this? I have spent some time looking at your Runtime code and I was wondering if I might be able to provide some help! (Disclaimer: I am relatively new to Rust and especially async Rust)
@ratnadeepb I'll let @drunkirishcoder say more, but we do already have a new, updated run-to-completion model that's currently WIP. We've just been a bit busy with other projects, but plan to finish up the work soon. Once it's up for review, however, we could def use some review, testing, etc.
I have implemented a runtime-agnostic version of capsule, leveraging …
Background
Capsule v0.1.0 uses a customized task scheduler leveraging a preview version of Tokio and futures. We need to upgrade to the stable releases of futures and Tokio now that they are available.
Problem
The Tokio library has gone through a major project restructuring. The components we leveraged to build the custom task scheduler are no longer accessible from outside the Tokio crate. Instead, we have to use the default threadpool scheduler if we want to avoid writing our own task scheduling. To move off of the preview releases, the current scheduler has to be reimplemented.
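For illustration, here is a minimal sketch of configuring the stock multi-threaded runtime in stable Tokio (assuming Tokio 1.x); the worker-thread count and thread name are placeholders, not actual Capsule settings:

```rust
use tokio::runtime::Builder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The internal scheduler pieces used by the preview-based implementation
    // are no longer public, so we configure the built-in multi-threaded runtime.
    let runtime = Builder::new_multi_thread()
        .worker_threads(4) // placeholder: e.g. one worker per processing core
        .thread_name("capsule-worker") // placeholder thread name
        .enable_all()
        .build()?;

    runtime.block_on(async {
        // pipeline setup and packet processing would be driven from here
    });

    Ok(())
}
```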
Proposal
The current threading model Capsule uses is a run-to-completion model, where each thread is responsible for reading a batch of packets from the RX queue, feeding the batch through the combinator pipeline for processing, sending the packets out through the TX queue, rinse and repeat.
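As a rough sketch of that per-thread loop — `Port`, `Packet`, and `process_pipeline` are hypothetical stand-ins for illustration, not Capsule's actual API:

```rust
struct Packet;
struct Port;

impl Port {
    /// Read a batch of packets from the RX queue (stubbed out here).
    fn receive(&mut self) -> Vec<Packet> {
        Vec::new()
    }

    /// Send a batch of packets out through the TX queue (stubbed out here).
    fn transmit(&mut self, _batch: Vec<Packet>) {}
}

/// Feed a batch through the combinator pipeline (identity stub).
fn process_pipeline(batch: Vec<Packet>) -> Vec<Packet> {
    batch
}

fn main() {
    let mut port = Port;
    // Each worker thread runs this loop against its own RX/TX queue pair.
    loop {
        let batch = port.receive();
        let processed = process_pipeline(batch);
        port.transmit(processed);
    }
}
```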
Instead, we can move to a threadpool-based model, where a single receiving thread reads batches of packets from the RX queue and queues each batch as a task that the threads in a threadpool can then pick up and process.
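A hedged sketch of how that could look on the stable Tokio threadpool; `receive_batch` and `process_and_transmit` are again hypothetical stand-ins rather than Capsule functions:

```rust
use tokio::runtime::Runtime;

struct Packet;

/// Hypothetical stand-in: read one batch of packets from the RX queue.
fn receive_batch() -> Vec<Packet> {
    Vec::new()
}

/// Hypothetical stand-in: run the combinator pipeline and transmit via the TX queue.
fn process_and_transmit(_batch: Vec<Packet>) {}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The default multi-threaded runtime plays the role of the threadpool.
    let runtime = Runtime::new()?;

    runtime.block_on(async {
        // A single receiving task reads batches from the RX queue...
        loop {
            let batch = receive_batch();
            // ...and hands each batch to the pool as its own task.
            tokio::spawn(async move {
                process_and_transmit(batch);
            });
            // A real implementation would await the RX queue instead of spinning.
            tokio::task::yield_now().await;
        }
    });

    Ok(())
}
```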
This will be a major breaking change, and we will need to measure and compare the performance of the two threading models.