Idea: Concurrency manager #1534
Nice to see a discussion on this. My main concern about just having the usual threads is that we can have connections that last for quite a while, and even worse, they just hang there. I've dealt with production servers that had hundreds of websocket connections just sitting there, and other connections that were long polling. Creating a thread for each blocking connection gets very heavy. I would also like to add another article to the discussion: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/. Now I don't think the solution needs to be the async/await that we see in other languages, but having a good way to approach this problem would be fantastic. The default seems to be just callbacks, but I'm not a fan of those.
I've worked with different solutions. Note that a thread that blocks on a connection is only valid if you either only read or only write. If you need both, you end up with 1 or 2 extra threads(!) (where the last thread is to control the other two...). In that case, I almost always just write a single thread that does a …

Now it's common to just use lambdas instead of proper events. Indeed, for something general, that's probably the only really flexible solution in a lot of languages. However, this just spreads code EVERYWHERE, besides the problem of having to use callbacks.

The design I keep coming back to, because it just works so well, is to pass events to event handlers, where each event handler keeps its implementation internal to itself, and the event message becomes the async "call". An event is then an enum value + a payload. This allows very nice features such as:
All of these are very beneficial, even though it looks a bit primitive compared to closures. It's a much more solid approach. However, this is for applications where there are simultaneous reads/writes, for example in a game.
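The enum-plus-payload design described above might look something like the following minimal Python sketch. All names here (`EventKind`, `Event`, `ConnectionHandler`) are illustrative stand-ins, not anything proposed in the discussion:

```python
import queue
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any

class EventKind(Enum):
    CONNECTED = auto()
    DATA = auto()
    CLOSED = auto()

@dataclass
class Event:
    kind: EventKind   # the enum value
    payload: Any      # the payload carried by the async "call"

class ConnectionHandler:
    """Keeps its implementation internal; the outside world only posts events."""

    def __init__(self):
        self._inbox = queue.Queue()   # events can be queued, logged, replayed...
        self.received = []

    def post(self, event: Event) -> None:
        self._inbox.put(event)

    def pump(self) -> None:
        # Drain pending events; each event is dispatched on its enum value,
        # so the handler decides internally what each message means.
        while not self._inbox.empty():
            ev = self._inbox.get()
            if ev.kind is EventKind.DATA:
                self.received.append(ev.payload)
            elif ev.kind is EventKind.CLOSED:
                self.received.append(None)

h = ConnectionHandler()
h.post(Event(EventKind.DATA, b"hello"))
h.post(Event(EventKind.CLOSED, None))
h.pump()
```

Because the event is plain data rather than a closure, it can be serialised, filtered, or recorded before the handler ever sees it, which is one of the features the comment alludes to.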
It's still a bit unclear to me what that would look like. In a simpler version of just sending requests, how would you make it available for use? I know you said it's often not the best solution, but having something would already help. Searching for examples of these is not easy either.
Inspired by the article: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
Inspired by scoped allocators: an opt-in scope with a job scheduler which can spin up and pause a "thread-like" on IO or other interrupts (?) would be really interesting. Like a `DynamicArenaAllocator`, the job scheduler behaviour could be configurable to suit an application, e.g. how many "thread-like"s to make.
Some pseudocode of network-based concurrency
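The pseudocode referenced here is collapsed in the original issue. A hedged sketch of what the network-flavoured version could look like, using Python generators to stand in for pausable "thread-like"s (the names `handle_connection` and `run` are made up for this sketch, and IO is simulated):

```python
from collections import deque

log = []

def handle_connection(conn_id, reads):
    # One "thread-like" per connection; each `yield` is a point where the
    # scheduler would pause this task while it waits on the socket.
    for chunk in reads:
        yield ("wait_read", conn_id)   # simulated blocking read
        log.append(f"conn {conn_id}: {chunk}")

def run(tasks):
    # Round-robin scheduler: resume each paused thread-like in turn.
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run until the next pause point
            ready.append(task)
        except StopIteration:
            pass                # this thread-like finished

run([handle_connection(1, ["a", "b"]), handle_connection(2, ["c"])])
```

Note how the two connections interleave without a thread each, which is the property the earlier comments about hanging websocket connections are after.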
Some pseudocode of long-running-job-based concurrency
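This pseudocode is likewise collapsed in the original issue. A sketch of the long-running-jobs flavour, where the configurable knob is the number of "thread-like"s (here real worker threads; `run_jobs` is an invented name):

```python
import queue
import threading

def run_jobs(jobs, workers=4):
    # `workers` is the "how many thread-likes to make" configuration knob.
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return              # no more jobs: worker exits
            r = job()
            with lock:
                results.append(r)   # completion order is nondeterministic

    for job in jobs:
        q.put(job)
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_jobs([lambda i=i: i * i for i in range(8)], workers=3)
```

A scheduler like this trades latency for throughput by batching work onto a fixed pool, which is one point in the design space the variables below enumerate.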
But it should also support things which are of a very different nature, e.g. high-performance computing job scheduling.
**Variables:** latency, throughput, cross-job synchronisation, number of jobs, job length, job scheduling

**Platform constraints:** platform memory amount, platform memory bandwidth, platform memory access times if NUMA, platform CPU/GPU, platform network
**Task constraints**
- Cross-task communication
- Task groups (just a kind of task)
  - e.g. fetching details from a database to use in an API request
  - Expression blocks or functions might be a nice way to express this
- Task priority
- Fault tolerance: how to handle failed tasks
- Scheduler types
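The task-group example above (a database fetch feeding an API request) can be sketched with futures; `fetch_user` and `call_api` are hypothetical stand-ins for the real IO, not part of any proposal here:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_user(user_id):
    # Stand-in for the database read.
    return {"id": user_id, "name": f"user-{user_id}"}

def call_api(user):
    # Stand-in for the dependent API request that needs the DB result.
    return f"GET /profile/{user['name']}"

with ThreadPoolExecutor(max_workers=2) as pool:
    # The two steps form one task group: the second task waits on the first,
    # which is the kind of dependency an expression block could express.
    user_future = pool.submit(fetch_user, 42)
    request = pool.submit(call_api, user_future.result()).result()
```

An expression-block syntax would presumably let the scheduler see this dependency directly instead of it being hidden inside a blocking `.result()` call.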
So this abstraction should work through the whole range of use-cases, from an embedded system to a video game, a webserver, a NUMA system, and on to a distributed system.