Better-behaved HBONE pooling #931
Skipping CI for Draft Pull Request.
I might be missing something but it seems like correctly resolving hash collisions is critical for the pool's functionality and not yet implemented or tested.
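For illustration, a minimal sketch of the concern (types and names are hypothetical, not taken from the PR): if the pool is keyed only by a u64 hash of the src/dest key, two distinct keys can collide, so a checkout needs to compare the full key before handing a pooled connection back.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical pool entry that carries the full key, so a checkout
/// can detect a u64 hash collision instead of silently handing back
/// a connection that belongs to a different (src, dest) pair.
struct Entry<K, C> {
    full_key: K,
    client: C,
}

fn hash_key<K: Hash>(key: &K) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish()
}

fn checkout<'a, K: Hash + Eq, C>(
    pool: &'a HashMap<u64, Entry<K, C>>,
    key: &K,
) -> Option<&'a C> {
    pool.get(&hash_key(key))
        // Collision guard: matching hashes are necessary but not
        // sufficient; the full keys must be equal too.
        .filter(|e| &e.full_key == key)
        .map(|e| &e.client)
}
```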
Added a form of connection timeout to evict connections from the pool that haven't been used in a while. Once evicted, they will be closed by hyper when clients close.
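A minimal sketch of that style of idle eviction (types and field names are hypothetical, not the PR's actual implementation): track a last-used instant per pooled reference and periodically drop entries that have gone stale. Since the pool only holds a reference, the underlying connection dies once its remaining users drop theirs.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{Duration, Instant};

/// Hypothetical pooled entry: the pool holds only a reference to the
/// client, plus the last time it was checked out.
struct PooledConn<C> {
    client: Arc<C>,
    last_used: Instant,
}

/// Drop the pool's references to connections idle longer than `timeout`.
/// The underlying connection dies once all outstanding Arcs drop.
fn evict_idle<C>(pool: &mut HashMap<u64, PooledConn<C>>, timeout: Duration) {
    let now = Instant::now();
    pool.retain(|_, conn| now.duration_since(conn.last_used) < timeout);
}
```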
Switched to a proper concurrent hashmap that actually has a decent pedigree, which should allow for nonblocking reads (at the cost of using a little more memory, TANSTAAFL).
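For context, this is the general shape of such a concurrent map (using the flurry crate here purely as an illustrative stand-in; the crate choice is an assumption, not a statement of what this PR uses): readers pin an epoch guard rather than taking a map-wide lock, trading a little memory for read-side parallelism.

```rust
// Sketch only: assumes the `flurry` crate, a concurrent hash map
// with lock-free reads. Illustrative, not necessarily this PR's pick.
use flurry::HashMap;

fn main() {
    // Map of hashed pool key -> pooled thing (String as a stand-in).
    let map: HashMap<u64, String> = HashMap::new();

    // Readers and writers "pin" an epoch guard instead of taking a
    // map-wide lock, so reads don't queue behind writers.
    let mref = map.pin();
    mref.insert(42, "conn".to_owned());

    if let Some(v) = mref.get(&42) {
        println!("found: {v}");
    }
}
```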
I'm not fully through the PR yet, but this seemed worth commenting on early, so here goes:
src/proxy/pool.rs:

// and has no actual hyper/http/connection logic.
connected_pool: Arc<pingora_pool::ConnectionPool<ConnClient>>,
// this must be an atomic/concurrent-safe list-of-locks, so we can lock per-key, not globally, and avoid holding up all conn attempts
established_conn_writelock: HashMap<u64, Option<Arc<Mutex<()>>>>,
Thinking more about this, it feels really non-rusty. I suppose the issue is pingora, and there's not, at present, a very good way around it, so I don't have a suggestion.
What does? The use of locks?
Just locks which don't wrap the thing they're locking...
Ah, yep. It's not very pretty, agreed.
But the kind of locking we have to do is not something the pools out there will do for us, and pingora_pool has some use, at least.
The important thing about a Mutex is who holds it; what's inside is sort of moot. The mutex here is arguably almost a semaphore, which is why it feels weird.
It's moot unless we want to fork or contribute to pingora (or, I guess, lock the entire pool, which we don't care to do), but the point of wrapping the mutex around the thing being accessed is more than cosmetic. It ties holding the lock and accessing the thing together in a way that we can't do with pingora. We'll need to be more careful, since we can't take advantage of that.
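To make the objection concrete, here is the usual Rust idiom being contrasted (a generic sketch, not code from this PR): when the Mutex wraps the data, holding the lock and accessing the data are tied together by the type system; a free-standing Mutex<()> next to the data relies on every call site remembering to take it. The pool can't use the wrapped form because the guarded state lives inside pingora's pool, not inside the Mutex.

```rust
use std::sync::Mutex;

// Idiomatic: the lock *owns* the data. You cannot reach `data`
// without holding the guard, so the compiler enforces the protocol.
struct Wrapped {
    data: Mutex<Vec<u32>>,
}

// The pattern under discussion: the lock and the data are siblings.
// Nothing stops code from touching `data` without taking `lock`;
// correctness rests on every call site's discipline.
struct SideBySide {
    lock: Mutex<()>,
    data: Vec<u32>,
}

fn use_wrapped(w: &Wrapped) {
    let mut guard = w.data.lock().unwrap();
    guard.push(1); // access is only possible through the guard
}
```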
If we wanted to make this simpler later, I would advocate we throw out pingora and implement our own inner data structure from scratch. What we want to do is non-standard, and pingora is not likely to want or need the changes we would have to make for it to work like this - nor is any other "generic" HTTP/2 pool.
Which is why, for now, I just use it and do the other stuff outside.
I'll review more closely tomorrow; I only had a chance for a quick look. As such, many of my comments are probably not accurate.
src/proxy/pool.rs:

// After waiting, we found an available conn, but for whatever reason couldn't use it.
// (streamcount maxed, etc)
// Start a new one, clone the underlying client, return a copy, and put the other back in the pool.
None => {
If we have multiple waiters, can they all end up here at once and spam a bunch of new connections?
No, because we only get a None here if we found a conn, checked it out, and it was at maxed stream count.
(And we don't put it back in that case - we leave it popped, to die a natural death once streamcount + refcount == 0.)
If we popped it, no one else can.
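A sketch of the invariant being described (simplified, with hypothetical types): because the connection is popped from the pool before its stream count is inspected, at most one waiter can observe it as maxed out and fall through to the new-connection path; the popped connection is simply never re-inserted.

```rust
use std::sync::Arc;

/// Hypothetical pooled HTTP/2 client handle.
struct ConnClient {
    streams_in_use: usize,
    max_streams: usize,
}

/// Simplified checkout: popping is what makes this race-free. Only
/// the caller that popped the conn can observe it as maxed out, so
/// only that caller takes the "start a new conn" path.
fn reuse_or_replace(popped: Option<Arc<ConnClient>>) -> Option<Arc<ConnClient>> {
    match popped {
        // Reusable: caller clones the client and puts a copy back.
        Some(conn) if conn.streams_in_use < conn.max_streams => Some(conn),
        // Maxed out (or nothing pooled): drop our ref and return None
        // so the caller dials a fresh connection. The maxed conn is
        // never re-inserted; it dies once streamcount + refcount == 0.
        _ => None,
    }
}
```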
I am seeing unexpected behavior. Opening up 99 connections at once, I got 9 connections. Only 1 of them closes after the idle timeout; the rest are zombies. Also, if I then open up 99 more, I get the same - 8 more zombies, 16 total.
I may have broken it with recent commits, then - and the tests aren't sufficient/need more tests.
In response to a cherrypick label: new pull request created: #983
(ty vm to @howardjohn for tracking down that
This works around several issues found in #864 relating to clients (loadgen, mostly) that open many connections very rapidly.

- hyper's pooling library has rather basic scheduling controls, to the point where we can rather easily overwhelm send_request such that it never resolves. Tweaking server/client stream limits does not work around this, as the pooling scheduler does not respect them when selecting pooled streams, and will simply allow a single conn to get backed up.
- hyper's pooling library doesn't really have the concept of multiple HTTP/2 connections to the same dest - which makes sense, as the HTTP/2 spec doesn't recommend this (the spec says SHOULD NOT rather than MUST NOT, which in spec-speak is "we can't stop you, I guess").
- Racing to create connections creates problems in scenarios where a lot of connection attempts are made at once. Due to the lack of throttling, many connections can actually be established ("cross the finish line at the same time") when we only need one, and this noise can compromise the ability of the local or dest ztunnel to respond to other workloads.
Notes:
By design, this pool does not manage Hyper client state, nor is it tightly coupled with it. All it does is hold references to Hyper clients that can be reused, and evict its own held references to those Hyper clients when they cannot be reused. The rest is managing async access.
By design, this pool creates backpressure when one client opens many connections to the same destination at the same time. It does this by locking connection-mutative operations per src/dest key.
The only write lock should be on that inner src/dest key. The only operations inside the pool that must be ordered/force ordering are: 1. ensuring that we have a key writelock in the outer map, and 2. locking the inner key writelock. The rest should not demand any sort of ordering to be safe.
This pool does not decrement stream counts: when a connection hits its max stream count, the pool pops the reference and removes it from the rotation, leaving the connection to die when whatever is using it drops all the refs, and creates a new connection on the next go-round. This leads to simpler accounting, while still maintaining the protections against connection storms that we need.
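Putting the notes together, a simplified sketch of the per-key backpressure described above (shapes assumed for illustration; the real pool also integrates pingora_pool, hyper clients, and a concurrent outer map so lookups don't serialize globally - a plain Mutex<HashMap> is used here only to keep the sketch short):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Hypothetical outer map of per-src/dest-key writelocks. (The real
/// pool keeps this in a concurrent map so the outer lookup doesn't
/// hold up all conn attempts; simplified here.)
struct PerKeyLocks {
    locks: Mutex<HashMap<u64, Arc<Mutex<()>>>>,
}

impl PerKeyLocks {
    /// Step 1: ensure a writelock exists for this key in the outer map.
    fn lock_for(&self, key: u64) -> Arc<Mutex<()>> {
        let mut map = self.locks.lock().unwrap();
        map.entry(key)
            .or_insert_with(|| Arc::new(Mutex::new(())))
            .clone()
    }
}

fn connect_with_backpressure(pool: &PerKeyLocks, key: u64) {
    // Step 2: take the inner per-key lock around the
    // connection-mutative section. Storms of connect attempts to one
    // destination queue here, one at a time, instead of racing.
    let keylock = pool.lock_for(key);
    let _guard = keylock.lock().unwrap();
    // ...check the pool, and only dial a new connection if nothing
    // reusable appeared while we waited for the lock...
}
```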