deadpool-lapin - recycling connections and channels #47
This would benefit us greatly as well (similar use case: a hyper service which sometimes needs to send something to RabbitMQ, and currently has to get a connection from the pool, open a channel, publish, and then close the channel). I think there may be some complexity around making sure the channel is safe to re-use, though. Channels have some state: they can be placed in confirm mode (by sending the `confirm.select` method). We use confirms on all channels, but an application that didn't want them on some channels would not want to be given a re-used channel which was previously put in confirm mode.
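For context, a minimal sketch of how that per-channel state comes about with lapin (assuming a lapin 2.x-style async API; the queue name is illustrative and error handling is minimal):

```rust
use lapin::{
    options::{BasicPublishOptions, ConfirmSelectOptions},
    BasicProperties, Connection, ConnectionProperties,
};

async fn publish_once(addr: &str, payload: &[u8]) -> Result<(), lapin::Error> {
    let conn = Connection::connect(addr, ConnectionProperties::default()).await?;
    let channel = conn.create_channel().await?;

    // This flips per-channel state: from now on, every publish on this channel
    // is confirmed by the broker. A pool that later hands this channel back out
    // would hand out that state along with it.
    channel
        .confirm_select(ConfirmSelectOptions::default())
        .await?;

    channel
        .basic_publish(
            "",         // default exchange
            "my-queue", // routing key (illustrative)
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?  // message handed to lapin
        .await?; // broker confirmation (only meaningful in confirm mode)
    Ok(())
}
```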
This is something very similar to what I've been thinking about since first working on this crate. I think it's pretty obvious that connection pooling only makes sense when sending a message to the broker. For anything that receives messages from the broker the pool is mostly useless, as the service very likely never returns the connection to the pool. Right now I imagine something like this:

```rust
impl Pool {
    pub async fn basic_publish(
        &self,
        exchange: &str,
        routing_key: &str,
        options: BasicPublishOptions,
        payload: Vec<u8>,
        properties: BasicProperties,
    ) -> Result<()> {
        ...
    }
}
```
Personally I would also love to add some kind of declaration system similar to kombu, where you declare the exchanges and queues in a separate structure and just reference them from there, e.g.:

```rust
struct Publisher {
    exchange: &str,
    routing_key: &str,
    options: BasicPublishOptions,
}

impl Publisher {
    pub async fn publish(&self, pool: &Pool, payload: Vec<u8>) -> Result<()> {
        ...
    }
}
```

Just sketching some ideas here...
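A hypothetical call site for that sketch could look roughly like this (none of these types exist today; `Publisher` and `Pool` refer only to the sketch above, and the exchange/routing key names are made up):

```rust
// Declared once, next to the rest of the application's topology.
let user_created = Publisher {
    exchange: "events",
    routing_key: "user.created",
    options: BasicPublishOptions::default(),
};

// Handlers then only reference the declaration and the pool.
user_created.publish(&pool, b"{\"id\": 42}".to_vec()).await?;
```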
It is my understanding that a server should have very few RabbitMQ connections and instead many channels: https://www.cloudamqp.com/blog/2018-01-19-part4-rabbitmq-13-common-errors.html

I don't know how authoritative a source this is, but the company specializes in this domain.
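In lapin terms, that layout boils down to one long-lived connection with channels created per task. A minimal sketch (the broker address is illustrative):

```rust
use lapin::{Connection, ConnectionProperties};

async fn setup() -> Result<(), lapin::Error> {
    // One (comparatively expensive) TCP/AMQP connection for the whole process...
    let conn = Connection::connect(
        "amqp://guest:guest@127.0.0.1:5672/%2f",
        ConnectionProperties::default(),
    )
    .await?;

    // ...and many (cheap) channels multiplexed over it, e.g. one per purpose or task.
    let _publish_channel = conn.create_channel().await?;
    let _consume_channel = conn.create_channel().await?;
    Ok(())
}
```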
While I don't have much experience, I share both that understanding and CloudAMQP's best practice 🙂 I was thinking about @bikeshedder's raw ideas, and I was wondering how far we can go without affecting the spirit of "dead simple", both in usage and in this crate's implementation. And it made me wonder again about this abstraction.
I'm currently tinkering with #196 which should make it possible to implement this feature. The most difficult part about it is the detection and handling of broken connections. That part could be solved by a new recycle method which is called by the pool.
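A rough sketch of how such a manager might look, assuming a deadpool 0.9-style `Manager` trait and the lapin status API; the `RmqConnection` and `RmqManager` names are made up for illustration and this is not the actual deadpool-lapin implementation:

```rust
use async_trait::async_trait;
use deadpool::managed::{self, RecycleError, RecycleResult};
use lapin::{Channel, Connection, ConnectionProperties};

/// Hypothetical pooled object: a connection bundled with one pre-opened channel.
pub struct RmqConnection {
    pub connection: Connection,
    pub channel: Channel,
}

/// Hypothetical manager that owns the broker address.
pub struct RmqManager {
    pub addr: String,
}

#[async_trait]
impl managed::Manager for RmqManager {
    type Type = RmqConnection;
    type Error = lapin::Error;

    async fn create(&self) -> Result<RmqConnection, lapin::Error> {
        let connection =
            Connection::connect(&self.addr, ConnectionProperties::default()).await?;
        let channel = connection.create_channel().await?;
        Ok(RmqConnection { connection, channel })
    }

    async fn recycle(&self, obj: &mut RmqConnection) -> RecycleResult<lapin::Error> {
        // Detect broken connections before handing the object back out.
        // (The channel state could be checked in a similar way.)
        if !obj.connection.status().connected() {
            return Err(RecycleError::Message(
                "connection is no longer connected".to_string(),
            ));
        }
        Ok(())
    }
}
```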
Has there been any recent progress made on this potential feature? I hate to bump a thread with nothing to contribute, but I ran into the same issue in my own application and my workaround is sub-par at best.
Per my understanding, RabbitMQ requires using a channel (`lapin::Channel`) within a connection (`lapin::Connection`) in order for a publisher to send messages. Unfortunately, I'm struggling to recycle the `deadpool_lapin::Connection` objects provided by the `deadpool_lapin::Pool` Manager, as the Manager does not handle channels and `lapin::Connection.channels` is a private field.

My use case is a `hyper`-based microservice publishing events to RabbitMQ. Currently, the least bad performance is achieved by reusing a single channel directly.
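A minimal sketch of that workaround, assuming a lapin 2.x-style API; `AppState` and the exchange/routing key names are illustrative, not the original code:

```rust
use std::sync::Arc;

use lapin::{
    options::BasicPublishOptions, BasicProperties, Channel, Connection, ConnectionProperties,
};

/// Shared application state: one connection and one channel for the whole service.
struct AppState {
    _connection: Connection,
    channel: Channel,
}

async fn init(addr: &str) -> Result<Arc<AppState>, lapin::Error> {
    let connection = Connection::connect(addr, ConnectionProperties::default()).await?;
    let channel = connection.create_channel().await?;
    Ok(Arc::new(AppState {
        _connection: connection,
        channel,
    }))
}

/// Called from every request handler: no connect/open-channel cost per request,
/// but every publish goes through this single shared channel.
async fn publish_event(state: &AppState, payload: &[u8]) -> Result<(), lapin::Error> {
    let _confirm = state
        .channel
        .basic_publish(
            "events",        // exchange (illustrative)
            "service.event", // routing key (illustrative)
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?;
    Ok(())
}
```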
While this provides the best performance so far, the overall impact on req/s is still significant, as the single connection and channel to RabbitMQ become the bottleneck.
I would prefer using code similar to `deadpool-redis` or `deadpool-postgres`, allowing several connections to RabbitMQ, each with its own channel.
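Hypothetically, with a pool whose objects bundle a connection and a pre-opened channel (e.g. the made-up `RmqManager`/`RmqConnection` types sketched earlier, not an existing deadpool-lapin API), a handler could look like this:

```rust
use deadpool::managed::Pool;
use lapin::{options::BasicPublishOptions, BasicProperties};

// A pool of connection+channel pairs, built once at startup
// (RmqManager/RmqConnection are the hypothetical types from the earlier sketch).
type RmqPool = Pool<RmqManager>;

async fn handle_request(pool: &RmqPool, payload: &[u8]) -> Result<(), lapin::Error> {
    // Check out one connection (with its pre-opened channel); it returns
    // to the pool automatically when the guard is dropped.
    let rmq = pool
        .get()
        .await
        .expect("failed to get a connection from the pool");
    let _confirm = rmq
        .channel
        .basic_publish(
            "events",
            "service.event",
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?;
    Ok(())
}
```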
However, for the time being, as `deadpool_lapin::Pool` does not handle channels and `lapin::Connection.channels` is a private field, channels must be created and closed with each request, resulting in an even greater performance impact than the single connection/channel bottleneck.

While I have limited experience on the matter, my intuition, to match the pool pattern, is that it would be great if the Manager implementation managed channels on top of the connections.