Support multiple shared connections for Redis standalone mode in LettuceConnectionFactory
#2917
Comments
What is the difference to configuring …? For pipelining we require dedicated connections, as the result of pipelining is a single return object containing all results of the pipelining batch. A single thread that receives the pipelining result would be surprised to see other result elements that aren't expected. So from that perspective, pipelining is also a matter of isolation.
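(Aside, not part of the original comment: a minimal sketch of the isolation point using Spring Data Redis's executePipelined, which gathers every reply issued on the underlying connection and hands the whole batch back to the calling thread. The RedisTemplate<String, String> parameter is an assumed, pre-configured bean.)

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.RedisTemplate;

// Sketch only: shows why pipelined results need a dedicated (non-shared) native
// connection: the whole reply batch belongs to the one thread that opened the pipeline.
class PipelineIsolationExample {

    List<Object> pipelinedSets(RedisTemplate<String, String> redisTemplate) {
        return redisTemplate.executePipelined((RedisCallback<Object>) connection -> {
            for (int i = 0; i < 10; i++) {
                connection.stringCommands().set(
                        ("key:" + i).getBytes(StandardCharsets.UTF_8),
                        "value".getBytes(StandardCharsets.UTF_8));
            }
            return null; // replies are collected by the template, not returned here
        });
    }
}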
Thanks for your reply :) I agree that for the traditional pipelining behavior, where a single thread emits multiple commands and then waits for the corresponding responses, a dedicated connection should be used instead of a shared connection. It is exactly what … does.

However, in this issue I'm in fact discussing another kind of pipelining behavior, where a single shared connection is used to serve commands (which are non-pipelining, non-transactional and non-blocking, such as …). The multiple-shared-connections pattern I would like to propose (illustrated by the third picture) shares the same pipelining behavior mentioned above with the single-shared-connection pattern currently used in LettuceConnectionFactory.

As for LettuceConnectionFactory, the change could look roughly like this:

public class LettuceConnectionFactory implements RedisConnectionFactory, ReactiveRedisConnectionFactory,
        InitializingBean, DisposableBean, SmartLifecycle {

    private final int sharedConnectionNumber; // number of shared connections

    // a list of shared connections instead of a single shared connection
    // (holds up to sharedConnectionNumber entries)
    private List<SharedConnection<byte[]>> connections = new ArrayList<>();

    private int sharedConnectionIndex = 0; // round-robin index

    private SharedConnection<byte[]> getOrCreateSharedConnection() {
        return doInLock(() -> {
            if (connections.size() < sharedConnectionNumber) {
                // create a new shared connection, put it into the list, and return it
                SharedConnection<byte[]> connection = new SharedConnection<>(this.connectionProvider);
                connections.add(connection);
                return connection;
            }
            // fetch a connection in a round-robin manner and return it
            return this.connections.get(sharedConnectionIndex++ % sharedConnectionNumber);
        });
    }
}

In my opinion, the multiple-shared-connections pattern could take advantage of both the pipelining behavior and the underlying system's parallel processing capacity, and may therefore yield better performance, as shown in the fourth picture. Please let me know whether this is clearer and makes sense to you, thanks!
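(Aside, not from the comment above: the round-robin step itself is easy to isolate. Below is a minimal, self-contained sketch under assumed names; an AtomicInteger plus Math.floorMod keeps the index valid without holding a lock, even once the counter overflows.)

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: generic round-robin selection over a fixed list, the core
// mechanism of the proposal above. "T" would be the factory's shared connection type.
class RoundRobinSelector<T> {

    private final List<T> targets;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinSelector(List<T> targets) {
        this.targets = targets;
    }

    T next() {
        // floorMod keeps the index non-negative even after the counter wraps around
        return targets.get(Math.floorMod(counter.getAndIncrement(), targets.size()));
    }
}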
Thank you for the explanation, it makes what you're aiming for much clearer. I marked the ticket for team attention. Given the complexity, I doubt that we will implement the feature the way you envisioned it. However, we could make …
@mp911de Thanks for the reply :) I agree that the feature may be somewhat hard to implement... I think the complexity lies partly in that we should take care of spreading the shared connections evenly between the different EventLoops. I'll try communicating with Netty's maintainers to see if it's possible to work out a way that may support more sophisticated chooser management. In our case, employing different …
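(Aside, not part of the thread: the "chooser" referred to above is Netty's EventExecutorChooserFactory, which decides which EventLoop each new channel registration lands on; Netty's default factory already hands loops out in round-robin order. The class below is a hypothetical sketch of a custom chooser, only to illustrate where more sophisticated placement logic could live; the thread does not say that Lettuce exposes such a hook today.)

import java.util.concurrent.atomic.AtomicInteger;

import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.EventExecutorChooserFactory;

// Hypothetical sketch: hand out event loops strictly in order so that consecutive
// channel registrations (e.g. consecutive shared connections) land on different loops.
public class StrictRoundRobinChooserFactory implements EventExecutorChooserFactory {

    @Override
    public EventExecutorChooser newChooser(EventExecutor[] executors) {
        AtomicInteger counter = new AtomicInteger();
        return () -> executors[Math.floorMod(counter.getAndIncrement(), executors.length)];
    }
}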
Hi @mp911de, after having introduced an enhancement for Lettuce with your help, I'm currently looking at a potential performance improvement for Lettuce connection usages managed by Spring :) The general idea is to support multiple shared connections for Redis standalone mode.
By default, LettuceConnectionFactory maintains a single shared connection and will inject it into a newly created LettuceConnection instance each time the factory's getConnection method is called. It's a pretty graceful design, as it gets rid of the complexity of maintaining a connection pool, and it interacts with Redis in a pipelining way when serving multiple threads, which ensures a high level of performance.

Alternatively, under certain circumstances, one can also create and use a connection pool by using the native Lettuce API. A connection pool may bring about positive impacts on performance by taking advantage of the parallel processing capacity of the underlying multi-core processor. However, due to the thread confinement enforced by the connection pool, the pipelining feature mentioned above would not be well exploited.
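(Aside, not from the original issue: the native-Lettuce pooling mentioned above is typically built on Apache Commons Pool via ConnectionPoolSupport, roughly as in the sketch below; the borrowing thread has the connection to itself until it returns it, which is the thread confinement referred to here. Host, port and pool size are illustrative.)

import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.support.ConnectionPoolSupport;

// Sketch only: a plain Lettuce connection pool. Each thread borrows a connection,
// uses it exclusively, and returns it to the pool when the try block closes it.
public class LettucePoolExample {

    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        GenericObjectPoolConfig<StatefulRedisConnection<String, String>> config = new GenericObjectPoolConfig<>();
        config.setMaxTotal(8); // upper bound on concurrently borrowed connections

        GenericObjectPool<StatefulRedisConnection<String, String>> pool =
                ConnectionPoolSupport.createGenericObjectPool(client::connect, config);

        try (StatefulRedisConnection<String, String> connection = pool.borrowObject()) {
            connection.sync().set("key", "value"); // exclusive use while borrowed
        }

        pool.close();
        client.shutdown();
    }
}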
I'm therefore thinking about a possibly new pattern of connection usage, which may take advantage of both the pipelining feature of Lettuce and the parallel processing capacity of modern processors, and get the best of both worlds. It may look something like what is shown below:
To verify the idea, I manually built a list of connections and shared them between business threads. I then conducted some benchmarking of throughput (ops/s) using the RedisClientBenchmark#syncSet method, comparing the pattern with the single-shared-connection pattern and with connection pooling, and even with Jedis. The results show that the multiple-shared-connections pattern outperforms the others when it keeps a good balance between use of the pipelining feature and the parallel processing capacity, by aligning the number of connections with the number of processors (8 in my case).
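(Aside, not the author's actual benchmark code, which isn't included in the thread: a rough, self-contained sketch of what "a list of shared connections used round-robin by business threads" can look like with the plain Lettuce API. The address, the thread count, and the connection count are illustrative.)

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

// Sketch only: a fixed list of long-lived Lettuce connections shared by many
// business threads, each thread picking one in round-robin fashion.
public class MultiSharedConnectionSketch {

    public static void main(String[] args) throws Exception {
        int connectionCount = Runtime.getRuntime().availableProcessors(); // e.g. 8
        RedisClient client = RedisClient.create("redis://localhost:6379");

        List<StatefulRedisConnection<String, String>> shared = new ArrayList<>();
        for (int i = 0; i < connectionCount; i++) {
            shared.add(client.connect()); // each connection is thread-safe and stays open
        }

        AtomicInteger counter = new AtomicInteger();
        Runnable task = () -> {
            // pick a shared connection round-robin and issue commands on it directly
            StatefulRedisConnection<String, String> connection =
                    shared.get(Math.floorMod(counter.getAndIncrement(), shared.size()));
            for (int i = 0; i < 1_000; i++) {
                connection.sync().set("key:" + i, "value");
            }
        };

        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 32; i++) {
            threads.add(new Thread(task));
        }
        threads.forEach(Thread::start);
        for (Thread t : threads) {
            t.join();
        }

        for (StatefulRedisConnection<String, String> connection : shared) {
            connection.close();
        }
        client.shutdown();
    }
}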
I therefore think it might be worth considering support for the multiple-shared-connections pattern in Spring for Redis standalone mode, as it might bring a notable performance improvement under heavy load. Specifically, we could perhaps maintain a list of shared connections in LettuceConnectionFactory. Each time the factory's getConnection method is called, a shared connection could be fetched from the list in a round-robin manner. However, we should create these shared connections carefully so that they are attached to different netty EventLoops, and thus benefit from the system's parallel processing capacity. I would like to know your opinion on this, thanks!