Psync Zero-Copy Transmission #1335
Comments
I think zero-copy is pretty promising. As you mentioned, zero-copy isn't a "free lunch" though; there is some overhead in tracking references and registering for new events in the event loop. I think it makes sense to proceed with a prototype and get some benchmark information on how this changes PSYNC/replication streaming performance. It could be the case it doesn't improve much, or it could be a big improvement. I have something in the works - will add a comment when I have some data to share.
So I have a prototype where I enable zero copy on outgoing replication links. I can post a draft PR soon. I did some local testing on my development machine. Test setup is as follows:
What I found is the following:
I want to test this on a network interface that isn't loopback next. I am guessing things may look a bit different if we are actually going over the wire.
Thanks for picking this up and coming up with a working prototype so quickly! The numbers are showing solid improvement and I'm curious to see the numbers over the network interface. Can you also capture the memory footprint while the replica is catching up?
Tested in a cloud deployment on GCP with two instances. This time, I use a key space of 4 GiB (10,000 keys of 400 KiB each) and use a test time in Memtier of 30 seconds. This way, we can measure the write throughput against the primary and estimate the write throughput on the replication stream by how long it takes to catch up:
I think these results make sense. We are able to increase primary throughput by removing the data copy to the TCP kernel buffer, while the replica throughput is not significantly changed.
Across different data sizes:
Given the data I've collected so far and the advice from the Linux documentation, I think it would make sense to limit zero-copy to only writes over a certain size (e.g. 10 KiB). I'll update my prototype with this logic and see what setting makes sense as a default, but it may vary on a per-deployment basis, so making it a configurable parameter might make sense.
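A minimal sketch of that thresholding idea in plain socket terms (illustrative only; the constant and helper below are hypothetical and not Valkey code):

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Illustrative only: use MSG_ZEROCOPY for writes at or above the threshold,
 * and a regular copying send() for smaller ones, where pinning and
 * completion-notification overhead tends to outweigh the saved copy.
 * Assumes SO_ZEROCOPY has already been enabled on `fd`; the 10 KiB value is
 * just the candidate discussed above, not a committed default. */
static const size_t zero_copy_min_write = 10 * 1024;

ssize_t send_maybe_zerocopy(int fd, const void *buf, size_t len) {
    int flags = (len >= zero_copy_min_write) ? MSG_ZEROCOPY : 0;
    return send(fd, buf, len, flags);
}
```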
I ran some experiments with 10 KiB as the minimum write size to activate TCP zero copy, and it seems to be a net positive:
Limiting zero-copy to writes over the threshold also improves large-write performance, likely because large writes can get split across the boundaries of replication blocks. The performance boost on the <10 KiB data size could be attributed to batching of writes, or it could just be transient differences in hardware (it is running in a cloud environment, so there could be some variance there). I'll put the performance testing code here for future reference (it is a Python script that uses Memtier and redis-cli):
Problem Statement
In the current design, the primary maintains a replication buffer to record mutation commands for syncing the replicas. This replication buffer is implemented as a linked list of chunked buffers. The primary periodically transmits these recorded commands to each replica by issuing socket writes on the replica connections, which involve copying data from the user-space buffer to the kernel.
The transmission is performed by the writeToReplica function, which uses connWrite to send data over the socket.
This user-space to kernel buffer copy consumes CPU cycles and increases the memory footprint. The overhead becomes more noticeable when a replica lags significantly behind the primary, as psync triggers a transmission burst. This burst may temporarily reduce the primary's responsiveness, with excessive copying and potential TCP write buffer exhaustion being major contributing factors.
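As a simplified illustration of the copy being described (this is a stand-in, not the actual Valkey data structure or writeToReplica implementation):

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Simplified stand-in for the replication buffer: a linked list of chunks. */
typedef struct replChunk {
    struct replChunk *next;
    size_t used;   /* bytes of valid data in buf */
    char *buf;     /* chunk payload */
} replChunk;

/* Walk the chunk list and write each chunk to the replica socket. Every
 * send() copies the chunk from user space into the kernel's TCP send
 * buffer; this is the copy that zero-copy transmission aims to remove. */
ssize_t write_chunks_to_replica(int fd, replChunk *head) {
    ssize_t total = 0;
    for (replChunk *c = head; c != NULL; c = c->next) {
        ssize_t n = send(fd, c->buf, c->used, 0);
        if (n < 0) return -1; /* error handling abbreviated */
        total += n;
    }
    return total;
}
```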
Proposal
Modern Linux systems support zero-copy transmission (MSG_ZEROCOPY), which operates by pinning the user-space buffer and transmitting directly from it, then notifying the application through the socket error queue once the kernel no longer references the buffer and it can be reused or freed.
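For reference, below is a minimal sketch of the Linux MSG_ZEROCOPY API (available for TCP since kernel 4.14) that this proposal would build on. It is plain socket code, not Valkey's implementation, and error handling is abbreviated:

```c
#include <errno.h>
#include <linux/errqueue.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Opt the socket in once; subsequent send() calls may pass MSG_ZEROCOPY. */
int enable_zerocopy(int fd) {
    int one = 1;
    return setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
}

/* The kernel pins `buf` and transmits directly from it; the caller must keep
 * `buf` unmodified until the matching completion notification is received. */
ssize_t send_zerocopy(int fd, const void *buf, size_t len) {
    return send(fd, buf, len, MSG_ZEROCOPY);
}

/* Drain completion notifications from the socket error queue. Each
 * notification reports a range [ee_info, ee_data] of zero-copy send calls
 * whose buffers the kernel no longer references and which may be released. */
void drain_zerocopy_completions(int fd) {
    char control[128];

    for (;;) {
        struct msghdr msg = {0};
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);
        if (recvmsg(fd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT) < 0) break;

        for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
             cm = CMSG_NXTHDR(&msg, cm)) {
            struct sock_extended_err *err =
                (struct sock_extended_err *)CMSG_DATA(cm);
            if (err->ee_origin == SO_EE_ORIGIN_ZEROCOPY)
                printf("zero-copy sends %u..%u completed\n",
                       err->ee_info, err->ee_data);
        }
    }
}
```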
The primary downside of zero-copy is the need for user space to manage the send buffer. However, this limitation is much less applicable to the psync use case, as Valkey already manages the psync replication buffers.
It’s important to note that using zero-copy for psync requires careful adjustments to the replica client write-buffer management logic, specifically the logic that ensures the total accumulated replication write buffer size, across all replica connections, stays within the value of client-output-buffer-limit replica.
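A rough sketch of how that accounting could look under zero-copy (the names below are hypothetical, not existing Valkey code). The key point is that bytes handed to the kernel with MSG_ZEROCOPY must keep counting against the limit until their completion notification arrives:

```c
#include <stddef.h>

/* Hypothetical per-replica accounting for zero-copy psync. Chunks handed to
 * MSG_ZEROCOPY cannot be freed until the kernel reports completion, so they
 * must still count against client-output-buffer-limit replica. */
typedef struct replicaZcopyState {
    size_t queued_bytes;    /* buffered, not yet handed to the kernel */
    size_t inflight_bytes;  /* sent with MSG_ZEROCOPY, completion pending */
} replicaZcopyState;

/* Total bytes the primary is still responsible for, used for limit checks. */
size_t replica_pending_bytes(const replicaZcopyState *st) {
    return st->queued_bytes + st->inflight_bytes;
}

/* Called when a completion notification covers `bytes` of earlier sends;
 * only now may the corresponding replication chunks be released. */
void on_zerocopy_completion(replicaZcopyState *st, size_t bytes) {
    st->inflight_bytes -= bytes;
}
```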
Further reading on zero-copy can be found here.
Note that this article states that zero-copy is most effective for large payloads, and experimentation is necessary to determine the minimum payload size. For Memorystore vector search cluster communication, enabling zero-copy in gRPC improved QPS by approximately 8.6%.
Zero-Copy Beyond Psync
Zero-copy can also optimize transmission to clients. In the current implementation, dictionary entries are first copied into the client object's write buffer and then copied again during transmission to the client socket, resulting in two memory copies. Using zero-copy eliminates the client socket copy.
Similarly to the psync use case, implementing zero-copy for client transmission requires careful adjustments to the client’s write buffer management logic. The following considerations, while not exhaustive, outline key aspects to address:
Since zero-copy doesn’t consume TCP buffers, preventing excessive memory usage must be handled differently. One approach is to defer copying the next portion of the dictionary entry until a confirmation is received that a significant part of the pending write buffer has been received by the client.
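One way to express that back-pressure idea (purely illustrative; the names and watermark value are hypothetical, not existing Valkey configuration):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical back-pressure for zero-copy client writes: stage the next
 * slice of a large value only while the bytes still awaiting zero-copy
 * completion stay below a high-water mark. */
#define PENDING_HIGH_WATERMARK (1024 * 1024) /* example value */

typedef struct clientWriteState {
    size_t pending_zerocopy_bytes; /* sent with MSG_ZEROCOPY, not yet completed */
} clientWriteState;

/* Copies up to dst_len bytes of `entry` starting at *offset into dst.
 * Returns the number of bytes staged, or 0 to signal "wait for completions". */
size_t stage_next_slice(clientWriteState *c, char *dst, size_t dst_len,
                        const char *entry, size_t entry_len, size_t *offset) {
    if (c->pending_zerocopy_bytes >= PENDING_HIGH_WATERMARK) return 0;
    size_t n = entry_len - *offset;
    if (n > dst_len) n = dst_len;
    memcpy(dst, entry + *offset, n);
    *offset += n;
    return n;
}
```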