netkvm: Enhancing Host Throughput by Combining Virtio Header and Data in a Single Memory Block for NetKVM #1078
I have been exploring the details of VIRTIO_NET_F_MRG_RXBUF and am a bit perplexed about how the merging of the virtio header and the packet, as described in this issue, relates to the activation of VIRTIO_NET_F_MRG_RXBUF. It appears to me that merging the virtio header with the packet is feasible even when VIRTIO_NET_F_MRG_RXBUF is not negotiated. I would greatly appreciate your insights on how these two aspects are connected.
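For reference, here is how I read the spec side of this, as a minimal C sketch (the header structs follow the virtio specification; `rx_buf` and its sizes are my own illustration, not the netkvm source): VIRTIO_NET_F_MRG_RXBUF changes the header format and lets one packet span several buffers, but nothing in it forces the header into its own memory block.

```c
#include <stdint.h>

/* Header layouts per the virtio spec: VIRTIO_NET_F_MRG_RXBUF only adds
 * the num_buffers field (and lets one packet span several buffers). */
struct virtio_net_hdr {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
};

struct virtio_net_hdr_mrg_rxbuf {
    struct virtio_net_hdr hdr;
    uint16_t num_buffers;   /* meaningful only when MRG_RXBUF is negotiated */
};

/* Illustrative receive buffer (hypothetical, not netkvm code): whether or
 * not MRG_RXBUF is negotiated, nothing in the spec forces the header into
 * its own memory block, so header and frame can share one contiguous
 * allocation. */
struct rx_buf {
    struct virtio_net_hdr_mrg_rxbuf hdr;  /* at offset 0 */
    uint8_t frame[1514];                  /* Ethernet frame directly after */
};
```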
@zjmletang
@ybendito
host features 0x000001035867ffe3, guest features 0x00000003186799a3
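Decoding those masks (VIRTIO_NET_F_MRG_RXBUF is bit 15 per the virtio spec; the program below is just my own standalone check, not part of the driver), bit 15 is set in both, so VIRTIO_NET_F_MRG_RXBUF was both offered and negotiated here:

```c
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_F_MRG_RXBUF 15  /* bit position per the virtio spec */

int main(void)
{
    uint64_t host_features  = 0x000001035867ffe3ULL;
    uint64_t guest_features = 0x00000003186799a3ULL;

    /* Prints 1 for both: MRG_RXBUF is offered by the host and
     * acknowledged by the guest in this trace. */
    printf("MRG_RXBUF offered:    %d\n",
           (int)((host_features >> VIRTIO_NET_F_MRG_RXBUF) & 1));
    printf("MRG_RXBUF negotiated: %d\n",
           (int)((guest_features >> VIRTIO_NET_F_MRG_RXBUF) & 1));
    return 0;
}
```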
We are more concerned with how this change would enhance the overall PCIe bandwidth on the host side.
… in a single memory block for the problem description please visit virtio-win#1078
@ybendito,
cc @sb-ntnx fyi - this would be neat for us as well.
hi @ybendito, could you please let me know whether there are plans for further modifications or additional concerns that need to be addressed? Your feedback will greatly help us decide our next steps: whether to await further changes or to start making adjustments based on the current state of the community's code. Thank you very much for your time and assistance. Best regards,
Opened the downstream Jira issue for Investment Evaluation: RHEL-71813 netkvm: evaluate merging header and data to optimize host throughput
Is your feature request related to a problem? Please describe.
In the current reception logic of netkvm, the virtio protocol header and the packet data are placed in two separate memory blocks, so at least two memory blocks are needed for one descriptor. From the host's perspective, this means that the network card (in a hardware implementation) requires two DMA operations to retrieve a single packet, consuming more PCIe bandwidth. When PCIe bandwidth is strained, this becomes a performance bottleneck, as the CPU retrieves descriptors from the Windows virtio frontend driver more slowly.
Describe the solution you'd like
I'd like to discuss with you whether the following is possible: combining the virtio header and the packet data into the same memory block, so that in most scenarios a descriptor needs to contain only one memory block (a sketch of both layouts follows below).
I tested this improvement approach on our own cloud platform, and the tests show that it is fairly effective at increasing hardware bandwidth utilization, with overall network card throughput increasing by about 10%.
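To make the comparison concrete, here is a minimal C sketch of the two layouts (`sg_elem`, `post_split`, and `post_merged` are hypothetical names of my own, not the actual netkvm code): today each packet occupies a two-element descriptor chain, while the proposal collapses header and data into a single element.

```c
#include <stdint.h>

/* Hypothetical scatter-gather element: guest-physical address + length. */
struct sg_elem {
    uint64_t addr;
    uint32_t len;
};

/* Current split layout: header and data live in separate memory blocks,
 * so the descriptor chain has two elements -> two DMA transactions per
 * packet on the host side. */
static void post_split(struct sg_elem sg[2], uint64_t hdr_pa,
                       uint32_t hdr_len, uint64_t data_pa, uint32_t data_len)
{
    sg[0].addr = hdr_pa;  sg[0].len = hdr_len;  /* virtio-net header block */
    sg[1].addr = data_pa; sg[1].len = data_len; /* packet data block */
}

/* Proposed merged layout: header at offset 0 of the same block, frame
 * immediately after it -> a single element and a single DMA transaction
 * in the common case. */
static void post_merged(struct sg_elem sg[1], uint64_t buf_pa,
                        uint32_t hdr_len, uint32_t data_len)
{
    sg[0].addr = buf_pa;
    sg[0].len  = hdr_len + data_len;
}
```

With the merged layout, the device touches one contiguous region per packet in the common case instead of two, which is where the PCIe bandwidth saving would come from.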