doca tcp frame builder performance improvement #10
Additional report. The frame builder part skips frame building and only sends acks.
The (env, process, session/process) combinations (1, 1, 1) and (1, 1, 2) are close to theoretical performance.
Additional report, same as #12 (comment): I ran a heavy memcpy kernel while the app only sends acks (frame builder without frame building). The result is below. We measured the performance while one or two heavy memcpy kernels were running. When the chunk size is large enough we get high performance, but with a chunk size at which frame building can actually work, the performance is slow and similar to the actual frame building performance. The heavy memcpy also increases the performance degradation.
The performance without heavy memcpy:
Performance on a VMware VM. The VM has an RTX A6000 (Ampere arch), a ConnectX-6, and PCIe Gen2; the CPU is an Intel(R) Xeon(R) Silver 4214R @ 2.40GHz.
RTT performance:
Purpose
Based on d0ea36e, I measured the performance of doca gpunetio.
The current server structure is:
- receive_tcp: polls doca_gpu_dev_eth_rxq_receive_warp
- send_ack: sends an ack to the client and calculates the latest seq number for make_frame
- make_frame: builds frames using cudaMemcpyAsync

According to some trials, the throughput is influenced by the client-side ack checking frequency and the number of sessions.
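As an illustration, the three-stage loop described above can be sketched in plain Python. This is only a model of the control flow; `run_server` and its callbacks are hypothetical placeholders, not the actual DOCA/CUDA kernels:

```python
# Hypothetical sketch of the receive_tcp -> send_ack -> make_frame loop.
# packet_source / acker / frame_builder stand in for the real DOCA RX poll,
# ack transmission, and cudaMemcpyAsync-based frame building respectively.
def run_server(packet_source, acker, frame_builder, max_iters=1000):
    latest_seq = 0
    frames = []
    for _ in range(max_iters):
        pkts = packet_source()  # receive_tcp: poll the RX queue for a batch
        if pkts is None:        # no more traffic in this sketch
            break
        # send_ack: advance to the highest sequence number seen so far
        latest_seq = max([latest_seq] + [p["seq"] + p["len"] for p in pkts])
        acker(latest_seq)
        # make_frame: copy the payloads of this batch into a frame
        frames.append(frame_builder(pkts))
    return latest_seq, frames
```

The point of the structure is that acking and frame building are decoupled stages, which is why the "ack only" experiment above can skip the frame-building step.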
Environment
Result
Here is the result.
env refers to the environment described in the Environment section.
process means the number of processes, and session/process means the number of sessions per process: when process is 2 and session/process is 1, the total number of sessions is 2. chunk size means that the client checks for acks from the server every time it sends this number of bytes. Gbps/session is the throughput per session.
Theoretically, when env is 1, the total throughput is 100Gbps, so we expect 100Gbps with 1 session and 50Gbps/session with 2 sessions. When env is 2, we use 2 ports of the ConnectX-7, so the total throughput is 200Gbps.
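The expectation above is simply the line rate split evenly across sessions. A minimal sketch (the 100Gbps and 200Gbps totals are the figures stated above):

```python
# Total line rate per environment, as stated above:
# env 1 uses one 100Gbps port, env 2 uses two ConnectX-7 ports (200Gbps total).
TOTAL_GBPS = {1: 100.0, 2: 200.0}

def expected_gbps_per_session(env: int, total_sessions: int) -> float:
    """Ideal even split of the line rate across all sessions."""
    return TOTAL_GBPS[env] / total_sessions

print(expected_gbps_per_session(1, 1))  # 100.0
print(expected_gbps_per_session(1, 2))  # 50.0
print(expected_gbps_per_session(2, 2))  # 100.0
```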
In the result when env is 1, process is 1 and session/process is 1, chunk sizes over 16MByte don't work because the cyclic buffer handled by doca gets overwritten. As the chunk size increased, the throughput improved, which means the average RTT is long.
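The "average RTT is long" reasoning can be made concrete with a stop-and-wait model: if the client effectively keeps one chunk in flight per round trip, throughput is bounded by chunk_size / RTT. A minimal sketch (the numbers are purely illustrative, not measured values):

```python
def ack_limited_throughput_gbps(chunk_bytes: int, rtt_s: float) -> float:
    """Stop-and-wait bound: one chunk in flight per round trip."""
    return chunk_bytes * 8 / rtt_s / 1e9

def implied_rtt_s(chunk_bytes: int, observed_gbps: float) -> float:
    """RTT that would explain an observed ack-limited throughput."""
    return chunk_bytes * 8 / (observed_gbps * 1e9)

# Example: sustaining the full 100Gbps line rate with a 16 MiB chunk
# requires the whole chunk to be acked within roughly 1.34 ms.
print(implied_rtt_s(16 * 2**20, 100.0))  # ≈ 0.00134 s
```

This is also why larger chunks help: the per-chunk ack wait is amortized over more bytes.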
In the result when env is 1 and 2, process is 2 and session/process is 1, we get half the result of env 1, process 1, session/process 1. We expected the throughput to be the same because the NIC bandwidth, PCIe bandwidth and GPU device memory bandwidth are all sufficient, so there must be some limitation in the doca library.
In the result when env is 1, process is 1 and session/process is 2, we only get 6Gbps. We expected the same result as when process is 2 and session/process is 1. Maybe cuda kernels affect other kernels in the same process.