Typical Throughput on 10Gbps LAN? #5372
mechaniputer started this conversation in General
-
If the data type you are using is plain (i.e. POD), the first thing I would try is using […]. After changing the reading side, you could also do a similar change on the writer side. See the documentation here.
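A minimal sketch of the kind of reading-side change this could be referring to, assuming it means zero-copy reads with loaned samples (MyPodType here is a placeholder standing in for a plain generated type):

```cpp
#include <fastdds/dds/core/LoanableSequence.hpp>
#include <fastdds/dds/subscriber/DataReader.hpp>
#include <fastdds/dds/subscriber/SampleInfo.hpp>

// Placeholder plain (POD) type standing in for a Fast DDS Gen-generated one.
struct MyPodType
{
    float x;
    float y;
};

// Declares MyPodTypeSeq as a LoanableSequence<MyPodType>.
FASTDDS_SEQUENCE(MyPodTypeSeq, MyPodType);

void read_with_loans(eprosima::fastdds::dds::DataReader* reader)
{
    using namespace eprosima::fastdds::dds;

    MyPodTypeSeq samples;
    SampleInfoSeq infos;

    // take() with loanable sequences hands back pointers into the reader's
    // history instead of copying every sample out.
    if (reader->take(samples, infos) == ReturnCode_t::RETCODE_OK)
    {
        for (LoanableCollection::size_type i = 0; i < samples.length(); ++i)
        {
            if (infos[i].valid_data)
            {
                const MyPodType& sample = samples[i];
                (void)sample;  // process the sample without copying it
            }
        }
        // Loans must be returned once processing is done.
        reader->return_loan(samples, infos);
    }
}
```

The analogous writer-side change would presumably be requesting a loan with DataWriter::loan_sample() and filling the sample in place before writing it.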
-
Hi everyone,
I have written a simple round-trip FastDDS app that communicates between two machines directly connected to each other (no router or switch) via 10 Gbps Ethernet.
I based it on the example described in [1]. I am using FastDDS 2.14.0. I know it is old, but I just want to understand the performance of what I currently have before changing things too much.
Things I already tried to improve performance (a rough code sketch of the QoS-related settings follows this list):

- I am using RELIABLE_RELIABILITY_QOS. I am specifically interested in the performance of this setting, and I already know that other settings can improve throughput.
- I tried the default 1 second heartbeat period for all writers, and then a 500 ns period as described in [2]. In both cases the results were similar. There may be a sweet spot that just needs more tuning, but the results seem worse than I expected either way.
- I also tried increasing the maximum socket buffer sizes for both the reader and the writer, again as described in [2]. This likewise had no measurable effect.
- Finally, I tried increasing the txqueuelen to 10000 as described in [2], again with no measurable effect.
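For reference, the QoS-related changes above roughly correspond to the snippet below for Fast DDS 2.x. The heartbeat and buffer values shown are illustrative placeholders, not exactly the numbers from my runs.

```cpp
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/dds/subscriber/qos/DataReaderQos.hpp>

using namespace eprosima::fastdds::dds;

void configure_qos(DomainParticipantQos& pqos, DataWriterQos& wqos, DataReaderQos& rqos)
{
    // Reliable delivery on both the writer and the reader.
    wqos.reliability().kind = RELIABLE_RELIABILITY_QOS;
    rqos.reliability().kind = RELIABLE_RELIABILITY_QOS;

    // Shorter heartbeat period so lost fragments get repaired sooner
    // (member names are the Fast DDS 2.x ones).
    wqos.reliable_writer_qos().times.heartbeatPeriod.seconds = 0;
    wqos.reliable_writer_qos().times.heartbeatPeriod.nanosec = 500 * 1000 * 1000;  // illustrative

    // Larger UDP socket buffers on the participant's transport.
    pqos.transport().send_socket_buffer_size = 12582912;    // illustrative: ~12 MB
    pqos.transport().listen_socket_buffer_size = 12582912;  // illustrative: ~12 MB
}
```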
Here is what my code is doing:
On the first machine, my "driver" app publishes a single sample of some configured size to topic A, and immediately calls DataReader::wait_for_unread_message() on topic B to wait for a response from the server. Once it receives this response, it takes the sample with DataReader::take_next_sample() and then loops again, sending another sample.
I also call std::chrono::steady_clock::now() before sending a sample and after receiving the response, in order to measure the round-trip latency.
On the other machine, the "server" app calls DataReader::wait_for_unread_message() on topic A, takes the sample with DataReader::take_next_sample(), immediately publishes the same received data back on topic B, and loops again.
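To make this concrete, here is a trimmed sketch of one driver iteration. The RoundTripMsg type and its payload member are placeholders for my generated type, and all participant/topic/writer/reader setup is omitted.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

#include <fastdds/dds/publisher/DataWriter.hpp>
#include <fastdds/dds/subscriber/DataReader.hpp>
#include <fastdds/dds/subscriber/SampleInfo.hpp>

using namespace eprosima::fastdds::dds;

// Placeholder for the Fast DDS Gen-generated type; the real one carries a
// sequence<octet> payload of the configured size.
struct RoundTripMsg
{
    std::vector<uint8_t> payload;
};

// One driver iteration: publish on topic A, block until the echoed sample
// arrives on topic B, then report the measured round-trip time.
void drive_once(DataWriter* writer_a, DataReader* reader_b, size_t payload_size)
{
    RoundTripMsg request;
    request.payload.resize(payload_size);

    const auto start = std::chrono::steady_clock::now();
    writer_a->write(&request);

    // Block until the server's response shows up on topic B.
    if (reader_b->wait_for_unread_message(eprosima::fastrtps::Duration_t(10, 0)))
    {
        RoundTripMsg response;
        SampleInfo info;
        if (reader_b->take_next_sample(&response, &info) == ReturnCode_t::RETCODE_OK &&
                info.valid_data)
        {
            const auto end = std::chrono::steady_clock::now();
            const auto rtt = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
            std::cout << "round trip: " << rtt.count() << " us" << std::endl;
        }
    }
}
```

The server side is the mirror image: wait_for_unread_message() on topic A, take_next_sample(), then write the same payload back on topic B.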
What I am wondering:
What kind of throughput should I reasonably expect for various message sizes? At large sizes it is remarkably slower than I expected: it runs at around 1 sample per second with a sample size of ~80 kilobytes. Beyond this size, packet loss appears to become too severe for meaningful results. What am I doing wrong? I'm wondering if my usage of the API is sub-optimal in some way.
Thanks in advance for any tips.
References:
[1] https://fast-dds.docs.eprosima.com/en/latest/fastddsgen/pubsub_app/pubsub_app.html#fastddsgen-pubsub-app
[2] https://fast-dds.docs.eprosima.com/en/latest/fastdds/use_cases/large_data/large_data.html