I've observed on multiple systems and operating systems that the profiler's speed results never exceed approximately 3 GiB/s, regardless of nonce count and available hardware.

Edit: I have observed the same limitation on production nodes during the cycle gap.

Running multiple profiler instances yields cumulative read speeds that scale up to the system's CPU and I/O limits, but still no more than ~3 GiB/s per instance.

Tests were run with `--data-size=32`.
I didn't open a PR for this because I suspect it would impact weaker machines and general home-user accessibility. I also use 16-32 GiB post files, which are not necessarily representative of most users' setups.

That said, an 8x read buffer is the sweet spot for me: it got me to about 4 GiB/s, roughly in line with what I'd expect from a QD1 / single-threaded read, but still leaving a lot of throughput on the table for a RAID0 NVMe array.
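For illustration, here is a minimal Rust sketch of the kind of single-threaded sequential read benchmark behind these numbers. It is not the profiler's actual read path; the file name, the 1 MiB base buffer, and the 8x multiplier are assumptions.

```rust
use std::fs::File;
use std::io::Read;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Hypothetical values: the real profiler's defaults may differ.
    const BASE_BUF: usize = 1024 * 1024; // assume a 1 MiB base read buffer
    const MULTIPLIER: usize = 8;         // the "8x" sweet spot described above

    let mut file = File::open("postdata_0.bin")?; // hypothetical file name
    let mut buf = vec![0u8; BASE_BUF * MULTIPLIER];
    let mut total: u64 = 0;

    let start = Instant::now();
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        total += n as u64;
    }
    let secs = start.elapsed().as_secs_f64();
    let gib = total as f64 / (1u64 << 30) as f64;
    println!("read {gib:.2} GiB in {secs:.2}s -> {:.2} GiB/s", gib / secs);
    Ok(())
}
```

A single stream like this is effectively a QD1 workload, which is why a larger buffer helps but cannot saturate a striped NVMe array on its own.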
It would be ideal if readers could be partitioned to process multiple postdata_bin files in parallel, but I suspect very few users would benefit, since running separate nodes has the same effect.
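As a rough sketch of what partitioned readers could look like, the following spawns one reader thread per postdata file and reports cumulative throughput. The file names and the 8 MiB buffer are assumptions, and this is not the profiler's implementation.

```rust
use std::fs::File;
use std::io::Read;
use std::thread;
use std::time::Instant;

// Read one file sequentially to EOF, returning the number of bytes read.
fn read_all(path: String) -> std::io::Result<u64> {
    let mut file = File::open(path)?;
    let mut buf = vec![0u8; 8 * 1024 * 1024]; // assumed 8 MiB buffer
    let mut total = 0u64;
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            return Ok(total);
        }
        total += n as u64;
    }
}

fn main() {
    // Hypothetical file names; a real post directory has many such files.
    let files = ["postdata_0.bin", "postdata_1.bin", "postdata_2.bin"];

    let start = Instant::now();
    let handles: Vec<_> = files
        .iter()
        .map(|f| {
            let path = f.to_string();
            // One independent sequential stream per file.
            thread::spawn(move || read_all(path).unwrap_or(0))
        })
        .collect();

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    let secs = start.elapsed().as_secs_f64();
    let gib = total as f64 / (1u64 << 30) as f64;
    println!("read {gib:.2} GiB total -> {:.2} GiB/s cumulative", gib / secs);
}
```

Each thread issues its own sequential stream, so on a RAID0 NVMe setup the aggregate throughput should scale with the number of files until the CPU or the array's bandwidth becomes the bottleneck, which matches the multi-instance behavior described in the issue.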