dpdk_setup_ports.py gets NUMA topology wrong with Sub-NUMA clustering enabled #1119
Comments
Alternatively, the same information could be obtained from
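For reference, the kernel exposes per-node CPU lists in sysfs under /sys/devices/system/node; the sketch below reads the NUMA topology from there (an illustration only, not necessarily the source the comment refers to, and the function names are made up for this example):

```python
import glob
import os
import re


def parse_cpulist(text):
    """Expand a kernel cpulist string such as '0-3,8-11' into a list of CPU ids."""
    cpus = []
    for part in text.strip().split(','):
        if not part:
            continue
        if '-' in part:
            lo, hi = part.split('-')
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus


def numa_topology_from_sysfs():
    """Map NUMA node id -> list of CPU ids using /sys/devices/system/node."""
    topology = {}
    for node_dir in glob.glob('/sys/devices/system/node/node[0-9]*'):
        node_id = int(re.search(r'node(\d+)$', node_dir).group(1))
        with open(os.path.join(node_dir, 'cpulist')) as f:
            topology[node_id] = parse_cpulist(f.read())
    return topology


if __name__ == '__main__':
    for node, cpus in sorted(numa_topology_from_sysfs().items()):
        print('node %d: %s' % (node, cpus))
```

On a machine with Sub-NUMA Clustering enabled this reports all nodes the kernel sees, not just one per physical package.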
It is actually even weirder; my topology is:
And that fails with:
Even though that is not true: I have about 15 cores (plus HT siblings) per NUMA node here.
Found the problem:
I'm not sure why the limit is there, but if I remove it I can generate a config with the improved NUMA detection code.
I have a new test bench setup that happens to have 8 NUMA nodes.
However, when I try to run dpdk_setup_ports.py I get a KeyError message:
After looking around, it seems that cpu_topology for some reason ends up with only 2 NUMA nodes. That happens because, instead of using bindings to libnuma or some other library that would report the correct result in this case, the script assumes that physical_package_id == NUMA node ID, which is not entirely correct on a lot of CPUs.
I have Sapphire Rapids, which has 4 NUMA clusters per CPU; Emerald Rapids should have 2 NUMA nodes per CPU. There are also AMD EPYCs that have more than one NUMA node per CPU, and so on.
I think it would be better to rely on libnuma instead and fall back to the old way only if NUMA bindings are not available.
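A minimal sketch of that approach, assuming ctypes access to libnuma (the helper names are illustrative, not the actual dpdk_setup_ports.py code): query numa_node_of_cpu() from libnuma, and fall back to the current physical_package_id heuristic only when libnuma cannot be used.

```python
import ctypes
import ctypes.util


def numa_node_of_cpu_via_libnuma(cpu):
    """Return the NUMA node of `cpu` using libnuma, or None if libnuma is unavailable."""
    libname = ctypes.util.find_library('numa')
    if not libname:
        return None
    libnuma = ctypes.CDLL(libname)
    if libnuma.numa_available() < 0:  # kernel or library reports no NUMA support
        return None
    node = libnuma.numa_node_of_cpu(ctypes.c_int(cpu))
    return node if node >= 0 else None


def numa_node_of_cpu_fallback(cpu):
    """Old heuristic: assume the physical package id is the NUMA node id."""
    path = '/sys/devices/system/cpu/cpu%d/topology/physical_package_id' % cpu
    with open(path) as f:
        return int(f.read())


def numa_node_of_cpu(cpu):
    node = numa_node_of_cpu_via_libnuma(cpu)
    if node is not None:
        return node
    return numa_node_of_cpu_fallback(cpu)
```

Since libnuma simply reports what the kernel exposes, this would see all 8 nodes on a Sub-NUMA-clustered machine instead of just the 2 physical packages.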