Add thread pinning optimization #233

Open · wants to merge 2 commits into base: scaling_archer2
Conversation

tkoskela (Member)
I've added thread pinning optimizations (thanks to @giordano) that could improve performance on Archer2. I would like to test the performance with:

  • 8 ranks per node, 16 threads per rank
  • 16 ranks per node, 8 threads per rank
  • 32 ranks per node, 4 threads per rank
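For context, a job script for the first of these configurations might look like the sketch below. This is an assumption, not part of the PR: the partition and QOS names are placeholders, and the launch line is only one plausible way to run the weak-scaling script on ARCHER2.

```shell
#!/bin/bash
# Hypothetical ARCHER2 job sketch: 8 ranks per node, 16 threads per rank.
# Partition/QOS names are placeholders; adjust to your allocation.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=16
#SBATCH --partition=standard
#SBATCH --qos=standard

# One Julia thread per allocated core; a block distribution keeps each
# rank's cores contiguous, which is what the pinning logic below relies on.
export JULIA_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun --distribution=block:block --hint=nomultithread \
    julia --project extra/weak_scaling/run_particleda.jl
```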

@tkoskela tkoskela requested a review from DanGiles January 27, 2023 16:32
@giordano giordano changed the title Tk/thread pinning Add thread pinning optimization Jan 27, 2023
Comment on lines +8 to +26
comm = MPI.COMM_WORLD
mpi_size = MPI.Comm_size(comm)
my_rank = MPI.Comm_rank(comm)

cores_per_numa = 16
threads_per_rank = Threads.nthreads()
ranks_per_numa = div(cores_per_numa, threads_per_rank)

# Pin threads so that the threads of an MPI rank are pinned to cores with
# contiguous IDs. This ensures that:
# - with 16 or fewer threads per rank, all threads are pinned to the same
#   NUMA region as their master (sharing a memory controller within the Infinity Fabric)
# - with 8 or fewer threads per rank, all threads are pinned to the same
#   Core Complex Die
# - with 4 or fewer threads per rank, all threads are pinned to the same
#   Core Complex (sharing an L3 cache)

my_numa, my_id_in_numa = divrem(my_rank, ranks_per_numa) .+ (1, 0)
pinthreads(numa(my_numa, 1:Threads.nthreads()) .+ threads_per_rank .* my_id_in_numa)
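To make the index arithmetic concrete, here is a worked example with assumed values (4 threads per rank on 16-core NUMA regions; the rank number is arbitrary):

```julia
# Assumed values: 16-core NUMA regions, 4 threads per rank.
cores_per_numa = 16
threads_per_rank = 4
ranks_per_numa = div(cores_per_numa, threads_per_rank)  # 4 ranks share a region

# Rank 5 lands in the second NUMA region as its second rank
# (my_numa is 1-based, my_id_in_numa is 0-based).
my_rank = 5
my_numa, my_id_in_numa = divrem(my_rank, ranks_per_numa) .+ (1, 0)
# my_numa == 2, my_id_in_numa == 1

# Within that region the rank's threads occupy the contiguous core slots 5:8,
# which is the offset passed to pinthreads via the numa(...) helper.
core_slots = (1:threads_per_rank) .+ threads_per_rank * my_id_in_numa
# core_slots == 5:8
```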
giordano (Member)
With ThreadPinning v0.7.3 you can simply use pinthreads(:affinitymask)
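For reference, that suggestion would collapse the manual NUMA arithmetic into a single call. A minimal sketch, assuming ThreadPinning.jl v0.7.3 or later and that the MPI launcher (e.g. srun with --cpu-bind=cores) has already set a per-rank affinity mask:

```julia
using ThreadPinning

# Pin each Julia thread to a core taken from the affinity mask the launcher
# assigned to this rank; the explicit divrem/numa(...) bookkeeping is then
# handled by the launcher's binding instead of by hand.
pinthreads(:affinitymask)
```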

matt-graham (Member)

I think the failed CI jobs on the nightly build here may have been due to the same upstream problems that were causing issues in #236 (comment), but we've now exceeded the 30-day window for being able to re-run workflows.

This otherwise looks good to merge to me, apart from @giordano's suggestion above to use pinthreads(:affinitymask). I'm not sure there is any way to trigger a new workflow run other than pushing a new commit to this branch, as we don't have a workflow_dispatch trigger specified to allow running workflows manually.

Also, I just noticed this is set to merge into the scaling_archer2 branch. As the extra/weak_scaling/run_particleda.jl script currently on master hasn't been updated for the changes in #232 and so doesn't run, it would probably be worth merging both those changes and the ones here into master.

3 participants