We have an additional request re: hyperthreading.
"it also appears that my code is not getting much from hyperthreading (I was expecting a factor of two drop in seconds per timestep going from:
#SBATCH --exclusive
#SBATCH -n 80
to:
#SBATCH --exclusive
#SBATCH -n 160
[Both cases run on a single node.] Is there a flag I should activate to leverage hyperthreading?"
I've found a few suggestions. For example, are there differences when setting I_MPI_PIN_ORDER and/or I_MPI_PIN_PROCESSOR_LIST?
Suggests "I_MPI_PIN_PROCESSOR_LIST with <procset> Specify a processor subset based on the topological numeration. The default value is allcores."
It also mentions to use 'all' rather than 'allcores':
"all All logical processors. Specify this subset to define the number of CPUs on a node."
The full example from the other suggestion was this:
"1. To place the processes exclusively on physical cores regardless of Hyper Threading mode,
$ mpirun –genv I_MPI_PIN_PROCESSOR_LIST allcores -n <# total processes> ./app
2. To avoid sharing of common resources by adjacent MPI processes, use map=scatter setting
$ mpirun –genv I_MPI_PIN_PROCESSOR_LIST map=scatter -n <# total processes> ./apI
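To actually place one MPI rank per logical processor (i.e., use the hyperthreads), the quoted documentation points to 'all' rather than 'allcores'. A minimal sketch, assuming the same ./app binary and the 160-rank single-node case from above:

$ mpirun -genv I_MPI_PIN_PROCESSOR_LIST all -n 160 ./app

Here 'all' lets the pinning spread ranks across all 160 logical processors, hyperthreads included, whereas 'allcores' would restrict them to the 80 physical cores.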
@RobbieTheK
If you see a drop in performance, it is very likely that hyperthreading is not beneficial for your code at all. Do you have any evidence to support expecting a performance improvement from hyperthreading?
Please provide the output with I_MPI_DEBUG=10 set, which displays the process affinity. Under Slurm it is usually best to enable Slurm's CPU-affinity support and let Slurm handle the pinning; for that, launch your application with srun instead of mpirun.
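A minimal sketch of such a job script, assuming the single 80-core/160-thread node and the ./app binary from above (the exact binding options may need adjusting to your site's Slurm configuration):

#!/bin/bash
#SBATCH --exclusive
#SBATCH -n 160
#SBATCH --hint=multithread      # allow tasks to land on hyperthreads
export I_MPI_DEBUG=10           # prints the rank-to-CPU affinity map at startup
srun --cpu-bind=threads ./app   # let Slurm bind one rank per logical CPU

With that debug output in hand, you can verify whether the ranks actually end up pinned to distinct logical processors.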
