Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

How to bind MPI process to core from mpirun argument

Terrence_Liao
Beginner
Dear Intel,

I use "sched_setaffinity" in the code to pin MPI process to core. But can only do so if I have access to source code.Of course, I can pin it after the code is running, but sometimes this is not a good solution since pin will need to be done on after process been created but before it starts to execute the compute kernel.
So, a very simple question: is there an option to mpirun (or mpiexec) that lets me pin MPI processes to cores? For example, something like this:

mpirun -nc 2 -pincore 0 6 -np 10 .....

where
-np 10 => 10 MPI processes
-nc 2 => use 2 cores per node, i.e. run 2 MPI processes per node
-pincore 0 6 => pin the MPI processes to core ID 0 and core ID 6

Obviously, I am thinking of a two-socket Westmere node, and I want each MPI process to run on a different socket, so I pin to core 0 and core 6 (of course, it could be made more general, e.g. 0:5 would pin the processes to cores 0, 1, 2, ..., 5).

Thanks.

-- Terrence
Andrey_D_Intel
Employee

Dear Terrence,

The Intel MPI Library does process pinning automatically. It also provides a set of options to control the pinning behavior. See the description of the I_MPI_PIN_* environment variables in the Reference Manual for details.

To control the number of processes placed per node, use the mpirun -perhost option or the I_MPI_PERHOST environment variable.

For instance, use the following syntax for your example with the Intel MPI Library:

$ mpirun -perhost 2 -env I_MPI_PIN_PROCESSOR_LIST 0,6 -n 10

Set I_MPI_DEBUG to 5 if you want to see the process pinning table.
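
The same setup can also be expressed through environment variables, for example (a sketch; ./your_app is just a placeholder for your executable):

$ export I_MPI_PERHOST=2
$ export I_MPI_PIN_PROCESSOR_LIST=0,6
$ export I_MPI_DEBUG=5
$ mpirun -n 10 ./your_app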

Does it answer your question?

Best regards,

Andrey

Terrence_Liao
Beginner
Thanks, Andrey. This is what I need. I checked the reference manual; PIN_PROCESSOR_LIST and PIN_DOMAIN cover 11 pages out of the manual's 115. Indeed, these can become complicated.
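
Just to check my understanding, for my two-socket case the domain form would presumably be something like this (untested; ./your_app is a placeholder):

$ mpirun -perhost 2 -env I_MPI_PIN_DOMAIN socket -n 10 ./your_app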
-- terrence
Andrey_D_Intel
Employee

Terrence,

Usually the default pinning scheme works well for most customers. Let us know if you have special requirements, and we can discuss possible solutions then.

Best regards,
Andrey

Miah__Wadud
Beginner

Hello,

I would like to pin MPI processes across both CPU sockets. For example, I would like to run 10 MPI processes on a two-socket machine with 5 MPI processes on each socket. Could you please tell me how to do this?

Many thanks,

TimP
Honored Contributor III

As Andrey said, the default pinning should be good if your numbers of ranks and cores match. If not, pinning may be inadvisable.

Miah__Wadud
Beginner

Hi Tim,

As mentioned in my question, I have 10 MPI ranks that I would like to run on a 20-core node with 5 MPI ranks on each socket, so the number of ranks and the number of cores are not equal. I don't want the 10 MPI ranks to run on a single socket; I would like each group of 5 MPI ranks to have its own NUMA region. A sketch of what I am guessing, based on the pinning variables mentioned earlier in this thread, follows below.
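
I have not verified this, but something along these lines is what I have in mind (./your_app is just a placeholder):

$ export I_MPI_PIN_DOMAIN=socket
$ mpirun -n 10 ./your_app

so that each rank is pinned within one socket domain and the 10 ranks should end up split 5 per socket. Is that the right approach?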

Regards,
