Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Selective MPI process placements on processors

soni__vineet
Beginner

Hello,

I am looking for a way to place some selected MPI process ranks (MPI_COMM_WORLD ranks) on different sockets of the nodes.

For example, suppose I have 4 MPI processes, say 0, 1, 2, and 3, and two nodes, each with 2 sockets. Is there any way I can specify that processes 0 and 3 be placed on the two different sockets of node 1, and similarly that processes 1 and 2 be placed on node 2?

Basically, I am looking for an equivalent of Open MPI's 'rankfile' in Intel MPI.
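
For reference, this is roughly what I would write for Open MPI with a rankfile (the hostnames node1/node2 and the quad-core socket layout are just placeholders for illustration):

rank 0=node1 slot=0:0-3
rank 3=node1 slot=1:0-3
rank 1=node2 slot=0:0-3
rank 2=node2 slot=1:0-3

and then launch with something like 'mpirun -np 4 -rf myrankfile ./a.out'. I would like to achieve the same placement with Intel MPI.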

I hope I am clear enough.

Regards,

Vineet

Artem_R_Intel1
Employee

Hi Vineet,

You can find the related controls and their descriptions in the Intel® MPI Library for Linux* OS Reference Manual, chapter "3.2. Process Pinning".
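
For a per-socket placement like yours, one control worth looking at is I_MPI_PIN_DOMAIN. As a rough sketch (the hostnames and the application name are placeholders, and the exact rank-to-node mapping may still require separate argument sets or a machine file):

$ mpirun -hosts node1,node2 -ppn 2 -n 4 -genv I_MPI_PIN 1 -genv I_MPI_PIN_DOMAIN socket ./a.out

With I_MPI_PIN_DOMAIN=socket, each MPI process is pinned to its own socket, so the two processes on a node land on the two different sockets.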

soni__vineet
Beginner

Hi Artem,

Thank you for the reply.

I have gone through the chapter you mentioned. So, for the example I mentioned initially (MPI processes 0, 1, 2, 3, with processes 0 & 3 on node1 and processes 1 & 2 on node2), should it work in the following way?

mpirun -host node1 -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST 0,3 -n 2 ./a.out : \
       -host node2 -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST 1,2 -n 2 ./a.out

I am confused by the numbers following the variable I_MPI_PIN_PROCESSOR_LIST. Could you please tell me whether they refer to logical processor numbers or to MPI process ranks? Also, I am not sure whether I should use '-perhost 2' to place these processes on individual sockets.

Also, how will it handle OpenMP threads, if the application uses them? Let's say each socket has a quad-core processor and I call omp_set_num_threads(4). Will all the physical cores of a socket be used for the OpenMP threads of a single MPI process?
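
To make it concrete, what I have in mind for the hybrid case is roughly the following (I am not sure whether I_MPI_PIN_DOMAIN=socket is the right control here, so please correct me):

$ export OMP_NUM_THREADS=4
$ mpirun -hosts node1,node2 -ppn 2 -n 4 -genv I_MPI_PIN_DOMAIN socket ./a.out

i.e. one MPI process per socket, with its 4 OpenMP threads running on the 4 physical cores of that socket.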

Regards,

Vineet

Artem_R_Intel1
Employee

Hi Vineet,

As far as I understand the scenario in your initial message, it should work as you want with the default process pinning scheme. You can check it with I_MPI_DEBUG=4 (and see the cpuinfo utility for detailed information about your CPU configuration):

$ I_MPI_DEBUG=4 mpirun -ppn 1 -n 4 -hosts node01,node02 IMB-MPI1 bcast -npmin 4
...

[0] MPI startup(): Rank    Pid      Node name                  Pin cpu                                            
[0] MPI startup(): 0       24472    node01  {0,1,2,3,4,5,6,7,8,9,10,11,12,13,28,29,30,31,32,33,34,35,36,37,38,39,40,41}
[0] MPI startup(): 1       122065   node02  {0,1,2,3,4,5,6,7,8,9,10,11,12,13,28,29,30,31,32,33,34,35,36,37,38,39,40,41}
[0] MPI startup(): 2       24473    node01  {14,15,16,17,18,19,20,21,22,23,24,25,26,27,42,43,44,45,46,47,48,49,50,51,52,53,54,55}
[0] MPI startup(): 3       122066   node02  {14,15,16,17,18,19,20,21,22,23,24,25,26,27,42,43,44,45,46,47,48,49,50,51,52,53,54,55}

...
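
To see how these logical CPU numbers map to packages/sockets and cores on your nodes, you can simply run the cpuinfo utility shipped with the Intel MPI Library on each node:

$ cpuinfo

The exact output depends on your CPU configuration.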
