
MPI Rank Binding

HPCS
Beginner

Hello all,

Intel MPI 4.1.3 on RHEL 6.4: I am trying to bind ranks in two simple fashions: (a) two ranks on the same processor socket, and (b) two ranks on different processor sockets.

According to the Intel MPI Reference Manual (section 3.2, Process Pinning, pp. 98+), we should be able to use the following mpiexec.hydra options when the hostfile points to the same host:

-genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:bunch
-genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:scatter
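
For completeness, the full invocations look like this (the hostfile ./hosts listing the single node and the ./a.out binary are just placeholders for my actual setup):

mpiexec.hydra -n 2 -f ./hosts -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST all:bunch ./a.out
mpiexec.hydra -n 2 -f ./hosts -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST all:scatter ./a.out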

Unfortunately, the scatter option still binds both MPI ranks to the same socket.
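
For reference, this is how I observe the placement: adding -genv I_MPI_DEBUG 4 makes the library print the rank-to-CPU pin map at startup, e.g.

mpiexec.hydra -n 2 -f ./hosts -genv I_MPI_DEBUG 4 -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST all:scatter ./a.out

and both ranks are reported on CPUs belonging to the same socket (the cpuinfo utility that ships with Intel MPI shows which CPU numbers belong to which socket).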

Should I be using I_MPI_PIN_DOMAIN instead?

Any suggestions?

Thanks, Michael

drMikeT
New Contributor I

Can you try using these options: 

(a) -genv I_MPI_PIN_DOMAIN core -genv I_MPI_PIN_ORDER compact

(b) -genv I_MPI_PIN_DOMAIN core -genv I_MPI_PIN_ORDER scatter

