
mpirun and LSF

Jose_Gordillo
Beginner

Hi,

I'm trying to use the -ppn (or -perhost) option with Hydra and LSF 9.1, but it doesn't work (the nodes have 16 cores):

$ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out

Job <750954> is submitted to queue <q_32p_1h>.
<<Waiting for dispatch ...>>
<<Starting on mn3>>
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
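
(Note that every line reports "0 of 1": each of the 16 processes came up as an independent singleton MPI job instead of as one 16-rank job.)

For reference, a test program producing output in this format would look roughly like the standard MPI hello world below (a sketch; the actual a.out source isn't shown, so this is an assumption):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in the job       */
    MPI_Get_processor_name(host, &len);     /* host name, e.g. mn3          */
    printf("Hello world!I'm %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}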

My Intel MPI version is 5.0.2.

I tried to fix this problem some years ago. At that time, the answer was to upgrade Intel MPI, because newer versions were supposed to have better LSF support ...

Artem_R_Intel1
Employee

Hi Jose,

For your scenario, it's recommended to use the job manager's own options to set the correct number of processes per node (e.g., see LSF's span[ptile=X] resource requirement).
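
For example, to run 16 processes with 8 per node, the resource requirement could be given at submission time, along these lines (a sketch assuming the same queue and executable as above):

$ bsub -q q_32p_1h -I -n 16 -R "span[ptile=8]" mpirun -np 16 ./a.out

Here LSF allocates two 8-slot hosts, and mpirun inherits that placement from the job manager.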

Alternatively, you can use the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT environment variable, which lets you disable the job manager's process-per-node settings (see the Intel® MPI Library for Linux* OS Reference Manual for details).
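
For instance, the variable could be passed via mpirun's -genv option so that the -perhost 8 -np 16 request from your original command takes effect (a sketch based on that command):

$ bsub -q q_32p_1h -I -n 32 mpirun -genv I_MPI_JOB_RESPECT_PROCESS_PLACEMENT no -perhost 8 -np 16 ./a.out

With the LSF placement no longer inherited, -perhost 8 should put ranks 0-7 on the first allocated node and ranks 8-15 on the second.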

Jose_Gordillo
Beginner

Artem,

Thanks a lot. Both alternatives work well. In particular, I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=no enables the -ppn functionality of mpirun.

Regards,

José Luis
