Hi,
I'm trying to use the -ppn (or -perhost) option with Hydra and LSF 9.1, but it doesn't work (the nodes have 16 cores):
$ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out
Job <750954> is submitted to queue <q_32p_1h>.
<<Waiting for dispatch ...>>
<<Starting on mn3>>
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Every process reports rank 0 of 1, so the 16 processes are apparently launched as independent singletons rather than as one MPI job. My Intel MPI version is 5.0.2.
I tried to fix this problem some years ago. At that time, the answer was to upgrade Intel MPI, because newer versions were supposed to have better LSF support ...
Hi Jose,
For your scenario it's recommended to use the job manager's options to set the correct number of processes per node (e.g., see the span[ptile=X] parameter).
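For example, a submission along these lines (reusing the queue and binary from your post) should place 8 processes on each of 2 nodes, since span[ptile=8] tells LSF to allocate at most 8 slots per host and mpirun inherits that placement:
$ bsub -q q_32p_1h -I -n 16 -R "span[ptile=8]" mpirun -np 16 ./a.out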
Alternatively, you can use the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT environment variable, which lets you disable the job manager's process-per-node settings (see the Intel® MPI Library for Linux* OS Reference Manual for details).
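For instance, something like this should make mpirun apply -perhost 8 instead of the LSF placement (bsub copies the submission environment to the job by default):
$ export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=no
$ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out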
Artem,
Thanks a lot. Both alternatives work well. In particular, I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=no enables the -ppn functionality of mpirun.
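For reference, passing the variable through mpirun's -genv option should be equivalent to exporting it before submission, e.g.:
$ bsub -q q_32p_1h -I -n 32 mpirun -genv I_MPI_JOB_RESPECT_PROCESS_PLACEMENT no -perhost 8 -np 16 ./a.out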
regards,
José Luis