Hi,
I'm trying to use the -ppn (or -perhost) option with Hydra and LSF 9.1, but it doesn't work (the nodes have 16 cores):
$ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out
Job <750954> is submitted to queue <q_32p_1h>.
<<Waiting for dispatch ...>>
<<Starting on mn3>>
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
Hello world!I'm 0 of 1 on mn3
My Intel MPI version is 5.0.2.
I tried to fix this problem some years ago. At that time, the answer was to upgrade Intel MPI, since newer versions were supposed to have better LSF support ...
Hi Jose,
For your scenario, it's recommended to use the job manager's options to set the correct number of processes per node (e.g., see the span[ptile=X] parameter).
Alternatively, you can use the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT environment variable, which lets you disable the job manager's process-per-node settings (see the Intel® MPI Library for Linux* OS Reference Manual for details).
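For the first approach, the submission could look like the following sketch (queue name and executable taken from the original post; the -n 16 / ptile=8 split is an illustrative assumption for two 16-core nodes):

```shell
# Ask LSF itself for 8 slots per node; Hydra then follows the LSF
# allocation, so no -perhost option is needed on the mpirun line.
bsub -q q_32p_1h -I -n 16 -R "span[ptile=8]" mpirun -np 16 ./a.out
```

With span[ptile=8], LSF dispatches 8 slots on each of two hosts, and Intel MPI's Hydra launcher inherits that placement directly from the job manager.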
Artem,
Thanks a lot. Both alternatives work well. In particular, I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=no enables the -ppn functionality of mpirun.
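For reference, the confirmed environment-variable approach applied to the original submission might look like this (the use of -genv to pass the variable is an assumption; exporting it in the job environment should work equally well):

```shell
# Tell Intel MPI to ignore the job manager's process placement so that
# -perhost 8 takes effect again and places 8 ranks per 16-core node.
bsub -q q_32p_1h -I -n 32 \
    mpirun -genv I_MPI_JOB_RESPECT_PROCESS_PLACEMENT no -perhost 8 -np 16 ./a.out
```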
regards,
José Luis