Software Tuning, Performance Optimization & Platform Monitoring
Discussion regarding monitoring and software tuning methodologies, Performance Monitoring Unit (PMU) of Intel microprocessors, and platform updating.

Jobs in LSF fail with more than 127 nodes



I've run into an awkward issue.

I'm using LSF 9.1 as the job manager, with Intel Parallel Studio 2015 Update 1.

When I submit a simple "hello world" program using 2032 cores (117 nodes), it works well. When I use more cores, all the processes are created on all nodes, but they hang and the program never finishes (it doesn't even start).

I've tried launching the process outside LSF (mpirun -hostfile ... ) and it works fine with 2048 cores.


Any suggestions?

3 Replies

Hi Jose. Can you provide the bsub command line and the output of bhist -l <jobid>? Do you have access to the log files, especially the log files on the job's head node?
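(For readers following along, the information being asked for comes from the standard LSF command-line tools; this is just a sketch, substitute your own job id:)

```shell
# Event history and final status of a finished job
bhist -l <jobid>

# Full details of a pending or running job
bjobs -l <jobid>

# LSF daemon logs normally live under $LSF_LOGDIR on each host, e.g.
#   sbatchd.log.<hostname> and res.log.<hostname> on the execution hosts,
#   mbatchd.log.<hostname> on the master host.
```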



The bsub command line is:


% bsub -q q_1080p_1h -n 2048 -oo salida mpirun  -genv I_MPI_FABRICS shm:ofa ./a.out

I ran it several times. Most of the time there was no output, but once I got this:

Sender: LSF System <lsfadmin@mn269>
Subject: Job 523670: <mpirun -genv I_MPI_FABRICS shm:ofa ./a.out> in cluster <cluster1> Exited

Job <mpirun -genv I_MPI_FABRICS shm:ofa ./a.out> was submitted from host <mn328> by user <jlgr> in cluster <cluster1>.
Job was executed on host(s) <16*mn269>, in queue <q_1800p_1h>, as user <jlgr> in cluster <cluster1>.


</home/dgsca/jlgr> was used as the home directory.
</tmpu/dgsca/jlgr> was used as the working directory.
Started at Mon Jun  8 12:26:55 2015
Results reported on Mon Jun  8 12:31:57 2015

Your job looked like:

# LSBATCH: User input
mpirun -genv I_MPI_FABRICS shm:ofa ./a.out

Exited with exit code 255.

Resource usage summary:

    CPU time :                                   1.84 sec.
    Max Memory :                                 4597.02 MB
    Average Memory :                             3947.10 MB
    Total Requested Memory :                     -
    Delta Memory :                               -
    Max Swap :                                   58004 MB

The output (if any) follows:

[proxy:0:32@mn273] HYDU_sock_write (../../utils/sock/sock.c:417): write error (Bad file descriptor)
[proxy:0:32@mn273] main (../../pm/pmiserv/pmip.c:406): unable to send control code to the server
[proxy:0:33@mn222] HYDU_sock_write (../../utils/sock/sock.c:417): write error (Bad file descriptor)
[proxy:0:33@mn222] main (../../pm/pmiserv/pmip.c:406): unable to send control code to the server
[proxy:0:34@mn223] HYDU_sock_write (../../utils/sock/sock.c:417): write error (Bad file descriptor)
[proxy:0:34@mn223] main (../../pm/pmiserv/pmip.c:406): unable to send control code to the server
Jun  8 12:31:51 2015 21018 3 9.1.3 lsb_launch(): Failed while executing tasks.
[proxy:0:9@mn293] HYDT_bscu_wait_for_completion (../../tools/bootstrap/utils/bscu_wait.c:113): one of the processes terminated badly; aborting
[proxy:0:9@mn293] HYDT_bsci_wait_for_completion (../../tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[proxy:0:9@mn293] HYD_pmci_wait_for_childs_completion (../../pm/pmiserv/pmip_utils.c:1718): bootstrap server returned error waiting for complet
[proxy:0:9@mn293] main (../../pm/pmiserv/pmip.c:454): error waiting for event children completion
[mpiexec@mn269] control_cb (../../pm/pmiserv/pmiserv_cb.c:823): connection to proxy 9 at host mn293 failed
[mpiexec@mn269] HYDT_dmxu_poll_wait_for_event (../../tools/demux/demux_poll.c:76): callback returned error status
[mpiexec@mn269] HYD_pmci_wait_for_completion (../../pm/pmiserv/pmiserv_pmci.c:495): error waiting for event
[mpiexec@mn269] main (../../ui/mpich/mpiexec.c:1011): process manager error waiting for completion


I have access to the log files on all nodes, but so far I haven't found anything relevant.

I've also run the program outside LSF using


% mpirun -hostfile lh ./a.out

and it works fine (lh contains one node name per line).

I've also tried 

% bsub -q q_1080p_1h -n 2048 -oo salida -m <list of nodes> mpirun -hostfile lh -genv I_MPI_FABRICS shm:ofa ./a.out

(where <list of nodes> contains the same nodes as the lh file), and it also works!
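Based on the lsb_launch() error in the output above, one thing that might be worth trying (an untested assumption, not something confirmed in this thread) is making Intel MPI's Hydra process manager bypass LSF's lsb_launch bootstrap and launch its proxies over ssh instead. Hydra exposes this through the -bootstrap option and the I_MPI_HYDRA_BOOTSTRAP environment variable:

```shell
# Untested sketch: force Hydra to launch its proxies over ssh instead of
# LSF's lsb_launch(), which is where the failure above is reported.
bsub -q q_1080p_1h -n 2048 -oo salida \
    mpirun -bootstrap ssh -genv I_MPI_FABRICS shm:ofa ./a.out

# Equivalent via the environment:
export I_MPI_HYDRA_BOOTSTRAP=ssh
```

This would help narrow down whether the hang is specific to the LSF-integrated launch path, since the same run already works outside LSF with plain mpirun.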



We have exactly the same problem with LSF 9.1.3 and more than 127 nodes with Intel MPI.

Is there any solution?
