Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Problem with Intel MPI 4.0: processes disappear, become zombies, or just finish

jperaltac
Novice
Dear Support team

I have some problems using Intel MPI (I use the Quantum ESPRESSO software). Sometimes the processes work fine without problems, but other times the processes just disappear from the nodes and the queue system (Torque) does not finish the job. Also, some jobs that are apparently still running (the processes show state R on the nodes) stop writing to my output file.

I use qsub to submit to the PBS system. This is an example of the main part of the PBS file:

tmpfile=nodelist
rm -f ${tmpfile}
numcoresf=0    # total core count; NCORES (cores per node) is defined elsewhere in the script
for s in `sort < ${PBS_NODEFILE} | uniq`
do echo " ${s}" >> ${tmpfile} ; numcoresf=`expr ${numcoresf} + ${NCORES}` ; done
:
source /lustre/jperalta/intel/impi/4.0.0.028/intel64/bin/mpivars.sh
export I_MPI_PERHOST=8
export I_MPI_FABRIC="shm:dapl"
export I_MPI_DAPL_PROVIDER="ofa-v2-mlx4_0-2"
# DEFINE THE COMMAND
PWCOMMAND="mpirun -f ${tmpfile} -n ${numcoresf} /lustre/jperalta/src/espresso-4.2.1/bin-impi/qe_pw.x "
echo Final executable command $PWCOMMAND

# EXECUTE THE COMMAND
${PWCOMMAND} < ${INPUTFILE} >> ${OUTPUTFILE}
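
Maybe related or not: I also want to try passing the input file on the command line instead of redirecting stdin, in case the redirection to every rank is part of the problem. If qe_pw.x accepts the same -input option as the standard pw.x (it is just a renamed binary, so I think it does), it would look like:

# untested alternative: pass the input with pw.x's -input option instead of "<" redirection
# (assumes qe_pw.x is a plain rename of pw.x and accepts the same command line options)
PWCOMMAND="mpirun -f ${tmpfile} -n ${numcoresf} /lustre/jperalta/src/espresso-4.2.1/bin-impi/qe_pw.x -input ${INPUTFILE}"
${PWCOMMAND} >> ${OUTPUTFILE}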


Sometimes the job finishes well, other times not, and sometimes I get messages like:

mpdboot_n13 (handle_mpd_output 883): Failed to establish a socket connection with n9:53000 : (111, 'Connection refused')
mpdboot_n13 (handle_mpd_output 900): failed to connect to mpd on n9

But if I submit it again, it runs! I don't know what happened.

If this is a problem with the cluster, what should I tell the admin?
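
In the meantime, one thing I thought of trying (I am not sure it is the right approach) is to launch through the Hydra process manager that ships with Intel MPI 4.0 instead of the mpd ring, since the errors above come from mpdboot:

# untested: launch with the Hydra process manager (mpiexec.hydra) to avoid the mpd ring,
# reusing the same machine file and core count built earlier in the script
mpiexec.hydra -f ${tmpfile} -n ${numcoresf} /lustre/jperalta/src/espresso-4.2.1/bin-impi/qe_pw.x < ${INPUTFILE} >> ${OUTPUTFILE}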

One last thing: I see a surprisingly good performance compared with Open MPI 1.4 using ESPRESSO, going from 1 day 6 hours down to 3 hours! So it is very important for me to fix this and decide whether to buy (I am in my trial period). If I buy Intel MPI, do I also get upgrades for free?

Regards
JP
Andres_M_Intel4
Employee
If you are using ESPRESSO, you may find the following document interesting.

Hope it helps.

-- Andres
jperaltac
Novice
The '--mca' options that appear in the document do not work for me... is that normal?

Thank you for the answer.
JP
TimP
Honored Contributor III
The --mca option is specific to Open MPI. I don't know that document, but you can use only the advice that applies to MPI in general or is given specifically for Intel MPI. If you are having difficulty finding the Intel MPI equivalent of one of the common --mca options, you could likely get help here if you explain what you want.
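
For example, the transport selection that Open MPI controls with --mca btl is controlled in Intel MPI through environment variables set before mpirun. Roughly like this, though check the names against the reference manual of your 4.0 build, since I am writing them from memory:

# Open MPI (as in such documents):  mpirun --mca btl openib,sm,self ...
# Intel MPI: export variables before launching instead of passing --mca options
export I_MPI_FABRICS=shm:dapl    # shared memory within a node, DAPL between nodes
export I_MPI_DEBUG=2             # report which fabric/provider was actually selected
mpirun -f nodelist -n 64 ./qe_pw.x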
jperaltac
Novice
Thank you.

The administrator cleaned and restarted all nodes, and now some jobs work better. But sometimes, if I submit a job and it fails (for technical reasons, like a wrong input or similar), the job does not finish in Torque. The job stays in 'R' state, so I kill it by hand with qdel, but sometimes I get the output below and the node keeps the processes in zombie status (with PPID=1).

Traceback (most recent call last):
  File "/lustre/jperalta/intel/impi/4.0.0.028/intel64/bin/mpdcleanup", line 239, in ?
    mpdcleanup()
  File "/lustre/jperalta/intel/impi/4.0.0.028/intel64/bin/mpdcleanup", line 215, in mpdcleanup
    pid = re.split(r'\s+', first_string)[5]
IndexError: list index out of range

Can anybody help me avoid leaving zombie processes on the nodes? How can I kill the jobs and do a deep cleanup of mpdboot on each node before starting a new run?
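
For now the only thing I can think of is a manual cleanup like the one below, which I put together from the mpd man pages. I am not sure it is correct; it assumes the nodelist file from my PBS script and passwordless ssh between the nodes:

# rough manual cleanup after qdel (or at the end of the PBS script)
mpdallexit                          # ask the mpd ring to shut itself down cleanly
mpdcleanup -f nodelist -r ssh       # remove leftover mpd daemons/sockets on the nodes in the file
for s in `cat nodelist`
do ssh ${s} "pkill -u ${USER} qe_pw.x ; pkill -u ${USER} -f mpd" ; done    # last resort for stragglers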

Thanks in advance
Joaquin