Intel® MPI Library

mpirun: connection refused and mpi_init_ problem

sm00th
Beginner
1,131 Views

Hello,

We are trying to run an application compiled with ifort + gcc under Intel MPI. Please have a look at the problems we are facing:

Problem 1: When trying to launch the software with mpirun, we receive:

mpdboot_wn1 (handle_mpd_output 752): from mpd on wn2, invalid port info:
connect to address 192.168.0.2: Connection refused
connect to address 192.168.0.2: Connection refused
trying normal rsh (/usr/bin/rsh)

wn2.cluster: Connection refused

Problem 2: When trying to sidestep Problem 1 by running under a different MPI (MVAPICH2), we receive an error about a missing symbol, mpi_init_. I thought the binary should be portable between MPI implementations.

We followed the procedure:

1. Created mpd.hosts file:
wn1:8
wn2:8
wn3:8
wn4:8
2. Checked that we can log in to every node via ssh without a password
3. Environment:
export I_MPI_DEVICE=rdma
export I_MPI_FALLBACK_DEVICE=0
4. mpdboot -n 4 -f mpd.hosts -r ssh
5. Checked manually that mpd is running on every node; it was, and it was using the port shown by mpdtrace
6. mpdtrace -l
wn1.cluster_34120 (192.168.0.1)
wn4.cluster_32821 (192.168.0.4)
wn3.cluster_32947 (192.168.0.3)
wn2.cluster_32934 (192.168.0.2)
7. mpirun -n 4 [executable_and_params]

We have the Intel Cluster Toolkit Compiler Edition installed:
/opt/intel/cc/11.0.074
/opt/intel/fc/11.0.074
/opt/intel/ictce/3.2.0.020
/opt/intel/impi/3.2.0.011
/opt/intel/itac/7.2.0.011
/opt/intel/mkl/10.1.0.015
on Scientific Linux SL release 4.7 (Beryllium).
We are currently trying to compile and run a custom application built with gcc and the Intel Fortran compiler on a 4-node (2 quad-core CPUs per node, 8 cores/node) cluster with InfiniBand.

The software was compiled against Intel MPI.

One more question: What is the _proper_ way of linking compiled GCC and Intel Fortran code?


Gergana_S_Intel
Employee

Hello sm00th,

The steps you complete up to #7 are actually all correct. What you have to understand is how the mpirun script functions. You can think of mpirun as a "wrapper" that actually executes 3 other commands:
mpdboot (to start the MPD daemons),
then mpiexec (to run your app),
and finally mpdallexit (to close out the MPD ring).

So steps 1-6 are fine, but then you use mpirun, which closes out your MPD daemons and tries to restart them. When it restarts them, it no longer uses ssh (since you don't specify that option), so mpirun tries to reach the other nodes via rsh (the default), fails to connect, and you see the error.
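
To make that concrete, your step 7 (mpirun -n 4 [executable_and_params]) behaves roughly like the sequence below. This is only a sketch of the wrapper's behavior as described above, not the exact commands it runs internally:

mpdallexit                             # shuts down the ring you booted in step 4
mpdboot -n 4 -f mpd.hosts              # restarted without -r ssh, so rsh is tried
mpiexec -n 4 [executable_and_params]
mpdallexit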

That means you have two options here:
A. Use the mpdboot/mpiexec/mpdallexit sequence
B. Use mpirun only (its arguments combine the mpdboot and mpiexec options)



To run your app in case A, you just have to replace mpirun with mpiexec:
1 - 6. Those are all correct
7. mpiexec -n 4 [executable_and_params]
8. mpdallexit #close out the MPD daemons
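
Putting that together with your existing setup, a complete case A session would look something like this (a sketch, assuming the same mpd.hosts and environment from your steps 1-3):

mpdboot -n 4 -f mpd.hosts -r ssh       # note the -r ssh, as in your step 4
mpiexec -n 4 [executable_and_params]
mpdallexit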

To run your app in case B, you just combine all your arguments into a single mpirun command:
1. mpirun -r ssh -f mpd.hosts -n 4 [executable_and_params]

In case B, the -n option belongs to the mpiexec part of the command; mpirun itself starts the MPD daemons on all nodes listed in mpd.hosts. In the same spirit, this:

mpirun -r ssh -f mpd.hosts -n 8 [executable_and_params]

will start only 4 daemons (1 on each node) but will run 8 MPI processes on your cluster.

Problem 2: That's not the case. A binary is portable between MPI implementations only if those implementations are binary compatible (e.g. Intel MPI is binary compatible with MPICH2). Your source code, on the other hand, is portable between MPI implementations (they all use the same set of routines, e.g. MPI_Init()), but you would need to relink your code against your preferred MPI library.
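
For example, if the application was built with the Intel MPI compiler wrappers, running it under MVAPICH2 would mean rebuilding (or at least relinking) it with the MVAPICH2 wrappers instead. A rough sketch, with hypothetical source file names (the wrapper names shown are the usual defaults for each library):

# Intel MPI (drives ifort underneath)
mpiifort -o myapp main.f90 solver.f90

# MVAPICH2
mpif90 -o myapp main.f90 solver.f90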

I hope this helps. Let me know what happens when you try either mpiexec or mpirun.

Regards,
~Gergana
sm00th
Beginner
Thank you for the help. It's working now.

Quoting - Gergana_S_Intel
mpirun -r ssh -f mpd.hosts -n 8 [executable_and_params]
will start only 4 daemons (1 on each node) but will run 8 MPI processes on your cluster.

So if I want all the cores to be used by MPI processes (4 nodes * 8 cores), I need to run

mpirun -r ssh -f mpd.hosts -n 32 [executable_and_params]

right?
And what if -n is larger than the total 'virtual' CPU count defined in mpd.hosts?




Gergana_S_Intel
Employee
Quoting - sm00th
So if I want all the cores to be used by MPI processes (4 nodes * 8 cores), I need to run
mpirun -r ssh -f mpd.hosts -n 32 [executable_and_params]
right?
And what if -n is larger than the total 'virtual' CPU count defined in mpd.hosts?

Yes, that's exactly right.

If the value of -n is larger than the total 'virtual' CPU count, you'll be oversubscribing the processors (running more MPI processes than there are cores). That, in turn, means at least one core will be trying to run two MPI processes at once. Since a single core cannot truly execute both at the same time, it will instead swap between the MPI processes frequently so that both make progress in a simultaneous-looking manner.

As you would expect, this might result in some performance degradation (especially as you increase the MPI process count), so it's not something we recommend.
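
For instance, on this cluster (4 nodes x 8 cores = 32 cores), a hypothetical

mpirun -r ssh -f mpd.hosts -n 64 [executable_and_params]

would place roughly two MPI processes on every core (64 processes across 32 cores), so each core spends time swapping between its two processes as described above.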

Regards,
~Gergana
TimP
Honored Contributor III
Some of my colleagues prefer OpenMPI when running multiple processes per logical processor. They do this to check for correct execution without needing a large cluster, not expecting it to be an efficient way of running.