Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

mpdboot fails to start mpd on some nodes when run as a non-root user.

tahgroupiastate_edu
I am trying to figure out why a few nodes in my cluster are acting differently.

We are running Rocks 5.2 on RHEL 5, with Torque/Maui as our queuing system.
Users submit jobs that use Intel MPI Library version 3.2.1.009.

When I start a job as a regular user with:
mpdboot --rsh=ssh -d -v -n 16 -f /scr/username/testinput.nodes.mpd

I get the usual output:
---
LAUNCHED mpd on compute-0-13 via compute-0-15
debug: launch cmd= ssh -x -n compute-0-13 env I_MPI_JOB_TAGGED_PORT_OUTPUT=1 HOSTTYPE=$HOSTTYPE MACHTYPE=$MACHTYPE HOST=$HOST OSTYPE=$OSTYPE /opt/intel/impi/3.2.1.009/bin64/mpd.py -h compute-0-15 -p 41983 --ifhn=10.1.3.241 --ncpus=1 --myhost=compute-0-13 --myip=10.1.3.241 -e -d -s 16
debug: mpd on compute-0-13 on port 58382
RUNNING: mpd on compute-0-13
debug: info for running mpd: {'ip': '10.1.3.241', 'ncpus': 1, 'list_port': 58382, 'entry_port': 41983, 'host': 'compute-0-13', 'entry_host': 'compute-0-15', 'ifhn': '', 'pid': 19147}
---

for most nodes. However, when it gets to compute-0-6, I see:

---
LAUNCHED mpd on compute-0-6 via compute-0-11
debug: launch cmd= ssh -x -n compute-0-6 env I_MPI_JOB_TAGGED_PORT_OUTPUT=1 HOSTTYPE=$HOSTTYPE MACHTYPE=$MACHTYPE HOST=$HOST OSTYPE=$OSTYPE /opt/intel/impi/3.2.1.009/bin64/mpd.py -h compute-0-11 -p 51916 --ifhn=10.1.3.248 --ncpus=1 --myhost=compute-0-6 --myip=10.1.3.248 -e -d -s 16
debug: mpd on compute-0-6 on port 47012
---
mpdboot_compute-0-15.local (handle_mpd_output 828): Failed to establish a socket connection with compute-0-6:47012 : (111, 'Connection refused')
mpdboot_compute-0-15.local (handle_mpd_output 845): failed to connect to mpd on compute-0-6
---
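
To narrow this down, one thing I can check is whether the mpd on compute-0-6 is still listening when the connection is refused, or whether a firewall is getting in the way (commands below are illustrative; the port number changes on every launch):

$ ssh compute-0-6 'netstat -tlnp | grep 47012'   # is anything still listening on the advertised port?
$ nc -vz compute-0-6 47012                       # does a plain TCP connection get refused as well?
$ ssh compute-0-6 '/sbin/iptables -L -n'         # any firewall rules on the failing node?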

I have tried taking compute-0-6 out of the node list, and mpdboot then throws similar errors for compute-0-5, and so on, all the way down to compute-0-0.

When I run the same job as root:
mpdboot --rsh=ssh -d -v -n 16 -f /scr/username/testinput.nodes.mpd
it starts fine.

We have ssh set up so that it does not require a password to log in, and I have successfully attempted passwordless logins from the mpdboot node without any problems.
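
To mirror what mpdboot itself does, the same ssh invocation from the launch command above can be run by hand (node name illustrative):

$ ssh -x -n compute-0-6 hostname    # should print the node's hostname with no password prompt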

I am a relatively new cluster administrator, and I was hoping someone could point me towards a solution to this problem.

TimP
Honored Contributor III
Did you check for a bad or stale entry in .ssh/known_hosts for the account?
tahgroupiastate_edu
Quoting - tim18
Did you check for a bad or stale entry in .ssh/known_hosts for the account?
Every node has the same known_hosts file.
TimP
Honored Contributor III
Quoting - tahgroupiastate_edu
Every node has the same known_hosts file.
But it has a separate entry for each node. You could check, for example, that ssh works to the troublesome nodes with that known_hosts file and that account. It's often as simple as removing the bad entries and letting them be regenerated.
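
For example (node name illustrative), as the affected account on the head node:

$ ssh-keygen -R compute-0-6    # remove any stale known_hosts entry for that node
$ ssh compute-0-6 hostname     # reconnect so a fresh entry is recorded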
tahgroupiastate_edu
Quoting - tim18
but it has a separate entry for each node. You could check, for example, that ssh is working to the troublesome nodes with that known_hosts file and that account. It's often as simple as removing the bad entries and letting them be regenerated.

Cleared the entries and had them regenerated

The problem still persists
Dmitry_K_Intel2
Employee

Quoting - tahgroupiastate_edu
Cleared the entries and had them regenerated

The problem still persists

Could you try to figure out the problem by using mpdcheck and mpdringtest?
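
A typical check looks something like this (hostnames and the port number below are illustrative; mpdcheck -s prints the actual host and port when it starts):

# on one node, start a test server:
$ mpdcheck -s
server listening at INADDR_ANY on: compute-0-11 32996

# on the problem node, contact it using the host and port printed above:
$ mpdcheck -c compute-0-11 32996

mpdringtest needs a running ring, so once mpdboot succeeds you can verify it with, for example, mpdringtest 100, which times a message looping around the ring 100 times.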

Regards!
Dmitry

Gergana_S_Intel
Employee

Hi tahgroup,

If you believe this might be an issue with the way that ssh is set up for your users on the cluster, you can try using the Expect script we provide with the Intel MPI Library installation. It's called sshconnectivity.exp and it should be located in the original directory where the contents of the l_mpi_p_3.2.1.009 package were untarred. Of course, you would need to install the expect software first in order to run the script.

If you do decide to go this route, the script will set up secure shell connectivity across the entire cluster for the particular user account. To run it, all you have to do is provide a list of hosts:

$ ./sshconnectivity.exp machines.LINUX

where machines.LINUX contains the hostnames of all nodes on the cluster (including the head node), one per line.
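
For a cluster like yours, machines.LINUX might look something like this (frontend-0 is a placeholder for your head node's actual hostname):

frontend-0
compute-0-0
compute-0-1
...
compute-0-15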

This is just another option, if you're stuck. Let us know how it goes.

Regards,
~Gergana

tahgroupiastate_edu
Sorry it took a while to get back to this; I was away from the problem for a bit.

Anyway,
I ran ./sshconnectivity.exp machines.LINUX
It reported that all nodes connect properly.

What other things could be causing the problem besides ssh?

tahgroupiastate_edu

Quoting - Dmitry_K_Intel2
Could you try to figure out the problem by using mpdcheck and mpdringtest?

Regards!
Dmitry


I tried this method as the user and have attached the results of the mpdcheck test.
Essentially, mpdcheck reports no errors.
mpdringtest won't work until I can get the ring up.
mpdboot still fails with a connection-refused error.
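
As a next step, I am going to start mpd by hand on one of the failing nodes, since an mpd that dies right after launch would explain the refused connection (the path and flags below are taken from the launch command earlier in the thread; the .mpd.conf details are from the Intel MPI documentation as I understand them, so please correct me if they are wrong):

# as the affected user, on a failing node:
$ ls -l ~/.mpd.conf                              # should exist, be mode 600, and contain MPD_SECRETWORD=<word>
$ /opt/intel/impi/3.2.1.009/bin64/mpd.py -e -d   # start a lone mpd; -e echoes the port it listens on
$ mpdtrace                                       # does the single-node ring stay up?
$ mpdallexit                                     # clean up afterwards

Since root reads /etc/mpd.conf rather than $HOME/.mpd.conf, a missing or mis-mounted home directory on compute-0-0 through compute-0-6 would also only affect regular users, so I will check that as well.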