We are running Rocks 5.2 with RHEL 5 and use torque/maui as our queuing system.
Our users submit jobs that use Intel MPI version 3.2.1.009.
When I start a job as a user with
mpdboot --rsh=ssh -d -v -n 16 -f /scr/username/testinput.nodes.mpd
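(The file passed with -f is just a plain list of node names, one per line; a trimmed example, with sixteen entries in total to match -n 16:)
---
compute-0-15
compute-0-13
compute-0-11
compute-0-6
...
---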
I get the usual
---
LAUNCHED mpd on compute-0-13 via compute-0-15
debug: launch cmd= ssh -x -n compute-0-13 env I_MPI_JOB_TAGGED_PORT_OUTPUT=1 HOSTTYPE=$HOSTTYPE MACHTYPE=$MACHTYPE HOST=$HOST OSTYPE=$OSTYPE /opt/intel/impi/3.2.1.009/bin64/mpd.py -h compute-0-15 -p 41983 --ifhn=10.1.3.241 --ncpus=1 --myhost=compute-0-13 --myip=10.1.3.241 -e -d -s 16
debug: mpd on compute-0-13 on port 58382
RUNNING: mpd on compute-0-13
debug: info for running mpd: {'ip': '10.1.3.241', 'ncpus': 1, 'list_port': 58382, 'entry_port': 41983, 'host': 'compute-0-13', 'entry_host': 'compute-0-15', 'ifhn': '', 'pid': 19147}
---
for most of the nodes. However, when it gets here:
---
LAUNCHED mpd on compute-0-6 via compute-0-11
debug: launch cmd= ssh -x -n compute-0-6 env I_MPI_JOB_TAGGED_PORT_OUTPUT=1 HOSTTYPE=$HOSTTYPE MACHTYPE=$MACHTYPE HOST=$HOST OSTYPE=$OSTYPE /opt/intel/impi/3.2.1.009/bin64/mpd.py -h compute-0-11 -p 51916 --ifhn=10.1.3.248 --ncpus=1 --myhost=compute-0-6 --myip=10.1.3.248 -e -d -s 16
debug: mpd on compute-0-6 on port 47012
---
mpdboot_compute-0-15.local (handle_mpd_output 828): Failed to establish a socket connection with compute-0-6:47012 : (111, 'Connection refused')
mpdboot_compute-0-15.local (handle_mpd_output 845): failed to connect to mpd on compute-0-6
---
I have tried taking compute-0-6 out of the system, and it then throws similar errors for compute-0-5, and so on all the way down to compute-0-0.
When I run the same job as root
mpdboot --rsh=ssh -d -v -n 16 -f /scr/username/testinput.nodes.mpd
it starts fine.
We have ssh set up so that it does not require a password to log in, and I have successfully logged in from the mpdboot node to the compute nodes without a password.
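For the record, the user keys were set up in the usual way, roughly like this (home directories on our Rocks install are shared across the nodes, so one authorized_keys covers the whole cluster):
---
$ ssh-keygen -t rsa        # accept defaults, empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
---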
I am a relatively new cluster administrator, and I was hoping someone could point me toward a solution to this problem.
Cleared the entries and had them regenerated.
The problem still persists.
Could you try to figure out the problem by using mpdcheck and mpdringtest?
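For example, a minimal by-hand check between the boot node and one of the failing nodes would look something like this (the port is printed by mpdcheck -s itself; the number below is only a placeholder):
---
# on compute-0-15: start a one-shot test server; it prints its host and port
$ mpdcheck -s

# on compute-0-6: connect back to the host/port reported above
$ mpdcheck -c compute-0-15 32996

# once a ring comes up, send a message around it a few times
$ mpdringtest 100
---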
Regards!
Dmitry
Hi tahgroup,
If you believe this might be an issue with the way ssh is set up for your users on the cluster, you can try the Expect script we provide with the Intel MPI Library installation. It's called sshconnectivity.exp, and it should be located in the original directory where the contents of the l_mpi_p_3.2.1.009 package were untarred. Of course, you would need to install the expect software first in order to run the script.
If you do decide to go this route, the script will set up secure shell connectivity across the entire cluster for the particular user account. To run it, all you have to do is provide a list of hosts:
$ ./sshconnectivity.exp machines.LINUX
where machines.LINUX contains the hostnames of all nodes on the cluster (including the head node), one per line.
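For example, for a small cluster the file might look like this (hostnames here are illustrative):
---
frontend.local
compute-0-0
compute-0-1
compute-0-2
---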
This is just another option, if you're stuck. Let us know how it goes.
Regards,
~Gergana
I was away from the problem for a bit.
Anyway,
I ran ./sshconnectivity.exp machines.LINUX
It reported that all nodes connect properly.
What other things could be causing the problem besides ssh?
Dmitry wrote:
"Could you try to figure out the problem by using mpdcheck and mpdringtest?"
I tried this method as the user and have attached the results of the mpdcheck test.
Essentially, mpdcheck showed no errors.
mpdringtest won't work until I can get the ring up, and mpdboot still fails with a connection refused error.