Intel® MPI Library

Running cluster checker on an LSF-managed cluster

Parviz
Beginner

 

Hi All,

I need to run cluster checker on a cluster that does not support passwordless ssh and is managed by LSF. Can that be done? If so, please advise on how to do that.

Thanks in advance.

-Parviz

 

 

PrasanthD_intel
Moderator

Hi,


The steps for how to proceed when passwordless ssh is not available are given in the prerequisite steps of the Cluster Checker Getting Started guide (Getting Started (intel.com)).
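For reference, a rough sketch of what those prerequisite steps amount to (the file paths and the nodefile name here are placeholders; please check the guide for the exact locations in your installation):

# Enable the MPI-based connectivity extension in a copy of the configuration,
# i.e. uncomment the line <extension>mpi.so</extension>
cp $CLCK_ROOT/etc/clck.xml my_clck.xml    # CLCK_ROOT is set by the environment script
# (edit my_clck.xml and uncomment <extension>mpi.so</extension>)

# Run with the edited configuration against a nodefile (one host per line)
clck -c my_clck.xml -f nodefile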

Please go through it and let us know if you face any issues.


Regards

Prasanth


PrasanthD_intel
Moderator

Hi Parviz,


Did following the steps provided in the Getting Started guide help? Are you able to run Cluster Checker?

Let us know if you face any issues.


Regards

Prasanth


Parviz
Beginner

Hi Prasanth,

I am afraid not. The problem persists.

I followed the instructions on the "Getting Started" web page by taking the following steps:

  • Changed the config file, clck.xml, as indicated (uncommented the line <extension>mpi.so</extension>) and passed that file to the clck -c option.
  • Removed my ~/.ssh directory, expecting clck to work without ssh.
  • Ran clck with the -l debug switch (the combined command is sketched below the list).
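In concrete terms, the invocation looked like the following (nodefile stands in for my actual node list):

clck -c clck.xml -l debug -f nodefile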

I saw that there was a long pause in the call to mpirun from inside clck; the mpirun then timed out. Do you see anything wrong in the above steps?

We have other compute clusters that are managed by LSF and SGE. I have not found any reference to running clck on such clusters in your documents. Is there a way to do that? If so, please send me the steps or a link to the correct web page.

Thanks

-Parviz

 

 

 

PrasanthD_intel
Moderator

Hi Parviz,


We are sorry that it didn't work.

Could you please tell us which version of Cluster Checker you are using, along with the Parallel Studio/oneAPI version?

Please also let us know your environment details (OS, version, etc.).


Regards

Prasanth


Parviz
Beginner

Hi Prasanth,

Thanks for checking back.

Below is the data you asked for.

Version of the cluster checker:

clck -v
Intel(R) Cluster Checker 2021 Update 1 (build 20201104)
Copyright (C) 2006-2020 Intel Corporation. All rights reserved.

 

The OS version of the host where I ran the cluster checker:

cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

uname -a
Linux  *******  3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Our cluster is composed of machines with versions 7.6 and 6.7 of CentOS.

I do not know how to get the version of the oneAPI software. Please advise.

 

Thanks

-Parviz

 

PrasanthD_intel
Moderator

Hi Parviz,


Could you please set the environment variable below, on top of the steps provided in the Getting Started documentation, and try again:


I_MPI_HYDRA_BOOTSTRAP=lsf


The above option is valid for a cluster running the LSF job scheduler.


For clusters running the SGE scheduler, set

I_MPI_HYDRA_BOOTSTRAP=sge


You may use the following link as a reference: https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/hydra-environment-variables.html
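For example, a minimal sketch (nodefile is a placeholder listing one host per line):

export I_MPI_HYDRA_BOOTSTRAP=lsf    # use =sge on an SGE-managed cluster
clck -f nodefile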


Regards

Prasanth


Parviz
Beginner

Hi Prasanth,

Thanks for your reply.

I did as you suggested: I set the bootstrap to sge, then re-ran clck with the settings described in my earlier post. Below is the tail of the output when run with the -l debug option (the names of the internal machines and the paths are masked as "*****"):

Starting pre-check............
data collection command:
mpirun -genv CLCK_CONNECT_WITH_MPI=1 -ppn 1 -hosts ,******** -prepend-pattern '%h:' /bin/bash -c 'echo "#####CLCK_HOSTNAME `hostname`#####";if [[ ! -d /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_ND;elif [[ ! -w /home/parvizf/.clck ]] || [[ ! -x /home/parvizf/.clck ]] || [[ ! -r /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_NON_RW;else echo CLCK_PRECHECK_OK;fi;stat -c "#####SHAREDDIR_INODE %i#####" /home/parvizf/.clck;'

The command 'mpirun' has timed out and will be killed

sending terminate signal to process 225391
process 225391 has exited

During the runtime of Intel(R) Cluster Checker the underlying mpirun command has timed out and will be killed. The following nodes have failed pre-check because the command 'mpirun' could not be executed with the requested nodes. Please verify the following are all accessible through Intel(R) MPI Library: ********
For more information, please run with '-l debug'.
Error running data providers
Stopping the accumulate server

clck-collect temp-shared location deleted
clck-collect is done

=======================================================================================================

As in my previous runs, the mpirun command times out and fails.

To verify that mpirun is submitted to SGE, I set the environment variable I_MPI_HYDRA_DEBUG to 1, then ran the last mpirun command from the command line. Below is the output:


linux==> mpirun -genv CLCK_CONNECT_WITH_MPI=1 -ppn 1 -hosts ,******** -prepend-pattern '%h:' /bin/bash -c 'echo "#####CLCK_HOSTNAME `hostname`#####";if [[ ! -d /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_ND;elif [[ ! -w /home/parvizf/.clck ]] || [[ ! -x /home/parvizf/.clck ]] || [[ ! -r /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_NON_RW;else echo CLCK_PRECHECK_OK;fi;stat -c "#####SHAREDDIR_INODE %i#####" /home/parvizf/.clck;'

mpirun -genv CLCK_CONNECT_WITH_MPI=1 -ppn 1 -hosts ,******** -prepend-pattern '%h:' /bin/bash -c 'echo "#####CLCK_HOSTNAME `hostname`#####";if [[ ! -d /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_ND;elif [[ ! -w /home/parvizf/.clck ]] || [[ ! -x /home/parvizf/.clck ]] || [[ ! -r /home/parvizf/.clck ]]; then echo CLCK_PRECHECK_NON_RW;else echo CLCK_PRECHECK_OK;fi;stat -c "#####SHAREDDIR_INODE %i#####" /home/parvizf/.clck;'
[mpiexec@********] Launch arguments: /****/sge/bin/lx-amd64/qrsh -inherit -V ******** /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_bstrap_proxy --upstream-host ******** --upstream-port 45833 --pgid 0 --launcher sge --launcher-number 4 --base-path /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin/ --tree-width 16 --tree-level 1 --time-left -1 --collective-launch 1 --debug --proxy-id 0 --node-id 0 --subtree-size 1 /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_pmi_proxy --usize -1 --auto-cleanup 1 --abort-signal 9
[mpiexec@********] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on ******** (pid 226159, exit code 256)
[mpiexec@********] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec@********] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec@********] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:770): error waiting for event
[mpiexec@********] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1956): error setting up the boostrap proxies
=======================================================================================================

It looks like the submission to SGE happens correctly, but then there is an error.

Any idea how I can work around the error?

Thanks
-Parviz

PrasanthD_intel
Moderator

Hi Parviz,


Could you please:

  1. Check that the Intel MPI Library environment is initialized correctly (e.g., that mpirun -V reports the expected version).
  2. Run a simple application with Intel MPI on the nodes of interest, with sge set as the bootstrap mechanism, and check whether you are able to run MPI successfully (see the note below the command).

Command: $ I_MPI_HYDRA_BOOTSTRAP=sge mpiexec.hydra -n 4 -ppn 1 -f hostfile ./app
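If no MPI application is at hand for this test, any executable (for example, hostname) can stand in for ./app; the launch still exercises the bootstrap mechanism, which is the part that fails here. Adding I_MPI_HYDRA_DEBUG=1 will print the launch arguments:

$ I_MPI_HYDRA_BOOTSTRAP=sge I_MPI_HYDRA_DEBUG=1 mpiexec.hydra -n 4 -ppn 1 -f hostfile hostname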



Let us know if you face any issues while trying to run MPI.


Regards

Prasanth


Parviz
Beginner

Hi Prasanth,

The issue seems to be with sending the jobs through SGE.

Without setting I_MPI_HYDRA_BOOTSTRAP to sge (i.e., using passwordless ssh), things work:

===>> mpiexec.hydra -f hostfile date
[mpiexec@******] Launch arguments: /usr/bin/ssh -q -x ****** /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_bstrap_proxy --upstream-host ****** --upstream-port 38358 --pgid 0 --launcher ssh --launcher-number 0 --base-path /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin/ --tree-width 16 --tree-level 1 --time-left -1 --collective-launch 1 --debug --proxy-id 0 --node-id 0 --subtree-size 1 /*****/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_pmi_proxy --usize -1 --auto-cleanup 1 --abort-signal 9
[proxy:0:0@******] Warning - oversubscription detected: 24 processes will be placed on 16 cores
Mon May 3 07:23:27 EDT 2021
Mon May 3 07:23:27 EDT 2021
Mon May 3 07:23:27 EDT 2021
Mon May 3 07:23:27 EDT 2021
Mon May 3 07:23:27 EDT 2021

...

 

When using sge, the same command produces the following error:

 

===>> mpiexec.hydra -f sox1 date
[mpiexec@******] Launch arguments: /*****/sge/bin/lx-amd64/qrsh -inherit -V ******* /*******/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_bstrap_proxy --upstream-host ****** --upstream-port 34808 --pgid 0 --launcher sge --launcher-number 4 --base-path /*******/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin/ --tree-width 16 --tree-level 1 --time-left -1 --collective-launch 1 --debug --proxy-id 0 --node-id 0 --subtree-size 1 /*******/intel-tools/MPI/compilers_and_libraries_2020.1.217/linux/mpi/intel64/bin//hydra_pmi_proxy --usize -1 --auto-cleanup 1 --abort-signal 9
[mpiexec@******] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on ******* (pid 75847, exit code 256)
[mpiexec@******] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec@******] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec@******] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:770): error waiting for event
[mpiexec@******] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1956): error setting up the boostrap proxies

 

The above error has already been reported in the post below:

 

https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Intel-MPI-Unable-to-run-bstrap-proxy-error-setting-up-the/m-p/1204677

 

The issue reported there was resolved by setting some parameters on the host machine. I may have to do the same. Can you please take a look? I do not fully follow the resolution of that issue.

 

Thanks

-Parviz

PrasanthD_intel
Moderator

Hi Parviz,


Could you let us know whether the nodes on which you tested with I_MPI_HYDRA_BOOTSTRAP set to sge are managed by the SGE job scheduler? Did you use SGE to obtain those nodes?

Please let us know the command you used for obtaining those nodes.


Regards

Prasanth


PrasanthD_intel
Moderator

Hi Parviz,


We haven't heard back from you.

Could you please confirm that the nodes you are using are allocated by the SGE job scheduler, as asked in the previous post?


Regards

Prasanth


Parviz
Beginner

Hi Prasanth,

 

Please clarify what you mean by "Have you used SGE for obtaining those nodes". My assumption is that as long as the environment variable I_MPI_HYDRA_BOOTSTRAP is set to sge and the nodes in the hostfile belong to a cluster managed by SGE, nothing else is required: the Cluster Checker will internally interact with SGE to access the nodes and run. Is this the case? If not, please advise me on how to reserve the nodes in my hostfile through SGE. A list of commands to run would be helpful.

I am sorry, but I am not familiar with SGE and do not know how to check what you have asked. Can you please be more specific about the steps I need to take to allocate the jobs through SGE?

Thanks

-Parviz

PrasanthD_intel
Moderator

Hi Parviz,

 

Could you please let us know the command you use to submit jobs to the nodes in your cluster?

It would be something along the lines of qsub, bsub, etc.

Please contact your system administrator for the exact command to submit jobs using the SGE scheduler on your cluster.

SGE is only used as the bootstrap mechanism when the job itself is submitted through the SGE batch system; once you do that, the remaining steps should succeed. A sketch of such a submission follows.
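For illustration, a minimal sketch of an SGE batch script that runs Cluster Checker on the nodes SGE allocates (the parallel environment name "mpi" is a placeholder; ask your administrator for the one configured on your cluster):

#!/bin/bash
#$ -N clck_run
#$ -pe mpi 4      # request 4 slots; "mpi" is a hypothetical PE name
#$ -cwd
# SGE sets PE_HOSTFILE to a file listing the allocated hosts (one per line,
# followed by slot counts); keep only the hostnames for clck
awk '{print $1}' "$PE_HOSTFILE" > nodefile
export I_MPI_HYDRA_BOOTSTRAP=sge
clck -f nodefile

You would then submit it with something like qsub clck_run.sh.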

 

Regards

Prasanth

PrasanthD_intel
Moderator

Hi Parviz,


Have you found out how to submit jobs using SGE? You can refer to the Oracle documentation (Chapter 3, Submitting Jobs, Sun N1 Grid Engine 6.1 User's Guide (oracle.com)) for how to submit batch and interactive jobs. A one-line example is sketched below.
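For example, a one-line binary submission (the parallel environment name "mpi" is again a placeholder) could look like:

qsub -b y -cwd -pe mpi 4 -v I_MPI_HYDRA_BOOTSTRAP=sge clck -f nodefile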

Let us know if you face any issues.


Regards

Prasanth


PrasanthD_intel
Moderator

Hi Parviz,


We are closing this thread assuming your issue has been resolved. We will no longer respond to this thread. If you require additional assistance from Intel, please start a new thread. Any further interaction in this thread will be considered community only.


Regards

Prasanth

