Intel® MPI Library

Installing the Intel Cluster Toolkit Compiler Edition on a GE + IB cluster

hpcsoftwaremail_csi_
All,

I have successfully installed the ICTCE on a simple GE cluster with
one switch and one set of interfaces. What, if any, are the tricks and
complications when installing on a system with two networks (GE and
IB)? I obviously wish to be able to run MPI over the IB network,
but would also like to use and test either as I wish.

The installation guide offers no information on this question as far
as I can tell. The instructions ask you to construct the machines.LINUX
file using the hostname-generated names for the compute nodes.
These names are associated with the GE interface. This file may only be used
for pushing out the installation, but I would like to know for
sure.
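
For reference, the machines.LINUX I built simply lists the compute-node
hostnames, one per line, something like the sketch below (the node names
here are only placeholders):

node001
node002
node003
node004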

What might be different in the installation procedure on a dual-network
cluster (GE and IB)?

Thanks,

Richard Walsh
Parallel Applications and Systems Manager, CUNY HPC Center
Gergana_S_Intel
Employee

Hi Richard,

The Intel Cluster Toolkit and its components don't really care what networks are available on your system during installation. Because of our architecture, the network fabric you use to run your MPI programs is tunable at runtime and not something you have to worry about at install time.

In fact, we have a few customers who start using the tools on a GigE cluster. After some time (and the availability of money), they buy a few IB switches and get those plugged in as well. They don't have to re-install the Cluster Tools; they simply change an environment setting before running.

After you've completed your installation, you can select between different networks as follows (this procedure is also described in the Getting Started PDF document):

When running over GigE, use the ssm (shared memory + sockets) device:

# assuming your cluster is set up to use ssh (-r ssh)
# and you have an mpd.hosts file available,
# which lists all hosts on your cluster, one hostname per line

$ mpirun -r ssh -f mpd.hosts -genv I_MPI_DEVICE ssm ./a.out

When running over IB, use the rdssm (RDMA + shared memory) device:

# assuming your cluster is set up to use ssh (-r ssh)
# and you have an mpd.hosts file available,
# which lists all hosts on your cluster, one hostname per line

$ mpirun -r ssh -f mpd.hosts -genv I_MPI_DEVICE rdssm ./a.out
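
As a side note, if you'd rather not pass -genv on every run, the same selection should also work by exporting I_MPI_DEVICE in your shell before launching, since Intel MPI generally propagates the launch environment to the ranks by default (a minimal sketch, assuming a bash-style shell and the same mpd.hosts file as above):

# select the device once for the whole session
$ export I_MPI_DEVICE=rdssm        # or ssm to force GigE

# mpirun picks the setting up from the environment
$ mpirun -r ssh -f mpd.hosts ./a.out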

In fact, the Intel MPI Library will pick the fastest available fabric on your cluster at runtime. In your case, it'll use the IB network by default. If you'd like to run over GigE instead, you can go ahead and select the ssm device as I described above.
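
If you ever want to confirm which fabric a job actually picked, you can turn up the library's debug output with the I_MPI_DEBUG environment variable; at level 2 or higher the startup messages report the device each rank selected (the exact wording of the output differs between versions):

# print device/fabric selection info at startup
$ mpirun -r ssh -f mpd.hosts -genv I_MPI_DEBUG 2 ./a.out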

I hope this helps. Let us know if you hit any problems during installation, or at runtime.

Regards,
~Gergana

hpcsoftwaremail_csi_

Gergana,

Spaceeba bolshoi ... !!

Richard
Gergana_S_Intel
Employee
Any time, Richard :)

hpcsoftwaremail_csi_

Gergana,

Who is the lady in your window, if I might ask?

Richard
Gergana_S_Intel
Employee

Hi Richard,

It's Lady Ada Lovelace, she of the awesome computing skills, also recognized as the "first programmer" ... such are my idols :)

Regards,
~Gergana

Dmitry_K_Intel2
Employee

Gergana,

Spaceeba bolshoi ... !!

Richard

Richard, I'm afraid Gergana doesn't understand Russian... do you, Gergana?

For those who don't understand, it means "thanks a lot".

Dmitry

TimP
Honored Contributor III
I had the impression Gergana was a native speaker of Bulgarian.
Gergana_S_Intel
Employee
Quoting - tim18
I had the impression Gergana was a native speaker of Bulgarian.

Yup, you're entirely correct, Tim. I was born in Bulgaria so that's my native tongue.

Thanks for the translation, Dmitry. I've actually seen a few Russian movies (my parents like them a lot) so "spasiba" was pretty easy :)

~Gergana

hpcsoftwaremail_csi_



All,

Ada Lovelace ... of course ... and sorry about assuming that
you were a Russian speaker.

Thanks to all ... the install went smoothly. I'm building OpenMPI
now, which is also humming along ...

Take care,

Richard Walsh