Intel® MPI Library

Intel Cluster Ready and Shared Memory Model

moh1367
Beginner
Hello everyone!

I am looking for an HPC solution for a medium-sized business and recently came across "Intel Cluster Ready" on the web.
I have read a lot about it, and I am left with one question:

Which programming model can be applied to this type of cluster: shared memory, message passing, or both?
I found nothing about shared memory in "Specification 1.2".
Is it possible, for example, to use OpenMP on the "/home" directory that is shared?

I would really appreciate your answers.
Best
Mohammad
3 Replies
TimP
Honored Contributor III
Intel Cluster Ready refers to a specification for an MPI distributed-memory system, shipping with Intel MPI and likely open-source alternatives. The compilers required under the specification also support shared-memory parallelism, and Intel MPI includes facilities for coordinating the combination of OpenMP shared memory with MPI distributed memory (the hybrid model). Note that a shared /home filesystem only shares files across nodes, not memory; OpenMP threads must run within the shared memory of a single node. As you noticed, the specification doesn't go into those details, although the facilities for coordinating threading under MPI, which go beyond the MPI standard, might reasonably be expected to appear in the specification.
If you're interested only in shared-memory parallelism, you don't need a system like the one this specification covers; it's supported by the individual compilers (OpenMP, Cilk Plus, TBB, plus threading libraries).
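
To make the hybrid model concrete, here is a minimal MPI + OpenMP sketch (my illustration, not from the specification): MPI ranks handle the distributed-memory side across nodes, while OpenMP threads share memory within each rank's node. The file name hybrid.c is just a placeholder.

```c
/* hybrid.c: minimal hybrid MPI + OpenMP sketch.
 * Each MPI rank communicates across nodes (distributed memory);
 * OpenMP threads share memory within the rank's node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED support: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Shared-memory parallelism within the node. */
    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

With MPI_THREAD_FUNNELED, the OpenMP threads do the computation but leave all MPI communication to the main thread, which is the most common way to combine the two models.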
moh1367
Beginner
Thanks a lot for your reply, but what about the RWTH cluster?
As I read on your site, it consists of two parts: an MPI part and an SMP shared-memory part.
Is the shared-memory part also coordinated through MPI's facilities?

Best
TimP
Honored Contributor III
Since the RWTH cluster includes 32-core nodes, as well as a larger number of 12-core nodes, you certainly have a variety of options: run in pure shared-memory mode on a single node, or run hybrid MPI/OpenMP with one or more shared-memory ranks per node across a group of nodes. You wouldn't normally run under MPI when using a single rank; you could run in that mode under a cluster management system by requesting a single node for your job.
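
For illustration only (these commands are not from the original posts, and flag spellings vary with the Intel MPI and compiler versions), a hybrid job like the sketch above might be compiled and launched along these lines:

```sh
# Compile with Intel MPI's compiler wrapper; -qopenmp enables OpenMP
# (older Intel compilers spell the flag -openmp).
mpiicc -qopenmp hybrid.c -o hybrid

# Hybrid run: 4 nodes, one rank per node (-ppn 1), 12 threads per rank;
# I_MPI_PIN_DOMAIN=omp gives each rank a core domain matching its threads.
mpirun -n 4 -ppn 1 -genv OMP_NUM_THREADS 12 -genv I_MPI_PIN_DOMAIN omp ./hybrid

# Pure shared-memory run: a single rank on one node, threads fill it.
mpirun -n 1 -genv OMP_NUM_THREADS 12 ./hybrid
```

Under a batch scheduler you would request the matching node count from the cluster management system rather than launching directly.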