Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

MPI shared memory

rabbitsoft
Beginner
239 Views
Hello,

I have a program which needs about 50 GB of RAM. Is Intel MPI able to automatically manage allocation of memory for this program over the network? In other words, if I had 25 machines with 2 GB of RAM each, would MPI be able to use the RAM from these machines? If yes, is it enough to just add -env I_MPI_DEVICE ssm? What about performance if I use 100 Mb Ethernet? Perhaps just swapping to a hard drive would work quicker?

Thank you for your answer, and best wishes,
Milosz
2 Replies
TimP
Black Belt
I'm not certain what you're asking, but the answer may be no. I_MPI_DEVICE=ssm specifies that message passing is done by local memory copy when possible; at most it manages the allocation of message buffers. If your program requires swapping to disk, it will certainly be faster for each rank to use its local disk rather than shipping the data across Ethernet. To run a cluster job with such small memory and such a slow interconnect, the job would have to be "embarrassingly parallel," with very little communication required between nodes.
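For reference, a minimal sketch of how the ssm device is selected on the command line, assuming the classic Intel MPI mpiexec syntax of that era (the process count and application name here are illustrative, not from the original post):

```shell
# Run 4 ranks with the combined shared-memory + sockets device:
# ranks on the same node exchange messages through local memory copies,
# ranks on different nodes fall back to TCP sockets.
mpiexec -n 4 -env I_MPI_DEVICE ssm ./my_app
```

This only controls how MPI messages travel between already-running ranks; it does not pool the RAM of the nodes into one address space.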
Dmitry_K_Intel2
Employee
Hi Milosz,

>If I had 25 machines with 2 GB of RAM, is MPI able to use RAM from these machines? If yes, is it enough to just add -env I_MPI_DEVICE ssm?

You can use the memory of each node, but this doesn't depend on the fabric you use. A process must allocate memory (a buffer) on a node and grant access to that memory to the other processes, and you need to make extra efforts (write code) to use this memory.
It seems to me that swapping will work faster than your slow network, but of course it depends on the application.
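To make the "allocate a buffer and grant access" idea concrete, here is a minimal sketch using MPI-2 one-sided communication (remote memory access), which is one standard way a rank can read memory that lives on another node. This is an illustrative example, not code from the original post; it assumes an MPI installation (compile with mpicc, run with at least 2 ranks), and the buffer sizes are arbitrary:

```c
/* Sketch: rank 0 exposes a buffer as an RMA "window";
   rank 1 reads part of it with MPI_Get over the fabric. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    const int nbuf = 1024;
    int *buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank contributes local memory to the window;
       rank 0 fills its part with data. */
    MPI_Alloc_mem(nbuf * sizeof(int), MPI_INFO_NULL, &buf);
    if (rank == 0)
        for (int i = 0; i < nbuf; i++)
            buf[i] = i;

    MPI_Win_create(buf, nbuf * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open an access epoch (collective) */
    if (rank == 1) {
        int remote[16];
        /* Read the first 16 ints from rank 0's window. */
        MPI_Get(remote, 16, MPI_INT, 0, 0, 16, MPI_INT, win);
        MPI_Win_fence(0, win);          /* complete the transfer */
        printf("rank 1 read remote[15] = %d\n", remote[15]);
    } else {
        MPI_Win_fence(0, win);          /* matching collective fence */
    }

    MPI_Win_free(&win);
    MPI_Free_mem(buf);
    MPI_Finalize();
    return 0;
}
```

Note what this does and does not buy you: a rank can explicitly pull pieces of another node's memory, but the cluster does not become one large 50 GB address space. The application itself must partition its data across ranks and move each piece explicitly, and over 100 Mb Ethernet every such transfer will be slow.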

Regards!
Dmitry