Intel® MPI Library

Assign specific MPI tasks to specific IB interfaces

David_Race
Beginner
I have a system with 16 cores and 2 IB interfaces. I need to use both rails, so I have been using:
export I_MPI_OFA_NUM_ADAPTERS=2
export I_MPI_OFA_NUM_PORTS=1
export I_MPI_OFA_RAIL_SCHEDULER=ROUND_ROBIN
export I_MPI_FABRICS="shm:ofa"
But this has the undesirable effect of sending some of the data from the second eight cores to the IB interface attached to the first eight cores, and vice versa.
I tried
export I_MPI_OFA_NUM_ADAPTERS=2
export I_MPI_OFA_NUM_PORTS=1
export I_MPI_OFA_RAIL_SCHEDULER=PROCESS_BIND
export I_MPI_FABRICS="shm:ofa"
But this assigns process 0 to interface 1, process 1 to interface 2, process 2 to interface 1, and so on. That sends a lot of data from one set of cores to the opposing interface. Furthermore, the code is written on the assumption that process 0 and process 1 are on the same CPU, so it tries to do a cyclic data movement.
I need to assign processes 0-7 to interface 1 and processes 8-15 to interface 2.
Is this possible?
Thanks
David
James_T_Intel
Moderator
Hi David,

Try using something like the following:

[bash]
export I_MPI_FABRICS=shm:ofa

mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME adap1 : -n 8 -env I_MPI_OFA_ADAPTER_NAME adap2
[/bash]
This should set the first 8 processes to use the first adapter, and the next 8 to use the second adapter.
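For illustration, a complete launch line could look like the sketch below; adap1/adap2 stand in for the real HCA names on your system and ./my_app is a placeholder for the actual executable, neither of which appears in the original command:

[bash]
# Sketch only: replace adap1/adap2 with the real adapter names
# (for example, as reported by ibv_devinfo) and ./my_app with your binary.
export I_MPI_FABRICS=shm:ofa

mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME adap1 ./my_app : \
       -n 8 -env I_MPI_OFA_ADAPTER_NAME adap2 ./my_app
[/bash]

Because the blocks in this MPMD-style command line are filled in order, ranks 0-7 come from the first block and ranks 8-15 from the second.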

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Dmitry_K_Intel2
Employee
Hi David,

I need to make some clarifications.
If you use I_MPI_FABRICS=shm:ofa, it means that 'shm' will be used for INTRA-node communication and 'ofa' will be used for INTER-node communication.
Since you want OFA to be used for intra-node communication as well, you need to set I_MPI_FABRICS to 'ofa' or 'ofa:ofa'.
Keep all other parameters as James mentioned.
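Combining the two suggestions, a hedged sketch of the full set of settings (the adapter names and ./my_app are placeholders, as above) might be:

[bash]
# Sketch only: 'ofa:ofa' keeps both intra- and inter-node traffic on OFA,
# while each block of 8 ranks is bound to its own adapter.
export I_MPI_FABRICS=ofa:ofa

mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME adap1 ./my_app : \
       -n 8 -env I_MPI_OFA_ADAPTER_NAME adap2 ./my_app
[/bash]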
Please give it a try and compare the results with the default settings. It would be nice if you could share the results of the different runs with us.

Regards!
Dmitry