I have a system with 16 cores and 2 IB interfaces. I need to use both rails, so I have been using:
export I_MPI_OFA_NUM_ADAPTERS=2
export I_MPI_OFA_NUM_PORTS=1
export I_MPI_OFA_RAIL_SCHEDULER=ROUND_ROBIN
export I_MPI_FABRICS="shm:ofa"But this has the undesirable effect of sending some of the data from the second 8 cores to the IB on the first eight cores and vice-versa.
I tried
export I_MPI_OFA_NUM_ADAPTERS=2
export I_MPI_OFA_NUM_PORTS=1
export I_MPI_OFA_RAIL_SCHEDULER=PROCESS_BIND
export I_MPI_FABRICS="shm:ofa"But this assigns process 0 to interface 1, process 1 to interface 2, process 3 to interface 1, etc. This sends alot of data from one set of cores to the opposing interface. Furthermore, the code is written so that it assumes that process 0 and process 1 are in the same cpu so it tries to do a cyclic data movement.
I need to assign processes 0-7 to interface 1 and processes 8-15 to interface 2.
Is this possible?
Thanks
David
Hi David,
Try using something like the following:
[bash]export I_MPI_FABRICS=shm:ofa
mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME adap1 : -n 8 -env I_MPI_OFA_ADAPTER_NAME adap2[/bash]
This should set the first 8 processes to use the first adapter, and the next 8 to use the second adapter.
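Note that adap1 and adap2 are just placeholders; the actual HCA device names on your system can be listed with the OFED utilities (assuming they are installed), and your application binary goes after each argument group, for example:
[bash]# List the InfiniBand device names (e.g. mlx4_0, mlx4_1) known to the OFED stack
ibv_devices

# Launch with the real device names substituted; ./your_app is a placeholder binary
export I_MPI_FABRICS=shm:ofa
mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME mlx4_0 ./your_app : \
       -n 8 -env I_MPI_OFA_ADAPTER_NAME mlx4_1 ./your_app[/bash]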
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi David,
I need to make some clarifications.
If you use I_MPI_FABRICS=shm:ofa, 'shm' will be used for INTRA-node communication and 'ofa' for INTER-node communication.
Since you want to use OFA for intra-node communication, you need to set I_MPI_FABRICS to 'ofa' or 'ofa:ofa'.
And set all the other parameters as James mentioned.
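Putting it together, something like this should work (just a sketch; mlx4_0, mlx4_1 and ./your_app are placeholders for your actual adapter names and binary):
[bash]# Use OFA for both intra- and inter-node communication
export I_MPI_FABRICS=ofa
# Ranks 0-7 use the first adapter, ranks 8-15 use the second
mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME mlx4_0 ./your_app : \
       -n 8 -env I_MPI_OFA_ADAPTER_NAME mlx4_1 ./your_app[/bash]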
Please give it a try and compare the results with the default settings. It would be nice if you could share the results of the different runs with us.
Regards!
Dmitry
