Hello all!
I have an SGI cluster with two separate InfiniBand networks. I want to use both networks for MPI communications; how do I configure Intel MPI to do this?
Does this have anything to do with the "Set I_MPI_DEVICE= :" command?
Thank you for your attention.
Best regards.
Hi!
Yes, it is possible with multirail.
Release Notes:
"Native InfiniBand* interface (OFED* verbs) support with multirail capability for ultimate InfiniBand* performance
- Set I_MPI_FABRICS=ofa for OFED* verbs only
- Set I_MPI_FABRICS=shm:ofa for shared memory and OFED* verbs
- Set I_MPI_OFA_NUM_ADAPTERS, etc., for multirail transfers"
Please take a look at I_MPI_OFA_NUM_ADAPTERS and I_MPI_OFA_NUM_PORTS as well.
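As a minimal sketch (the adapter/port counts and the mpirun arguments below are illustrative assumptions for a node with two HCAs, not values from the release notes):
- export I_MPI_FABRICS=shm:ofa # shared memory within a node, OFED verbs between nodes
- export I_MPI_OFA_NUM_ADAPTERS=2 # spread traffic over both InfiniBand adapters (multirail)
- export I_MPI_OFA_NUM_PORTS=1 # one active port per adapter
- mpirun -n 64 -f hostfile ./your_app # hypothetical job launch; host file and rank count are placeholders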
--
Dmitry Sivkov
Hi Dmitry, thank you for your quick response!
Your advice will be very helpful; I'll take a look at those options in the manual to be sure of what I'm doing :)
Thank you again!