I have an SGI cluster with two separate InfiniBand networks. I want to use both networks for MPI communications; how do I configure Intel MPI to do this? Does it have anything to do with the "Set I_MPI_DEVICE=:" command?
Release Notes: "Native InfiniBand* interface (OFED* verbs) support with multirail capability for ultimate InfiniBand* performance
- Set I_MPI_FABRICS=ofa for OFED* verbs only
- Set I_MPI_FABRICS=shm:ofa for shared memory and OFED* verbs
- Set I_MPI_OFA_NUM_ADAPTERS, etc., for multirail transfers"
Please take a look at I_MPI_OFA_NUM_ADAPTERS and I_MPI_OFA_NUM_PORTS as well.
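Putting those variables together, a minimal sketch of the environment setup might look like the following. This assumes two HCAs with one active port each; the values 2 and 1 are assumptions for this cluster, so check your actual adapter and port layout (e.g. with ibstat) before using them.

```shell
# Sketch only: enable Intel MPI multirail over both InfiniBand networks.
# shm:ofa = shared memory within a node, OFED verbs between nodes.
export I_MPI_FABRICS=shm:ofa

# Number of HCAs to stripe traffic across (assumed: 2 adapters).
export I_MPI_OFA_NUM_ADAPTERS=2

# Ports used per adapter (assumed: 1 active port each).
export I_MPI_OFA_NUM_PORTS=1

# Then launch as usual, e.g.:
# mpirun -n 64 ./my_app        # my_app is a placeholder name
echo "$I_MPI_FABRICS $I_MPI_OFA_NUM_ADAPTERS $I_MPI_OFA_NUM_PORTS"
```

With these set, Intel MPI should stripe large messages across both rails rather than using a single network.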