Using a small cluster of Skylake-based systems, as below:
- Intel MPI 2019 beta update 1
- Red Hat 7.6 beta x86_64 (3.10.0-938.el7.x86_64)
- Systems include Intel Omni-Path HFIs in addition to an onboard gigabit Ethernet NIC
- Systems are using the RH7.6 inbox Omni-Path support
Attempting to run the included IMB-MPI1 binary over the OPA HFIs, specifying psm2 as the transport, appears to work correctly. However, trying to run it across the onboard Ethernet network, specifying tcp as the transport, generates the following message from MPI startup:
MPI startup(): tcp fabric is unknown or has been removed from the product, please use ofi or shm:ofi instead
The job does execute, but over the OPA fabric instead of the Ethernet network. If the OPA HFI is disconnected, the job fails.
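The exact launch commands are not shown above; a sketch of what was likely attempted might look like the following (the hostnames, rank counts, and placement are hypothetical, not from the original report):

```shell
# Works: OFI with the psm2 provider, which runs over the OPA HFIs
# (hostnames node01/node02 and rank counts are hypothetical)
mpirun -n 8 -ppn 4 -hosts node01,node02 \
    -genv FI_PROVIDER psm2 ./IMB-MPI1

# Triggers the "tcp fabric is unknown" warning: the tcp fabric
# was removed from IMPI 2019, so this setting is ignored and the
# job falls back to the OPA fabric
mpirun -n 8 -ppn 4 -hosts node01,node02 \
    -genv I_MPI_FABRICS tcp ./IMB-MPI1
```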
Hi Matthew, starting with IMPI 2019, support for the fabrics previously specified via I_MPI_FABRICS has been discontinued.
IMPI 2019 supports only the OFI (intra-/internode) and SHM (intranode) fabrics. OFI is a framework that provides replacements for all of the previous fabrics; those replacements are called OFI providers:
- TCP fabric - sockets OFI provider
- OFA and DAPL fabrics - verbs OFI provider
- TMI - psm2 OFI provider
The provider can be specified by setting FI_PROVIDER='<OFI provider name>'.
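Putting this together, a minimal sketch of running IMB-MPI1 over the onboard Ethernet network under IMPI 2019 would select the sockets provider (hostnames and rank counts below are hypothetical):

```shell
# Use the OFI fabric (the only fabric in IMPI 2019) and select the
# sockets OFI provider, which replaces the removed tcp fabric
export I_MPI_FABRICS=shm:ofi
export FI_PROVIDER=sockets

# Hypothetical launch; adjust hosts and rank counts to the cluster
mpirun -n 8 -ppn 4 -hosts node01,node02 ./IMB-MPI1
```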
Thank you for your followup.
Are the changes you mention documented in the 2019 release of IMPI? I was looking through the release notes but it wasn't clear to me.
Disregard the above, I found the reference in the Developer's Guide. Thank you again.