For older Intel MPI versions, we were told to set this:
export I_MPI_HYDRA_PMI_CONNECT=alltoall. Is this still needed with Intel MPI 5.2.1? And is 5.2.1 the latest version?
We also set I_MPI_FABRICS="shm:tmi", or run mpirun with -PSM2. Is this the correct setting for the Omni-Path fabric?
Do you have any other environment variables or options we should explore to get optimal performance from Intel MPI on Omni-Path (Broadwell hosts, RHEL)? We're willing to experiment a bit for best performance.
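For reference, a rough sketch of how we launch today; the rank count, host file, and application name below are placeholders, not our actual setup:

```
# Current settings (Intel MPI on Omni-Path, Broadwell hosts, RHEL)
export I_MPI_HYDRA_PMI_CONNECT=alltoall   # advice we received for older Intel MPI versions
export I_MPI_FABRICS=shm:tmi              # alternatively we launch with: mpirun -PSM2 ...

# Placeholder launch line: 64 ranks, host file "hosts", binary "./our_app"
mpirun -n 64 -f hosts ./our_app
```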
Thanks!
Ron
Compare with using I_MPI_FABRICS=ofi or I_MPI_FABRICS=tmi (not shm:ofi or shm:tmi). When you run with I_MPI_FABRICS=ofi, ensure that you are using the PSM2 provider (set via I_MPI_OFI_PROVIDER) if you have this provider installed (use I_MPI_OFI_PROVIDER_DUMP=yes to check).
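A minimal sketch of the two comparison runs; the rank count, host file, and binary name are placeholders, and I_MPI_DEBUG is an extra, optional suggestion for confirming which fabric and provider were actually selected:

```
# Run A: TMI fabric only (no shm: prefix)
export I_MPI_FABRICS=tmi
mpirun -n 64 -f hosts ./your_app          # rank count / host file / binary are placeholders

# Run B: OFI fabric with the PSM2 libfabric provider
export I_MPI_FABRICS=ofi
export I_MPI_OFI_PROVIDER=psm2            # request the PSM2 provider explicitly
export I_MPI_OFI_PROVIDER_DUMP=yes        # dump the available providers to verify psm2 is present
mpirun -n 64 -f hosts ./your_app

# Optional: I_MPI_DEBUG=5 prints which fabric/provider was chosen at startup
```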
Since you're asking at a very general level, I highly recommend looking through the Developer Reference for more environment variables you can use for performance experiments.
Also, regarding the version: there is no 5.2.1 version. There is 5.1, which went up to Update 3; the latest version is 2017 Update 1.
