Intel® oneAPI HPC Toolkit

iMPI 5.2.1 Omni-Path tuning

Ronald_G_2
Beginner


For older Intel MPI versions, we were told to set this:

export I_MPI_HYDRA_PMI_CONNECT=alltoall

Is this needed with Intel MPI 5.2.1? And is 5.2.1 the latest version?

We also set I_MPI_FABRICS="shm:tmi" or run mpirun with -PSM2. Is this the correct setting for the Omni-Path fabric?

Do you have any other environment variables or options we should explore to get optimal performance from Intel MPI on Omni-Path (Broadwell hosts, RHEL)? We are willing to experiment a bit for best performance.
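
For context, our current launch looks roughly like this (the rank count, host list, and application name are just placeholders):

# settings carried over from older Intel MPI versions
export I_MPI_HYDRA_PMI_CONNECT=alltoall
export I_MPI_FABRICS=shm:tmi
# placeholder rank count, hosts, and binary
mpirun -n 64 -hosts node01,node02 ./my_app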

Thanks!

Ron

2 Replies
James_T_Intel
Moderator

Compare with using I_MPI_FABRICS=ofi or I_MPI_FABRICS=tmi (not shm:ofi or shm:tmi).  When you run with I_MPI_FABRICS=ofi, ensure that you are using the PSM2 provider (I_MPI_OFI_PROVIDER) if you have this provider installed (use I_MPI_OFI_PROVIDER_DUMP=yes to check).
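
For example (the rank count and application name here are placeholders):

# TMI directly:
export I_MPI_FABRICS=tmi
mpirun -n 64 ./my_app

# or OFI, pinned to the PSM2 provider:
export I_MPI_FABRICS=ofi
export I_MPI_OFI_PROVIDER=psm2
export I_MPI_OFI_PROVIDER_DUMP=yes   # prints the available OFI providers at startup
mpirun -n 64 ./my_app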

Since you're asking at a very general level, I highly recommend looking through the Developer Reference for more environment variables you can use for performance experiments.

James_T_Intel
Moderator

Also, regarding the version, there is no 5.2.1 version. There is 5.1, which went up to Update 3; the latest version is 2017 Update 1.
