Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Changes between Intel MPI 2018 and 2019: is this documented?

Bernd_D_
Beginner

We want to move from Intel MPI 2018 Update 3 to 2019 Update 3, but ran into a number of problems, e.g. with fabric selection, Hydra options, and environment variables not working (or no longer supported?). This makes jobs in our cluster fail or hang. The announcement of the beta program for the 2019 release mentioned changes in the Hydra mechanism, among other things, but are those changes documented anywhere, e.g. new options, deprecated options, etc.?

One example: we use LSF as our scheduler, and we therefore set certain I_MPI_HYDRA_* options to reflect this. However, a multi-node MPI job still uses 'ssh' as the launcher, where we expect it to use 'blaunch'. This worked nicely in 2018 and earlier versions, but not in 2019.
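For reference, this is roughly the kind of setup we use for LSF (variable names from the Intel MPI reference; the exact values in our site job scripts may differ, so treat this only as an illustration):

    # Ask Hydra to detect LSF and launch remote processes via blaunch
    export I_MPI_HYDRA_BOOTSTRAP=lsf
    export I_MPI_HYDRA_BOOTSTRAP_EXEC=blaunch
    mpirun -n 64 ./our_app    # ./our_app is a placeholder for the real binary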

Anybody else encountering problems like this?  Or do I need to open a support case?

Thanks!

Bernd

6 Replies
Anatoliy_R_Intel
Employee

Hello,

The issue with blaunch is a known issue and will be fixed in the next Intel MPI release.

You can find the known removals, deprecated variables/options, and not-yet-implemented features in the Release Notes (sections "Known Issues and Limitations" and "Removals"):

https://software.intel.com/en-us/articles/intel-mpi-library-release-notes-linux#inpage-nav-3-1

--

Best regards, Anatoliy

Bernd_D_
Beginner

Dear Anatoliy,

Thanks for the timely response.

Good to know that the blaunch problem will be fixed in an upcoming release.

Regarding the removed variables: none of the variables that cause problems or issue warnings at runtime are listed in the document you mention. I had already checked that document several times over the last few days.

Regards,

Bernd

Anatoliy_R_Intel
Employee

Could you list some of the variables that cause problems?

--

Best regards, Anatoliy

 

Bernd_D_
Beginner

We get warnings for

I_MPI_SHM_LMT

I_MPI_FABRICS_LIST

These are two variables that we had to set in earlier releases to make things work; a sketch of how we set them is below.
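(Illustrative values only, not necessarily our exact settings:)

    # 2018-era settings -- example values, shown only to illustrate the usage
    export I_MPI_FABRICS_LIST=tcp,ofi    # ordered fallback list of fabrics
    export I_MPI_SHM_LMT=shm             # intra-node large-message transfer mode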

Regards, Bernd

Anatoliy_R_Intel
Employee

In Intel MPI 2019 there are only the shm, ofi, and shm:ofi fabrics, so I_MPI_FABRICS_LIST is not needed anymore. You can select the fabric with the I_MPI_FABRICS variable.
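For example, a minimal setting (FI_PROVIDER is shown only as an illustration of choosing the OFI provider):

    # Shared memory inside a node, OFI (libfabric) between nodes
    export I_MPI_FABRICS=shm:ofi
    # The libfabric provider can be chosen separately, e.g. TCP/sockets:
    export FI_PROVIDER=sockets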

--

Best regards, Anatoliy

Rashawn_K_Intel1
Employee

Hello,

I have been struggling for a couple of days to figure out the very basic setup of how to correctly initialize Intel MPI 2019 for use over sockets/TCP. I am able to source mpivars.sh without any parameters and then export FI_PROVIDER=sockets, which allows me to compile and run the simple hello-world code found all over the place on a single node with n ranks.

However, when I set up my environment in the same way and try to compile PAPI from source, the configure step complains that the C compiler (GCC in this case) is not able to create executables. The config.log reveals that it struggles to find libfabric.so.1. Even if I add the libfabric directory to my LD_LIBRARY_PATH and link against the libfabric library, I am not able to build PAPI from source.

Additionally, I cannot find good documentation on how to use MPI in the most simple and basic way: single node and several processes. There is a graphic in several presentations, and even on software.intel.com/intel-mpi-library, which indicates I will be able to choose TCP/IP, among other fabric options, at runtime. I would appreciate your comments and assistance in letting me know the correct way to do this.
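Roughly what I am doing at the moment (the install paths are placeholders for our installation, not the actual ones):

    # Placeholder install prefix -- substitute the real Intel MPI 2019 location
    source /opt/intel/impi/2019/intel64/bin/mpivars.sh
    export FI_PROVIDER=sockets     # TCP/sockets libfabric provider
    # Make the bundled libfabric visible to other builds (e.g. PAPI's configure)
    export LD_LIBRARY_PATH=/opt/intel/impi/2019/intel64/libfabric/lib:$LD_LIBRARY_PATH
    mpicc hello.c -o hello         # hello.c: the usual MPI hello-world
    mpirun -n 4 ./hello            # single node, 4 ranks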

Regards,

-Rashawn
