We want to move from Intel MPI 2018 Update 3 to 2019 Update 3, but ran into a number of problems, e.g. with fabric selection, Hydra options, and environment variables not working (or no longer supported?). This makes jobs in our cluster fail or hang. The announcement for the 2019 beta program mentioned changes in the Hydra mechanism, among other things, but are those changes documented anywhere? New options, deprecated options, etc.?
One example: we use LSF as our scheduler, and we therefore set certain I_MPI_HYDRA_* options to reflect this. However, a multi-node MPI job still uses 'ssh' as the launcher, where we expect it to use 'blaunch'. This worked nicely in 2018 and earlier versions, but not in 2019.
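To illustrate, a minimal sketch of the kind of settings in play (these are the documented I_MPI_HYDRA_* controls; the values and application name shown are illustrative, not our exact site configuration):

```bash
# Inside an LSF job (bsub): ask Hydra to launch remote processes with
# LSF's blaunch instead of ssh, and to take the host list from LSF.
export I_MPI_HYDRA_BOOTSTRAP=lsf   # remote launcher: blaunch
export I_MPI_HYDRA_RMK=lsf         # resource manager kernel: LSF
mpirun -n 64 ./our_mpi_app         # ./our_mpi_app is a placeholder
```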
Anybody else encountering problems like this? Or do I need to open a support case?
Thanks!
Bernd
Hello,
The issue with blaunch will be fixed in the next Intel MPI release; it is a known issue.
Known removals, deprecated variables/options, and not-yet-implemented features are listed in the Release Notes (sections "Known Issues and Limitations" and "Removals"):
https://software.intel.com/en-us/articles/intel-mpi-library-release-notes-linux#inpage-nav-3-1
--
Best regards, Anatoliy
Dear Anatoliy,
thanks for the timely response.
Good to know that the blaunch problem will be fixed in an upcoming release.
Regarding the removed variables: none of the variables that cause problems or issue warnings at runtime are listed in the document you mention. I had already checked that document several times over the last few days.
Regards,
Bernd
Could you tell us which variables cause problems?
--
Best regards, Anatoliy
We get warnings for
- I_MPI_SHM_LMT
- I_MPI_FABRICS_LIST
two variables that we had to set in earlier releases to make things work.
Regards, Bernd
Now there are only the shm, ofi, and shm:ofi fabrics, so I_MPI_FABRICS_LIST is no longer needed. You can select the fabric with the I_MPI_FABRICS variable.
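For example, a minimal sketch (the FI_PROVIDER value is illustrative and depends on the libfabric providers available on your system, as is the binary name):

```bash
# Intel MPI 2019: pick the fabric with I_MPI_FABRICS (shm, ofi, or shm:ofi).
export I_MPI_FABRICS=shm:ofi     # shared memory within a node, OFI between nodes
# The actual network used on the OFI side is chosen via the libfabric provider:
export FI_PROVIDER=sockets       # e.g. sockets (TCP/IP), verbs, psm2, ...
mpirun -n 2 ./hello_mpi          # ./hello_mpi is a placeholder binary
```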
--
Best regards, Anatoliy
Hello,
I have been struggling for a couple of days to figure out the very basic question of how to correctly set up Intel MPI 2019 for use over sockets/TCP. I am able to source mpivars.sh without any parameters and then export FI_PROVIDER=sockets, which allows me to compile and run the simple hello-world code found all over the place on a single node with n ranks.

However, when I set up my environment the same way and try to compile PAPI from source, the configure step complains that the C compiler (GCC in this case) is not able to create executables. The config.log reveals that it cannot find libfabric.so.1. Even if I add the libfabric directory to my LD_LIBRARY_PATH and link against the libfabric library, I am not able to build PAPI from source.

Additionally, I cannot find good documentation for how to use MPI in the most simple and basic way: single node, several processes. There is a graphic in several presentations, and even on software.intel.com/intel-mpi-library, indicating that I will be able to choose TCP/IP, among other fabric options, at runtime. I would appreciate your comments and assistance in letting me know the correct way to do this.
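For reference, this is roughly what I am doing (a sketch: the install path is a placeholder for my actual location, hello_mpi.c is the usual MPI hello-world example, and I am assuming the bundled libfabric sits under intel64/libfabric/lib in the install tree):

```bash
# Set up Intel MPI 2019 and run over TCP/IP on a single node.
source /opt/intel/impi/2019/intel64/bin/mpivars.sh   # path is a placeholder; no arguments
export FI_PROVIDER=sockets                           # use the sockets (TCP/IP) provider

# Also tried adding the bundled libfabric to LD_LIBRARY_PATH so that
# libfabric.so.1 can be found (location assumed; I_MPI_ROOT is set by mpivars.sh):
export LD_LIBRARY_PATH=$I_MPI_ROOT/intel64/libfabric/lib:$LD_LIBRARY_PATH

mpicc hello_mpi.c -o hello_mpi
mpirun -n 4 ./hello_mpi                              # single node, 4 ranks
```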
Regards,
-Rashawn
