Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

mpivars.sh to build path automatically

Tiago_M_1
Beginner

It looks like on Linux the Intel MPI runtime hardcodes the installation path in mpivars.sh, e.g.:
I_MPI_ROOT=/opt/intel/impi/4.1.3.049; export I_MPI_ROOT

On Windows, on the other hand, the path is generated dynamically:
SET I_MPI_ROOT=%~dp0..\..

Is there any reason why the path cannot also be generated automatically on Linux?
http://stackoverflow.com/questions/242538/unix-shell-script-find-out-which-directory-the-script-file-resides
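
Something along these lines would be enough for us (a hypothetical sketch, not the shipped script; it assumes mpivars.sh is sourced from bash, since BASH_SOURCE is bash-specific, and that the root sits two levels above the script as in the Windows snippet):

# Hypothetical self-locating variant; BASH_SOURCE does not exist in plain /bin/sh.
I_MPI_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)
export I_MPI_ROOT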

We have two situations where this would help:
1 - redistributing the MPI runtime with our application, and
2 - having machines with that folder mounted at different mount points.

Artem_R_Intel1
Employee

Hi,
A generic solution covering different shell interpreters and different usage scenarios (different current directories, invocation by absolute or relative path, different ways of sourcing, and so on) would be too complex and potentially unsafe.
I'd say mpivars.sh is designed according to the principle "keep it short and simple".
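
For instance, the obvious ways for a script to find its own location behave differently depending on the shell and on how the script is invoked. A rough illustration (hypothetical, not the actual mpivars.sh code):

# Sourced from bash:  ${BASH_SOURCE[0]} holds the script path; $0 does not.
# Sourced from dash:  BASH_SOURCE does not exist; $0 names the caller instead.
# Executed directly:  $0 holds the (possibly relative) script path in both.
script_path=${BASH_SOURCE:-$0}   # works only where BASH_SOURCE is defined
echo "resolved to: $(cd "$(dirname "$script_path")" && pwd)"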

Kittur_G_Intel
Employee

Hi Tiago,
Good observation, and I'll get back to you once I find out more about the reason for this implementation, thanks.
_Kittur

Kittur_G_Intel
Employee

Hi Tiago,
Artem's response seems very valid to me. Nevertheless, let me file an issue with our developers as a feature request, and they can investigate whether anything can be done further down the road. Thanks for bringing this to our attention; I'll let you know if I have any further updates on this.
_Kittur

Tiago_M_1
Beginner

Hi Kittur,

Agreed, his response does make sense. I guess that with the number of platforms you support it gets complicated to handle every shell variation.
Maybe it would be possible to have variants of that script that target certain mainstream shells (mpivars.bash for bash, for example), which would solve the problem.

Anyway, thanks for having a look at it.

Thanks,
Tiago

Gergana_S_Intel
Employee

Hi Tiago,

Just to clarify, we already have 2 separate scripts to cover the major shells: BASH and CSH. The mpivars.sh script is meant for BASH or compatible environments, and mpivars.csh does the same for CSH. Both of those are available in the bin64/ directory.
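
For reference, you'd source the one that matches your shell, e.g. using the path from your first post:

. /opt/intel/impi/4.1.3.049/bin64/mpivars.sh      # bash-compatible shells
source /opt/intel/impi/4.1.3.049/bin64/mpivars.csh   # csh/tcsh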

But, as Kittur pointed out, he'll submit a feature request to the team so it's entered into our internal system.

All the best,
~Gergana

Tiago_M_1
Beginner

Hi Gergana,

Yeah, I saw that. mpivars.sh starts with "#! /bin/sh", which does not necessarily point to bash: on Ubuntu /bin/sh points to dash, and on older Red Hat systems it is not unusual to come across /bin/csh.

Why not use "#! /usr/bin/env bash" and explicitly point to bash?

Sorry for being a PITA, but hey, that's how we learn things =)

Best,
Tiago

Gergana_S_Intel
Employee

Thanks, Tiago. You've hit on the Ubuntu issue, which we have seen before. Part of it is usage of the Intel MPI Library: we don't see a lot of it on Ubuntu systems. As for Red Hat, we officially support only the last 2 releases, RHEL 6 and 7, although we certainly have customers using older versions.

Regardless, your points are well taken and well argued :) Kittur, I trust you'll get this added to the feature request you're submitting.

~Gergana

Tiago_M_1
Beginner

Just FYI, I fixed my own bash script and have now hit the same issue with mpdboot.py; it looks like it also has a hardcoded path.

Gergana_S_Intel
Employee

Yeah, we use a hardcoded path in a couple of spots.

Don't use mpdboot, since the MPDs are being deprecated. I would recommend using mpiexec.hydra or mpirun directly. What version of Intel MPI do you have, by the way? The latest is Intel MPI 5.0.3, so it might be good to upgrade.
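
For example (the process count and application name are placeholders):

mpiexec.hydra -n 4 ./your_app
# or:
mpirun -n 4 ./your_app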

~Gergana

Tiago_M_1
Beginner

It's one of those situations where I'm just wrapping someone else's code. It seems they are still using 4.1.2.040. I will pass the info along. Thanks!
