Intel® MPI Library

Why does my Octopus 9.1 build show such drastically different performance with different MPI implementations?

efnacy
New Contributor I

I am compiling a scientific program package called Octopus 9.1 on a cluster, specifying the BLAS library with

-L${MKL_DIR} -Wl,-Bstatic -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -Wl,-Bdynamic -lpthread -lm -ldl

BLACS with

-L${MKL_DIR} -Wl,-Bstatic -lmkl_scalapack_lp64

and ScaLAPACK with

-L${MKL_DIR} -Wl,-Bstatic -lmkl_scalapack_lp64

All of those options and flags are what the Intel Link Line Advisor produces for my computer architecture. The compilers are Open MPI's mpif90 and mpicc built around the Intel 18.0.0 compilers. The program works fine and runs fast; there is nothing to worry about except a few segfaults during the test run, which I suspect can be remedied with ulimit -s unlimited.

But I would like to know why -lmkl_intel_lp64, -lmkl_sequential, and -lmkl_core, as well as the BLACS and ScaLAPACK libraries, have to be statically linked. For instance, when the -Wl,-Bstatic and -Wl,-Bdynamic flags are removed, I get a segfault at runtime for every calculation I launch. Octopus's manual says nothing about which Intel libraries should be linked statically or dynamically; in fact, those Intel-advised linker options are architecture-dependent.

Moreover, if I switch to MPICH compiler wrappers around the same Intel compilers, the program runs dramatically slower (in one calculation, 50 seconds with Open MPI versus 1 hour with MPICH), and the -Wl,-Bstatic and -Wl,-Bdynamic options have to be removed, otherwise I get a segfault.

This is really bugging me: how can a mere difference in MPI implementation lead to such a huge difference in performance and linking behavior? Any thoughts on this?
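For reference, here is roughly how I pass those flags to Octopus's configure script. This is only a sketch of my setup: the --with-blas / --with-blacs / --with-scalapack option names are what I take from the Octopus build documentation (check ./configure --help), and the MKL_DIR path is a placeholder for the MKL library directory on my cluster.

export MKL_DIR=/path/to/mkl/lib/intel64   # placeholder: MKL library directory on the cluster
export CC=mpicc FC=mpif90                 # Open MPI wrappers around the Intel 18.0.0 compilers

./configure --prefix=$HOME/octopus-9.1 \
    --with-blas="-L${MKL_DIR} -Wl,-Bstatic -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -Wl,-Bdynamic -lpthread -lm -ldl" \
    --with-blacs="-L${MKL_DIR} -Wl,-Bstatic -lmkl_scalapack_lp64" \
    --with-scalapack="-L${MKL_DIR} -Wl,-Bstatic -lmkl_scalapack_lp64"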

Pradeep_G_Intel
Employee

Hi Efnacy,

We are looking into your concern and will get back to you soon.

Regards,

Pradeep

efnacy
New Contributor I

Any news?

James_T_Intel
Moderator

I see two concerns in your post.  One is about the requirement to link Intel® MKL statically rather than dynamically.  I don't know why Octopus requires this, but I would suggest that you post that question in our forum for Intel® MKL (https://software.intel.com/en-us/forums/intel-math-kernel-library).

Regarding performance, there are usually significant differences between MPI implementations, especially around various optimizations.  I would actually suggest testing with Intel® MPI Library as well to determine what works best on your system.
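If you want a quick comparison of the MPI stacks that is independent of Octopus, you could run a micro-benchmark under each implementation, for example the Intel® MPI Benchmarks if they are installed on your cluster (a sketch; the process counts are arbitrary):

mpirun -np 2 IMB-MPI1 PingPong      # point-to-point latency and bandwidth
mpirun -np 16 IMB-MPI1 Allreduce    # a collective commonly used by HPC codes

If the stacks also differ wildly here, the gap is likely in the MPI or fabric configuration rather than in Octopus itself.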

efnacy
New Contributor I

Hi James,

Thank you for your reply. Which options should I use in order to invoke the Intel MPI Library?

James_T_Intel
Moderator

If you have it installed on your system, run the following:

source <IMPI_install_path>/<version>/intel64/bin/mpivars.sh

This will set up your environment so that Intel® MPI Library is the first found in PATH and LD_LIBRARY_PATH.  Then you can use the same mpif90 and mpicc scripts you normally use.  If you want to ensure you are using the Intel compilers as well, set the following environment variables; you only need to set the ones relevant to what you are building.

I_MPI_CC=icc
I_MPI_CXX=icpc
I_MPI_FC=ifort
I_MPI_F77=ifort
I_MPI_F90=ifort

At runtime, use mpirun as normal and the environment should default to Intel® MPI Library.
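Putting it together, a minimal sequence might look like this (keeping the placeholder install path from above; the last line is just an example of however you normally launch Octopus):

source <IMPI_install_path>/<version>/intel64/bin/mpivars.sh   # puts Intel MPI first in PATH/LD_LIBRARY_PATH

export I_MPI_CC=icc      # make the generic wrappers call the Intel compilers
export I_MPI_CXX=icpc
export I_MPI_FC=ifort
export I_MPI_F77=ifort
export I_MPI_F90=ifort

# reconfigure and rebuild Octopus with the same mpif90/mpicc names, then launch as usual:
mpirun -np <N> ./octopus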

James_T_Intel
Moderator

Also, if you don't have Intel® MPI Library already installed, visit https://software.intel.com/en-us/mpi-library and follow the Choose & Download button to obtain it.

efnacy
New Contributor I

Really appreciate the help! I checked my cluster and the following directory exists:

/home/compilers/Intel/parallel_studio_xe_2018.0/compilers_and_libraries_2018/linux/mpi/lib64/

I think that is where the MPI libraries reside; am I wrong? And just to be sure about post #6: should the values of those variables be the base Intel compilers, i.e. not the MPI wrappers?
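For what it is worth, this is how I have been checking which MPI stack a wrapper actually resolves to (generic commands, nothing cluster-specific; -show is the MPICH-style option, while Open MPI uses --showme):

which mpif90 mpicc mpirun
mpirun --version        # reports which MPI implementation and version the launcher belongs to
mpif90 -show            # MPICH-style wrappers (including Intel MPI) print the underlying compile line
mpif90 --showme         # the Open MPI wrapper prints its compile line with this option instead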

James_T_Intel
Moderator

The environment variables listed in post #6 tell the compiler wrapper scripts which compiler to use.  The default for the Intel® MPI Library generic wrapper scripts on Linux* is the GNU* compilers, hence you need to specify the Intel compilers explicitly.  There is a separate set of wrapper scripts (mpiifort, mpiicc, mpiicpc) that always use the Intel compilers, but switching to those means modifying your existing build commands rather than just setting a few environment variables.
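To make the difference concrete, here is a small sketch of the two approaches; the test.c / test.f90 file names are only placeholders for whatever you normally compile:

# Option 1: keep the generic wrappers but point them at the Intel compilers
export I_MPI_CC=icc
export I_MPI_FC=ifort
mpicc -show       # the printed command line should now start with icc
mpif90 -show      # and this one with ifort

# Option 2: call the Intel-only wrappers directly (requires editing your build scripts)
mpiicc -o test_c test.c
mpiifort -o test_f test.f90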

The version you have installed is very old.  I strongly recommend getting the latest version; it is freely available.  Follow the instructions in post #7 to get it.

efnacy
New Contributor I

I am sorry, but is 2018 already considered very old? I will see whether there is still enough space in my cluster account, though, because I only have 5 GB of storage and it is already largely taken up by other libraries and files.

James_T_Intel
Moderator

This thread is closed for Intel support. Any further replies will be considered community only. If you need Intel support, please start a new thread.

