Intel® oneAPI Math Kernel Library

How to compile and run Quantum Espresso with Intel MKL and Open MPI on Mac OS X

Hidetoshi_M_
Beginner

I want to run Quantum Espresso faster on a Mac Pro (multi-core). I think using Intel MKL (+FFTW3) and Open MPI with the Intel compilers is the best solution. After a process of trial and error, I finally managed to compile pw.x. But when I run pw.x, for example

~/q-e/bin/pw.x < graphene.scf.in > graphene.scf.out

I got an error.

forrtl: severe (174): SIGSEGV, segmentation fault occurred 
Image PC Routine Line Source
pw.x 000000011021EB14 Unknown Unknown Unknown
libsystem_platfor 00007FFF560EFF5A Unknown Unknown Unknown
pw.x 000000010FE3D7CB Unknown Unknown Unknown
....

I assume something in the compilation is wrong. The machine environment and compile steps are shown below.

  • Mac Pro : 2.7 GHz 12-Core Intel Xeon E5 : 64 GB 1866 MHz DDR3 : High Sierra 10.13.6
  • Xcode : 9.4.1
  • Intel compiler (Fortran) : compilers_and_libraries_2019.1.144
  • Intel compiler (C++, MKL) : compilers_and_libraries_2019.2.184
  • Open MPI : 3.0.3

And I did a "modify install" to add "Cluster support" and "Intel MKL core libraries for Fortran" to compilers_and_libraries_2019.2.184.
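
A quick way to confirm that these components actually landed in the MKL tree (just a sanity-check sketch, assuming MKLROOT is set after sourcing the scripts in step 1 below) is:

    # the ScaLAPACK/CDFT archives and the interface wrapper sources should be visible
    ls ${MKLROOT}/lib | grep -E 'scalapack|cdft'
    ls ${MKLROOT}/interfaces | grep -E 'fftw3xf|mklmpi'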

  1. path settings
    1. source /opt/intel/compilers_and_libraries_2019.1.144/mac/bin/compilervars.sh intel64
      source /opt/intel/compilers_and_libraries_2019.2.184/mac/bin/compilervars.sh intel64
    2. source /opt/intel/compilers_and_libraries_2019/mac/mkl/bin/mklvars.sh intel64
      
  2. compile FFTW3 with intel compiler

    1. cd ${MKLROOT}/interfaces/fftw3xf
      sudo make libintel64 compiler=intel
      
    2. cp libfftw3xf_intel.a ${MKLROOT}/lib/
      
  3. compile Open MPI with intel compiler
    1. ./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
      sudo make
      sudo make install
      
    2. echo 'export PATH=/opt/openmpi/bin:$PATH' >> .bashrc
      
  4. make settings for using mkl on openmpi
    1. cd ${MKLROOT}/interfaces/mklmpi
      sudo make libintel64 interface=ilp64 MPICC='mpicc' INSTALL_LIBNAME='libmkl_blacs_openmpi_ilp64'
    2. cp obj_ilp64/libmkl_blacs_openmpi_ilp64.a ${MKLROOT}/lib/
      
  5. make QE (only PW)
    1. ./configure --enable-parallel --with-scalapack=openmpi --disable-openmp \
         F90=ifort MPIF90=mpif90 CC=mpicc CXX=icc F77=ifort \
         LAPACK_LIBS="${MKLROOT}/lib/libmkl_intel_ilp64.a ${MKLROOT}/lib/libmkl_sequential.a ${MKLROOT}/lib/libmkl_core.a -lpthread -lm -ldl" \
         BLAS_LIBS="${MKLROOT}/lib/libmkl_intel_ilp64.a ${MKLROOT}/lib/libmkl_sequential.a ${MKLROOT}/lib/libmkl_core.a -lpthread -lm -ldl" \
         FFT_LIBS="${MKLROOT}/lib/libfftw3xf_intel.a -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread" \
         MPI_LIBS="-L/opt/openmpi/lib" \
         DFLAGS="-D__INTEL -D__FFTW3 -D__MPI -D__SCALAPACK -D__PARA" \
         IFLAGS="-I/Users/username/q-e/include -I/Users/username/q-e/FoX/finclude -I/Users/username/q-e/S3DE/iotk/include/ -I${MKLROOT}/include -I${MKLROOT}/include/fftw" \
         SCALAPACK_LIBS="${MKLROOT}/lib/libmkl_scalapack_ilp64.a ${MKLROOT}/lib/libmkl_cdft_core.a ${MKLROOT}/lib/libmkl_intel_ilp64.a ${MKLROOT}/lib/libmkl_sequential.a ${MKLROOT}/lib/libmkl_core.a ${MKLROOT}/lib/libmkl_blacs_openmpi_ilp64.a -lpthread -lm -ldl" \
         LDFLAGS="-static-intel"
      
    2. make -j8 pw
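
If the build completes, a quick sanity check (not part of the original steps, just a sketch) is to see what the resulting binary links against; with the static MKL archives and -static-intel above, only system and Open MPI dynamic libraries should be reported:

    otool -L ~/q-e/bin/pw.x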

* At step 4.1, the following warnings appeared.

mpicc -c -Wall -fPIC    -DMKL_ILP64 -I../../include mklmpi-impl.c -o obj_ilp64/mklmpi-impl.o
mklmpi-impl.c(87): warning #1786: variable "ompi_mpi_ub" (declared at line 928 of "/opt/openmpi/include/mpi.h") was declared deprecated ("MPI_UB is deprecated in MPI-2.0")
      RETURN_IF(xdatatype,MPI_UB);
      ^

mklmpi-impl.c(338): warning #1786: function "MPI_Address" (declared at line 1201 of "/opt/openmpi/include/mpi.h") was declared deprecated ("MPI_Address is superseded by MPI_Get_address in MPI-2.0")
  	int err = MPI_Address(location, address);
  	          ^

mklmpi-impl.c(397): warning #1786: function "MPI_Attr_get" (declared at line 1241 of "/opt/openmpi/include/mpi.h") was declared deprecated ("MPI_Attr_get is superseded by MPI_Comm_get_attr in MPI-2.0")
      int res = MPI_Attr_get(X2COMM(comm),X2KEYVAL(keyval),attribute_val,flag);
                ^

Sorry for writing such a long message. How should I modify the above steps to run Quantum Espresso?

FYI: I read the topic "How to correctly compile Quantum Espresso with intel MKL especially intel FFTW" and am posting my question in this forum.

3 Replies
Gennady_F_Intel
Moderator

Could you try to recompile with the LP64 interface instead of the ILP64 API? Check the Intel MKL Link Line Advisor for more details.
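
(For reference, a sketch of how steps 4 and 5 above would change under the LP64 interface; the obj_lp64 output directory and the libmkl_blacs_openmpi_lp64 name follow the pattern of the ILP64 build and are assumptions, not taken from this thread.)

    cd ${MKLROOT}/interfaces/mklmpi
    # rebuild the BLACS wrapper with the LP64 interface (assumed output dir: obj_lp64)
    sudo make libintel64 interface=lp64 MPICC='mpicc' INSTALL_LIBNAME='libmkl_blacs_openmpi_lp64'
    cp obj_lp64/libmkl_blacs_openmpi_lp64.a ${MKLROOT}/lib/

    # in the QE configure line, swap every *_ilp64 MKL library for its *_lp64 counterpart, e.g.
    #   libmkl_intel_ilp64.a         -> libmkl_intel_lp64.a
    #   libmkl_scalapack_ilp64.a     -> libmkl_scalapack_lp64.a
    #   libmkl_blacs_openmpi_ilp64.a -> libmkl_blacs_openmpi_lp64.a
    #   -lmkl_intel_ilp64            -> -lmkl_intel_lp64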

Hidetoshi_M_
Beginner

Thank you for the nice reply. I was able to avoid the segmentation fault by recompiling with "lp64" instead of "ilp64".

But I ran into a new problem when running the same command:

~/q-e/bin/pw.x < graphene.scf.in > graphene.scf.out

The error message is

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

And in graphene.scf.out, the stopping message is

Error in routine allocate_fft (1):
wrong ngm

Is this an Open MPI problem?

Gennady_F_Intel
Moderator

This looks very strange. We are not aware of such a problem with the latest version of MKL. MKL 2019 supports Intel MPI (v. 2017, 2018, and 2019) and MPICH (v. 2.14, 3.1, and 3.3), and you may check whether the problem exists with those versions as well.
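
(Not something resolved in this thread, but one way to check whether the failure depends on how the MPI job is launched is to run pw.x through mpirun explicitly and pass the input with QE's -inp option instead of shell redirection; this is only a suggestion, not a confirmed fix.)

    mpirun -np 4 ~/q-e/bin/pw.x -inp graphene.scf.in > graphene.scf.out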
