
MPI problems with parallel SIESTA

Jo_H_
Beginner

Hello,

I need to use the scientific software package SIESTA 3.2 (TranSIESTA, actually), but I'm having a hard time getting the code to run on our cluster. I originally posted this in another forum, but someone pointed out that this forum is a better fit for the topic.

My arch.make should give a good overview of the setup I used (built with the help of the Intel Math Kernel Library Link Line Advisor). The Intel compiler/MPI/MKL versions are the most recent ones available on this cluster.

SIESTA_ARCH=intel-mpi
#
.SUFFIXES: .f .F .o .a .f90 .F90
#
FC=mpiifort
  #Path is: /Applic.PALMA/software/impi/5.0.2.044-iccifort-2015.1.133-GCC-4.9.2/bin64
#
FC_ASIS=$(FC)
#
RANLIB=ranlib
#
SYS=nag
#
MKL_ROOT=/Applic.PALMA/software/imkl/11.2.1.133-iimpi-7.2.3-GCC-4.9.2/composerxe/mkl
#
FFLAGS=-g -check all -traceback -I${MKL_ROOT}/include/intel64/lp64 -I${MKL_ROOT}/include
FPPFLAGS_MPI=-DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
FPPFLAGS= $(FPPFLAGS_MPI) $(FPPFLAGS_CDF)
#
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=/Applic.PALMA/software/impi/5.0.2.044-iccifort-2015.1.133-GCC-4.9.2/include64
#
COMP_LIBS=dc_lapack.a
#
MKL_LIB=-L${MKL_ROOT}/lib/intel64
#
BLAS_LIBS=-lmkl_blas95_lp64
#
LAPACK_LIBS=-lmkl_lapack95_lp64
#
BLACS_LIBS=-lmkl_blacs_lp64 -lmkl_blacs_intelmpi_lp64   
#
SCALAPACK_LIBS=-lmkl_scalapack_lp64
#
EXTRA_LIBS= -lmkl_intel_lp64 -lmkl_core -lm -lpthread -lmkl_sequential      # Intel thread compilation doesn't work.
#  
LIBS=$(MKL_LIB) $(SCALAPACK_LIBS) $(BLACS_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS) $(NETCDF_LIBS) $(EXTRA_LIBS)
#
.F.o:
  $(FC) -c $(INCFLAGS) $(FFLAGS)  $(FPPFLAGS) $<
.f.o:
  $(FC) -c $(INCFLAGS) $(FFLAGS)   $<
.F90.o:
  $(FC) -c $(INCFLAGS) $(FFLAGS)  $(FPPFLAGS) $<
.f90.o:
  $(FC) -c $(INCFLAGS) $(FFLAGS)   $<

With these settings, the compilation works. At execution time, the environment is set up consistently with the locations in the arch.make (at least, I think it is).
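(One generic way to double-check that, assuming the executable is named transiesta or siesta as in a standard SIESTA build, is to inspect its dynamic linkage, e.g. ldd ./transiesta | grep -i mpi; every MPI library path printed should point into the impi/5.0.2.044 installation, and a path into any other MPI would indicate a build/run mismatch.)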

Execution proceeds normally until the following errors occur:

Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator

(The program was executed on 4 processes.) I tried to get it running with Intel MPI, SIESTA's own MPI implementation, and MPICH2, but I always get the same error. Since I have little to no experience with this, I was hoping I could get some advice here.
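In case it helps to narrow things down, here is a minimal MPI program (a generic sanity check, not part of SIESTA) that can isolate the toolchain: if it also aborts with "Invalid communicator" when built with the same arch.make settings, then the mpif.h seen at compile time does not match the MPI library linked against.

program mpi_sanity
  ! Generic toolchain check: compile with the same mpiifort and FFLAGS as SIESTA.
  implicit none
  include 'mpif.h'   ! the header whose origin is in question
  integer :: ierr, rank, nprocs
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print '(a,i0,a,i0)', 'rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program mpi_sanity

Built with "mpiifort mpi_sanity.f90 -o mpi_sanity" and run with "mpirun -np 4 ./mpi_sanity", it should print one line per rank; the same "Invalid communicator" failure here would confirm a header/library mismatch.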

Thanks and regards!

Mark_L_Intel
Moderator

The mpi.h is probably from a different version of MPI than the one used at the linkage stage (e.g., mpi.h from MPICH1 while linking was done against MPICH2). Could you modify your scripts to use the mpiifort wrapper instead of setting up your build manually? The wrappers (mpiifort, mpiicc, etc.) supplied with the Intel MPI Library ensure that the build is done consistently. Please remember to set the path to the Intel Fortran compiler before using the mpiifort wrapper (it silently assumes that you have done this).
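In arch.make terms, that would mean letting the wrapper inject the MPI paths rather than hard-coding them, roughly like this (a sketch only, not a tested SIESTA configuration):

FC=mpiifort                       # the wrapper locates ifort via your PATH
FFLAGS=-g -check all -traceback   # no manual MPI include path needed
# MPI headers and libraries are added by the wrapper automatically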

BR,

Mark
Jo_H_
Beginner

Dear Mark,

Thank you for your reply. On our cluster, I loaded the following modules for compilation/execution:

  1) icc/2015.1.133-GCC-4.9.2                       3) iccifort/2015.1.133-GCC-4.9.2
  2) ifort/2015.1.133-GCC-4.9.2                     4) impi/5.0.2.044-iccifort-2015.1.133-GCC-4.9.2

The only Intel wrapper compiler available is mpiifort, as given in the arch.make. There is an mpif90 wrapper in the Intel MPI bin64 directory, but that one seems to use gfortran. Also, with the modules listed above loaded, the path to ifort should already be set.

How can I check for the right mpi.h file?

Best regards,

Jo

Mark_L_Intel
Moderator

Hi Jo,

You seem to be on the right path. mpiifort should call the Intel Fortran compiler automatically, provided it is on the path (and it should be, since you are loading the ifort module). The point is that if you use mpiifort to compile your MPI-related sources, you should not have to worry about picking up the correct mpi.h; everything should be handled for you automatically. Please give it a try.
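If you want to verify what the wrapper is actually doing, it accepts a -show option that prints the underlying ifort command line, including the include path it injects, without compiling anything:

mpiifort -show

The -I directory in that output is where the mpif.h being used lives; you can compare it against the MPI_INCLUDE you set manually in your arch.make.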

Regards,

Mark
thulsr_0_
Beginner

Hi,

I encountered exactly the same problem when using Intel® Parallel Studio XE 2016 Update 2 (2016.2.181) to compile WIEN2k 14.2. The problem was solved by adding -I$(I_MPI_ROOT)/intel64/include/ to the flags for mpiifort. It turns out that mpiifort did not pick up the correct mpif.h automatically; I have no idea which mpif.h was being included. In addition, it seems that compilervars.sh does not set up the INCLUDE environment variable.
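Applied to an arch.make like the one above, the workaround would look like this (assuming I_MPI_ROOT is exported by Intel MPI's mpivars.sh, which is the usual setup):

# Workaround: point the compiler explicitly at Intel MPI's Fortran header
FFLAGS=-g -traceback -I$(I_MPI_ROOT)/intel64/include/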
