Hello,
I need to use the scientific software package SIESTA 3.2 (TranSIESTA, actually), but I'm having a hard time getting the code to run on our cluster. I posted this to another forum, but someone pointed out that this forum is a better fit for the topic.
My arch.make should give a good overview of the setup (the link line comes from the Intel Math Kernel Library Link Line Advisor). The Intel compiler/MPI/MKL versions are the most recent ones available on this cluster.
SIESTA_ARCH=intel-mpi
#
.SUFFIXES: .f .F .o .a .f90 .F90
#
FC=mpiifort
#Path is: /Applic.PALMA/software/impi/5.0.2.044-iccifort-2015.1.133-GCC-4.9.2/bin64
#
FC_ASIS=$(FC)
#
RANLIB=ranlib
#
SYS=nag
#
MKL_ROOT=/Applic.PALMA/software/imkl/11.2.1.133-iimpi-7.2.3-GCC-4.9.2/composerxe/mkl
#
FFLAGS=-g -check all -traceback -I${MKL_ROOT}/include/intel64/lp64 -I${MKL_ROOT}/include
FPPFLAGS_MPI=-DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
FPPFLAGS= $(FPPFLAGS_MPI) $(FPPFLAGS_CDF)
#
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=/Applic.PALMA/software/impi/5.0.2.044-iccifort-2015.1.133-GCC-4.9.2/include64
#
COMP_LIBS=dc_lapack.a
#
MKL_LIB=-L${MKL_ROOT}/lib/intel64
#
BLAS_LIBS=-lmkl_blas95_lp64
#
LAPACK_LIBS=-lmkl_lapack95_lp64
#
BLACS_LIBS=-lmkl_blacs_lp64 -lmkl_blacs_intelmpi_lp64
#
SCALAPACK_LIBS=-lmkl_scalapack_lp64
#
EXTRA_LIBS= -lmkl_intel_lp64 -lmkl_core -lm -lpthread -lmkl_sequential # Intel thread compilation doesn't work.
#
LIBS=$(MKL_LIB) $(SCALAPACK_LIBS) $(BLACS_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS) $(NETCDF_LIBS) $(EXTRA_LIBS)
#
.F.o:
$(FC) -c $(INCFLAGS) $(FFLAGS) $(FPPFLAGS) $<
.f.o:
$(FC) -c $(INCFLAGS) $(FFLAGS) $<
.F90.o:
$(FC) -c $(INCFLAGS) $(FFLAGS) $(FPPFLAGS) $<
.f90.o:
$(FC) -c $(INCFLAGS) $(FFLAGS) $<
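For comparison, the Link Line Advisor output I would expect for this combination (MKL 11.2, Intel MPI, LP64 interface, sequential threading) links only a single BLACS variant, the Intel MPI one, whereas the arch.make above links two. This is a sketch of such a link line, with paths assumed to match the MKL_ROOT above; I am not certain it matches this exact MKL release:

```makefile
# Hypothetical Link Line Advisor result for MKL 11.2 + Intel MPI, LP64, sequential.
# Only one BLACS library appears, and it is the Intel MPI build; a BLACS
# built against a different MPI is a classic source of "Invalid communicator"
# failures inside MPI_Comm_size.
BLACS_LIBS=-lmkl_blacs_intelmpi_lp64
LIBS=$(MKL_LIB) -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
     -lmkl_core $(BLACS_LIBS) -lpthread -lm
```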
With these settings, the compilation works. At execution time, the environment is set consistently with the locations in the arch.make (at least I think it is).
Execution proceeds until the following errors occur:
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(124): MPI_Comm_size(comm=0x5b, size=0x2364a2c) failed
PMPI_Comm_size(78).: Invalid communicator
(the program was executed on 4 CPUs). I tried to get it running with Intel MPI, SIESTA's own MPI implementation, and MPICH2, but I always get the same error. Since I have little to no experience with this, I was hoping I could get some advice here.
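One way to narrow this down is to separate the MPI runtime from SIESTA: "Invalid communicator" errors of this kind often mean the MPI library seen at link time does not match the one used at run time. A minimal Fortran check (hypothetical, compiled with the same mpiifort and launched with the same mpirun used for SIESTA) is:

```fortran
! Minimal MPI sanity check: if this also fails with "Invalid communicator",
! the problem lies in the MPI toolchain/environment, not in SIESTA itself.
program mpi_check
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program mpi_check
```

Compile with `mpiifort mpi_check.f90 -o mpi_check` and run with `mpirun -np 4 ./mpi_check`. If all four ranks report cleanly, the runtime MPI is consistent with the compiler wrapper, and the mismatch is more likely hiding in the BLACS/ScaLAPACK link line.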
Thanks and regards!