Intel® oneAPI Math Kernel Library

Pb at the execution of a code linked with Intel Cluster Toolkit

md25
Beginner

Hi again,

I have a code parallelized using ScaLAPACK and MPI that works on a first cluster when compiled with PGI pgf90 and linked against MPICH2 and the MPICH2 versions of BLACS and ScaLAPACK.

I want to port this code to a new Intel Xeon quad-core cluster on which we do not have pgf90, but the Intel compilers, MKL and ICT (Intel Cluster Toolkit).

When I try to execute a binary created by:

mpiifort -C -Bdynamic type.o modules.o interfaces_sca.o field_sca.o get_param33.o adin33.o fillmat33.o lu33.o calpol33.o solve_e033.o t33.o fillpol33.o rotate33.o propag33.o init_sca.o distrib_par.o matgen_sca.o lu_sca.o solve_sca.o derf.o -o Linux/bin/mpiifort/dfield -lmkl_scalapack -lmkl_blacs -lmkl_lapack -lmkl -lguide -lpthread

I get the following (I checked that the number of nodes requested on the mpiexec line is the same as the number required by the code, namely 4):

[cli_0]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0x7897b8) failed
MPI_Comm_size(69).: Invalid communicator
[cli_1]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0x7897b8) failed
MPI_Comm_size(69).: Invalid communicator
[cli_3]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0x7897b8) failed
MPI_Comm_size(69).: Invalid communicator
[cli_2]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0x7897b8) failed
MPI_Comm_size(69).: Invalid communicator

PS: I have print statements at the beginning of the code, and here I get nothing except the MPI errors.

md25
Beginner

I reply to myself since I have just learnt about the -# flag. I post here its output for my problematic link command, in which I see a mix of -Bstatic and -Bdynamic...

mpiifort -C -Bdynamic -# type.o modules.o interfaces_sca.o field_sca.o get_param33.o adin33.o fillmat33.o lu33.o calpol33.o solve_e033.o t33.o fillpol33.o rotate33.o propag33.o init_sca.o distrib_par.o matgen_sca.o lu_sca.o solve_sca.o derf.o -o Linux/bin/mpiifort/dfield -lmkl_scalapack -lmkl_blacs -lmkl_lapack -lmkl -lguide -lpthread
/opt/intel/fce/10.0.023/bin/fortcom
-mP1OPT_version=1000
-mGLOB_source_language=GLOB_SOURCE_LANGUAGE_F90
-mGLOB_tune_for_fort
-mGLOB_use_fort_dope_vector
-mP2OPT_static_promotion
-mP1OPT_print_version=FALSE
-mP3OPT_use_mspp_call_convention
-mCG_use_gas_got_workaround=T
-mP2OPT_align_option_used=TRUE
"-mGLOB_options_string=-I/opt/intel/ict/3.0.1/mpi/3.0/include64 -I/opt/intel/ict/3.0.1/mpi/3.0/include64 -C -Bdynamic -# -o Linux/bin/mpiifort/dfield -lmkl_scalapack -lmkl_blacs -lmkl_lapack -lmkl -lguide -lpthread -L/opt/intel/ict/3.0.1/mpi/3.0/lib64 -Xlinker -rpath -Xlinker /opt/intel/ict/3.0.1/mpi/3.0/lib64 -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/3.0 -lmpi -lmpiif -lmpigi -lrt -lpthread -ldl"
-mGLOB_cxx_limited_range=FALSE
-mP2OPT_eh_nirvana
-mGLOB_diag_file=type.diag
-mGLOB_as_output_backup_file_name=/tmp/ifortMjJGTAas_.s
-mGLOB_runtime_check_undefined
-mGLOB_machine_model=GLOB_MACHINE_MODEL_EFI2
-mGLOB_use_base_pointer
-mGLOB_fp_speculation=GLOB_FP_SPECULATION_FAST
-mGLOB_extended_instructions=0x8
-mP2OPT_subs_out_of_bound=FALSE
-mGLOB_ansi_alias
-mIPOPT_ninl_user_level=2
-mIPOPT_args_in_regs=0
-mPGOPTI_value_profile_use=T
-mP2OPT_align_array_to_cache_line=FALSE
-mIPOPT_activate
-mP2OPT_hlo
-mPAROPT_par_report=1
-mIPOPT_link
-mIPOPT_ipo_activate
-mIPOPT_ipo_mo_activate
-mIPOPT_ipo_mo_nfiles=1
-mIPOPT_source_files_list=/tmp/ifortWApQXclst
-mIPOPT_short_data_info=/tmp/ifortIIGrLKsdata
-mIPOPT_link_script_file=/tmp/ifortEFXKziscript
-mIPOPT_global_data
"-mIPOPT_link_version=(GNU Binutils for Debian) 2.18"
"-mIPOPT_cmdline_link="/usr/lib64/crt1.o" "/usr/lib64/crti.o" "/usr/lib/gcc/x86_64-linux-gnu/4.1.2/crtbegin.o" "--eh-frame-hdr" "-dynamic-linker" "/lib64/ld-linux-x86-64.so.2" "-o" "Linux/bin/mpiifort/dfield" "/opt/intel/fce/10.0.023/lib/for_main.o" "-Bdynamic" "type.o" "modules.o" "interfaces_sca.o" "field_sca.o" "get_param33.o" "adin33.o" "fillmat33.o" "lu33.o" "calpol33.o" "solve_e033.o" "t33.o" "fillpol33.o" "rotate33.o" "propag33.o" "init_sca.o" "distrib_par.o" "matgen_sca.o" "lu_sca.o" "solve_sca.o" "derf.o" "-l mkl_scalapack" "-lmkl_blacs" "-lmkl_lapack" "-lmkl" "-lguide" "-lpthread" "-L/opt/intel/ict/3.0.1/mpi/3.0/lib64" "-rpath" "/opt/intel/ict/3.0.1/mpi/3.0/lib64" "-rpath" "/opt/intel/mpi-rt/3.0" "-lmpi" "-lmpiif" "-lmpigi" "-lrt" "-lpthread" "-ldl" "-L/opt/intel/mkl/10.0.2.018/lib/em64t" "-L/opt/intel/ict/3.0.1/cmkl/9.1/lib/em64t" "-L/usr/lib" "-L/home/devel/lib" "-L." "-L/opt/intel/fce/10.0.023/lib" "-L/usr/lib/gcc/x86_64-linux-gnu/4.1.2/" "-L/usr/lib64" "-Bstatic" "-lifport" "-lifcore" "-limf" "-lsvml" "-Bdynamic" "-lm" "-Bstatic" "-lipgo" "-lirc" "-Bdynamic" "-lc" "-lgcc_s" "-lgcc" "-Bstatic" "-lirc_s" "-Bdynamic" "-ldl" "-lc" "/usr/lib/gcc/x86_64-linux-gnu/4.1.2/crtend.o" "/usr/lib64/crtn.o""
-mIPOPT_save_il0
-mIPOPT_il_in_obj
-mIPOPT_ipo_activate_warn=FALSE
-mIPOPT_obj_output_file_name=/tmp/ipo_ifortCagxm7.o
"-mGLOB_linker_version=(GNU Binutils for Debian) 2.18"
-mP3OPT_asm_target=P3OPT_ASM_TARGET_GAS
-mGLOB_obj_output_file=/tmp/ipo_ifortCagxm7.o
-mP1OPT_source_file_name=/tmp/ipo_ifortCagxm7.f
type.o
modules.o
interfaces_sca.o
field_sca.o
get_param33.o
adin33.o
fillmat33.o
lu33.o
calpol33.o
solve_e033.o
t33.o
fillpol33.o
rotate33.o
propag33.o
init_sca.o
distrib_par.o
matgen_sca.o
lu_sca.o
solve_sca.o
derf.o
-mIPOPT_mo_unique_name=dfield
-mIPOPT_object_files=/tmp/iforte5coaFtxt

ifort: warning #10017: couldn't open multi-file optimizations object list
ld
/usr/lib64/crt1.o
/usr/lib64/crti.o
/usr/lib/gcc/x86_64-linux-gnu/4.1.2/crtbegin.o
--eh-frame-hdr
-dynamic-linker
/lib64/ld-linux-x86-64.so.2
-o
Linux/bin/mpiifort/dfield
/opt/intel/fce/10.0.023/lib/for_main.o
-Bdynamic
-lmkl_scalapack
-lmkl_blacs
-lmkl_lapack
-lmkl
-lguide
-lpthread
-L/opt/intel/ict/3.0.1/mpi/3.0/lib64
-rpath
/opt/intel/ict/3.0.1/mpi/3.0/lib64
-rpath
/opt/intel/mpi-rt/3.0
-lmpi
-lmpiif
-lmpigi
-lrt
-lpthread
-ldl
-L/opt/intel/mkl/10.0.2.018/lib/em64t
-L/opt/intel/ict/3.0.1/cmkl/9.1/lib/em64t
-L/usr/lib
-L/home/devel/lib
-L.
-L/opt/intel/fce/10.0.023/lib
-L/usr/lib/gcc/x86_64-linux-gnu/4.1.2/
-L/usr/lib64
-Bstatic
-lifport
-lifcore
-limf
-lsvml
-Bdynamic
-lm
-Bstatic
-lipgo
-lirc
-Bdynamic
-lc
-lgcc_s
-lgcc
-Bstatic
-lirc_s
-Bdynamic
-ldl
-lc
/usr/lib/gcc/x86_64-linux-gnu/4.1.2/crtend.o
/usr/lib64/crtn.o

md25
Beginner
MD25:

PS: I have print statements at the beginning of the code, and here I get nothing except the MPI errors.

Hi again,

In fact my PS is not accurate: the print was on the second line, the first executable statement being a call to BLACS_PINFO!

When I put a print before the BLACS_PINFO call, I do see its output => the problem seems to be in the MPI context set-up done inside BLACS_PINFO...
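
For reference, the startup of the code is essentially the standard BLACS/ScaLAPACK initialisation sketched below. This is not my actual source: the program name is made up and the 2x2 grid shape is just an example, but it shows why nothing gets printed, since BLACS_PINFO is the very first executable call.

program dfield_startup_sketch   ! hypothetical name, illustration only
  implicit none
  integer :: iam, nprocs, ictxt, nprow, npcol, myrow, mycol

  ! First executable call: ask BLACS for this process' id and the total
  ! process count. With a BLACS library built for a different MPI than
  ! the one launching the job, this call already aborts inside
  ! MPI_Comm_size with "Invalid communicator".
  call blacs_pinfo(iam, nprocs)
  print *, 'process ', iam, ' of ', nprocs

  ! Get the default system context and set up a 2x2 process grid
  ! (matching the 4 processes requested on the mpiexec line).
  call blacs_get(-1, 0, ictxt)
  nprow = 2
  npcol = 2
  call blacs_gridinit(ictxt, 'Row-major', nprow, npcol)
  call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)

  ! ... ScaLAPACK work (PDGETRF/PDGETRS, ...) goes here ...

  call blacs_gridexit(ictxt)
  call blacs_exit(0)
end program dfield_startup_sketch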

md25
Beginner
This one was solved by linking with -lmkl_blacs_intelmpi20 instead of -lmkl_blacs.
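
For anyone hitting the same error: MKL ships several MPI-specific flavours of its BLACS library, and the one on the link line has to match the MPI actually used to launch the job (here the Intel MPI that comes with ICT). The working link line is the same as above with just that one library swapped (object files abbreviated):

mpiifort -C -Bdynamic <same object files as above> -o Linux/bin/mpiifort/dfield -lmkl_scalapack -lmkl_blacs_intelmpi20 -lmkl_lapack -lmkl -lguide -lpthread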