
unresolved __I_MPI__intel_fast_memcpy

md25 (Beginner):

Hi,

I am having a hard time compiling and running a code that uses the ScaLAPACK version included in the Intel Cluster Toolkit.

My LD_LIBRARY_PATH variable shows the version numbers:

/opt/intel/mkl/10.0.2.018/lib/em64t:/opt/intel/ict/3.0.1/mpi/3.0/lib64:/opt/intel/ict/3.0.1/cmkl/9.1/lib/em64t:/opt/intel/ict/3.0.1/itac/7.0.1/itac/slib_impi2:/opt/intel/fce/10.0.023/lib

When I try to link statically with ifort:

ifort -C -Bstatic type.o modules.o interfaces_sca.o field_sca.o get_param33.o adin33.o fillmat33.o lu33.o calpol33.o solve_e033.o t33.o fillpol33.o rotate33.o propag33.o init_sca.o distrib_par.o matgen_sca.o lu_sca.o solve_sca.o derf.o -o Linux/bin/ifort/dfield -lmkl_scalapack_lp64 -lmkl_blacs_lp64 -L/opt/intel/ict/3.0.1/mpi/3.0/lib64 -lmpi -lmpi_mt -lrt -openmp -lmkl_lapack -Wl,--start-group -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group -lpthread

I get the following errors (and this is just the beginning!):

ipo: warning #11041: unresolved __I_MPI__intel_fast_memcpy
Referenced in libmpi.a(type_struct.o)
Referenced in libmpi.a(errutil.o)
Referenced in libmpi.a(helper_fns.o)
Referenced in libmpi.a(fx_heap_crsec_aligned.o)
Referenced in libmpi.a(mpid_datatype_contents.o)
Referenced in libmpi.a(ch3u_request.o)
Referenced in libmpi.a(type_create_indexed_block.o)
Referenced in libmpi.a(mpid_segment.o)
Referenced in libmpi.a(gen_dataloop.o)
Referenced in libmpi.a(mpidi_pg.o)
Referenced in libmpi.a(ch3_istartmsgv.o)
Referenced in libmpi.a(ch3_shm.o)
Referenced in libmpi.a(sock.o)
Referenced in libmpi.a(ch3u_buffer.o)
Referenced in libmpi.a(ch3u_comm_spawn_multiple.o)
Referenced in libmpi.a(gen_type_blockindexed.o)
Referenced in libmpi.a(ch3u_handle_recv_pkt.o)
Referenced in libmpi.a(ch3i_shm_bootstrapq.o)
Referenced in libmpi.a(I_MPI_wrap_dat.o)
Referenced in libmpi.a(rdma_iba.o)
Referenced in libmpi.a(rdma_iba_rendezwrite.o)
Referenced in libmpi.a(dapl_gather.o)
Referenced in libmpi.a(dapl_gatherv.o)
Referenced in libmpi.a(dapl_scatter.o)
Referenced in libmpi.a(dapl_scatterv.o)
Referenced in libmpi.a(dapl_allgather.o)
Referenced in libmpi.a(dapl_allgatherv.o)
Referenced in libmpi.a(dapl_alltoall.o)
Referenced in libmpi.a(dapl_alltoallv.o)
Referenced in libmpi.a(dapl_reduce.o)
Referenced in libmpi.a(dapl_allreduce.o)
Referenced in libmpi.a(dapl_red_scat.o)
Referenced in libmpi.a(dapl_scan.o)
Referenced in libmpi.a(type_indexed.o)
ipo: warning #11041: unresolved __I_MPI__intel_fast_memset
Referenced in libmpi.a(initthread.o)
Referenced in libmpi.a(helper_fns.o)
Referenced in libmpi.a(fx_heap_crsec_aligned.o)
Referenced in libmpi.a(rdma_iba_init_d.o)
Referenced in libmpi.a(ch3u_comm_spawn_multiple.o)
Referenced in libmpi.a(dapl_utils.o)
ipo: warning #11041: unresolved __I_MPI___intel_cpu_indicator
Referenced in libmpi.a(opmax.o)
Referenced in libmpi.a(opmin.o)
Referenced in libmpi.a(opsum.o)
Referenced in libmpi.a(opprod.o)

Any ideas?

PS: I will describe in another post the problems I get when trying a dynamic link with mpiifort...
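
As a side note, a quick way to check which MPI libraries (and in which order) the Intel MPI compiler wrapper would pass to the linker is to ask the wrapper itself, assuming the Intel MPI 3.0 wrappers support the MPICH-style -show option as later versions do; it only prints the underlying ifort command without compiling anything:

mpiifort -show

Comparing that output against the hand-written ifort line above shows which MPI support libraries are missing from it.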

md25 (Beginner):

OK, this one is solved:

I put -lmpiif -lmpi -lmpgi -lrt instead of -lmpi -lmpi_mt -lrt.

The missing routines are provided by the library pulled in by the -lmpgi flag.
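
For clarity, the MPI-related part of the link line after this change looks like this (the rest of the original ifort command is unchanged):

... -lmkl_scalapack_lp64 -lmkl_blacs_lp64 -L/opt/intel/ict/3.0.1/mpi/3.0/lib64 -lmpiif -lmpi -lmpgi -lrt -openmp ...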

Now I have the same problem at execution for both the dynamically and statically linked binaries:

[cli_2]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0xbba858) failed
MPI_Comm_size(69).: Invalid communicator
rank 2 in job 73 clusteru_48585 caused collective abort of all ranks
exit status of rank 2: return code 13
[cli_1]: aborting job:
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(110): MPI_Comm_size(comm=0x5b, size=0xbba858) failed
MPI_Comm_size(69).: Invalid communicator
rank 1 in job 73 clusteru_48585 caused collective abort of all ranks
exit status of rank 1: return code 13
rank 0 in job 73 clusteru_48585 caused collective abort of all ranks
exit status of rank 0: return code 13

As this problem did not appear in test programs using only MPI, I think it comes from my code, although it works on another cluster... :-(
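
For what it's worth, a frequent cause of an "Invalid communicator" abort inside a ScaLAPACK code, while plain MPI test programs run fine, is a BLACS layer built against a different MPI than the libmpi actually linked, so the communicator handles passed between the two layers do not match. The following minimal sketch is only an illustration, not taken from the original code (the program name and the 1 x N process grid are arbitrary); it initializes MPI and BLACS side by side so the failing layer can be identified:

      program comm_check
      ! Minimal illustration: initialize MPI and BLACS side by side and
      ! print what each layer reports (free-form Fortran, e.g. comm_check.f90).
      implicit none
      include 'mpif.h'
      integer :: ierr, nprocs, myrank
      integer :: ictxt, iam, nprocs_blacs, nprow, npcol, myrow, mycol

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      print *, 'MPI:   rank ', myrank, ' of ', nprocs

      ! BLACS should report the same set of processes; a failure here, or in
      ! the MPI_Comm_size it calls underneath, points at a BLACS library
      ! built for a different MPI than the one linked.
      call BLACS_PINFO(iam, nprocs_blacs)
      call BLACS_GET(0, 0, ictxt)               ! default system context
      nprow = 1
      npcol = nprocs_blacs
      call BLACS_GRIDINIT(ictxt, 'Row', nprow, npcol)
      call BLACS_GRIDINFO(ictxt, nprow, npcol, myrow, mycol)
      print *, 'BLACS: process ', iam, ' of ', nprocs_blacs

      call BLACS_GRIDEXIT(ictxt)
      call BLACS_EXIT(1)                        ! keep MPI alive; finalized below
      call MPI_FINALIZE(ierr)
      end program comm_check

If this small program already aborts in the BLACS calls while the pure MPI calls before them succeed, the problem lies in the library combination rather than in the application code.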
