Intel® MPI Library

undefined references when compiling with -fast on Intel MPI

andreas0674
Beginner
Dear all,

I have a Fedora 13 Linux system where I have installed the latest versions of the Intel Fortran compiler and the Intel MPI Library.
My parallel programs compile and run smoothly when I compile with mpif90 -132 -O3 *.f *.f90,

but when I try to compile with -fast (in the hope of getting a faster executable) I get a series of errors about undefined and unresolved references.

The output is the following:
[user@andreas 2p20d]$ mpif90 -132 -fast -O3 *.f *.f90
ipo: warning #11020: unresolved PMPI_Finalize
Referenced in libmpigf.a(finalizef.o)
ipo: warning #11020: unresolved PMPI_Reduce
Referenced in libmpigf.a(reducef.o)
ipo: warning #11020: unresolved PMPI_Barrier
Referenced in libmpigf.a(barrierf.o)
ipo: warning #11020: unresolved PMPI_Allreduce
Referenced in libmpigf.a(allreducef.o)
ipo: warning #11020: unresolved MPI_F_STATUS_IGNORE
Referenced in libmpigf.a(waitf.o)
Referenced in libmpigf.a(setbot.o)
ipo: warning #11020: unresolved PMPI_Wait
Referenced in libmpigf.a(waitf.o)
ipo: warning #11020: unresolved PMPI_Irecv
Referenced in libmpigf.a(irecvf.o)
ipo: warning #11020: unresolved PMPI_Isend
Referenced in libmpigf.a(isendf.o)
ipo: warning #11020: unresolved PMPI_Bcast
Referenced in libmpigf.a(bcastf.o)
ipo: warning #11020: unresolved PMPI_Comm_size
Referenced in libmpigf.a(comm_sizef.o)
ipo: warning #11020: unresolved PMPI_Comm_rank
Referenced in libmpigf.a(comm_rankf.o)
ipo: warning #11020: unresolved PMPI_Init
Referenced in libmpigf.a(initf.o)
ipo: warning #11020: unresolved MPI_F_STATUSES_IGNORE
Referenced in libmpigf.a(setbot.o)
ipo: warning #11020: unresolved __rela_iplt_end
Referenced in libc.a(elf-init.o)
ipo: warning #11020: unresolved __rela_iplt_start
Referenced in libc.a(elf-init.o)
ipo: remark #11000: performing multi-file optimizations
ipo: remark #11005: generating object file /tmp/ipo_ifortXl2tNs.o
/usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../lib64/libpthread.a(libpthread.o): In function `sem_open':
(.text+0x75ad): warning: the use of `mktemp' is dangerous, better use `mkstemp'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(allreducef.o): In function `pmpi_allreduce':
allreducef.c:(.text+0x63): undefined reference to `PMPI_Allreduce'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(barrierf.o): In function `pmpi_barrier':
barrierf.c:(.text+0x7): undefined reference to `PMPI_Barrier'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(bcastf.o): In function `pmpi_bcast':
bcastf.c:(.text+0xe): undefined reference to `PMPI_Bcast'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(comm_rankf.o): In function `pmpi_comm_rank':
comm_rankf.c:(.text+0x7): undefined reference to `PMPI_Comm_rank'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(comm_sizef.o): In function `pmpi_comm_size':
comm_sizef.c:(.text+0x7): undefined reference to `PMPI_Comm_size'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(finalizef.o): In function `pmpi_finalize':
finalizef.c:(.text+0x5): undefined reference to `PMPI_Finalize'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(initf.o): In function `pmpi_init':
initf.c:(.text+0x18): undefined reference to `PMPI_Init'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(irecvf.o): In function `pmpi_irecv':
irecvf.c:(.text+0x1a): undefined reference to `PMPI_Irecv'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(isendf.o): In function `pmpi_isend':
isendf.c:(.text+0x1a): undefined reference to `PMPI_Isend'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(reducef.o): In function `pmpi_reduce':
reducef.c:(.text+0x6d): undefined reference to `PMPI_Reduce'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(register_datarepf.o): In function `pmpi_register_datarep':
register_datarepf.c:(.text+0x67): undefined reference to `i_malloc'
register_datarepf.c:(.text+0xbe): undefined reference to `PMPI_Register_datarep'
register_datarepf.c:(.text+0xca): undefined reference to `i_free'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(setbot.o): In function `mpirinitc_':
setbot.c:(.text+0x11): undefined reference to `MPI_F_STATUS_IGNORE'
setbot.c:(.text+0x18): undefined reference to `MPI_F_STATUSES_IGNORE'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpigf.a(waitf.o): In function `pmpi_wait':
waitf.c:(.text+0x29): undefined reference to `MPI_F_STATUS_IGNORE'
waitf.c:(.text+0x3a): undefined reference to `PMPI_Wait'
[yiotis@andreas 2p20d]$
Any ideas on what I am doing wrong here?
Thank you
Andreas
TimP
Honored Contributor III
The Intel MPI mpif90 wrapper invokes gfortran, and so doesn't support .o files built by ifort, or by icc with -ipo.
Perhaps you meant to use mpiifort. That would not be compatible with .o files built by gfortran, but it should help with other problems you mentioned.
That said, it's unusual in my experience to attempt -ipo with MPI builds, and it's not unusual for Fortran applications to perform best with all interprocedural stuff turned off, using -fno-inline-functions.
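As a quick sanity check, the -show option (if I recall correctly, it just prints the underlying command line without compiling anything) tells you which compiler each wrapper drives:

mpif90 -show       # should report gfortran as the back-end compiler
mpiifort -show     # should report ifort

# then rebuild with the Intel wrapper
mpiifort -132 -O3 *.f *.f90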
IDZ_A_Intel
Employee
Hello Andreas,
The problem is due to IPO, because -fast implies -O3 -ipo -static. Static linking is supported for Intel MPI, so just try replacing -fast with -O3 -static. I don't know why IPO is causing the problem, but I've not seen IPO speed up MPI codes by more than a few percent, so you won't be giving up much by compiling without -ipo.

Patrick Kennedy
Intel Developer Support

P.S. -- A race condition with Tim's reply put this thread in a funny state (it didn't show me as the author of the above reply).

I didn't catch that you were using the mpif90 wrapper. As Tim said, use mpiifort/mpiicc if you really want to try an IPO link. But be prepared for long link times; I've seen SPEC MPI2007 codes take an hour or more to link with -ipo.
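For example, roughly (a sketch for the same *.f and *.f90 sources you listed; I haven't tried it on your exact setup):

# keep static linking but skip the IPO link step
mpiifort -132 -O3 -static *.f *.f90

# or attempt the full IPO link, and expect it to take a while
mpiifort -132 -fast *.f *.f90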

Patrick
Dmitry_K_Intel2
Employee
Hi Andreas,

As noted before, you need to use the wrappers for the Intel compilers: mpiifort, mpiicc, mpiicpc...

The '-fast' option is a bit dangerous for MPI applications because all libraries are linked statically, and you may (or may not) have problems with fast interconnects. It's equivalent to:

"-xHOST -O3 -ipo -no-prec-div -static"

(HOST is a code name for the platform.) It's much better to use:

"-xHOST -O3 -ip -no-prec-div"

If you need to link the Intel MPI library statically, you can use the '-static_mpi' option.
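For your command line that would look roughly like this (a sketch; please double-check the options against the documentation for your compiler version):

# dynamic linking by default; only the Intel MPI library is linked statically
mpiifort -132 -xHOST -O3 -ip -no-prec-div -static_mpi *.f *.f90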

Regards!
Dmitry
andreas0674
Beginner
Thank you all for your responses.
You are right; I was actually using the mpif90 wrapper with the environment variable I_MPI_F90 set to ifort.
Obviously, that was wrong.
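In other words, what I had been doing was essentially:

export I_MPI_F90=ifort            # forcing the GNU wrapper to call ifort
mpif90 -132 -fast -O3 *.f *.f90

instead of simply calling mpiifort.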
I understand your concerns about using the -fast switch, but I have seen a significant speedup when
using a version of OpenMPI (compiled with ifort and icc) and compiling my program with -fast rather than with only -O3.

I have switched to the mpiifort wrapper and now the compilation stops with a different error.
[user@andreas 2p20d]$ mpiifort -132 -O3 -fast *.f *.f90
Warning: the -fast option forces static linkage method for the Intel MPI Library.
ipo: warning #11020: unresolved __rela_iplt_end
Referenced in libc.a(elf-init.o)
ipo: warning #11020: unresolved __rela_iplt_start
Referenced in libc.a(elf-init.o)
ipo: remark #11000: performing multi-file optimizations
ipo: remark #11005: generating object file /tmp/ipo_ifortkvtaRV.o
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpi.a(mpid_nem_lmt_vmsplice.o): In function `MPID_nem_lmt_vmsplice_initiate_lmt':
mpid_nem_lmt_vmsplice.c:(.text+0x1093): warning: the use of `tempnam' is dangerous, better use `mkstemp'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpi.a(dapl_module_util.o): In function `MPID_nem_dapl_module_util_get_ia_addr':
dapl_module_util.c:(.text+0x4e5b): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../lib64/libpthread.a(libpthread.o): In function `sem_open':
(.text+0x75ad): warning: the use of `mktemp' is dangerous, better use `mkstemp'
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpi.a(tcp_init.o): In function `MPID_nem_tcp_get_business_card':
tcp_init.c:(.text+0x716): warning: Using 'gethostbyaddr' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpi.a(simple_pmi.o): In function `PMI_Init':
simple_pmi.c:(.text+0x6798): warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/myprogs/opt/intel/impi/4.0.0.028/intel64/lib/libmpi.a(ofa_init.o): In function `load_ibv_library':
ofa_init.c:(.text+0x3fc): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
ld: dynamic STT_GNU_IFUNC symbol `strcmp' with pointer equality in `/usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../lib64/libc.a(strcmp.o)' can not be used when making an executable; recompile with -fPIE and relink with -pie
It seems that the problem now is with the system glibc, which does not support static linking.
I found that this has been reported as a bug on Red Hat Bugzilla.
So I have to wait for a newer version of the glibc library to try again.

In fact I get the same error when I try to statically link with OpenMPI libraries (built with ifort) on this machine.

All the other flags that you suggested work fine, and the flags
"-xHOST -O3 -ipo -no-prec-div" produce slightly faster code.

Andreas



Dmitry_K_Intel2
Employee
I'd recommend not using a statically linked libc, especially if your application will be used by other customers on other OSes.
To "-xHOST -O3 -ipo -no-prec-div" you could add '-static-intel': all libraries provided by Intel will be linked statically, while all others will be linked dynamically.
Give it a try; this might be the solution for you.
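For example, with the same source files as in the earlier posts:

# Intel-provided libraries linked statically, system libraries such as glibc dynamically
mpiifort -132 -xHOST -O3 -ipo -no-prec-div -static-intel *.f *.f90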

Regards!
Dmitry