Intel® oneAPI Math Kernel Library

Error MKLMPI_Get_wrappers, cluster MKL using gfortran, OpenMPI

Nick_Papior
Beginner

I am trying to compile a program using the MKL (11.3, 2016.0.109) libraries with the gfortran (5.1.0) compiler and OpenMPI (1.8.5, compiled against gfortran 5.1.0).

I can successfully compile the program without any errors.

However, when executing my program I end up with this error:

Intel MKL FATAL ERROR: Cannot load symbol MKLMPI_Get_wrappers.

I have searched the Intel site for references to this issue, to no avail; the MKL reference manual for Fortran (https://software.intel.com/en-us/mkl-reference-manual-for-fortran) does not cover it.

For your information my compilation flags are these:

-Wl,--no-as-needed -L/opt/intel/mkl/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_lapack95_lp64 -lmkl_blas95_lp64 -lmkl_gf_lp64 -lmkl_core -lmkl_sequential

Which, as said, compiles fine. I have also tried the explicit link line suggested by the Intel MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor), which behaves the same way.

In addition I have tried passing --start-group and --end-group to the linker, to no avail.

I have also tried linking the generic BLACS library

-lmkl_blacs_lp64

in place of -lmkl_blacs_openmpi_lp64, which did not change anything.

To my knowledge I should be able to link MKL against gfortran, no?

Since it is a run-time error, I suspect some compatibility issue.

Note that the same flags work without problems when compiling against OpenMPI with the Intel compiler.

4 Replies
Roman_D_Intel1
Employee

Hi Nick,

Since only a part of the link line is available, I'm just guessing: the reason you are seeing this error is that you are linking to dynamic libraries, but there is no dynamic version of mkl_blacs_openmpi_lp64 in this MKL version. Depending on your other linker flags, the symbols from this library are either not picked up at all, or are included but not visible to dlsym().

You have two options: a) build a shared library via

ld -shared --whole-archive libmkl_blacs_openmpi_lp64.a --no-whole-archive -o libmkl_blacs_openmpi_lp64.so

or b) link to static libraries by adding

-Wl,-Bstatic -Wl,--start-group

and

-Wl,--end-group -Wl,-Bdynamic

around the list of MKL libraries. Please let me know if this worked for you.
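For reference, a complete link step combining the flags from the question with option b) might look as follows (the `mpifort` driver and object file name are illustrative, not taken from the thread; the MKL path is the one from the question):

```shell
# Sketch of option b): static MKL/BLACS wrapped in a linker group,
# everything else (libc, OpenMPI runtime, ...) linked dynamically.
mpifort myprog.o -o myprog \
  -Wl,--no-as-needed -L/opt/intel/mkl/lib/intel64 \
  -Wl,-Bstatic -Wl,--start-group \
    -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 \
    -lmkl_lapack95_lp64 -lmkl_blas95_lp64 \
    -lmkl_gf_lp64 -lmkl_core -lmkl_sequential \
  -Wl,--end-group -Wl,-Bdynamic
```

The --start-group/--end-group pair lets the linker resolve the circular dependencies among the static MKL archives, while -Bstatic/-Bdynamic restricts static linking to just that group.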

Nick_Papior
Beginner

Great, this worked! You were correct; I did not link statically.

To summarize,

1) linking with the dynamic Intel libraries produces this symbol table:

$> nm <exe> | grep -i blacs
0000000000ff9270 T BI_BlacsAbort
0000000000ff9100 T BI_BlacsErr
0000000000ff91c0 T BI_BlacsWarn
0000000000fef390 T blacs_abort_
0000000000fef390 W blacs_abort__
0000000000fef030 T blacs_barrier_
0000000000fef030 W blacs_barrier__
0000000000fef3d0 T blacs_exit_
0000000000fef3d0 W blacs_exit__
0000000000feee90 T blacs_freebuff_
0000000000feee90 W blacs_freebuff__
0000000000fef1a0 T blacs_get_
0000000000fef1a0 W blacs_get__
0000000000feeef0 T blacs_gridexit_
0000000000feeef0 W blacs_gridexit__
0000000000feefe0 T blacs_gridinfo_
0000000000feefe0 W blacs_gridinfo__
0000000000fee6f0 T blacs_gridinit_
0000000000fee6f0 W blacs_gridinit__
0000000000fee890 T blacs_gridmap_
0000000000fee890 W blacs_gridmap__
0000000000fef0a0 T blacs_pinfo_
0000000000fef0a0 W blacs_pinfo__
0000000000fef550 T blacs_pnum_
0000000000fef550 W blacs_pnum__
0000000000fef190 T blacs_setup_
0000000000fef190 W blacs_setup__
0000000000ff7450 T Cblacs2sys_handle
0000000000ff77d0 T Cblacs_abort
0000000000ff73e0 T Cblacs_barrier
0000000000ff7610 T Cblacs_get
0000000000ff72b0 T Cblacs_gridexit
0000000000ff7390 T Cblacs_gridinfo
0000000000ff6800 T Cblacs_gridinit
0000000000ff6a70 T Cblacs_gridmap
0000000000ff7520 T Cblacs_pinfo
0000000000ff7800 T Cblacs_pnum
0000000000ffe1c0 T Csys2blacs_handle
0000000000ffdff0 T MKL_BLACS_ALLOCATE
0000000000ffe030 T MKL_BLACS_Deallocate

while linking with the static Intel libraries leads to this:

$> nm <exe> | grep -i blacs
0000000001144650 T BI_BlacsAbort
00000000011444e0 T BI_BlacsErr
00000000011445a0 T BI_BlacsWarn
000000000113da60 T blacs_abort_
000000000113da60 W blacs_abort__
000000000113daa0 T blacs_exit_
000000000113daa0 W blacs_exit__
000000000113d5e0 T blacs_freebuff_
000000000113d5e0 W blacs_freebuff__
000000000113d870 T blacs_get_
000000000113d870 W blacs_get__
000000000113d640 T blacs_gridexit_
000000000113d640 W blacs_gridexit__
000000000113d730 T blacs_gridinfo_
000000000113d730 W blacs_gridinfo__
000000000113ce40 T blacs_gridinit_
000000000113ce40 W blacs_gridinit__
000000000113cfe0 T blacs_gridmap_
000000000113cfe0 W blacs_gridmap__
00000000036a5f60 b blacs_library_loader_lock
00000000036a5f64 b blacs_library_lock_flag
000000000113d780 T blacs_pinfo_
000000000113d780 W blacs_pinfo__
000000000113dc20 T blacs_pnum_
000000000113dc20 W blacs_pnum__
0000000001142830 T Cblacs2sys_handle
0000000001142bb0 T Cblacs_abort
00000000011429f0 T Cblacs_get
0000000001142700 T Cblacs_gridexit
00000000011427e0 T Cblacs_gridinfo
0000000001141c50 T Cblacs_gridinit
0000000001141ec0 T Cblacs_gridmap
0000000001142900 T Cblacs_pinfo
0000000001142be0 T Cblacs_pnum
0000000001148650 T Csys2blacs_handle
0000000001148470 T MKL_BLACS_ALLOCATE
00000000011484b0 T MKL_BLACS_Deallocate

which means the problem can be checked rather easily (notice that the blacs_library_* symbols do not exist in the failing executable).

But why are they not treated as needed symbols at link time? For instance, when I link against my own compiled ScaLAPACK, which is a static library, its symbols simply get added to the symbol table; why are the Intel libraries special in this sense?

Roman_D_Intel1
Employee

Glad to hear that it worked!

But why are they not treated as needed symbols at link time? For instance, when I link against my own compiled ScaLAPACK, which is a static library, its symbols simply get added to the symbol table; why are the Intel libraries special in this sense?

Dynamic MKL libraries do not have the BLACS symbols as a dependency; instead they use dlopen + dlsym to locate them at runtime via the MKLMPI_Get_wrappers function. This is part of a new MKL feature: support for custom MPI libraries (see https://software.intel.com/en-us/articles/using-intel-mkl-mpi-wrapper-with-the-intel-mkl-cluster-functions). This approach has its rough edges, like the one you've found: when you link a static BLACS library into an executable, the MKLMPI_Get_wrappers symbol is not visible to dlsym unless you export it explicitly (e.g. via the -rdynamic/--export-dynamic gcc/ld flag). In the dynamic BLACS library, on the other hand, this symbol is marked as exported, and thus everything works as expected.

Nick_Papior
Beginner

Great, thanks, that makes sense. :)
