I am compiling a Fortran program on a Mac that links to the MKL libraries. I get warnings like the following when I use the -ipo option. What do these warnings mean, and should I be concerned about them?
ipo: warning #11021: unresolved _mkl_spblas_scsr0ttunc__mmout_
Referenced in libmkl_intel_thread.a(scsr0_
ipo: warning #11021: unresolved _mkl_spblas_scsr0ttunf__mmout_
Referenced in libmkl_intel_thread.a(scsr0_
ipo: warning #11021: unresolved _mkl_spblas_scsr0ttuuc__mmout_
Referenced in libmkl_intel_thread.a(scsr0_
ipo: warning #11021: unresolved _mkl_spblas_scsr0ttuuf__mmout_
Referenced in libmkl_intel_thread.a(scsr0_
ipo: warning #11021: unresolved _MKL_malloc
Referenced in libmkl_blacs_custom_lp64.a(
ipo: warning #11021: unresolved _MKL_free
Referenced in libmkl_blacs_custom_lp64.a(
ipo: warning #11021: unresolved _MKL_calloc
Referenced in libmkl_blacs_custom_lp64.a(
Adding to Kevin's query: the MPI library being used is OpenMPI 3.1.2 compiled with the icc/ifort 2020 compilers. The OS is macOS Catalina.
I've attached a self-contained source code example that demonstrates the issue. Please read the source/README file.
The configuration of OpenMPI 3.1.2 is:
./configure --prefix /usr/local/openmpi31_Intel20 CC=icc C99=icc CXX=icpc F77=ifort FC=ifort CFLAGS="-m64 -O2" C99FLAGS="-m64 -O2" CXXFLAGS="-m64 -O2" FFLAGS="-m64 -O2" FCFLAGS="-m64 -O2" LDFLAGS=-m64 --without-tm --without-psm --enable-mpirun-prefix-by-default --without-verbs --enable-static --disable-shared
Note also that when running the test program with OpenMP threads (i.e., OMP_NUM_THREADS=2) via
mpirun --oversubscribe -n 8 css_test
an OMP: Info #274 statement appears at the symbolic factorization phase. This happens only with the Intel 2020 compilers and is also seen on Linux CentOS 7 with Intel MPI.
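For reference, one way to set the thread count for all ranks is to export it and forward it with mpirun's -x option; a sketch only (the binary name is the test program above, and -x forwarding may be unnecessary on a single node where the variable is already exported):
export OMP_NUM_THREADS=2
mpirun -x OMP_NUM_THREADS --oversubscribe -n 8 css_test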
Dear Gennady or Kirill, if you could please take a look at this request, that would be great.
Thank you for your time.
Marcos
Hello,
It is not my area of expertise and I don't have OpenMPI on a Mac handy, but could you try adding the "-Wl,-rpath,${MKLROOT}/lib" option to your link line, before listing the static MKL libraries? On Linux you would also need to wrap the MKL libraries in "-Wl,--start-group" and "-Wl,--end-group".
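On Linux that grouping would look roughly like this (a sketch only, assuming the intel64 layout and Intel threading; adjust paths and libraries to your configuration):
-Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -liomp5 -lpthread -lm -ldl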
Best,
Kirill
Thanks. We'll try it. Our Mac user is out this week, so we'll let you know how it goes next week.
Hi Kirill, thank you for replying. I tried using the -Wl,-rpath,${MKLROOT}/lib flag but it doesn't make a difference. I checked the link line advisor again, and it does not ask you to add this on macOS when linking statically, although the only MPI library option it offers is MPICH.
We have been using the same link line for OpenMPI. So, you use MPICH on your Mac and don't see this issue? Could you test compiling the sample program I posted?
Thank you for your time,
Marcos
Hi,
I currently have problems with the Mac machines. I tried your test with MPICH and the following link line:
mpifort -m64 -O2 -ipo -no-wrap-margin -fpp -I/${MKLROOT}/include -qopenmp -qopenmp-link static -liomp5 -o css_test main.o -Wl,-rpath ${MKLROOT}/lib/libmkl_intel_lp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a ${MKLROOT}/lib/libmkl_blacs_mpich_lp64.a -lpthread -lm -ldl
I ended up with far fewer warnings (one about gfortran, shown below, plus a couple of unresolved externals still related to MKL), and when I removed "-ipo" I had only
ld: file not found: /usr/local/gfortran/lib/libgfortran.3.dylib for architecture x86_64
because I don't have gfortran.
As for the link line advisor, I suspect that it might not give a correct link line. At least in all our examples on Mac (even without MPI) we have the rpath option.
Best,
Kirill
Hi Kirill, thanks for checking this. The gfortran warning you see is because the Intel-compiled code is requesting a gfortran library? How does that work?
Also, I tried adding -Wl,-rpath when linking our main code; the warnings are reduced but have become more cryptic:
mpifort -m64 -O2 -ipo -no-wrap-margin -fpp -DGITHASH_PP=\"FDS6.7.5-21-g90c3fb25a-dirty-master\" -DGITDATE_PP=\""Mon Aug 24 12:58:05 2020 -0400\"" -DBUILDDATE_PP=\""Aug 25, 2020 17:03:47\"" -DCOMPVER_PP=\""Intel ifort 19.1.1.216"\" -DWITH_MKL -I/opt/intel20/compilers_and_libraries_2020.1.216/mac/mkl/include -static-intel -qopenmp -qopenmp-link static -o fds_mpi_intel_osx_64 prec.o cons.o devc.o type.o data.o mesh.o func.o gsmv.o smvv.o rcal.o turb.o soot.o ieva.o pois.o scrc.o evac.o geom.o radi.o part.o vege.o ctrl.o samr.o dump.o hvac.o mass.o read.o wall.o fire.o divg.o velo.o pres.o init.o main.o -Wl,-rpath /opt/intel20/compilers_and_libraries_2020.1.216/mac/mkl/lib/libmkl_intel_lp64.a /opt/intel20/compilers_and_libraries_2020.1.216/mac/mkl/lib/libmkl_core.a /opt/intel20/compilers_and_libraries_2020.1.216/mac/mkl/lib/libmkl_intel_thread.a /opt/intel20/compilers_and_libraries_2020.1.216/mac/mkl/lib/libmkl_blacs_custom_lp64.a -lpthread -lm -ldl
ipo: warning #11021: unresolved _dscal_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _dcopy_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _daxpby_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _ddot_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _daxpy_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _pardiso_d_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _pardiso_s_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _cluster_sparse_solver_d_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
ipo: warning #11021: unresolved _cluster_sparse_solver_s_
Referenced in /var/folders/ws/57zzpytn345gt57n5v_5p90h001l0l/T/ipo_ifortSM7Bdz.o
But then the link fails with unresolved externals for the MKL routines we call from within our source:
Undefined symbols for architecture x86_64:
"_cluster_sparse_solver_d_", referenced from:
_complex_geometry_mp_ccregion_density_implicit_ in ipo_ifortSM7Bdz2.o
_complex_geometry_mp_symblu_zz_ in ipo_ifortSM7Bdz2.o
_complex_geometry_mp_potential_flow_init_ in ipo_ifortSM7Bdz2.o
_scrc_mp_scarc_setup_cluster_ in ipo_ifortSM7Bdz2.o
_scrc_mp_scarc_method_cluster_ in ipo_ifortSM7Bdz2.o
_scrc_mp_scarc_relaxation_ in ipo_ifortSM7Bdz2.o
_globalmatrix_solver_mp_glmat_solver_h_ in ipo_ifortSM7Bdz4.o
...
"_cluster_sparse_solver_s_", referenced from:
_scrc_mp_scarc_setup_cluster_ in ipo_ifortSM7Bdz2.o
_scrc_mp_scarc_method_cluster_ in ipo_ifortSM7Bdz2.o
_scrc_mp_scarc_relaxation_ in ipo_ifortSM7Bdz2.o
...
So using -Wl,-rpath with static linking doesn't seem to help. Could you actually build an executable with -ipo and MPICH?
Thank you!
Marcos
Hi,
OK, I'm afraid I gave you non-working advice about rpath; there was something wrong with the test environment I used.
I am starting to believe that the ipo-related warnings are not specific to a particular MPI: I got the same warnings when I ran an MPICH-based example for the cluster sparse solver. They may not be MPI-specific at all.
The compiler team said that these warnings most likely indicate a potential performance loss, because some IPO-analysis optimizations are not performed, and that they are otherwise harmless. So I suggest you don't worry about them.
If I get to know the workaround to suppress the warnings, I'll let you know.
As for the OpenMP runtime warnings about deprecated functionality: we're aware of this issue; it is related to the new OpenMP 5.0 standard. There was a question about these warnings on this forum with a workaround for suppressing them (until we fix their source in one of the next MKL releases).
Best,
Kirill
Kirill, I think we might have found the culprit. The MKL user guide for macOS has examples where the static libraries have to be stated in the link line two or three times. This sounds very strange, but it is probably the only way to assert precedence among all the different libraries. See here for static compilation:
https://registrationcenter.intel.com/irc_nas/2690/mkl_userguide_mac.pdf
Adding the static libraries three times seems to take care of the issue. Try it on your computer.
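For reference, a sketch of what a link line with the repetition could look like (based on the link line earlier in this thread; the leading compiler flags and object files are omitted with "...", and the same set of archives simply appears three times):
mpifort ... main.o \
${MKLROOT}/lib/libmkl_intel_lp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a ${MKLROOT}/lib/libmkl_blacs_custom_lp64.a \
${MKLROOT}/lib/libmkl_intel_lp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a ${MKLROOT}/lib/libmkl_blacs_custom_lp64.a \
${MKLROOT}/lib/libmkl_intel_lp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a ${MKLROOT}/lib/libmkl_blacs_custom_lp64.a \
-lpthread -lm -ldl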
Something to pass along to whoever maintains the MKL Link Line Advisor online: the advisor lists the libraries only once on the link line. See the attached picture.
Thank you for taking the time to help with this. I'm glad the other OpenMP warning will be fixed in the next MKL release.
Best,
Marcos
Marcos,
You're right, and I'm glad the problem is solved. The compiler team also confirmed that your solution is the correct one and that there is nothing that can be done on the MKL side.
Our MKL Link Line Advisor does not consider "-ipo" and hence does not need this, but I agree it would help if there were a note somewhere about how it should work with this option.
Just a small correction: I'm not sure we'll fix the OpenMP warnings in the next release; it is more likely to be one of the following releases. As recommended in the post https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/PARDISO-compatibility-problem-with-new-version-OPENMP/td-p/1169208, the suggested workaround is to call kmp_set_warnings_off().
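For what it's worth, this is just a plain subroutine call made once, early in the program, before the first solver call; a minimal sketch (the program name and placement are illustrative only, and the routine requires the Intel OpenMP runtime, libiomp5):
program css_test
  implicit none
  ! Intel OpenMP extension (libiomp5): suppress the runtime's deprecation warnings.
  call kmp_set_warnings_off()
  ! ... MPI initialization, PARDISO / cluster_sparse_solver calls, etc. ...
end program css_test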
Best,
Kirill
Hi Kirill, thank you. About the OpenMP workaround: we also have compilation targets that use gfortran instead of ifort. It seems gfortran does not recognise these kmp_set_warnings_*() functions, and the build fails on those targets. Are these routines specific to ifort and iomp5?
Regards,
Marcos
Hi Marcos,
kmp_set_warnings_*() is specific to the Intel OpenMP runtime (libiomp5). I believe that whenever you see something with a "kmp" prefix related to OpenMP, it is Intel-specific (the KMP_AFFINITY variable, for example).
Best,
Kirill
Thank you, Kirill, that's what I thought. We'll wait for the update that fixes the OpenMP info statements.
For the time being I'll check whether these functions can be called only for the Intel targets, using C preprocessor logic.
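One possible way to do that (a sketch only, assuming the source already goes through the preprocessor via -fpp and that keying off the compiler's predefined __INTEL_COMPILER macro is acceptable; the fragment would go near the start of the program):
#if defined(__INTEL_COMPILER)
   ! Intel-only: suppress the OpenMP runtime's deprecation warnings.
   call kmp_set_warnings_off()
#endif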
best,
Marcos
Hi Kirill and Gennady, I tested a case today with the code compiled with oneAPI 2021 and noted that we still get the
OMP: Info #275: omp_get_nested routine deprecated, please use omp_get_max_active_levels instead.
...
warnings in the LU phase of the cluster solver when running with more than one OMP thread.
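For context, the deprecation being flagged is the OpenMP 5.0 replacement of the nested-parallelism query; the call comes from inside MKL, not from our source, but the equivalent query in the new API would roughly be (a sketch, program name illustrative only):
program nested_query
  use omp_lib
  implicit none
  logical :: nested
  ! Deprecated query (what the runtime is warning about): nested = omp_get_nested()
  ! OpenMP 5.0 replacement:
  nested = (omp_get_max_active_levels() > 1)
  print *, 'nested parallelism enabled: ', nested
end program nested_query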
I just wanted to check with you whether there have been any developments on this. So far we have, for the most part, been able to adopt oneAPI successfully in our workflow.
Best Regards,
Marcos
The fix is targeted for the next update, 2021 u2, and will be released very soon. We will keep this thread updated.
Hi,
The Intel oneAPI 2021 update 2 (u2) release is available for download; please try running the sample with it, and let us know if you face any issues.
Link to download the Intel oneAPI toolkit:
https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit/download.html
Regards
Rajesh.
Hi,
Could you please let us know whether your issue has been resolved?
Regards
Rajesh.
We have not had a chance to install the new version. We'll try this week. Thanks.
Hi Rajesh, I confirm that the OMP: Info #275 issue has been resolved with oneAPI update 2.
Thank you,
Marcos