
Mix and match Intel 19 and GCC 8: OMP problems

fpultar

I have an application written in C++, parallelized with OpenMP, and preferably compiled with GCC/8.2.0. It calls a library that is written in Fortran, also parallelized with OpenMP, and preferably compiled with the Intel toolchain against an MKL backend. MKL can alternatively be replaced with OpenBLAS.
Bear in mind that I am on a cluster and have limited control over what I can install.
I have successfully compiled, linked, and tested everything using only GCC with OpenBLAS instead of MKL. However, test studies on minimal examples suggest the Intel/MKL combination would be faster.
I have considered two options for getting the Intel/MKL library into my code:

(1) Compiling everything with the Intel toolchain (that is the subject of a separate post: https://community.intel.com/t5/Intel-C-Compiler/Compilation-error-internal-error-20000-7001/m-p/1365208#M39785)
(2) Sticking with GCC/8.2.0 for my C++ code and linking dynamically against the precompiled library built with Intel/MKL (link line sketched below). For this, LD_LIBRARY_PATH is expanded to also include:

/cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64:/cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64
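
For option (2), the link step looks roughly like this; note this is reconstructed from the ldd output below rather than copied verbatim from my build files, and MKL plus the Intel runtime libraries come in transitively through libxtb:

g++ -fopenmp main.cpp -o app \
    -L/cluster/project/igc/fpultar/intel-19.1.0/xtb/build -lxtb \
    -lgsl -lgslcblas -lfftw3 -lfftw3_omp -lz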

"icc --version" gives me: "icc (ICC) 19.1.0.166 20191121"; operating system is CentOS 7

ldd of my app gives the printout below, suggesting linking was successful:

linux-vdso.so.1 => (0x00007fff2d7bb000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002ab227c25000)
libgsl.so.25 => /cluster/apps/gcc-8.2.0/gsl-2.6-x4nsmnz6sgpkm7ksorpmc2qdrkdxym22/lib/libgsl.so.25 (0x00002ab227e41000)
libgslcblas.so.0 => /cluster/apps/gcc-8.2.0/gsl-2.6-x4nsmnz6sgpkm7ksorpmc2qdrkdxym22/lib/libgslcblas.so.0 (0x00002ab228337000)
libfftw3.so.3 => /cluster/apps/gcc-8.2.0/fftw-3.3.9-w5zvgavdpyt5z3ryppx3uwbfg27al4v6/lib/libfftw3.so.3 (0x00002ab228595000)
libz.so.1 => /lib64/libz.so.1 (0x00002ab228a36000)
libfftw3_omp.so.3 => /cluster/apps/gcc-8.2.0/fftw-3.3.9-w5zvgavdpyt5z3ryppx3uwbfg27al4v6/lib/libfftw3_omp.so.3 (0x00002ab228c4c000)
libxtb.so.6 => /cluster/project/igc/fpultar/intel-19.1.0/xtb/build/libxtb.so.6 (0x00002ab228e53000)
libstdc++.so.6 => /cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-8.2.0-6xqov2fhvbmehix42slain67vprec3fs/lib64/libstdc++.so.6 (0x00002ab22a16d000)
libm.so.6 => /lib64/libm.so.6 (0x00002ab22a4f1000)
libgomp.so.1 => /cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-8.2.0-6xqov2fhvbmehix42slain67vprec3fs/lib64/libgomp.so.1 (0x00002ab22a7f3000)
libgcc_s.so.1 => /cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-8.2.0-6xqov2fhvbmehix42slain67vprec3fs/lib64/libgcc_s.so.1 (0x00002ab22aa21000)
libc.so.6 => /lib64/libc.so.6 (0x00002ab22ac39000)
/lib64/ld-linux-x86-64.so.2 (0x00002ab227a01000)
libmkl_intel_lp64.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002ab22b007000)
libmkl_intel_thread.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_intel_thread.so (0x00002ab22bb73000)
libmkl_core.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_core.so (0x00002ab22e0df000)
libifcore.so.5 => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libifcore.so.5 (0x00002ab2323ff000)
libimf.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libimf.so (0x00002ab232763000)
libsvml.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libsvml.so (0x00002ab232d01000)
libirng.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libirng.so (0x00002ab234688000)
libiomp5.so => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libiomp5.so (0x00002ab2349f2000)
libintlc.so.5 => /cluster/apps/intel/parallel_studio_xe_2020_r0/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64/libintlc.so.5 (0x00002ab234de2000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002ab235059000)


With these environment variables set:

export OMP_NUM_THREADS=16;
export OPENBLAS_NUM_THREADS=16; # for good measure
export MKL_NUM_THREADS=16;
export VECLIB_MAXIMUM_THREADS=16;
export NUMEXPR_NUM_THREADS=16;


The program successfully finds the library at runtime and executes code that only prints to the console. However, as soon as it reaches parallelized code, the program crashes. If the environment variables above are set to "1", everything runs fine.
It is noteworthy that I could also compile, link, and execute a minimal example in which the main C++ program does not use OpenMP; in that case, I can use the MKL-backed library with more than one thread. A reduced version of the failing pattern is sketched below.
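
Stripped of all application logic, the failing pattern reduces to something like the following. The real code calls into libxtb rather than BLAS directly; cblas_dgemm is just a stand-in for an MKL-threaded entry point, and MKLROOT is assumed to point at the MKL installation:

#include <mkl.h>    // cblas_dgemm (stand-in for the actual library call)
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 256;
#pragma omp parallel
    {
        // Per-thread buffers, so the only shared state is in the runtimes.
        std::vector<double> a(n * n, 1.0), b(n * n, 1.0), c(n * n, 0.0);
        // This region is managed by GCC's libgomp, while the MKL call
        // below is serviced by libiomp5 -- two OpenMP runtimes in one process.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a.data(), n, b.data(), n,
                    0.0, c.data(), n);
        std::printf("thread %d: c[0] = %g\n", omp_get_thread_num(), c[0]);
    }
    return 0;
}

Built and run along these lines:

g++ -fopenmp repro.cpp -o repro -L$MKLROOT/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl
OMP_NUM_THREADS=16 ./repro   # the pattern that crashes in my application
OMP_NUM_THREADS=1  ./repro   # the pattern that works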
I suspect that the OpenMP-parallelized C++ program pulls in GCC's libgomp, which collides with the libiomp5 runtime used by the Intel-compiled, MKL-backed library. Setting "MKL_THREADING_LAYER=GNU" or "MKL_THREADING_LAYER=INTEL" did not help either.
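
If I read the MKL documentation correctly, MKL_THREADING_LAYER is only honored when MKL is loaded through the single dynamic library libmkl_rt.so; since libmkl_intel_thread.so is linked directly here (see the ldd output above), the threading layer is fixed at link time and the variable has no effect. That both runtimes really are loaded in one process is easy to confirm:

ldd app | grep -E 'libgomp|libiomp5'
# libgomp.so.1 => ...gcc-8.2.0.../libgomp.so.1    (from the GCC side)
# libiomp5.so => ...compilers_and_libraries_2020.0.166.../libiomp5.so    (via libxtb/MKL)

If rebuilding the library were an option, I would try MKL's GNU threading layer so that only libgomp ends up in the process. The fragment below is my assumption based on the MKL link-line advisor, not something I have been able to test:

-lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lpthread -lm -ldl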

Do you have any suggestions for what I could try next?

Best,
Felix
