Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

omp warning when using pardiso and metis threaded version

may_ka
Beginner

Hi,

When running PARDISO with iparm[1]=3 (C interface), I get this warning message:

```
OMP: Warning #97: Requested number of active parallel levels "-2147483648" is negative; ignored
```

 

The relevant OpenMP setting is "OMP_MAX_ACTIVE_LEVELS=2147483647", which should be a perfectly valid value for a 32-bit integer.

 

The MKL version is oneAPI 2021.2, the operating system is Linux, and the MKL interface is ILP64.

 

Any idea?

 

Best

Gennady_F_Intel
Moderator

Which OpenMP threading layer do you use? Checking with mkl_intel_thread.so linked against the test examples (MKL 2022), I see no such messages.


may_ka
Beginner

Hi

 

the link line is:

```
-Wl,--start-group \
	/opt/oneapi/mkl/2021.2.0/lib/intel64/libmkl_intel_ilp64.a \
	/opt/oneapi/mkl/2021.2.0/lib/intel64/libmkl_core.a \
	/opt/oneapi/mkl/2021.2.0/lib/intel64/libmkl_intel_thread.a -l iomp5 -l pthread -lm -ldl \
	-Wl,--end-group
```
Gennady_F_Intel
Moderator

What is the matrix type?

Checking the problem right now with mtype=-2 on a real workload, I see no problems here.



may_ka
Beginner

The matrix type is 2 (real and symmetric positive definite), and iparm[1]=3 (C interface: "... the parallel (OpenMP) version of the nested dissection algorithm ...").

Gennady_F_Intel
Moderator

Karl,

checking with mtype=2, I have had no success reproducing this:

```
icc --version : icc (ICC) 2021.5.0 20211109
echo $MKLROOT : /opt/intel/oneapi/mkl/2022.0.2
```

We need the test case to investigate further.


may_ka
Beginner

Hi,

thanks for the update.

I use g++:

```
g++ --version
g++ (GCC) 11.2.0
```

Compiling a stand-alone executable which calls PARDISO with

```
g++ -O3 -std=c++20 -c main.cpp -o main.o -I /opt/intel/oneapi/mkl/2021.2.0/include/intel64/ilp64 -I /opt/intel/oneapi/mkl/2021.2.0/include/
g++ -static -L /opt/intel/oneapi/compiler/2021.2.0/linux/compiler/lib/intel64_lin -o exe main.o -Wl,--start-group /opt/intel/oneapi/mkl/2021.2.0/lib/intel64/libmkl_intel_ilp64.a /opt/intel/oneapi/mkl/2021.2.0/lib/intel64/libmkl_core.a /opt/intel/oneapi/mkl/2021.2.0/lib/intel64/libmkl_intel_thread.a -l iomp5 -l pthread -lm -ldl -Wl,--end-group
/usr/bin/ld: /opt/intel/oneapi/mkl/2021.2.0/lib/intel64/libmkl_core.a(mkl_memory_patched.o): in function `mkl_serv_set_memory_limit':
mkl_memory.c:(.text+0x5d1): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
```

and this OpenMP environment:

```
printenv | egrep "OMP|KMP"
OMP_PLACES=cores
OMP_PROC_BIND=true
OMP_DYNAMIC=FALSE
KMP_AFFINITY=granularity=core,scatter
OMP_MAX_ACTIVE_LEVELS=2147483647
OMP_NUM_THREADS=18
OMP_STACKSIZE=2000M
```

produces this screen output:

```
OMP: Warning #182: OMP_PLACES: ignored because KMP_AFFINITY has been defined
OMP: Warning #182: OMP_PROC_BIND: ignored because KMP_AFFINITY has been defined
OMP: Warning #97: Requested number of active parallel levels "-2147483648" is negative; ignored.
```

Hope this helps.

 
