Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Pardiso not thread safe

Not thread safe.

1. Parallel right-hand sides (RHS): one thread solves one RHS, concurrently. Since the RHS are always the same, the results should be exactly the same on every run. However, that is not the case in reality.

If we run with a single thread (numThreads=1), the results are always correct.

Parts of the source code (based on the test program downloaded from your web site). Refer to the attachment for the complete code.

std::cout << "\n\n\n============================= Parallel RHS =========================\n";
#pragma omp parallel num_threads(numThreads)
{
    int tid = omp_get_thread_num();
    PARDISO (pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
             &idum, &nrhs, iparm, &msglvl, &B[5*tid], &X[5*tid], &error);
}

testPardiso: main.cpp
g++ -g -fopenmp -Ipath_to/IntelMKL/10.3.9/include \
-o $@ $< -Wl,--start-group \
path_to/IntelMKL/10.3.9/lib/em64t/libmkl_intel_lp64.a \
path_to/IntelMKL/10.3.9/lib/em64t/libmkl_core.a \
path_to/IntelMKL/10.3.9/lib/em64t/libmkl_gnu_thread.a \
-Wl,--end-group -lpthread /usr/lib/

#uname -a
#Linux ubuntu 2.6.35-32-generic #66-Ubuntu SMP Mon Feb 13 21:04:32 UTC 2012 x86_64 GNU/Linux
#gcc -v
#Using built-in specs.
#Target: x86_64-linux-gnu
#gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5.1)

============================= Parallel RHS =========================

Solve completed ...
The solution of the system is:
x [0] = -0.522321 5 = -0.522321 10 = 0.477679 15 = -0.522321
x [1] = -0.008929 6 = -0.008929 11 = 0.991071 16 = -0.008929
x [2] = 1.220982 7 = 1.220982 12 = 2.220982 17 = 1.220982
x [3] = -0.504464 8 = -0.504464 13 = 0.495536 18 = -0.504464
x [4] = -0.214286 9 = -0.214286 14 = 0.785714 19 = -0.214286

#Note: sometimes it ran okay (gave correct answers), but the results are not repeatable

3 Replies


From the code there, it looks like the PARDISO structures pt, iparm, etc. are shared by the different threads. Each thread must have its own copies of these structures.


Hi Xirenwang,

From the code posted here it is not clear whether the "nrhs" variable was adjusted properly. When PARDISO is called, it should be equal to the actual number of right-hand sides to solve, so in this case it should be equal to 5 (except, possibly, for the last thread). Could you please provide the whole test case so we can make sure PARDISO has been called properly?
Could you please also explain the value of such an approach? PARDISO has its own parallelization mechanism for multiple right-hand sides, so there is no need to do this manually. One can simply set the number of threads via mkl_set_num_threads, for example, and then call PARDISO.

Best regards,
Dmitry Butyugin.


Some more clarification that may be helpful for using these functions. There are four usage models of the PARDISO solving step here:

1. Calling PARDISO with a single right-hand side. Works fine.

2. Calling PARDISO from a parallel region, supplying each thread with one right-hand side and the same handle. This indeed works incorrectly because of the shared handle. BTW, our documentation does not say PARDISO is thread safe when called with the same handle from multiple threads.

3. Calling PARDISO from sequential code with multiple right-hand sides. This uses PARDISO's internal parallelization and works fine, as expected. This is the suggested usage model for multiple right-hand sides.

4. Calling PARDISO from a parallel region with the _same_ multiple right-hand sides. This code is incorrect, and it would not work correctly even if PARDISO were thread safe, simply because there is a data race on the output (solution) vector.

Users should only use the 1st and 3rd usage models. If a user wants to solve systems with multiple right-hand sides, he or she should consider gathering the right-hand sides into a single array and calling only one instance of the PARDISO solving step with multiple right-hand sides. If gathering the right-hand sides is impossible, the user must make sure that only one instance of PARDISO at a time is running the solving step with a given handle.

