Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Inconsistent results using PARDISO...

RossK
Beginner
Hi, I'm having a few issues getting consistent results when using PARDISO in parallel. I'm using MKL version 10.3 update 11 (32 bit).
I'm solving a symmetric indefinite system, so using mtype=-2. In general I've been using the default solver options via iparm(1)=0.
Using these options, I'm getting a different solution not only from run to run, but also for repeated solves with the same factorization and RHS vector. The solutions seem to differ in the first or second decimal place... sometimes worse!
I've found that if I switch the fill-in reducing ordering to the minimum degree algorithm with iparm(2)=0, I get more consistent results, but the solution still differs at around the 10th decimal place. This isn't ideal for me.
Further, if I set OMP_NUM_THREADS=1 (i.e. using only one thread), I get completely consistent results every run. For any other number of threads, I get these problems.
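For reference, my call sequence looks roughly like the sketch below (simplified, with placeholder array names and sizes rather than my actual code):

! Simplified sketch of my call sequence (placeholder data, not my real code);
! a/ia/ja hold the upper triangle of the symmetric matrix in 1-based CSR format.
program pardiso_sketch
  implicit none
  integer, parameter :: n = 5, nnz = 9
  integer            :: ia(n+1), ja(nnz), perm(n)
  real(8)            :: a(nnz), b(n), x(n)
  integer            :: pt(64)          ! internal handle; must be INTEGER*8 on 64-bit builds
  integer            :: iparm(64), maxfct, mnum, mtype, phase, nrhs, msglvl, error

  ! ... fill a, ia, ja and b here ...

  pt    = 0                             ! handle must be zeroed before the first call
  iparm = 0                             ! iparm(1) = 0 -> PARDISO fills iparm(2:64) with defaults
  mtype  = -2                           ! real symmetric indefinite
  maxfct = 1;  mnum = 1;  nrhs = 1;  msglvl = 0

  phase = 13                            ! analysis + factorization + solve in one call
  call pardiso(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, nrhs, &
               iparm, msglvl, b, x, error)
  if (error /= 0) print *, 'PARDISO returned error ', error

  phase = -1                            ! release internal memory
  call pardiso(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, nrhs, &
               iparm, msglvl, b, x, error)
end program pardiso_sketch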
My compile flags (for ifort) are: -O3 -xSSE4.1 -openmp -ipo -parallel -free
I've tried adding the compiler options -fp-model precise -fp-model source, but they haven't made any noticeable difference.
I'm stumped - anyone have any suggestions? I've attached the matrix in sparse symmetric storage as well as the RHS vector.
Cheers for any help!
EDIT: I've been searching a lot more since posting this and found that ill-conditioning of the matrix and OpenMP reduction operations are to blame, and that the only way to expect bit-for-bit agreement is to use sequential mode.

However, these variations are still quite large for the example I gave. I'd also be interested in why METIS nested dissection gives such a different result from the minimum degree algorithm, which seems less sensitive to the problem. And I'd be happy if someone points out a stupid error I've made!
5 Replies
Zhang_Z_Intel
Employee
Hi, thanks for posting your question. I will take a look at your code sample and get back to you later. But at the same time, I'd like to point out that Intel MKL plans to provide conditional bitwise reproducibility in the 11.0 release. Please see this article for more information: http://software.intel.com/en-us/articles/conditional-bitwise-reproducibility/

Besides ill-conditioning of matrices and multithreading, another factor contributing to inconsistent results on the same system is memory alignment. It's strongly suggested that you always align memory allocations to a certain boundary (e.g. 64 bytes). Please refer to an earlier related Knowledge Base article: http://software.intel.com/en-us/articles/getting-reproducible-results-with-intel-mkl/

Thanks,
Zhang
RossK
Beginner
Thanks for the reply. While I wait I'll read up on the memory alignment and keep an eye out for the 11.0 release. Am I right that I just need to allocate the arrays with ptr = mkl_malloc(size, 64) and free them with mkl_free_buffers and mkl_free(), rather than using the Fortran allocate/deallocate?
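Something like the sketch below is what I have in mind (I'm guessing at the exact Fortran interface here; I'm assuming the ISO_C_BINDING-style mkl_malloc/mkl_free wrappers from the mkl_service module so the aligned buffer can be used as a normal Fortran array):

! Sketch of what I have in mind, not working code; the mkl_service interfaces
! for mkl_malloc/mkl_free are assumed - please correct me if this is wrong.
program aligned_alloc_sketch
  use mkl_service                       ! assumed to provide mkl_malloc / mkl_free interfaces
  use, intrinsic :: iso_c_binding
  implicit none

  integer, parameter :: n = 1000
  type(c_ptr)        :: raw
  real(8), pointer   :: b(:)            ! e.g. the RHS vector, viewed as a normal Fortran array

  ! 64-byte aligned allocation of n doubles, instead of allocate(b(n))
  raw = mkl_malloc(int(n*8, kind=c_size_t), 64)
  call c_f_pointer(raw, b, [n])

  b = 0.0d0                             ! ... fill and use b as usual ...

  call mkl_free(raw)                    ! instead of deallocate(b)
end program aligned_alloc_sketch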
Cheers
Konstantin_A_Intel
Hi,

I would suggest you set the number of iterative refinement steps to a non-zero value, say 3 or 4:

iparm(8)=3

With this setting I got a relative residual of about 1e-13, which should be more than enough.
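In case you want to check the residual on your side, something along these lines should do it (a rough sketch only; check_residual is just an illustrative helper name, and it assumes the upper triangle of the symmetric matrix in 1-based CSR storage, using mkl_dcsrsymv for the symmetric matrix-vector product and the BLAS dnrm2 for the norms):

! Rough sketch of a relative residual check ||b - A*x|| / ||b|| after the solve.
subroutine check_residual(n, a, ia, ja, x, b)
  implicit none
  integer, intent(in) :: n, ia(*), ja(*)
  real(8), intent(in) :: a(*), x(n), b(n)
  real(8)             :: ax(n), res
  real(8), external   :: dnrm2

  call mkl_dcsrsymv('U', n, a, ia, ja, x, ax)   ! ax = A*x from the stored upper triangle
  ax  = b - ax                                  ! residual vector r = b - A*x
  res = dnrm2(n, ax, 1) / dnrm2(n, b, 1)        ! relative residual ||r|| / ||b||
  print *, 'relative residual = ', res
end subroutine check_residual

You can call it right after the solve phase with the same arrays you pass to PARDISO.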

Regards,
Konstantin
Zhang_Z_Intel
Employee
Thanks to Konstantin for his help.

Konstantin also clarified for me that, because PARDISO does not support full pivoting, the precision of the factorization step depends on the fill-in reordering. This is why PARDISO also includes an iterative refinement step (iparm(8)) to help with precision.

Please let us know if you have further questions.
RossK
Beginner
I've tried the memory alignment, which didn't seem to make a huge difference, and I've also included the iterative refinement. I have a fairly tight convergence tolerance on my timestepping, so I found that increasing to 10 iterative refinement steps gave me more consistent results. They're still not exactly the same from run to run, or even for repeated solves of the same RHS, but they will suffice.
Thanks again for the help. If you have any other suggestions to get things to be more consistent then please let me know.
Cheers