Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Pardiso in GPU

Marcos_V_1
New Contributor I

Hi Gennady, I was wondering if there are any plans to implement a GPU version of PARDISO.

Also, what is the strategy (if any) to make the SOLVE phase scale with OpenMP threads?

Thank you for your time and attention,

 

Marcos

ShanmukhS_Intel
Moderator

Hi Marcos Vanella,


Thank you for posting on Intel Communities.


We are working on the queries you shared internally and will get back to you soon with an update.


Best Regards,

Shanmukh.SS


Marcos_V_1
New Contributor I

Thank you, Shanmukh,

And thank you all for the work you do. We have been using the PARDISO and cluster sparse solvers for a while now to solve a discretized PDE, with very good results. Our factorized matrix does not change, but we perform several solves (forward/backward substitution) sequentially in time as part of our time-marching scheme. Since this part of the scheme can take 50% or more of the total wall time, we are always interested in using the latest developments to speed it up.
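
For context, this is roughly the pattern we follow with the PARDISO C interface: analysis and factorization (phase 12) run once, since the matrix is fixed, and only the solve phase (33) is repeated at each time step. The small SPD matrix, sizes, and iparm settings below are only an illustrative sketch, not our actual application code.

/* Sketch: factorize once, then reuse the factorization for repeated
 * solves (phase 33), as in a time-marching scheme. Illustrative SPD
 * test matrix (mtype = 2), CSR upper triangle, 1-based indexing. */
#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    /* 3x3 SPD matrix, upper triangle in CSR (1-based) */
    MKL_INT n = 3;
    MKL_INT ia[4] = {1, 3, 5, 6};
    MKL_INT ja[5] = {1, 2, 2, 3, 3};
    double  a[5]  = {4.0, 1.0, 4.0, 1.0, 4.0};

    void    *pt[64] = {0};            /* internal solver memory pointer */
    MKL_INT iparm[64] = {0};
    MKL_INT maxfct = 1, mnum = 1, mtype = 2, msglvl = 0, error = 0;
    MKL_INT nrhs = 1, phase, idum;
    double  b[3], x[3], ddum;

    iparm[0]  = 1;   /* do not use all solver defaults                 */
    iparm[1]  = 2;   /* nested-dissection (METIS) fill-in reordering   */
    iparm[34] = 0;   /* 1-based indexing of ia/ja                      */

    /* Analysis + numerical factorization: done once, since A is fixed */
    phase = 12;
    PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    if (error != 0) { printf("factorization error %d\n", (int)error); return 1; }

    /* Solve (forward/backward substitution) repeated each time step */
    for (int step = 0; step < 5; step++) {
        for (int i = 0; i < 3; i++) b[i] = 1.0 + step;  /* new RHS per step */
        phase = 33;
        PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                &idum, &nrhs, iparm, &msglvl, b, x, &error);
        if (error != 0) { printf("solve error %d\n", (int)error); return 1; }
        printf("step %d: x = %f %f %f\n", step, x[0], x[1], x[2]);
    }

    /* Release internal memory */
    phase = -1;
    PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, &ddum, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    return 0;
}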

Best Regards,

Marcos

Khang_N_Intel
Employee

Hi Marcos,


Could you create a ticket for this request at:

https://supporttickets.intel.com/servicecenter?lang=en-US


As for the question about making the solve phase scale with OpenMP, you might want to try either of the following (a short sketch follows the documentation link below):

1) Running Pardiso with many right-hand sides (RHS) in a single solve call

or

2) Running Pardiso with iparm[24] = 2

https://www.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/sparse-solver-routines/onemkl-pardiso-parallel-direct-sparse-solver-iface/pardiso-iparm-parameter.html
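
For illustration, here is a minimal sketch that combines both suggestions through the Pardiso C interface: several right-hand sides are passed in a single phase-33 call (nrhs > 1), and iparm[24] = 2 is set before the solve phase. The small SPD test matrix and its values are illustrative only; please refer to the iparm documentation linked above for the exact meaning of each parameter.

/* Sketch of the two suggestions above: (a) several RHS in one
 * phase-33 call, (b) iparm[24] = 2 before the solve phase.
 * Illustrative SPD matrix (mtype = 2), CSR upper triangle, 1-based. */
#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    MKL_INT n = 3;
    MKL_INT ia[4] = {1, 3, 5, 6};
    MKL_INT ja[5] = {1, 2, 2, 3, 3};
    double  a[5]  = {4.0, 1.0, 4.0, 1.0, 4.0};

    void    *pt[64] = {0};
    MKL_INT iparm[64] = {0};
    MKL_INT maxfct = 1, mnum = 1, mtype = 2, msglvl = 0, error = 0, idum;
    double  ddum;

    iparm[0]  = 1;   /* do not use all solver defaults                       */
    iparm[1]  = 2;   /* nested-dissection (METIS) fill-in reordering         */
    iparm[24] = 2;   /* parallel forward/backward solve, as suggested above  */

    MKL_INT nrhs = 2;                    /* two RHS solved in one call        */
    double b[6] = {1, 1, 1,  2, 2, 2};   /* RHS vectors stored one after the other */
    double x[6];

    MKL_INT phase = 12;                  /* analysis + factorization          */
    PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    if (error != 0) { printf("factorization error %d\n", (int)error); return 1; }

    phase = 33;                          /* solve both RHS at once            */
    PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, b, x, &error);
    if (error != 0) { printf("solve error %d\n", (int)error); return 1; }
    printf("x1 = %f %f %f, x2 = %f %f %f\n",
           x[0], x[1], x[2], x[3], x[4], x[5]);

    phase = -1;                          /* release internal memory           */
    PARDISO(pt, &maxfct, &mnum, &mtype, &phase, &n, &ddum, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    return 0;
}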


Best regards,

Khang


Khang_N_Intel
Employee

Hi Marcos,


The request to implement a GPU version of Pardiso has been submitted. We just don't know yet when this feature will be implemented.


Since I have already answered your questions, I am going to close this thread.

There will be no further communication on it.


Best regards,

Khang

