Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

PARDISO OOC: Value of MKL_PARDISO_OOC_MAX_CORE_SIZE

Alessandro_M_

Is it possible to know the correct value of MKL_PARDISO_OOC_MAX_CORE_SIZE for factorizing a matrix in OOC mode? I have only seen the suggested value after an error, not from the analysis stage. With a value from the analysis stage, we could change the config file before the factorization call.

Alexander_K_Intel2

Hi,

Based on the MKL documentation, you can calculate it after the reordering phase via the following equation:

max(iparm(15), iparm(16) + iparm(63))

The calculated value is the minimum amount of RAM needed by the OOC PARDISO algorithm.
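For illustration, a minimal C sketch of this computation could look like the following (assumptions: the MKL C interface, where iparm is 0-based, so iparm(15), iparm(16) and iparm(63) become iparm[14], iparm[15] and iparm[62]; the values are reported in KB; the tiny SPD test matrix is made up):

#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    /* Tiny made-up SPD matrix, upper triangle in 1-based CSR. */
    MKL_INT n = 3;
    MKL_INT ia[4] = { 1, 3, 5, 6 };
    MKL_INT ja[5] = { 1, 2, 2, 3, 3 };
    double  a[5]  = { 4.0, 1.0, 4.0, 1.0, 4.0 };

    void   *pt[64] = { 0 };
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct = 1, mnum = 1, mtype = 2;   /* real SPD */
    MKL_INT phase = 11, nrhs = 1, msglvl = 0, error = 0;
    MKL_INT idum;                              /* dummy permutation */
    double  ddum;                              /* dummy rhs/solution */

    iparm[0]  = 1;   /* supply iparm values explicitly */
    iparm[1]  = 2;   /* nested-dissection reordering */
    iparm[59] = 2;   /* iparm(60): request OOC mode so iparm(63) is filled in */

    /* Phase 11: analysis / reordering only. */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    if (error != 0) { printf("reordering failed: %d\n", (int)error); return 1; }

    /* max(iparm(15), iparm(16) + iparm(63)), reported in KB. */
    long long ooc_kb = iparm[14] > (long long)iparm[15] + iparm[62]
                     ? iparm[14] : (long long)iparm[15] + iparm[62];
    printf("minimum RAM for OOC PARDISO: ~%lld MB\n", ooc_kb / 1024);
    return 0;
}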

Thanks,

Alexander Kalinkin

Alessandro_M_

Actually, I have used max(iparm(15), iparm(16) + iparm(17)) to estimate the RAM required for in-core mode. I then compare it with the available memory and decide whether or not to run PARDISO in OOC mode.
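In code, that check is roughly the following fragment (MKL C interface with 0-based iparm, reusing a phase-11 call like the sketch above; available_mb is a placeholder for however the free RAM is queried):

/* In-core estimate after reordering: max(iparm(15), iparm(16) + iparm(17)), in KB. */
long long ic_kb = iparm[14] > (long long)iparm[15] + iparm[16]
                ? iparm[14] : (long long)iparm[15] + iparm[16];
int use_ooc = (ic_kb / 1024 > available_mb);   /* available_mb: free RAM in MB */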

In a typical case, running the analysis for in-core mode, PARDISO gives me an estimate of 91918 MB. I decide at runtime to switch to OOC, and PARDISO reports: "Error in PARDISO: minimal memory required by OOC mode (40842MB) is greater than variable MKL_PARDISO_OOC_MAX_CORE_SIZE (2000MB)".

My question is how to obtain the correct value (40842 MB) at runtime, from the analysis stage. Are you saying that I have to run the analysis with OOC mode specified and then use iparm(63)?

Alexander_K_Intel2

Hi,

The simplest way of switching between the in-core and OOC versions of PARDISO is to set iparm(60) to 1. In this case, PARDISO internally compares the size of the available RAM (MKL_PARDISO_OOC_MAX_CORE_SIZE) with the amount needed for correct execution and, based on the result, runs either the in-core or the OOC version. But if you want to decide yourself then, as you wrote, you need to call the reordering phase, obtain the value from the equation above, set the needed parameters, and call pardiso again.
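For illustration, the two routes could look roughly as follows in the C interface (a sketch continuing the phase-11 example from my first reply, so pt, iparm, ooc_kb and the dummy variables are assumed to exist; setenv is POSIX and needs <stdlib.h>, snprintf needs <stdio.h>; whether the library re-reads MKL_PARDISO_OOC_MAX_CORE_SIZE between phases, rather than only from pardiso_ooc.cfg or the environment at startup, is worth verifying against the documentation):

/* Route 1: let PARDISO decide. iparm(60) = iparm[59] = 1 makes PARDISO
   compare the in-core requirement with MKL_PARDISO_OOC_MAX_CORE_SIZE
   and fall back to OOC on its own. */
iparm[59] = 1;
phase = 12;                               /* analysis + numerical factorization */
pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
        &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);

/* Route 2: decide yourself. After a phase-11 call with iparm[59] = 2,
   compute ooc_kb from the equation above, publish the limit, then factorize. */
char mb[32];
snprintf(mb, sizeof mb, "%lld", ooc_kb / 1024 + 1);   /* round KB up to MB */
setenv("MKL_PARDISO_OOC_MAX_CORE_SIZE", mb, 1);       /* or edit pardiso_ooc.cfg */
phase = 22;                               /* numerical factorization only */
pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
        &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);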

Thanks,

Alexander Kalinkin
