I recently noticed this and wanted to post it here: when I use the PARDISO solver in either in-core (IC) or out-of-core (OOC) mode, the solver consumes the same amount of memory in both cases (except that the OOC version also creates files on the hard disk).
This really puzzles me, because my impression was that OOC keeps some intermediate data in files on the hard disk instead of holding everything in memory, and therefore should not require as much memory as the IC version. I understand that OOC still needs to allocate some physical memory, but what I observe is that it uses almost the same amount as IC. In one case the IC version consumes 2.5 GB of RAM, and the OOC version consumes that same amount (except that I see some files created on my hard disk).
I have tested this with the latest versions (11.2 Update 2 and Update 1) and observed the same behavior.
Is this expected behavior? If so, what is the purpose of using OOC if it consumes the same amount of memory as IC does?
Or what am I missing here?
Could you provide the value of OOC_MAX_CORE_SIZE that you set in your example? I ask because if it is greater than 2.5 GB and you set iparm(60) to 2 (strong OOC mode), then OOC will use a similar amount of memory to the in-core algorithm (because there is enough memory for the computation).
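For readers who land on this thread: as far as I understand the MKL documentation, the OOC parameters are supplied through a pardiso_ooc.cfg file (or environment variables of the same names), while iparm(60) is set in code before calling pardiso. A minimal sketch; the path and value below are illustrative, not a recommendation:

```
MKL_PARDISO_OOC_PATH = .\ooc_file
MKL_PARDISO_OOC_MAX_CORE_SIZE = 2000
```

With iparm(60) = 2 the OOC algorithm is always used; iparm(60) = 1 lets PARDISO choose IC or OOC depending on available memory.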
I am using this setting:
OOC_MAX_CORE_SIZE = 2000.
mtype = -2
iparm(60) = 2
When "phase = 22" is called, it returns "error = -2". If I use "OOC_MAX_CORE_SIZE = 400", I get "error = -9".
These are the numbers I am reading:
iparm(15) = 790237
iparm(16) = 727123
iparm(63) = -203575
I don't know why iparm(63) is negative, but it seems that about 790 MB is needed. Since I am allowing 2000 MB, why am I still getting a memory error?
I have Windows 8.1 (64-bit) with 32 GB RAM and a Core i7. Virtual memory paging is managed by the OS (total paging file size for all drives: 4864 MB).
I am using 32-bit MKL 11.2 Update 1 (I tried Update 2, but the result is the same). The solver type is OOC and mtype = -2.
This is what we found:
OOC_MAX_CORE_SIZE = 2000 -------------> phase 22 returns "-2"
OOC_MAX_CORE_SIZE = 400 -------------> phase 22 returns "-9"
OOC_MAX_CORE_SIZE = 1000 -------------> phase 22 returns "0", SUCCESS. I noticed a file "ooc_file.lnz" of 1.86 GB on my hard disk. But iparm(63) is still -203575.
What strategy do you recommend for choosing a correct value of OOC_MAX_CORE_SIZE? Is this bound by the virtual memory setting on my computer?
I also noticed this: if PARDISO fails due to memory allocation in phase = 22, the files it created for OOC on the hard disk are left behind, even if PARDISO is then called with phase = -1 to release all memory. After a successful run these OOC files are deleted as expected. I thought you would want to know.
I think it would be quite useful if you could recompile with 64-bit versions of the compiler and MKL, and run the same problem as above with the new EXE. Please consult the PARDISO documentation to determine which integer arguments to 64-bit PARDISO need to be 8-byte integers.
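For example, on Windows with the Intel compiler, an ILP64 build (8-byte integers) is typically selected along these lines; the exact library list depends on your MKL version and threading model, so please confirm it with the MKL Link Line Advisor rather than taking this command verbatim:

```
icl /DMKL_ILP64 solver.c mkl_intel_ilp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib
```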
iparm(63) can be less than zero because it is the sum iparm(16) + iparm(63) that is meaningful. In your case the minimal amount of RAM needed for the computation is about 800 MB. So the result for OOC_MAX_CORE_SIZE = 400 is expected, and 1000 is expected, but 2000 is not expected: everything should work correctly in that case. Is it possible to send this matrix to us?