<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MKL PARDISO iparm[23] behaviour in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1661405#M36883</link>
    <description>&lt;P&gt;Was anyone able to reproduce the unexpected crash? I could not reproduce it with MKL 2024.0, but I also cannot find a download page for MKL 2024.2. Is there a way to download this version?&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jan 2025 02:59:22 GMT</pubDate>
    <dc:creator>morskaya_svinka_1</dc:creator>
    <dc:date>2025-01-29T02:59:22Z</dc:date>
    <item>
      <title>MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1650127#M36757</link>
      <description>&lt;P&gt;I use MKL version 2024.2. Please mention in the MKL documentation, in the iparm[23] description, that phase=11 with iparm[23]=0 is incompatible with phase=23 with iparm[23]=1, and that phase=11 with iparm[23]=1 is incompatible with phase=23 with iparm[23]=0. Please document all such incompatibilities, or provide a link to where they are mentioned (I did not find it after searching through all iparm[23] occurrences). Also, in case another user finds this information helpful: iparm[23]=1 requires more RAM than iparm[23]=0, especially in OOC mode. Questions:&lt;BR /&gt;1. When iparm[23] is set to 1, the program prints very few statistics to the screen compared to iparm[23]=0; is that OK?&lt;BR /&gt;2. iparm[16] remains the same after phase=11 and phase=23 for iparm[23]=1, but something is written there on phase=23 for iparm[23]=0. What is written there?&lt;BR /&gt;3. Does MKL_PARDISO_OOC_MAX_CORE_SIZE control the amount of memory that can be given to OOC mode for the whole solver process, or just the additional memory for factorization and solution on top of the analysis memory? I ask because in the last test I set a 10'000 MB restriction, but at the beginning of phase=23 it printed 14'500 MB allocated. I also wonder why iparm[23]=0 reports a lower memory bound in OOC, yet its PeakWorkingSet (11 GB) is bigger than the iparm[23]=1 PeakWorkingSet (8 GB), and why the iparm[23]=0 PeakWorkingSet is the same in IC and OOC in this example. Can you give some advice on when I should use iparm[23]=0 versus iparm[23]=1 on a shared memory machine?&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;In-core results iparm[23]=0&lt;BR /&gt;minimum memory requirement IC (MB) = 10731&lt;BR /&gt;minimum memory requirement OOC (MB) = 3206&lt;BR /&gt;iparm[14] (MB) = 3206&lt;BR /&gt;iparm[15] (MB) = 2976&lt;BR /&gt;iparm[16] (MB) = 7754&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 2976&lt;BR /&gt;phase=11&lt;BR /&gt;PeakWorkingSet64 = 2875588608&lt;BR /&gt;phase=23&lt;BR /&gt;time in seconds = 32.8&lt;BR /&gt;minimum memory requirement IC (MB) = 10697&lt;BR /&gt;minimum memory requirement OOC (MB) = 3206&lt;BR /&gt;iparm[14] (MB) = 3206&lt;BR /&gt;iparm[15] (MB) = 2976&lt;BR /&gt;iparm[16] (MB) = 7720&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 2976&lt;BR /&gt;PeakWorkingSet64 = 11316170752 bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In-core results iparm[23]=1&lt;BR /&gt;phase=11&lt;BR /&gt;Memory allocated on phase 11 4166.4923 MB&lt;BR /&gt;PeakWorkingSet64 = 4440424448&lt;BR /&gt;minimum memory requirement IC (MB) = 11281&lt;BR /&gt;minimum memory requirement OOC (MB) = 6728&lt;BR /&gt;iparm[14] (MB) = 4192&lt;BR /&gt;iparm[15] (MB) = 4166&lt;BR /&gt;iparm[16] (MB) = 7115&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 6728&lt;BR /&gt;phase=23&lt;BR /&gt;Memory allocated on phase 22 11281.5165 MB&lt;BR /&gt;time in seconds = 34.8&lt;BR /&gt;minimum memory requirement IC (MB) = 11281&lt;BR /&gt;minimum memory requirement OOC (MB) = 6728&lt;BR /&gt;iparm[14] (MB) = 4192&lt;BR /&gt;iparm[15] (MB) = 4166&lt;BR /&gt;iparm[16] (MB) = 7115&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 6728&lt;/P&gt;&lt;P&gt;PeakWorkingSet64 = 11855372288 bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OOC results iparm[23]=0 MKL_PARDISO_OOC_MAX_CORE_SIZE=10000&lt;/P&gt;&lt;P&gt;phase=11&lt;BR /&gt;minimum memory requirement IC (MB) = 10880&lt;BR /&gt;minimum memory requirement OOC (MB) = 3172&lt;BR /&gt;iparm[14] (MB) = 3172&lt;BR /&gt;iparm[15] (MB) = 
2957&lt;BR /&gt;iparm[16] (MB) = 7922&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 2957&lt;BR /&gt;phase=23&lt;BR /&gt;time in seconds = 56.2&lt;BR /&gt;minimum memory requirement IC (MB) = 9295&lt;BR /&gt;minimum memory requirement OOC (MB) = 3172&lt;BR /&gt;iparm[14] (MB) = 3172&lt;BR /&gt;iparm[15] (MB) = 2957&lt;BR /&gt;iparm[16] (MB) = 6337&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 2957&lt;/P&gt;&lt;P&gt;PeakWorkingSet64 = 11338207232 bytes&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;OOC results iparm[23]=1 MKL_PARDISO_OOC_MAX_CORE_SIZE=10000&lt;/P&gt;&lt;P&gt;phase=11&lt;BR /&gt;Memory allocated on phase 11 4166.4923 MB (printed by PARDISO)&lt;BR /&gt;PeakWorkingSet64 = 4308307968&lt;BR /&gt;minimum memory requirement IC (MB) = 11281&lt;BR /&gt;minimum memory requirement OOC (MB) = 6728&lt;BR /&gt;iparm[14] (MB) = 4192&lt;BR /&gt;iparm[15] (MB) = 4166&lt;BR /&gt;iparm[16] (MB) = 7115&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 6728&lt;BR /&gt;phase=23&lt;BR /&gt;Memory allocated on phase 22 14588.5565 MB (printed by PARDISO)&lt;BR /&gt;time in seconds = 44.4&lt;BR /&gt;minimum memory requirement IC (MB) = 11281&lt;BR /&gt;minimum memory requirement OOC (MB) = 6728&lt;BR /&gt;iparm[14] (MB) = 4192&lt;BR /&gt;iparm[15] (MB) = 4166&lt;BR /&gt;iparm[16] (MB) = 7115&lt;BR /&gt;iparm[15] + iparm[62] (MB) = 6728&lt;/P&gt;&lt;P&gt;PeakWorkingSet64 = 7979446272 bytes&lt;/P&gt;</description>
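For readers trying to reconcile the numbers above: a minimal sketch of the peak-memory relations, assuming the usual 0-based C interpretation of iparm[14] (phase-1 analysis peak), iparm[15] (permanent memory), iparm[16] (factorization memory) and iparm[62] (OOC minimum); the function names are made up for illustration.

```python
# Sketch (not from the thread): how the "minimum memory requirement" lines
# relate to the iparm values, per the MKL PARDISO documentation.
# 0-based C indexing: iparm[14] = peak analysis memory, iparm[15] = permanent
# memory, iparm[16] = factorization memory, iparm[62] = minimum OOC memory.

def min_memory_ic_mb(iparm14, iparm15, iparm16):
    """In-core requirement: max of analysis peak and permanent + factorization memory."""
    return max(iparm14, iparm15 + iparm16)

def min_memory_ooc_mb(iparm14, iparm15, iparm62):
    """OOC requirement: max of analysis peak and permanent + OOC working memory."""
    return max(iparm14, iparm15 + iparm62)

# Values from the in-core iparm[23]=0 run above (iparm[15] + iparm[62] = 2976, so iparm[62] = 0):
print(min_memory_ic_mb(3206, 2976, 7754))   # 10730, close to the reported 10731 (rounding)
print(min_memory_ooc_mb(3206, 2976, 0))     # 3206, matching the reported OOC requirement
```

This reproduces the poster's IC figure to within 1 MB of rounding and the OOC figure exactly.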
      <pubDate>Tue, 17 Dec 2024 10:45:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1650127#M36757</guid>
      <dc:creator>morskaya_svinka_1</dc:creator>
      <dc:date>2024-12-17T10:45:07Z</dc:date>
    </item>
    <item>
      <title>Re: MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1652087#M36772</link>
      <description>&lt;P&gt;It seems that iparm[23]=1 does not work at all with OOC mode and 64-bit integers (the ILP64 interface). I built the executable with MSBuild in Visual Studio 2022. When I set iparm[23]=0 it works correctly, but iparm[23]=1 in the same code fails. In debug mode the following message is returned: "Unhandled exception at [some address] (mkl_core.2.dll) in [my_executable.exe]: An invalid parameter was passed to a function that considers invalid parameters fatal". I have not found anything on this in the documentation. It happens at some percentage of factorization (phase=22) with both the Parallel and Sequential versions. For a big matrix it fails at, say, 21%, but for a small one it fails after 100% of factorization, before leaving the phase=22 pardiso subroutine. By the way, percentage printing is broken (some percentages are not printed) in multithreaded mode for several parameter combinations, including iparm[23]=1 with OOC.&lt;BR /&gt;P.S. I tried iparm[1]=2, 3 and 0; it does not help. All iparms are consistent across all phases.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Dec 2024 09:57:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1652087#M36772</guid>
      <dc:creator>morskaya_svinka_1</dc:creator>
      <dc:date>2024-12-25T09:57:42Z</dc:date>
    </item>
    <item>
      <title>Re:MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1652231#M36774</link>
      <description>&lt;P&gt;Thank you for posting the issue. We are investigating it and will update here once there is progress. &lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 26 Dec 2024 03:55:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1652231#M36774</guid>
      <dc:creator>Ruqiu_C_Intel</dc:creator>
      <dc:date>2024-12-26T03:55:09Z</dc:date>
    </item>
    <item>
      <title>Re:MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1656881#M36823</link>
      <description>&lt;P&gt;Hi Evgeny,&lt;/P&gt;&lt;P&gt;Thank you for your patience.&lt;/P&gt;&lt;P&gt;Here are our updates:&lt;/P&gt;&lt;P&gt;When iparm[23] is set to 1, the program prints far fewer statistics to the screen compared to iparm[23]=0.&lt;/P&gt;&lt;P&gt;iparm[16] is documented as the extra memory required for factorization in in-core mode after phase 1. The value after phases 2 and 3 can be ignored; it is undocumented and used only for internal calculations. We might remove it in the future and make the value constant throughout.&lt;/P&gt;&lt;P&gt;MKL_PARDISO_OOC_MAX_CORE_SIZE is not a parameter to be tuned; it should be close to the size of RAM. MKL_PARDISO_OOC_MAX_CORE_SIZE is compared with the memory required by the whole solver process to determine whether PARDISO has sufficient memory to perform the solve. This comparison, however, can only be done after phase 1, since only then can the structure of the LU factors be determined. If your test case allocated more than 10 GB, that looks like a bug; could you share a simple reproducer with us so we can investigate? If you hit an unexpected crash with iparm[23]=1, we need a reproducer for that as well.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Ruqiu&lt;/P&gt;</description>
      <pubDate>Tue, 14 Jan 2025 11:27:26 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1656881#M36823</guid>
      <dc:creator>Ruqiu_C_Intel</dc:creator>
      <dc:date>2025-01-14T11:27:26Z</dc:date>
    </item>
    <item>
      <title>Re: Re:MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1658606#M36847</link>
      <description>&lt;P&gt;I found out what the problem with iparm[23]=1 in OOC mode is. Both the int32 and int64 versions crash when I specify the MKL_PARDISO_OOC_PATH environment variable like this in C++ (tried MSVC and the Intel C++ compiler): _putenv("MKL_PARDISO_OOC_PATH=C:\\somepath\\pardiso_ooc_tmpdir\\ooc_tmp_file"). OOC files are not written to that directory, and I think the program crashes at the moment it tries to write. iparm[23]=0 works fine in that case. Reproducer attached.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I also attached a reproducer for the issue where MKL_PARDISO_OOC_MAX_CORE_SIZE=10000 is set and PARDISO prints that 14'000 MB has been allocated. I cannot attach the matrix here, though, as it weighs 500 MB and the limit here is 23 MB. Please show me how I can upload it if you need it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Why should MKL_PARDISO_OOC_MAX_CORE_SIZE not be tuned? Tuning is helpful if the user wants to be sure that, say, 10 GB of RAM remains available while the OOC calculation is running. I saw that restricting MKL_PARDISO_OOC_MAX_CORE_SIZE can significantly slow down the calculation, but setting it to about 200% of the OOC lower bound worked fine for all the matrices I have tested. Moreover, if you keep giving more than 150% or 200% of the lower bound, PeakWorkingSet grows a bit, but speed does not improve.&lt;BR /&gt;There is another drawback of not tuning MKL_PARDISO_OOC_MAX_CORE_SIZE: if you want to use OOC mode explicitly (iparm[59]=2), you can get about a 1.5x slowdown if you cross the minimum IC requirement on shared memory machines, as discussed here: &lt;A href="https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/OneAPI-PARDISO-iparm-62-0/m-p/1636246#M36518" target="_blank"&gt;https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/OneAPI-PARDISO-iparm-62-0/m-p/1636246#M36518&lt;/A&gt;. I have not tested this effect on clusters or on matrices with a factorization larger than 200 GB.&lt;/P&gt;</description>
      <pubDate>Mon, 20 Jan 2025 07:34:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1658606#M36847</guid>
      <dc:creator>morskaya_svinka_1</dc:creator>
      <dc:date>2025-01-20T07:34:16Z</dc:date>
    </item>
    <item>
      <title>Re: MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1661405#M36883</link>
      <description>&lt;P&gt;Was anyone able to reproduce the unexpected crash? I could not reproduce it with MKL 2024.0, but I also cannot find a download page for MKL 2024.2. Is there a way to download this version?&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2025 02:59:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1661405#M36883</guid>
      <dc:creator>morskaya_svinka_1</dc:creator>
      <dc:date>2025-01-29T02:59:22Z</dc:date>
    </item>
    <item>
      <title>Re: MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1661431#M36885</link>
      <description>&lt;P&gt;I observed the following behaviour when I specify _putenv("MKL_PARDISO_OOC_PATH=C:\\tmp\\tmptmp"). iparm[62]=2 with iparm[23]=1 creates files named "mkl_pardiso_lnz..." in the directory C:\\tmp\\tmptmp and crashes if that directory does not exist; that is the crash I got. But iparm[62]=2 with iparm[23]=0 creates files named "tmptmp..." in the directory C:\\tmp, regardless of whether C:\\tmp\\tmptmp exists. It also appears that _putenv("MKL_PARDISO_OOC_KEEP_FILE=0") has no effect on iparm[23]=1 OOC deleting its temporary files; it deletes them anyway (with MKL_PARDISO_OOC_KEEP_FILE=1 as well). Has this been fixed in the current MKL version (2025.0.1)?&lt;/P&gt;</description>
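One way to sidestep the missing-directory crash described above is to create the scratch directory before the solver first writes there. A minimal sketch, assuming a hypothetical path; the poster's C++ code used _putenv on Windows, shown here with Python's os.environ for brevity:

```python
import os
import tempfile

# Hypothetical scratch location; the thread used C:\tmp\tmptmp on Windows.
ooc_dir = os.path.join(tempfile.gettempdir(), "pardiso_ooc_tmpdir")

# Creating the directory up front avoids the crash observed when iparm[23]=1
# OOC tries to write its temporary files into a directory that does not exist.
os.makedirs(ooc_dir, exist_ok=True)

# MKL_PARDISO_OOC_PATH is a path plus file-name prefix, not a bare directory.
os.environ["MKL_PARDISO_OOC_PATH"] = os.path.join(ooc_dir, "ooc_file")

print(os.path.isdir(ooc_dir))  # True
```

The environment variable must be set before the first OOC write, so in practice this setup belongs before the pardiso call for phase 22.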
      <pubDate>Wed, 29 Jan 2025 04:19:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1661431#M36885</guid>
      <dc:creator>morskaya_svinka_1</dc:creator>
      <dc:date>2025-01-29T04:19:04Z</dc:date>
    </item>
    <item>
      <title>Re:MKL PARDISO iparm[23] behaviour</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1666714#M36949</link>
      <description>&lt;P&gt;Thank you for your patience.&lt;/P&gt;&lt;P&gt;We reproduced the issue with oneMKL 2025.0 and are fixing it now.&lt;/P&gt;</description>
      <pubDate>Mon, 17 Feb 2025 02:15:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-PARDISO-iparm-23-behaviour/m-p/1666714#M36949</guid>
      <dc:creator>Ruqiu_C_Intel</dc:creator>
      <dc:date>2025-02-17T02:15:29Z</dc:date>
    </item>
  </channel>
</rss>

