<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Thema "Hi Vipin," in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028799#M20032</link>
    <description>&lt;P&gt;Hi Vipin,&lt;/P&gt;

&lt;P&gt;Thanks for your reply. It seems that predicting memory usage before reordering is impossible. But here is what we want to do: we have to solve many large matrices, and we want a prediction because we plan to do parallel distributed processing. If we can estimate the memory usage, we can assign an appropriate number of matrices to each computer; otherwise, matrices that need too much memory may cause crashes. Do you have any recommended method to deal with this kind of problem?&lt;/P&gt;

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Gisiu&lt;/P&gt;</description>
    <pubDate>Wed, 05 Aug 2015 05:46:04 GMT</pubDate>
    <dc:creator>Gisiu_T_</dc:creator>
    <dc:date>2015-08-05T05:46:04Z</dc:date>
    <item>
      <title>Estimate PARDISO memory usage by matrix size</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028795#M20028</link>
      <description>&lt;P&gt;Dear all,&lt;/P&gt;

&lt;P&gt;I am running PARDISO to solve sparse symmetric indefinite matrices. Since the matrices may be very large, we want to estimate how much memory PARDISO will use given the matrix size &lt;STRONG&gt;&lt;EM&gt;n&lt;/EM&gt;&lt;/STRONG&gt; and the number of nonzero terms &lt;STRONG&gt;&lt;EM&gt;nz&lt;/EM&gt;&lt;/STRONG&gt;. In the post &lt;A href="https://software.intel.com/en-us/forums/topic/474289" target="_blank"&gt;https://software.intel.com/en-us/forums/topic/474289&lt;/A&gt;, an estimation method is given: &lt;EM&gt;&lt;STRONG&gt;1024 * max(iparm(15), iparm(16)+iparm(17)) + n*nrhs*32&lt;/STRONG&gt;&lt;/EM&gt; for in-core mode. Also, there is an MKL function mkl_peak_mem_usage() that can report memory usage information. However, we cannot obtain this information before executing PARDISO. I wonder if there is any way to estimate memory usage &lt;EM&gt;&lt;STRONG&gt;only&lt;/STRONG&gt;&lt;/EM&gt; from the matrix size &lt;STRONG&gt;&lt;EM&gt;n&lt;/EM&gt;&lt;/STRONG&gt; and the number of nonzero terms &lt;STRONG&gt;&lt;EM&gt;nz&lt;/EM&gt;&lt;/STRONG&gt;. I think the reordering algorithm may have some effect on memory usage, but I don't know how to analyze it.&lt;/P&gt;
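The in-core estimate above can be written as a small helper. This is only a sketch of the formula from the linked post, not an MKL API: iparm is assumed to be the 64-element parameter array as it stands after the reordering (analysis) phase, Fortran's 1-based iparm(15..17) map to 0-based indices 14..16, and MKL reports those values in kilobytes.

```python
def pardiso_incore_bytes(iparm, n, nrhs=1):
    # Formula from the linked post (in-core mode):
    #   1024 * max(iparm(15), iparm(16) + iparm(17)) + n * nrhs * 32
    # iparm(15): peak memory (KB) of the reordering/analysis phase
    # iparm(16): permanent memory (KB) kept after analysis
    # iparm(17): memory (KB) for factorization and solve
    # Fortran 1-based iparm(15..17) -> 0-based iparm[14..16] here.
    solver_kb = max(iparm[14], iparm[15] + iparm[16])
    return 1024 * solver_kb + n * nrhs * 32
```

Note that these iparm values only exist after PARDISO's analysis phase (phase 11) has run, which is exactly the limitation discussed in the replies.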

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Gisiu&lt;/P&gt;</description>
      <pubDate>Tue, 28 Jul 2015 06:30:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028795#M20028</guid>
      <dc:creator>Gisiu_T_</dc:creator>
      <dc:date>2015-07-28T06:30:56Z</dc:date>
    </item>
    <item>
      <title>It will be difficult to</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028796#M20029</link>
      <description>&lt;P&gt;It will be difficult to predict the memory usage in advance as it depends on the matrix type and sparsity pattern.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jul 2015 11:59:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028796#M20029</guid>
      <dc:creator>VipinKumar_E_Intel</dc:creator>
      <dc:date>2015-07-29T11:59:56Z</dc:date>
    </item>
    <item>
      <title>Hi Vipin,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028797#M20030</link>
      <description>&lt;P&gt;Hi Vipin,&lt;/P&gt;

&lt;P&gt;What if only a rough upper bound is needed? Or if we specify a reordering algorithm (say, Nested Dissection) and choose a particular number of threads, is it still hard to make a prediction?&lt;/P&gt;

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Gisiu&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jul 2015 03:54:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028797#M20030</guid>
      <dc:creator>Gisiu_T_</dc:creator>
      <dc:date>2015-07-30T03:54:43Z</dc:date>
    </item>
    <item>
      <title>Hi Gisiu,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028798#M20031</link>
      <description>&lt;P&gt;Hi Gisiu,&lt;/P&gt;

&lt;P&gt;&amp;nbsp; It will still be impossible: in our experiments, the sizes differed drastically between our estimates and the real usage.&lt;/P&gt;

&lt;P&gt;But it is possible after the reordering step (not in advance, as we mentioned), and the estimator is max(iparm(15), iparm(16)+iparm(17)).&lt;/P&gt;

&lt;P&gt;Vipin&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jul 2015 08:39:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028798#M20031</guid>
      <dc:creator>VipinKumar_E_Intel</dc:creator>
      <dc:date>2015-07-30T08:39:22Z</dc:date>
    </item>
    <item>
      <title>Hi Vipin,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028799#M20032</link>
      <description>&lt;P&gt;Hi Vipin,&lt;/P&gt;

&lt;P&gt;Thanks for your reply. It seems that predicting memory usage before reordering is impossible. But here is what we want to do: we have to solve many large matrices, and we want a prediction because we plan to do parallel distributed processing. If we can estimate the memory usage, we can assign an appropriate number of matrices to each computer; otherwise, matrices that need too much memory may cause crashes. Do you have any recommended method to deal with this kind of problem?&lt;/P&gt;
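The distribution idea above could be sketched as a greedy first-fit-decreasing packing of matrices onto machines by estimated memory. This is purely an illustrative sketch under assumed names (estimates_bytes, machine_capacity_bytes), not a method recommended anywhere in this thread:

```python
def assign_matrices(estimates_bytes, machine_capacity_bytes):
    # Greedy first-fit-decreasing bin packing: take matrices in order of
    # decreasing estimated memory and place each on the first machine
    # with enough remaining capacity, opening a new machine if none fits.
    order = sorted(range(len(estimates_bytes)),
                   key=lambda i: estimates_bytes[i], reverse=True)
    machines = []   # each entry: list of matrix indices on that machine
    free = []       # remaining capacity (bytes) per machine
    for i in order:
        need = estimates_bytes[i]
        for m in range(len(machines)):
            if free[m] >= need:
                machines[m].append(i)
                free[m] -= need
                break
        else:
            machines.append([i])
            free.append(machine_capacity_bytes - need)
    return machines
```

If each machine factors several matrices concurrently, the per-matrix estimates are summed against that machine's total memory, which matches the goal described above of avoiding out-of-memory crashes.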

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Gisiu&lt;/P&gt;</description>
      <pubDate>Wed, 05 Aug 2015 05:46:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Estimate-PARDISO-memory-usage-by-matrix-size/m-p/1028799#M20032</guid>
      <dc:creator>Gisiu_T_</dc:creator>
      <dc:date>2015-08-05T05:46:04Z</dc:date>
    </item>
  </channel>
</rss>

