<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi Marcos, in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140654#M26273</link>
    <description>&lt;P&gt;Hi Marcos,&lt;/P&gt;&lt;P&gt;First, as this page says, for example (https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-2020-system-requirements), MKL 2020 and later officially support CentOS versions no older than&amp;nbsp;7.x. So, a good solution would be to upgrade the OS on the cluster nodes.&lt;/P&gt;&lt;P&gt;Second, you can try to update the glibc and gnu-utils packages (or get a newer version locally) and see whether this fixes&amp;nbsp;the problem. Unfortunately, I cannot give more specific advice or a workaround here.&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Kirill&lt;/P&gt;</description>
    <pubDate>Wed, 08 Apr 2020 20:44:12 GMT</pubDate>
    <dc:creator>Kirill_V_Intel</dc:creator>
    <dc:date>2020-04-08T20:44:12Z</dc:date>
    <item>
      <title>Memory Leak using many times the cluster sparse solver</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140638#M26257</link>
      <description>&lt;P&gt;Hello, I'll add here the information from a support ticket I started last month, to check whether the community has come across this issue. We use the MKL parallel cluster solver, together with Intel MPI, for our HPC software (called FDS). The software has to solve a Poisson equation thousands of times using the MKL cluster solver &lt;STRONG&gt;solve&lt;/STRONG&gt; phase. We have noticed that the memory in use grows as the MKL cluster solver is called, eventually leading to a catastrophic out-of-memory error in MPI.&lt;/P&gt;&lt;P&gt;I isolated the repeated use of the MKL cluster solver in a single standalone program completely separate from our software, and I still see the memory use increase.&lt;/P&gt;&lt;P&gt;Try following the instructions in the README file in this tarball to compile the code and run the case, and see if your memory use increases (it takes a few hours of runtime). I have verified this is the case on two Linux clusters running CentOS 6 and 7, with Intel Parallel Studio versions 2018, 2019, and 2020.&lt;/P&gt;&lt;P&gt;I would really appreciate any help on this.&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Tue, 10 Mar 2020 21:28:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140638#M26257</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-10T21:28:56Z</dc:date>
    </item>
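    <!--
      A minimal sketch in C (not the attached tarball or the FDS/css_test sources) of the
      usage pattern described in the post above: factor the Poisson-type matrix once with
      cluster_sparse_solver, then repeat the solve phase many times. The matrix type
      (mtype = 2, real SPD) and the use of default iparm values are assumptions; the CSR
      arrays are taken as given.

      #include <stdio.h>
      #include <mpi.h>
      #include "mkl.h"   /* cluster_sparse_solver, MKL_INT */

      void solve_loop(MKL_INT n, double *a, MKL_INT *ia, MKL_INT *ja,
                      double *b, double *x, int nsolves_total)
      {
          void *pt[64] = {0};        /* opaque solver handle, must start zeroed */
          MKL_INT iparm[64] = {0};   /* iparm[0] = 0: use default solver parameters */
          MKL_INT perm = 0, error = 0;
          MKL_INT maxfct = 1, mnum = 1, mtype = 2, nrhs = 1, msglvl = 0;
          MKL_INT phase;
          int comm = MPI_Comm_c2f(MPI_COMM_WORLD);  /* Fortran handle expected by MKL */

          phase = 12;   /* analysis + numerical factorization, done once */
          cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                                &perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);

          phase = 33;   /* solve phase, repeated thousands of times as in the report */
          for (int nsolves = 1; nsolves <= nsolves_total; nsolves++) {
              cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                                    &perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);
              if (error == 0 && nsolves % 100 == 0)
                  printf(" NSOLVES = %d\n", nsolves);
          }
      }
    -->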
    <item>
      <title>Marcos, how did you run this</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140639#M26258</link>
      <description>&lt;P&gt;Marcos, how did you run this code?&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2020 09:56:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140639#M26258</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-03-11T09:56:58Z</dc:date>
    </item>
    <item>
      <title>Hi Gennady, thank you for</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140640#M26259</link>
      <description>&lt;P&gt;Hi Gennady, thank you for taking interest! I used a submission script on both clusters fitting the 8 MPI processes in one node (one cluster has 8 physical cores per node and the other 12). This is the example (Torque) for burn (12-core nodes):&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;#!/bin/bash&lt;BR /&gt;#PBS -N test_glmat&lt;BR /&gt;#PBS -W umask=0022&lt;BR /&gt;#PBS -e /home4/mnv/FIREMODELS_FORK/CLUSTER_SPARSE_SOLVER_TEST/test/test_glmat.err&lt;BR /&gt;#PBS -o /home4/mnv/FIREMODELS_FORK/CLUSTER_SPARSE_SOLVER_TEST/test/test_glmat.log&lt;BR /&gt;#PBS -l nodes=1:ppn=8&lt;BR /&gt;#PBS -l walltime=999:0:0&lt;BR /&gt;export MODULEPATH=/usr/local/Modules/versions:/usr/local/Modules/$MODULE_VERSION/modulefiles:/usr/local/Modules/modulefiles&lt;BR /&gt;module purge&lt;BR /&gt;module load null modules torque-maui intel/19u4&lt;BR /&gt;export OMP_NUM_THREADS=1&lt;BR /&gt;export I_MPI_DEBUG=5&lt;BR /&gt;cd /home4/mnv/FIREMODELS_FORK/CLUSTER_SPARSE_SOLVER_TEST/test&lt;BR /&gt;echo&lt;BR /&gt;echo `date`&lt;BR /&gt;echo "&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Directory: `pwd`"&lt;BR /&gt;echo "&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Host: `hostname`"&lt;BR /&gt;/opt/intel19/compilers_and_libraries_2019.4.243/linux/mpi/intel64/bin/mpiexec&amp;nbsp;&amp;nbsp; -np 8 /home4/mnv/FIREMODELS_FORK/CLUSTER_SPARSE_SOLVER_TEST/test/css_test&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and here is an example submission for the test on blaze, our other cluster with 8 cores per node (SLURM):&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;#!/bin/bash&lt;BR /&gt;#SBATCH -J test_glmat&lt;BR /&gt;#SBATCH -e /home/mnv/FireModels_fork/CLUSTER_SPARSE_SOLVER_TEST/test/test_glmat.err&lt;BR /&gt;#SBATCH -o /home/mnv/FireModels_fork/CLUSTER_SPARSE_SOLVER_TEST/test/test_glmat.log&lt;BR /&gt;#SBATCH -p batch&lt;BR /&gt;#SBATCH -n 8&lt;BR /&gt;#SBATCH --cpus-per-task=1&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;#SBATCH -t 99-99:99:99&lt;BR /&gt;export OMP_NUM_THREADS=1&lt;BR /&gt;export I_MPI_DEBUG=5&lt;BR /&gt;cd /home/mnv/FireModels_fork/CLUSTER_SPARSE_SOLVER_TEST/test&lt;BR /&gt;echo&lt;BR /&gt;echo `date`&lt;BR /&gt;echo "&amp;nbsp;&amp;nbsp;&amp;nbsp; Input file: test_glmat.fds"&lt;BR /&gt;echo "&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Directory: `pwd`"&lt;BR /&gt;echo "&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Host: `hostname`"&lt;BR /&gt;/opt/intel20/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpiexec /home/mnv/FireModels_fork/CLUSTER_SPARSE_SOLVER_TEST/test/css_test&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Executing&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;mpirun -n 8 YOUR_DIR/CLUSTER_SPARSE_SOLVER_TEST/test/css_test &lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;on a single workstation should give the same outcome. 
I'm trying to understand if there is any combination of memory flags/routine calls that would take care of this leak I'm seeing but haven't been successful.&lt;/P&gt;&lt;P&gt;BTW, this is how it crashes in both cases (what it writes to the .err file, or screen):&lt;/P&gt;&lt;P&gt;.&lt;STRONG&gt;...&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 166800&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 166900&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167000&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167100&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167200&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167300&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167400&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167500&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167600&lt;BR /&gt;&amp;nbsp;NSOLVES =&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 167700&lt;BR /&gt;Abort(606162959) on node 6 (rank 6 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack:&lt;BR /&gt;PMPI_Comm_split(499).....: MPI_Comm_split(comm=0xc4000012, color=1, key=0, new_comm=0x7ffcc8948b30) failed&lt;BR /&gt;PMPI_Comm_split(481).....:&lt;BR /&gt;MPIR_Comm_split_impl(384):&lt;BR /&gt;MPIR_Comm_commit(598)....:&lt;BR /&gt;MPIR_Info_alloc(61)......: Out of memory (unable to allocate a 'MPI_Info')&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2020 13:59:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140640#M26259</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-11T13:59:00Z</dc:date>
    </item>
    <item>
      <title>Marcos,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140641#M26260</link>
      <description>&lt;P&gt;Marcos,&lt;/P&gt;&lt;P&gt;So far I have made short experiments (10K iterations, with MKL 2020) and see that the size of the memory consumed by the program stays the same. I used the vmstat utility to track this process.&amp;nbsp;We will run the whole benchmark (250K iterations), which will take significant time.&amp;nbsp;I will&amp;nbsp;keep this thread updated.&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2020 04:10:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140641#M26260</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-03-13T04:10:30Z</dc:date>
    </item>
    <item>
      <title>Hi Gennady, thank you very</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140642#M26261</link>
      <description>&lt;P&gt;Hi Gennady, thank you very much for taking an interest in this. Here is a snapshot of the memory use for this program on one of our clusters (the one with 12-core nodes). The graph is provided by Ganglia, the cluster monitoring application.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2020 13:59:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140642#M26261</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-13T13:59:25Z</dc:date>
    </item>
    <item>
      <title>Hello Marcos,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140643#M26262</link>
      <description>&lt;P&gt;Hello Marcos,&lt;/P&gt;&lt;P&gt;If possible, could you try calling PARDISO with phase = -1 instead of calling mkl_free_buffers in your solve loop, and tell us&amp;nbsp;if you still observe the memory leak? I hope this could&amp;nbsp;help our investigation.&amp;nbsp;If you want, you can call mkl_free_buffers, but only after the very last call to MKL routines (i.e., not inside the loop).&lt;/P&gt;&lt;P&gt;Thanks,&lt;BR /&gt;Kirill&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2020 02:40:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140643#M26262</guid>
      <dc:creator>Kirill_V_Intel</dc:creator>
      <dc:date>2020-03-16T02:40:32Z</dc:date>
    </item>
    <item>
      <title>Hello again, </title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140644#M26263</link>
      <description>&lt;P&gt;Hello again,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry, I actually wanted to suggest simply removing the call to mkl_free_buffers. It would be incorrect to plug calls with phase = -1 into the solve loop.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2020 16:41:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140644#M26263</guid>
      <dc:creator>Kirill_V_Intel</dc:creator>
      <dc:date>2020-03-16T16:41:00Z</dc:date>
    </item>
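    <!--
      A sketch of the corrected suggestion above: keep the solve loop itself free of
      cleanup calls, and release the solver's internal memory only once, after the very
      last solve, using the documented termination phase (phase = -1); mkl_free_buffers,
      if used at all, likewise comes only after that last MKL call. This end-of-run
      cleanup is standard MKL practice rather than something prescribed in this thread;
      names mirror the sketch after the first post.

      #include "mkl.h"   /* cluster_sparse_solver, mkl_free_buffers */

      void release_solver(void *pt[64], MKL_INT n, MKL_INT *iparm, int comm)
      {
          MKL_INT maxfct = 1, mnum = 1, mtype = 2, nrhs = 1, msglvl = 0;
          MKL_INT perm = 0, error = 0;
          MKL_INT phase = -1;   /* release all internal memory for this handle */
          MKL_INT idum = 0;
          double ddum = 0.0;    /* dummy array arguments, not referenced at phase -1 */

          cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, &ddum, &idum,
                                &idum, &perm, &nrhs, iparm, &msglvl, &ddum, &ddum,
                                &comm, &error);
          mkl_free_buffers();   /* optional, and only after the very last MKL call */
      }
    -->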
    <item>
      <title>Hi Kirill, thank you for your</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140645#M26264</link>
      <description>&lt;P&gt;Hi Kirill, thank you for your interest. I've tried with and without the mkl_free_buffers call within the solve loop, with the same outcome. It doesn't seem to make any difference. Now, about calling cluster_sparse_solver with phase -1: wouldn't that get rid of the stored factorization, so that I could not keep calling the solve phase within the loop?&lt;/P&gt;&lt;P&gt;My other question is, have you been able to reproduce the behavior?&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2020 16:55:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140645#M26264</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-16T16:55:09Z</dc:date>
    </item>
    <item>
      <title>I made such experiments and</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140646#M26265</link>
      <description>&lt;P&gt;I made such experiments and still see the same problem as Marcos reported :&amp;nbsp;&lt;/P&gt;&lt;P&gt;.....................................&lt;/P&gt;&lt;P&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 166000&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 167000&lt;BR /&gt;Abort(471945231) on node 6 (rank 6 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack:&lt;BR /&gt;PMPI_Comm_split(499).....: MPI_Comm_split(comm=0xc4000012, color=1, key=0, new_comm=0x7ffc47d65630) failed&lt;BR /&gt;PMPI_Comm_split(481).....:&amp;nbsp;&lt;BR /&gt;MPIR_Comm_split_impl(384):&amp;nbsp;&lt;BR /&gt;MPIR_Comm_commit(598)....:&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Mar 2020 05:25:13 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140646#M26265</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-03-17T05:25:13Z</dc:date>
    </item>
    <item>
      <title>Hi Gennady, thank you for</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140647#M26266</link>
      <description>&lt;P&gt;Hi Gennady, thank you for checking this. It is interesting that the error happens at about the same iteration count, even though we are running the case on&amp;nbsp;different hardware.&lt;/P&gt;&lt;P&gt;Let's see if the issue can be escalated.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Tue, 17 Mar 2020 22:19:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140647#M26266</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-17T22:19:19Z</dc:date>
    </item>
    <item>
      <title>Hi Marcos, yes we escalated</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140648#M26267</link>
      <description>&lt;P&gt;Hi Marcos, yes, we escalated the issue to the solver owners and will keep you informed.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2020 03:49:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140648#M26267</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-03-18T03:49:28Z</dc:date>
    </item>
    <item>
      <title>Thank you Gennady, please let</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140649#M26268</link>
      <description>&lt;P&gt;Thank you Gennady, please let us know and stay safe.&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Fri, 20 Mar 2020 18:11:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140649#M26268</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-03-20T18:11:42Z</dc:date>
    </item>
    <item>
      <title>Marcos,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140650#M26269</link>
      <description>&lt;P&gt;Marcos,&lt;/P&gt;&lt;P&gt;Please check version 2020 Update 1 of MKL and MPI.&lt;/P&gt;&lt;P&gt;I checked the example you shared and see that the test passed.&lt;/P&gt;&lt;P&gt;.....................................&lt;/P&gt;&lt;P&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249940&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249950&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249960&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249970&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249980&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 249990&lt;BR /&gt;&amp;nbsp;NSOLVES = &amp;nbsp; &amp;nbsp; &amp;nbsp; 250000&lt;BR /&gt;[gfedorov@cerberos u849887]$&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2020 03:57:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140650#M26269</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-04-03T03:57:31Z</dc:date>
    </item>
    <item>
      <title>Thank you Gennady, we will</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140651#M26270</link>
      <description>&lt;P&gt;Thank you Gennady, we will test 2020 update 1. I'll let you know if we get the same behavior.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2020 13:19:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140651#M26270</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-04-03T13:19:03Z</dc:date>
    </item>
    <item>
      <title>Hi Marcos,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140652#M26271</link>
      <description>&lt;P&gt;Hi Marcos,&lt;/P&gt;&lt;P&gt;Just adding to what Gennady said, for clarification: the issue (as far as our suggestion goes) is related to MPI and not to the Cluster Sparse Solver. So, if for any reason you don't want to use a newer MKL, using a newer MPI should already fix the problem.&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Kirill&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2020 17:50:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140652#M26271</guid>
      <dc:creator>Kirill_V_Intel</dc:creator>
      <dc:date>2020-04-03T17:50:29Z</dc:date>
    </item>
    <item>
      <title>Hi Gennady and Kirill, thank</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140653#M26272</link>
      <description>&lt;P&gt;Hi Gennady and Kirill, thank you for your help. I tested the sample case on one of our clusters with Intel 2020 Update 1 and it also passed.&lt;/P&gt;&lt;P&gt;We tried installing Update 1 on another cluster that runs CentOS 6 and we are having library issues (glibc_2.14 is missing). It seems the latest suite will not work on CentOS 6? Is there a workaround for this?&lt;/P&gt;&lt;P&gt;Thank you very much,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 20:10:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140653#M26272</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-04-08T20:10:05Z</dc:date>
    </item>
    <item>
      <title>Hi Marcos,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140654#M26273</link>
      <description>&lt;P&gt;Hi Marcos,&lt;/P&gt;&lt;P&gt;First, as this page says, for example (https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-2020-system-requirements), MKL 2020 and later officially support CentOS versions no older than&amp;nbsp;7.x. So, a good solution would be to upgrade the OS on the cluster nodes.&lt;/P&gt;&lt;P&gt;Second, you can try to update the glibc and gnu-utils packages (or get a newer version locally) and see whether this fixes&amp;nbsp;the problem. Unfortunately, I cannot give more specific advice or a workaround here.&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Kirill&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 20:44:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140654#M26273</guid>
      <dc:creator>Kirill_V_Intel</dc:creator>
      <dc:date>2020-04-08T20:44:12Z</dc:date>
    </item>
    <item>
      <title>Thank you Kirill, we will</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140655#M26274</link>
      <description>&lt;P&gt;Thank you Kirill, we will upgrade to CentOS 7 once we are able to return to our physical workspace.&lt;/P&gt;&lt;P&gt;I am trying to build the MPI wrapper for MKL on my Mac workstation, which uses macOS Catalina and Open MPI 4.0.2 (provided by Homebrew). When I execute the command to make the custom BLACS, I get the result in the attached figure. It seems some variables in mklmpi have been deprecated in MPI 3.0?&lt;/P&gt;&lt;P&gt;Please let me know if I should start a different thread in the forum.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2020 22:22:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140655#M26274</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-04-22T22:22:07Z</dc:date>
    </item>
    <item>
      <title>Marcos, in general starting</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140656#M26275</link>
      <description>&lt;P&gt;Marcos, in general, starting a new thread would be better, to make tracking the issues easier. Regarding the MPI macros problem: it seems you are using one of the latest versions of Open MPI (4.0.2), which MKL does not validate at this moment. Here is the &lt;A href="https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-2020-system-requirements"&gt;link to the MKL system requirements&lt;/A&gt; for your reference.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is the link to the Open MPI FAQ:&amp;nbsp;&lt;STRONG&gt;&lt;A href="https://www.open-mpi.org/faq/?category=mpi-removed#mpi-1-mpi-lb-ub" target="_blank"&gt;https://www.open-mpi.org/faq/?category=mpi-removed#mpi-1-mpi-lb-ub&lt;/A&gt;&lt;/STRONG&gt;, where you can see this problem has been discussed. We hope that helps.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2020 04:16:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140656#M26275</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-04-23T04:16:36Z</dc:date>
    </item>
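    <!--
      Background sketch for the build failure discussed above, based on the linked
      Open MPI FAQ rather than on the actual mklmpi sources (an assumption): MPI_LB and
      MPI_UB were removed in MPI-3.0, so wrapper code that still references them fails to
      compile against Open MPI 4.x. The removed markers are expressed today by resizing
      the datatype explicitly.

      #include <mpi.h>

      /* Old, removed style (no longer compiles against an MPI-3.0 library):
         types[] = { MPI_LB, MPI_DOUBLE, MPI_UB };                            */
      void make_padded_type(MPI_Datatype *padded)
      {
          MPI_Datatype tmp;
          MPI_Type_contiguous(3, MPI_DOUBLE, &tmp);
          /* Give the type an explicit lower bound of 0 and an extent larger than its
             data, which is what the MPI_LB/MPI_UB markers used to express. */
          MPI_Type_create_resized(tmp, 0, (MPI_Aint)(4 * sizeof(double)), padded);
          MPI_Type_commit(padded);
          MPI_Type_free(&tmp);
      }
    -->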
    <item>
      <title>Thank you Gennady, I will</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140657#M26276</link>
      <description>&lt;P&gt;Thank you Gennady, I will dial back the version of Open MPI. Are there any plans to update the macros for newer MPI versions on macOS?&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Marcos&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2020 17:24:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Memory-Leak-using-many-times-the-cluster-sparse-solver/m-p/1140657#M26276</guid>
      <dc:creator>Marcos_V_1</dc:creator>
      <dc:date>2020-04-23T17:24:55Z</dc:date>
    </item>
  </channel>
</rss>

