<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: -check_mpi causes code to get stuck in MPI_FINALIZE in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248899#M7675</link>
    <description>&lt;P&gt;Hi Kevin,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Instead of srun, could you try launching MPI with mpiexec.hydra?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 22 Jan 2021 08:48:27 GMT</pubDate>
    <dc:creator>PrasanthD_intel</dc:creator>
    <dc:date>2021-01-22T08:48:27Z</dc:date>
    <item>
      <title>-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1247397#M7641</link>
      <description>&lt;P&gt;I am using the oneAPI "latest" version of Intel MPI with Fortran on a Linux cluster. Things are working fine. However, to check my MPI calls, I added -check_mpi to my link step and ran a simple case. The MPI checking works, but the program hangs in MPI_FINALIZE. If I compile without -check_mpi, it does not hang in MPI_FINALIZE. With or without -check_mpi, the calculation runs fine. It just gets stuck in MPI_FINALIZE with -check_mpi.&lt;/P&gt;
&lt;P&gt;I did some searching, and there are numerous posts about calculations getting stuck in MPI_FINALIZE, regardless of -check_mpi. The usual response is to ensure that all communications have completed. However, in my case that is exactly what I want the -check_mpi flag to tell me. I don't think there are outstanding communications, but who knows. Is there a way I can force my way out of MPI_FINALIZE, or prompt it to provide a coherent error message?&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jan 2021 17:28:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1247397#M7641</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-01-18T17:28:24Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248047#M7655</link>
      <description>&lt;P&gt;Hi Kevin,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Could you please provide the command line you were using to launch MPI?&lt;/P&gt;&lt;P&gt;If it doesn't contain the number of nodes you were launching on, please mention that too.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 20 Jan 2021 07:59:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248047#M7655</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-01-20T07:59:38Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248141#M7659</link>
      <description>&lt;P&gt;If I run the job directly from the command line on the head node&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;mpiexec -n 1 &amp;lt;executable&amp;gt; &amp;lt;input_file.txt&amp;gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;the job runs fine. It's just a single process MPI job, in this case, for simplicity.&lt;/P&gt;
&lt;P&gt;However, I typically run jobs via a SLURM script:&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;#!/bin/bash&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;...&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;#SBATCH --ntasks=1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;#SBATCH --nodes=1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;#SBATCH --cpus-per-task=1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;#SBATCH --ntasks-per-node=1&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;module purge&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;module load ... tbb/latest compiler-rt/latest dpl/latest mpi/latest psm&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: 'courier new', courier;"&gt;module load libfabric/1.10.1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;export OMP_NUM_THREADS=1&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;export I_MPI_DEBUG=5&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;export FI_PROVIDER=shm&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;srun -N 1 -n 1 --ntasks-per-node 1 &amp;lt;executable&amp;gt; &amp;lt;input_file.txt&amp;gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif"&gt;I wonder if this has to do with the psm libfabric, which we use because we have old Qlogic Infiniband cards. Or it could have to do with SLURM, srun, etc.&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jan 2021 14:24:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248141#M7659</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-01-20T14:24:00Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248150#M7660</link>
      <description>&lt;P&gt;More info: I ran this same simple case on another Linux cluster that uses Mellanox cards and does not use the psm libfabric. The case runs successfully there. So I suspect that this hanging in MPI_FINALIZE is not related to SLURM, but rather to psm. Our Qlogic cards are sufficiently old that we had to build the psm lib ourselves. Can you think of a reason for hanging in MPI_FINALIZE? Could it be that in this case we are only using intranode (shm) communications?&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jan 2021 14:57:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248150#M7660</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-01-20T14:57:10Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248899#M7675</link>
      <description>&lt;P&gt;Hi Kevin,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Instead of srun, could you try launching MPI with mpiexec.hydra?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Jan 2021 08:48:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248899#M7675</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-01-22T08:48:27Z</dc:date>
    </item>
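The suggestion in the post above amounts to a drop-in change to the SLURM script posted earlier in the thread: keep the environment, swap the launcher. A minimal sketch, where my_app and input_file.txt are placeholders for the real executable and input file:

```shell
# Same environment as the original batch script, but launched through
# mpiexec.hydra instead of srun. my_app and input_file.txt are placeholders.
export OMP_NUM_THREADS=1
export I_MPI_DEBUG=5
export FI_PROVIDER=shm
# Launch line (commented out here; run it inside the SLURM allocation):
#   mpiexec.hydra -n 1 ./my_app input_file.txt
echo "provider=$FI_PROVIDER debug=$I_MPI_DEBUG threads=$OMP_NUM_THREADS"
```

Running under mpiexec.hydra instead of srun changes only the process manager, which helps separate a launcher problem from a fabric problem.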
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248974#M7677</link>
      <description>&lt;P&gt;I have discovered that srun and SLURM are not the problem. The problem occurs with the psm libfabric that we use on one of our Linux clusters because it has Qlogic Infiniband cards. So basically we are using an old fabric with old cards, and maybe this is just a consequence of that. However, if you can think of a way to force the code to exit MPI_FINALIZE, or of some way to compile and link that would solve the problem, I would appreciate it.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Jan 2021 14:46:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1248974#M7677</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-01-22T14:46:14Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1251936#M7710</link>
      <description>&lt;P&gt;Hi Kevin,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Sorry for the delay in response. Could you please provide the model name and any additional information regarding your Qlogic Infiniband? &lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 01 Feb 2021 12:21:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1251936#M7710</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-02-01T12:21:30Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1251962#M7714</link>
      <description>&lt;P&gt;CentOS 7 Linux, using the latest oneAPI Fortran and MPI&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;$ ibstat&lt;BR /&gt;CA 'qib0'&lt;BR /&gt;CA type: InfiniPath_QLE7340&lt;BR /&gt;Number of ports: 1&lt;BR /&gt;Firmware version:&lt;BR /&gt;Hardware version: 2&lt;BR /&gt;Node GUID: 0x00117500006fcc26&lt;BR /&gt;System image GUID: 0x00117500006fcc26&lt;BR /&gt;Port 1:&lt;BR /&gt;State: Active&lt;BR /&gt;Physical state: LinkUp&lt;BR /&gt;Rate: 40&lt;BR /&gt;Base lid: 2&lt;BR /&gt;LMC: 0&lt;BR /&gt;SM lid: 1&lt;BR /&gt;Capability mask: 0x07690868&lt;BR /&gt;Port GUID: 0x00117500006fcc26&lt;BR /&gt;Link layer: InfiniBand&lt;/P&gt;</description>
      <pubDate>Mon, 01 Feb 2021 14:21:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1251962#M7714</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-01T14:21:12Z</dc:date>
    </item>
    <item>
      <title>Re: -check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254349#M7748</link>
      <description>&lt;P&gt;Hi Kevin,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for being patient, we are sorry for the delay.&lt;/P&gt;
&lt;P&gt;I am escalating this thread to an SME (Subject Matter Expert).&lt;/P&gt;
&lt;P&gt;We will get back to you soon.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Tue, 09 Feb 2021 10:58:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254349#M7748</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-02-09T10:58:47Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254486#M7752</link>
      <description>&lt;P&gt;You mentioned that you have confirmed this is related to the QLogic hardware. Can you specify another device you tested where it works?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Please check whether you get the same hang using -trace instead of -check_mpi.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Do you see the same hang on a simple Hello World code with -check_mpi on the QLogic hardware?&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Feb 2021 15:11:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254486#M7752</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2021-02-09T15:11:43Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254571#M7756</link>
      <description>&lt;P&gt;We have two Linux clusters, both configured more or less the same, except that one uses Qlogic/psm (qib0) and the other Mellanox/ofi (mlx4_0). The hang in MPI_FINALIZE occurs on the Qlogic system. It occurs when I use -check_mpi. It does not occur when I use -trace. I cannot reproduce the problem with a simple Hello_World program.&lt;/P&gt;
&lt;P&gt;Is there a way I can get information from MPI_FINALIZE that might hint at something I am doing that is not appropriate? I do not get any errors or warnings from the -check_mpi option. The calculations finish fine, but they are never released and remain running.&lt;/P&gt;</description>
      <pubDate>Tue, 09 Feb 2021 18:56:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1254571#M7756</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-09T18:56:25Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255603#M7767</link>
      <description>&lt;P&gt;For most errors, the message checker will print output immediately.  If you have requests left open, those are printed at the end.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Can you attach a debugger and identify where the hang occurs?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Also, have you encountered this in an earlier version?&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 12 Feb 2021 14:38:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255603#M7767</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2021-02-12T14:38:14Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255610#M7768</link>
      <description>&lt;P&gt;The code enters MPI_FINALIZE and never returns, even with only a single MPI process running. This happens only when I use -check_mpi. If I do not use -check_mpi, everything works properly. However, the point of using -check_mpi is to see if there is a problem with my MPI calls. I haven't encountered this before because I have only now started using the -check_mpi option. In general, this option has identified a few non-kosher MPI calls, which I have fixed. But I want to use -check_mpi as part of our routine continuous integration process, and I cannot, because the jobs hang in the MPI_FINALIZE call.&lt;/P&gt;
&lt;P&gt;So my question to you is this: is there a time-out parameter that would force the code to exit MPI_FINALIZE and tell me if I have done something non-kosher within the code?&lt;/P&gt;</description>
      <pubDate>Fri, 12 Feb 2021 15:12:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255610#M7768</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-12T15:12:58Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255620#M7769</link>
      <description>&lt;P&gt;There is a VT_DEADLOCK_TIMEOUT option you can set.  The default is 1 minute.&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 12 Feb 2021 15:36:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255620#M7769</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2021-02-12T15:36:42Z</dc:date>
    </item>
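The timeout James mentions is an environment variable read by the Intel Trace Collector message checker at run time. A minimal sketch of setting it before launch; the 5-minute value and the srun line (taken from earlier in the thread) are illustrative, not required values:

```shell
# Adjust the message checker's deadlock timeout; the default is 1 minute
# per the reply above. "5m" is an illustrative value.
export VT_DEADLOCK_TIMEOUT=5m
# Then launch as usual, e.g. (placeholders for the real executable/input):
#   srun -N 1 -n 1 --ntasks-per-node 1 ./my_app input_file.txt
echo "VT_DEADLOCK_TIMEOUT=$VT_DEADLOCK_TIMEOUT"
```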
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255623#M7770</link>
      <description>&lt;P&gt;My calculations remain deadlocked in MPI_FINALIZE indefinitely. The job never ends because it is stuck in the second to last line of the code:&lt;/P&gt;
&lt;P&gt;CALL MPI_FINALIZE&lt;/P&gt;
&lt;P&gt;END PROGRAM&lt;/P&gt;
&lt;P&gt;If the code never ends, the cluster cores are never released, and I cannot run a suite of test cases automatically.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Feb 2021 15:47:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255623#M7770</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-12T15:47:25Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255625#M7771</link>
      <description>&lt;P&gt;The MPI standard only requires that rank 0 return from MPI_FINALIZE.&amp;nbsp; From version 3.0 of the standard in Chapter 8, section 8.7:&lt;/P&gt;
&lt;P&gt;Although it is not required that all processes return from MPI_FINALIZE, it is required that at least process 0 in MPI_COMM_WORLD return, so that users can know that the MPI portion of the computation is over. In addition, in a POSIX environment, users may desire to supply an exit code for each process that returns from MPI_FINALIZE.&lt;/P&gt;
&lt;P&gt;So this is what I need to do -- exit MPI_FINALIZE with some sort of error code.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Feb 2021 15:55:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1255625#M7771</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-12T15:55:14Z</dc:date>
    </item>
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1258861#M7836</link>
      <description>&lt;P&gt;Please run with &lt;B&gt;VT_VERBOSE=5&lt;/B&gt; and attach the output.  Also, please get the stack of the hanging process with &lt;I&gt;gstack &amp;lt;pid&amp;gt;&lt;/I&gt;.&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Feb 2021 15:59:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1258861#M7836</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2021-02-24T15:59:53Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259027#M7838</link>
      <description>&lt;P&gt;I have attached the output of the gstack command. The VT_VERBOSE output is extensive, but the bottom line appears to be that I have not freed a datatype. I checked, and the only MPI datatype I create, I free with MPI_TYPE_FREE.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: LOCAL:DATATYPE:NOT_FREED: warning&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: When calling MPI_Finalize() there were unfreed user-defined datatypes:&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: 1 in this process.&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: This may indicate that resources are leaked at runtime.&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: To clean up properly MPI_Type_free() should be called for&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: all user-defined datatypes.&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: 1. 1 time:&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: mpi_type_create_struct_(count=2, *array_of_blocklens=0x7ffebc055cc0, *array_of_displacements=0x7ffebc055c90, *array_of_types=0x7ffebc055cb0, *newtype=0x7ffebc055b28, *ierr=0xda99900)&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: fds_IP_exchange_diagnostics_ (/home4/mcgratta/firemodels/fds/Build/impi_intel_linux_64_db/../../Source/main.f90:3510)&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: MAIN__ (/home4/mcgratta/firemodels/fds/Build/impi_intel_linux_64_db/../../Source/main.f90:922)&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: main (/home4/mcgratta/firemodels/fds/Build/impi_intel_linux_64_db/fds_impi_intel_linux_64_db)&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: __libc_start_main (/usr/lib64/libc-2.17.so)&lt;BR /&gt;[1 Wed Feb 24 14:52:42 2021] WARNING: (/home4/mcgratta/firemodels/fds/Build/impi_intel_linux_64_db/fds_impi_intel_linux_64_db)&lt;BR /&gt;[0 Wed Feb 24 14:52:43 2021] INFO: "logging": internal info...&lt;BR /&gt;[0 Wed Feb 24 14:52:43 2021] INFO: "logging": communicators...&lt;BR /&gt;[1 Wed Feb 24 14:52:43 2021] INFO: "logging": internal info...&lt;BR /&gt;[1 Wed Feb 24 14:52:43 2021] INFO: "logging": communicators...&lt;/P&gt;</description>
      <pubDate>Wed, 24 Feb 2021 20:19:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259027#M7838</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-24T20:19:24Z</dc:date>
    </item>
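The NOT_FREED warning quoted above can be isolated from a captured stderr log with grep. A minimal sketch, where run.err is a hypothetical file holding two abbreviated lines from the log in this post:

```shell
# Write two abbreviated lines from the checker log quoted above into a
# placeholder file, then pull out the leak category and the creation site.
printf '%s\n' \
  'WARNING: LOCAL:DATATYPE:NOT_FREED: warning' \
  'WARNING: mpi_type_create_struct_(count=2, ...)' > run.err
grep 'NOT_FREED' run.err               # the category of the leak warning
grep 'mpi_type_create_struct' run.err  # the call that created the datatype
```

The second grep is the useful one in practice: the checker prints the full call stack of the creation site, which points at the MPI_TYPE_CREATE_STRUCT call whose result was never passed to MPI_TYPE_FREE.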
    <item>
      <title>Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259039#M7839</link>
      <description>&lt;P&gt;Please go ahead and attach the output with VT_VERBOSE=1; I will provide both of these to our development team for analysis.&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Feb 2021 21:00:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259039#M7839</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2021-02-24T21:00:49Z</dc:date>
    </item>
    <item>
      <title>Re: Re:-check_mpi causes code to get stuck in MPI_FINALIZE</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259041#M7840</link>
      <description>&lt;P&gt;The file I have attached is standard error. It does not contain much info with VT_VERBOSE=1. Is this the file you meant?&lt;/P&gt;</description>
      <pubDate>Wed, 24 Feb 2021 21:13:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/check-mpi-causes-code-to-get-stuck-in-MPI-FINALIZE/m-p/1259041#M7840</guid>
      <dc:creator>Kevin_McGrattan</dc:creator>
      <dc:date>2021-02-24T21:13:46Z</dc:date>
    </item>
  </channel>
</rss>

