<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>MPI hangs on intranode communication in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1640446#M11967</link>
    <description>&lt;P&gt;I am running a Fortran+MPI job on a single server (Xeon(R) Gold 6430, 2 sockets, 32 cores per socket, hyperthreading on). The job:&lt;/P&gt;&lt;P&gt;1) pid=0 reads some data&lt;/P&gt;&lt;P&gt;2) the data are broadcast to all processes (not huge, &amp;lt; 150 MB)&lt;/P&gt;&lt;P&gt;3) each process computes some statistics&lt;/P&gt;&lt;P&gt;4) each process sends its results to pid 0&lt;/P&gt;&lt;P&gt;If I launch the job with 32 cores, it runs quickly and everything works fine. If I launch it with 64 cores, it hangs at step 2.&lt;/P&gt;&lt;P&gt;mpirun -np 32 build/mpi/SparseDemand.exe input.prop&lt;/P&gt;&lt;P&gt;mpirun -np 64 build/mpi/SparseDemand.exe input.prop&lt;/P&gt;&lt;P&gt;I have tried various combinations of I_MPI environment variables and tried tuning, but nothing has improved the situation.&lt;/P&gt;&lt;P&gt;Any thoughts on how to overcome this problem?&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;</description>
    <pubDate>Thu, 31 Oct 2024 14:17:23 GMT</pubDate>
    <dc:creator>LPN2024</dc:creator>
    <dc:date>2024-10-31T14:17:23Z</dc:date>
    <item>
      <title>MPI hangs on intranode communication</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1640446#M11967</link>
      <description>&lt;P&gt;I am running a Fortran+MPI job on a single server (Xeon(R) Gold 6430, 2 sockets, 32 cores per socket, hyperthreading on). The job:&lt;/P&gt;&lt;P&gt;1) pid=0 reads some data&lt;/P&gt;&lt;P&gt;2) the data are broadcast to all processes (not huge, &amp;lt; 150 MB)&lt;/P&gt;&lt;P&gt;3) each process computes some statistics&lt;/P&gt;&lt;P&gt;4) each process sends its results to pid 0&lt;/P&gt;&lt;P&gt;If I launch the job with 32 cores, it runs quickly and everything works fine. If I launch it with 64 cores, it hangs at step 2.&lt;/P&gt;&lt;P&gt;mpirun -np 32 build/mpi/SparseDemand.exe input.prop&lt;/P&gt;&lt;P&gt;mpirun -np 64 build/mpi/SparseDemand.exe input.prop&lt;/P&gt;&lt;P&gt;I have tried various combinations of I_MPI environment variables and tried tuning, but nothing has improved the situation.&lt;/P&gt;&lt;P&gt;Any thoughts on how to overcome this problem?&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Thu, 31 Oct 2024 14:17:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1640446#M11967</guid>
      <dc:creator>LPN2024</dc:creator>
      <dc:date>2024-10-31T14:17:23Z</dc:date>
    </item>
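The four-step job described in the post can be sketched in Fortran+MPI roughly as follows. This is a minimal sketch, not the poster's actual code: the buffer name, sizes, and the use of MPI_GATHER in place of individual sends to pid 0 are illustrative assumptions.

```fortran
program sparse_demand_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  integer, parameter :: n = 1000000      ! illustrative size, well under the poster's ~150 MB
  real(8), allocatable :: buf(:), all_stats(:)
  real(8) :: stats(4)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  allocate(buf(n), all_stats(4*nprocs))

  ! 1) pid=0 reads some data (stubbed here)
  if (rank == 0) buf = 1.0d0

  ! 2) broadcast the data to all processes: every rank in the
  !    communicator must reach this call with a matching count,
  !    datatype, and root, or the job hangs
  call MPI_BCAST(buf, n, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  ! 3) each process computes some statistics
  stats = (/ sum(buf), minval(buf), maxval(buf), dble(rank) /)

  ! 4) each process sends its results to pid 0 (MPI_GATHER stands in
  !    for individual sends)
  call MPI_GATHER(stats, 4, MPI_DOUBLE_PRECISION, all_stats, 4, &
                  MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  call MPI_FINALIZE(ierr)
end program sparse_demand_sketch
```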
    <item>
      <title>Re: MPI hangs on intranode communication</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641319#M11970</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/333172"&gt;@LPN2024&lt;/a&gt;&amp;nbsp; please provide the full output of&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;I_MPI_DEBUG=10 I_MPI_HYDRA_DEBUG=1 mpirun -np 64 IMB-MPI1
&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 05 Nov 2024 10:30:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641319#M11970</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-11-05T10:30:41Z</dc:date>
    </item>
    <item>
      <title>Re: MPI hangs on intranode communication</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641337#M11975</link>
      <description>&lt;P&gt;Attached is the output of the benchmark.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Nov 2024 11:59:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641337#M11975</guid>
      <dc:creator>LPN2024</dc:creator>
      <dc:date>2024-11-05T11:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: MPI hangs on intranode communication</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641343#M11976</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/333172"&gt;@LPN2024&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since the benchmark output looks fine, the problem is likely in your application.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you develop the application yourself, try running it under a suitable debugger and check that everything works as expected. If you do not develop this application, please ask its developers for guidance.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Nov 2024 12:21:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641343#M11976</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-11-05T12:21:32Z</dc:date>
    </item>
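The classic application bug behind "runs with 32 ranks, hangs with 64" is a collective call that not every rank reaches with matching arguments, which a debugger or a backtrace of the hung ranks will usually reveal. A contrived Fortran illustration (hypothetical code, not taken from the poster's application):

```fortran
program bcast_mismatch_demo
  use mpi
  implicit none
  integer :: ierr, rank
  real(8) :: buf(8)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! BUG: MPI_BCAST is collective over MPI_COMM_WORLD, so every rank
  ! must call it. Because ranks >= 32 skip the call, "mpirun -np 32"
  ! completes while "mpirun -np 64" deadlocks (at the broadcast or at
  ! MPI_FINALIZE), mirroring the symptom in this thread.
  if (rank < 32) then
     call MPI_BCAST(buf, 8, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program bcast_mismatch_demo
```

Other variants of the same bug class hang only as the data or rank count grows: mismatched counts or datatypes between root and non-root ranks, or point-to-point sends whose matching receives are posted in a rank-count-dependent order.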
    <item>
      <title>Re: MPI hangs on intranode communication</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641416#M11981</link>
      <description>&lt;P&gt;OK, thank you. I will recheck the application in more detail.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Nov 2024 18:01:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-hangs-on-intranode-communication/m-p/1641416#M11981</guid>
      <dc:creator>LPN2024</dc:creator>
      <dc:date>2024-11-05T18:01:31Z</dc:date>
    </item>
  </channel>
</rss>

