<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>MPI issue in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155346#M6208</link>
    <description>&lt;P&gt;Intel mpiexec 2019.0.6 does not list a -ph option.&lt;/P&gt;&lt;P&gt;Are you intending to use -ppn (processes per node) instead?&lt;/P&gt;&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
    <pubDate>Sun, 05 Apr 2020 12:51:51 GMT</pubDate>
    <dc:creator>jimdempseyatthecove</dc:creator>
    <dc:date>2020-04-05T12:51:51Z</dc:date>
    <item>
      <title>MPI issue</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155345#M6207</link>
      <description>&lt;P&gt;I was running a program on Skylake nodes. If I run it on one node (np=2, ph=2), the program completes successfully. However, if I run it on two nodes (np=2, ph=1), I get the following assertion failure:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;rank = 1, revents = 8, state = 8&lt;BR /&gt;Assertion failed in file ../../src/mpid/ch3/channels/nemesis/netmod/tcp/socksm.c at line 2988: (it_plfd-&amp;gt;revents &amp;amp; POLLERR) == 0&lt;BR /&gt;internal ABORT - process 0&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Does anyone know the possible causes of this type of assertion failure? The strange thing is that all of my colleagues who use csh can run the program without errors, while everyone who uses bash (including me) always sees the same failure at the same line (2988).&lt;/P&gt;</description>
      <pubDate>Sat, 04 Apr 2020 03:18:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155345#M6207</guid>
      <dc:creator>llodds</dc:creator>
      <dc:date>2020-04-04T03:18:38Z</dc:date>
    </item>
    <item>
      <title>Intel mpiexec 2019.0.6 does</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155346#M6208</link>
      <description>&lt;P&gt;Intel mpiexec 2019.0.6 does not list a -ph option.&lt;/P&gt;&lt;P&gt;Are you intending to use -ppn (processes per node) instead?&lt;/P&gt;&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Sun, 05 Apr 2020 12:51:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155346#M6208</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2020-04-05T12:51:51Z</dc:date>
    </item>
    <item>
      <title>Sorry, ph (per host) was an</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155347#M6209</link>
      <description>&lt;P&gt;Sorry, ph (per host) is an option used by our wrapper to launch the binary. It is equivalent to ppn.&lt;/P&gt;</description>
      <pubDate>Sun, 05 Apr 2020 15:27:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155347#M6209</guid>
      <dc:creator>llodds</dc:creator>
      <dc:date>2020-04-05T15:27:48Z</dc:date>
    </item>
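    <!-- A minimal usage sketch of the launch configurations discussed above, given the
         ph/ppn equivalence: -n sets the total number of ranks and -ppn the number of
         ranks per node (both are standard Intel MPI Hydra options). The binary name
         (./a.out) and the host names (node01, node02) are placeholders, not taken
         from this thread.

           # one node, 2 ranks on the same host (the case that completed successfully)
           mpiexec -n 2 -ppn 2 -hosts node01 ./a.out

           # two nodes, 2 ranks with 1 rank per host (the case that hit the assertion)
           mpiexec -n 2 -ppn 1 -hosts node01,node02 ./a.out
    -->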
    <item>
      <title>Hi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155348#M6210</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;This type of error occurs when one of the MPI processes is terminated by a signal (for example, SIGTERM or SIGKILL), which the TCP layer then reports as a failed connection.&lt;/P&gt;&lt;P&gt;Possible reasons include a host reboot, an unexpected signal, out-of-memory (OOM) killer actions, and others.&lt;/P&gt;&lt;P&gt;Could you check whether you are able to ssh to the other nodes?&lt;/P&gt;&lt;P&gt;Could you also look at this thread and see whether it helps: &lt;A href="https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/747448"&gt;https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/747448&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2020 07:30:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155348#M6210</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-04-06T07:30:48Z</dc:date>
    </item>
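    <!-- A quick way to run the ssh check suggested above, assuming Hydra launches remote
         ranks over ssh (the default): run a trivial command on each remote node from the
         node that starts the job. The host name below is a placeholder.

           ssh node02 hostname

         If this prompts for a password, hangs, or fails, the launcher may not be able to
         start or keep remote ranks alive, which can surface as the POLLERR assertion.
    -->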
    <item>
      <title>The solution in that post</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155349#M6211</link>
      <description>&lt;P&gt;The solution in that post didn't resolve my issue. I put another program (np=2, ph=1) into the same job script and it completed successfully, so I don't think the problem is related to the hardware.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2020 15:15:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155349#M6211</guid>
      <dc:creator>llodds</dc:creator>
      <dc:date>2020-04-06T15:15:21Z</dc:date>
    </item>
    <item>
      <title>Someone may close the ticket</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155350#M6212</link>
      <description>&lt;P&gt;Someone may close the ticket now. I found that the issue was related to a limited stack size. Some dynamic arrays were not passed into the subroutines, so they ended up being allocated as local (stack) arrays inside the subroutines. This exhausts the stack and eventually crashes one of the nodes.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 01:39:45 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155350#M6212</guid>
      <dc:creator>llodds</dc:creator>
      <dc:date>2020-04-08T01:39:45Z</dc:date>
    </item>
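    <!-- A minimal C sketch of the stack-size failure mode described above, assuming the
         arrays in question ended up as local (stack) arrays inside a subroutine. The
         array size and names are illustrative only and not taken from the thread.
         When one rank dies this way, the surviving rank observes an error (POLLERR) on
         the peer's TCP socket, which is what the failed assertion above reports.

           #include <stdio.h>
           #include <stdlib.h>

           #define N (64 * 1024 * 1024)   /* 64M doubles = 512 MB, far beyond a typical 8 MB stack */

           /* Local (automatic) array: lives on the stack, so it is bounded by the shell's
              stack limit and can crash the process as soon as the function is entered. */
           static void work_on_stack(void) {
               double buf[N];
               buf[0] = buf[N - 1] = 1.0;
               printf("stack version finished: %f\n", buf[0] + buf[N - 1]);
           }

           /* Heap allocation: bounded by available memory, not by the stack limit. */
           static void work_on_heap(void) {
               double *buf = malloc(N * sizeof *buf);
               if (!buf) { perror("malloc"); exit(EXIT_FAILURE); }
               buf[0] = buf[N - 1] = 1.0;
               printf("heap version finished: %f\n", buf[0] + buf[N - 1]);
               free(buf);
           }

           int main(void) {
               work_on_heap();    /* completes */
               work_on_stack();   /* likely crashes unless the stack limit is raised */
               return 0;
           }

         In bash the per-process stack limit is raised with "ulimit -s unlimited", while
         csh/tcsh use "limit stacksize unlimited"; a difference in those settings between
         the two shells would be consistent with the csh vs. bash behaviour reported
         earlier in the thread.
    -->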
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155351#M6213</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Thanks for the confirmation. We will go ahead and close this thread. Feel free to reach out to us with any further queries.&lt;/P&gt;&lt;P&gt;--Rahul&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2020 11:33:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-issue/m-p/1155351#M6213</guid>
      <dc:creator>RahulV_intel</dc:creator>
      <dc:date>2020-04-08T11:33:43Z</dc:date>
    </item>
  </channel>
</rss>

