<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MPI_Comm_spawn hangs in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265653#M7941</link>
    <description>&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;No, there are no network changes occurring.&lt;/P&gt;
&lt;P&gt;This is the debug output:&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localroot -host localhost mpi_test.exe&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 31276 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;Before spawn&lt;BR /&gt;[mpiexec@WST18] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;BR /&gt;[mpiexec@WST18] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at unknownhost:8680&lt;BR /&gt;[mpiexec@WST18] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;BR /&gt;[mpiexec@WST18] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;BR /&gt;[mpiexec@WST18] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;BR /&gt;[mpiexec@WST18] do_spawn (mpiexec.c:1129): error setting up the boostrap proxies&lt;/P&gt;
&lt;P&gt;Kind regards,&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
    <pubDate>Thu, 18 Mar 2021 16:32:01 GMT</pubDate>
    <dc:creator>Mark14</dc:creator>
    <dc:date>2021-03-18T16:32:01Z</dc:date>
    <item>
      <title>MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1262716#M7894</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;In our distributed application (C++), the main application launches workers on localhost and/or remote machines with MPI_Comm_spawn.&lt;/P&gt;
&lt;P&gt;If a machine cannot be reached (network issue, nonexistent name, etc.), MPI_Comm_spawn prints the following messages and hangs forever, so our distributed application hangs as well:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at nonexisting:8680&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [mpiexec@WST18] do_spawn (mpiexec.c:1129): error setting up the boostrap proxies&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, if mpiexec is called directly from the command line, it prints a similar stack trace and stops with errorlevel=-1. This is the expected behavior.&lt;/P&gt;
&lt;P&gt;&amp;gt; mpiexec.exe&amp;nbsp; -host nonexistinghost &amp;lt;command&amp;gt;&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at nonexistinghost:8680&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;/P&gt;
&lt;P&gt;[mpiexec@WST18] wmain (mpiexec.c:1938): error setting up the boostrap proxies&lt;/P&gt;
&lt;P&gt;Why does MPI_Comm_spawn hang instead of returning? Can this be avoided with a setting or environment variable?&lt;/P&gt;
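&lt;P&gt;For reference, the failing call is essentially the following (a minimal sketch; the worker name, host, and the MPI_ERRORS_RETURN handler are illustrative, not our exact code):&lt;/P&gt;
&lt;LI-CODE lang="cpp"&gt;#include "mpi.h"

int main(int argc, char *argv[])
{
  MPI_Init(&amp;amp;argc, &amp;amp;argv);

  /* Request error codes instead of aborting. This does not help here:
     the hang happens while mpiexec sets up the bootstrap proxies. */
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

  /* Ask for the worker to be spawned on a (possibly unreachable) host. */
  MPI_Info info;
  MPI_Info_create(&amp;amp;info);
  MPI_Info_set(info, "host", "nonexistinghost");

  MPI_Comm worker;
  int errcodes[1];
  MPI_Comm_spawn("worker.exe", MPI_ARGV_NULL, 1, info, 0,
                 MPI_COMM_SELF, &amp;amp;worker, errcodes);  /* hangs here */

  MPI_Info_free(&amp;amp;info);
  MPI_Finalize();
  return 0;
}&lt;/LI-CODE&gt;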
&lt;P&gt;Environment:&lt;BR /&gt;&amp;nbsp;&amp;nbsp; Windows&lt;BR /&gt;&amp;nbsp;&amp;nbsp; MPI 2019 Update 8 (I_MPI_FABRICS=ofi used)&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Tue, 09 Mar 2021 13:26:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1262716#M7894</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-09T13:26:17Z</dc:date>
    </item>
    <item>
      <title>Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1263046#M7896</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks for reaching out to us.&lt;/P&gt;&lt;P&gt;After you reported the hang, we tested a sample MPI_Comm_spawn program in our Windows environment, and for us the program stopped with exit code -1.&lt;/P&gt;&lt;P&gt;Currently, I am not sure why your code is hanging. Could you please provide a sample reproducer of your spawn code so we can debug the problem?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Mar 2021 12:21:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1263046#M7896</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-03-10T12:21:58Z</dc:date>
    </item>
    <item>
      <title>Re: Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1263385#M7904</link>
      <description>&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;Thanks for having a look.&lt;/P&gt;
&lt;P&gt;I’ve uploaded a zip file containing a test program, the source code, and the MPI DLLs/EXEs.&lt;/P&gt;
&lt;P&gt;If the test program is launched with mpiexec, it hangs:&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localroot -host localhost mpi_test.exe&lt;BR /&gt;Before spawn&lt;BR /&gt;[mpiexec@WST18] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;BR /&gt;[mpiexec@WST18] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at unknownhost:8680&lt;BR /&gt;[mpiexec@WST18] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;BR /&gt;[mpiexec@WST18] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;BR /&gt;[mpiexec@WST18] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;BR /&gt;[mpiexec@WST18] do_spawn (mpiexec.c:1129): error setting up the boostrap proxies&lt;/P&gt;
&lt;P&gt;This happens with MPI 2019 Update 9. With MPI 2018 Update 2, the program instead ends after a few seconds.&lt;/P&gt;
&lt;P&gt;Can you have a look at why it hangs with version 2019?&lt;/P&gt;
&lt;P&gt;Kr&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Thu, 11 Mar 2021 12:01:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1263385#M7904</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-11T12:01:51Z</dc:date>
    </item>
    <item>
      <title>Re: Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265104#M7927</link>
      <description>&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="VIiyi"&gt;&lt;SPAN class="JLqJ4b ChMk0b C1N51c" data-language-for-alternatives="en" data-language-to-translate-into="nl" data-phrase-index="0"&gt;&lt;SPAN&gt;Have you been able to look into this issue ?&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="VIiyi"&gt;&lt;SPAN class="JLqJ4b ChMk0b C1N51c" data-language-for-alternatives="en" data-language-to-translate-into="nl" data-phrase-index="0"&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="VIiyi"&gt;&lt;SPAN class="JLqJ4b ChMk0b C1N51c" data-language-for-alternatives="en" data-language-to-translate-into="nl" data-phrase-index="0"&gt;&lt;SPAN&gt;Mark&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Mar 2021 08:46:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265104#M7927</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-17T08:46:32Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265541#M7940</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have tried to reproduce the issue but we are facing a different error while running the program.&lt;/P&gt;
&lt;P&gt;Coming to your issue: it is not purely a network problem, since we can see that "Before Spawn" is printed before the error appears. Did any network changes occur in between?&lt;/P&gt;
&lt;P&gt;Could you provide the debug logs by setting I_MPI_DEBUG=10?&lt;/P&gt;
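&lt;P&gt;For example (reusing the command line from your earlier post):&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;set I_MPI_DEBUG=10
mpiexec.exe -n 1 -localroot -host localhost mpi_test.exe&lt;/LI-CODE&gt;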
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Thu, 18 Mar 2021 12:56:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265541#M7940</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-03-18T12:56:40Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265653#M7941</link>
      <description>&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;No, there are no network changes occurring.&lt;/P&gt;
&lt;P&gt;This is the debug output:&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localroot -host localhost mpi_test.exe&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 31276 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;Before spawn&lt;BR /&gt;[mpiexec@WST18] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;BR /&gt;[mpiexec@WST18] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at unknownhost:8680&lt;BR /&gt;[mpiexec@WST18] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;BR /&gt;[mpiexec@WST18] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;BR /&gt;[mpiexec@WST18] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;BR /&gt;[mpiexec@WST18] do_spawn (mpiexec.c:1129): error setting up the boostrap proxies&lt;/P&gt;
&lt;P&gt;Kind regards,&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Thu, 18 Mar 2021 16:32:01 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1265653#M7941</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-18T16:32:01Z</dc:date>
    </item>
    <item>
      <title>Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267362#M7966</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Could you try with &lt;I&gt;-localonly &lt;/I&gt;instead of&lt;I&gt; -localroot &lt;/I&gt;and see if it helps?&lt;/P&gt;&lt;P&gt;Are you targeting a cluster or running on a single node?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Mar 2021 15:36:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267362#M7966</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-03-24T15:36:35Z</dc:date>
    </item>
    <item>
      <title>Re: Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267368#M7968</link>
      <description>&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;With -localonly instead of -localroot, MPI_Comm_spawn also hangs.&lt;/P&gt;
&lt;P&gt;This is the output:&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localonly -host localhost mpi_test.exe&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 27508 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;Before spawn&lt;BR /&gt;[proxy:1:0@WST18 ] HYD_spawn (..\windows\src\hydra_spawn.c:245): unable to run process C:\Temp\mpi/worker.exe (error code 2)&lt;BR /&gt;[proxy:1:0@WST18 ] launch_processes (proxy.c:569): error creating process (error code 2). The system cannot find the file specified.&lt;/P&gt;
&lt;P&gt;[proxy:1:0@WST18 ] main (proxy.c:920): error launching_processes&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are targeting a cluster.&lt;/P&gt;
&lt;P&gt;Kind regards&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Wed, 24 Mar 2021 15:49:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267368#M7968</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-24T15:49:33Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267391#M7970</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the mpi_test code, you pass "worker.exe" as the executable parameter to the MPI_Comm_spawn function, but there is no executable with that name in your path, hence the error: &lt;I&gt;The system cannot find the file specified.&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;Please compile a sample worker.exe and run it again.&lt;/P&gt;
&lt;P&gt;You can use the below code for worker.cpp&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="cpp"&gt;#include "mpi.h"

#include &amp;lt;stdio.h&amp;gt;



int main(int argc, char *argv[])

{

  MPI_Init(&amp;amp;argc, &amp;amp;argv);



  MPI_Comm com;

  MPI_Comm_get_parent(&amp;amp;com);



  MPI_Finalize();

  return 0;

}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Compile it using &lt;STRONG&gt;&lt;I&gt;mpiicc worker.cpp -o worker&lt;/I&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let me know the results.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Wed, 24 Mar 2021 16:44:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267391#M7970</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-03-24T16:44:06Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267413#M7971</link>
      <description>&lt;P&gt;Hi Prasanth&lt;/P&gt;
&lt;P&gt;With -localonly, it does not hang, but the remote host is ignored.&lt;BR /&gt;With -localroot, MPI_Comm_spawn hangs&lt;/P&gt;
&lt;P&gt;This is the logging with -localonly and -localroot.&lt;/P&gt;
&lt;P&gt;Kind regards,&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localonly -host localhost mpi_test.exe&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 21796 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;Before spawn&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 24932 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;After spawn&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;C:\Temp\mpi&amp;gt;mpiexec.exe -n 1 -localroot -host localhost mpi_test.exe&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2019 Update 9 Build 20201005&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2020 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.10.1a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 5244 WST18 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23}&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_FABRICS=ofi&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;Before spawn&lt;BR /&gt;[mpiexec@WST18 ] HYD_sock_connect (..\windows\src\hydra_sock.c:216): getaddrinfo returned error 11001&lt;BR /&gt;[mpiexec@WST18 ] HYD_connect_to_service (bstrap\service\service_launch.c:76): unable to connect to service at unknownhost:8680&lt;BR /&gt;[mpiexec@WST18 ] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:417): unable to connect to hydra service&lt;BR /&gt;[mpiexec@WST18 ] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:564): error launching bstrap proxy&lt;BR /&gt;[mpiexec@WST18 ] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:754): unable to launch bstrap proxy&lt;BR /&gt;[mpiexec@WST18 ] do_spawn (mpiexec.c:1129): error setting up the boostrap proxies&lt;/P&gt;</description>
      <pubDate>Wed, 24 Mar 2021 18:01:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1267413#M7971</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-24T18:01:29Z</dc:date>
    </item>
    <item>
      <title>Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1268717#M7988</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Here are the differences between -localroot and -localonly:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;-localroot&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Use this option to launch the root process directly from&amp;nbsp;mpiexec&amp;nbsp;if the host is local. You can use this option to launch GUI applications. The interactive process should be launched before any other process in a job. For&amp;nbsp;example:&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;&amp;gt; mpiexec -n 1 -host &amp;lt;host2&amp;gt; -localroot interactive.exe : -n 1 -host &amp;lt;host1&amp;gt; background.exe&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;-localonly&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;Use this option to run an application on the local node only. If you use this option only for the local node, the Hydra service is not required.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Since you have mentioned localhost as the only host, I don't think there will be a difference for you.&lt;/P&gt;&lt;P&gt;But you said that the remote host is ignored; what do you mean by that?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Mar 2021 10:30:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1268717#M7988</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-03-29T10:30:31Z</dc:date>
    </item>
    <item>
      <title>Re: Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1268724#M7989</link>
      <description>&lt;P&gt;Hi Prasanth&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;&amp;gt; But you have said that remote host is ignored? what do you mean by that?&lt;/P&gt;
&lt;P&gt;The mpi_test executable wants to spawn worker.exe on 'unknownhost'; see the provided source code.&lt;/P&gt;
&lt;P&gt;The command 'mpiexec.exe -n 1 -hosts localhost -localonly mpi_test.exe' ignores the host setting and launches worker.exe on localhost.&lt;/P&gt;
&lt;P&gt;The command 'mpiexec.exe -n 1 -hosts localhost -localroot mpi_test.exe' hangs in MPI_Comm_spawn.&lt;/P&gt;
&lt;P&gt;kr&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 29 Mar 2021 10:52:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1268724#M7989</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2021-03-29T10:52:05Z</dc:date>
    </item>
    <item>
      <title>Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1269924#M8020</link>
      <description>&lt;P&gt;Hi Mark,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;We are looking into it and we will get back to you soon.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 01 Apr 2021 10:20:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1269924#M8020</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2021-04-01T10:20:49Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1348999#M9068</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your patience.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IMPI 2021.5 supports FI_TCP_IFACE=lo to select 127.0.0.1; this works independently of any VPN.&lt;/P&gt;
&lt;P&gt;So, could you please try the latest Intel MPI 2021.5 by updating to Intel oneAPI 2022.1?&lt;/P&gt;
&lt;P&gt;Also, please set FI_TCP_IFACE=lo before running your code, as below:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;set FI_TCP_IFACE=lo
mpiexec -n 5 master.exe&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;
&lt;P&gt;Santosh&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jan 2022 11:58:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1348999#M9068</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2022-01-06T11:58:55Z</dc:date>
    </item>
    <item>
      <title>Re: MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1349424#M9080</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I’ve tried this with the simplified setup (see above) and it seems to work.&lt;/P&gt;
&lt;P&gt;However, because of another MPI issue (&lt;A href="https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Windows-MPI-2021-4-unable-to-create-process/td-p/1327189" target="_blank"&gt;https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Windows-MPI-2021-4-unable-to-create-process/td-p/1327189&lt;/A&gt;), we cannot yet integrate this MPI version into our product for further testing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Kr&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Fri, 07 Jan 2022 14:11:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1349424#M9080</guid>
      <dc:creator>Mark14</dc:creator>
      <dc:date>2022-01-07T14:11:03Z</dc:date>
    </item>
    <item>
      <title>Re:MPI_Comm_spawn hangs</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1350000#M9083</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks for the confirmation. Since your primary issue has been resolved, we are closing this thread. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;For any updates from Intel, you can keep track of your other issue here: &lt;A href="https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Windows-MPI-2021-4-unable-to-create-process/td-p/1327189" target="_blank"&gt;https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Windows-MPI-2021-4-unable-to-create-process/td-p/1327189&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;&lt;P&gt;Santosh &lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 10 Jan 2022 11:46:01 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-Comm-spawn-hangs/m-p/1350000#M9083</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2022-01-10T11:46:01Z</dc:date>
    </item>
  </channel>
</rss>

