<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Mpiexec issue order of machines in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133960#M5707</link>
    <description>Mpiexec issue order of machines - discussion thread from the Intel® MPI Library forum.</description>
    <pubDate>Thu, 25 Apr 2019 10:23:04 GMT</pubDate>
    <dc:creator>Hob</dc:creator>
    <dc:date>2019-04-25T10:23:04Z</dc:date>
    <item>
      <title>Mpiexec issue order of machines</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133954#M5701</link>
      <description>&lt;P&gt;Hi all,&lt;BR /&gt;&lt;BR /&gt;I am having an issue with mpiexec. I have an install bundled with Fire Dynamics Simulator (FDS) and I am attempting to run a simple hello-world program that ships with FDS, called test_mpi:&amp;nbsp;https://github.com/firemodels/fds/blob/master/Utilities/test_mpi/test_mpi.f90&lt;/P&gt;&lt;P&gt;The issue is that if I run:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;'mpiexec -hosts 2 non-local-machine 1 local-machine 1 test_mpi'&lt;/P&gt;&lt;P&gt;I get the hello world with the rank; however, if I swap the order so that the local-machine is first, only the localhost machine reports, with the non-local-machine never replying.&lt;BR /&gt;&lt;BR /&gt;Is this an expected result, or is there an issue somewhere?&lt;/P&gt;</description>
      <pubDate>Wed, 24 Apr 2019 15:23:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133954#M5701</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2019-04-24T15:23:02Z</dc:date>
    </item>
    <item>
      <title>That invocation is certain</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133955#M5702</link>
      <description>&lt;P&gt;That invocation is certain not to work; it does not conform to the mpiexec command-line syntax. You want to run something like:&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt; mpiexec -n 2 -host non-local-machine test_mpi : -n 1 -host localhost test_mpi
&lt;/PRE&gt;

&lt;P&gt;for your case.&lt;/P&gt;
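&lt;P&gt;If the colon-separated form remains awkward, the same placement can also be expressed with a machine file. This is only a sketch, assuming your Intel MPI build accepts the -machinefile option, that test_mpi is on the PATH of both machines, and that hosts.txt is a file name of your choosing:&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;rem one host:process-count pair per line (the space before &amp;gt; is deliberate,
rem so cmd does not parse "1&amp;gt;" as a stream redirect)
echo non-local-machine:1 &amp;gt; hosts.txt
echo localhost:1 &amp;gt;&amp;gt; hosts.txt
rem launch two ranks placed according to hosts.txt
mpiexec -machinefile hosts.txt -n 2 test_mpi&lt;/PRE&gt;</description>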
      <pubDate>Thu, 25 Apr 2019 09:24:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133955#M5702</guid>
      <dc:creator>Maksim_B_Intel</dc:creator>
      <dc:date>2019-04-25T09:24:10Z</dc:date>
    </item>
    <item>
      <title>Many thanks for this, I have</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133956#M5703</link>
      <description>&lt;P&gt;Many thanks for this. I have tried the above with the same result (a hang waiting for a reply). Ironically, if I launch the above:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;mpiexec -n 2 -host non-local-machine test_mpi : -n 1 -host localhost test_mpi&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;it hangs awaiting a localhost reply. If I then launch a separate MPI localhost-only request, I get a reply in the original command window whilst the second window waits for a reply.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Would this point to a network issue?&lt;/P&gt;</description>
      <pubDate>Thu, 25 Apr 2019 09:38:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133956#M5703</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2019-04-25T09:38:19Z</dc:date>
    </item>
    <item>
      <title>Could you provide output of </title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133957#M5704</link>
      <description>&lt;P&gt;Could you provide the output of&lt;/P&gt;
&lt;PRE class="brush:; class-name:dark;"&gt;mpiexec -v -n 1 -host localhost test_mpi&lt;/PRE&gt;

&lt;P&gt;If you get output from the remote machine, it is unlikely to be a network issue.&lt;/P&gt;
&lt;P&gt;Are you running under Linux or Windows?&lt;/P&gt;
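&lt;P&gt;If it turns out to be Windows, one optional way to get more library-side detail is the I_MPI_DEBUG environment variable; a minimal cmd sketch, with the value 5 chosen as an assumption for a reasonably verbose level:&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;rem higher I_MPI_DEBUG values print progressively more startup detail
set I_MPI_DEBUG=5
mpiexec -v -n 1 -host localhost test_mpi&lt;/PRE&gt;</description>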
      <pubDate>Thu, 25 Apr 2019 09:45:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133957#M5704</guid>
      <dc:creator>Maksim_B_Intel</dc:creator>
      <dc:date>2019-04-25T09:45:32Z</dc:date>
    </item>
    <item>
      <title>Quote:Maksim B. (Intel) wrote</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133958#M5705</link>
      <description>&lt;BLOCKQUOTE&gt;Maksim B. (Intel) wrote:&lt;BR /&gt;&lt;P&gt;Could you provide the output of&lt;/P&gt;
&lt;PRE class="class-name:dark;"&gt;mpiexec -v -n 1 -host localhost test_mpi&lt;/PRE&gt;

&lt;P&gt;If you get output from the remote machine, it is unlikely to be a network issue.&lt;/P&gt;
&lt;P&gt;Are you running under Linux or Windows?&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Under Windows:&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;type helpfds for help on running fds&lt;BR /&gt;C:\Users\CFD&amp;gt;mpiexec -v -n 1 -host localhost test_mpi&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=init pmi_version=1 pmi_subversion=1&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=get_maxes&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=get_appnum&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=appnum appnum=0&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=get_my_kvsname&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=my_kvsname kvsname=kvs_6920_0&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=get_my_kvsname&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=my_kvsname kvsname=kvs_6920_0&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=put kvsname=kvs_6920_0 key=OFI-0 value=OFI#0200CE2FC0A801330000000000000000$&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=put_result rc=0 msg=success&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=get kvsname=kvs_6920_0 key=OFI-0&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=get_result rc=0 msg=success value=OFI#0200CE2FC0A801330000000000000000$&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;&amp;nbsp;Hello world: rank &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0 &amp;nbsp;of &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1 &amp;nbsp;running on&lt;BR /&gt;&amp;nbsp;CFD-PC11&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;[proxy:0:0@CFD-PC11] pmi cmd from fd 372: cmd=finalize&lt;BR /&gt;[proxy:0:0@CFD-PC11] PMI response: cmd=finalize_ack&lt;/P&gt;
&lt;P&gt;C:\Users\CFD&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Apr 2019 09:53:57 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133958#M5705</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2019-04-25T09:53:57Z</dc:date>
    </item>
    <item>
      <title>What is your IMPI version,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133959#M5706</link>
      <description>&lt;P&gt;What is your IMPI version? It can be found with&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;mpiexec --version&lt;/PRE&gt;

&lt;P&gt;Can you give the output of&lt;/P&gt;

&lt;PRE class="brush:bash; class-name:dark;"&gt;mpiexec -v -n 1 -host non-local-machine test_mpi : -n 1 -host localhost test_mpi&lt;/PRE&gt;

&lt;P&gt;Can you check that the remote machine has hydra_service running?&lt;/P&gt;
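&lt;P&gt;One quick way to confirm that, sketched here with standard Windows tools rather than anything MPI-specific, is to run the following in a cmd prompt on the remote PC:&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;rem prints a matching line only if the Hydra process manager is running
tasklist | findstr /i hydra_service&lt;/PRE&gt;</description>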
      <pubDate>Thu, 25 Apr 2019 10:10:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133959#M5706</guid>
      <dc:creator>Maksim_B_Intel</dc:creator>
      <dc:date>2019-04-25T10:10:06Z</dc:date>
    </item>
    <item>
      <title>Quote:Maksim B. (Intel) wrote</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133960#M5707</link>
      <description>&lt;BLOCKQUOTE&gt;Maksim B. (Intel) wrote:&lt;BR /&gt;&lt;P&gt;What is your IMPI version? It can be found with&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;mpiexec --version&lt;/PRE&gt;

&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;BR /&gt;type helpfds for help on running fds&lt;BR /&gt;C:\Users\CFD&amp;gt;mpiexec --version&lt;BR /&gt;Intel(R) MPI Library for Windows* OS, Version 2019 Build 20180829 (id: 15f5d6c0c)&lt;BR /&gt;Copyright 2003-2018, Intel Corporation.&lt;/P&gt;
&lt;P&gt;C:\Users\CFD&lt;/P&gt;
&lt;BLOCKQUOTE&gt;Maksim B. (Intel) wrote:&lt;BR /&gt;
&lt;P&gt;Can you give the output of&lt;/P&gt;

&lt;PRE class="brush:bash; class-name:dark;"&gt;mpiexec -v -n 1 -host non-local-machine test_mpi : -n 1 -host localhost test_mpi&lt;/PRE&gt;

&lt;P&gt;Can you check that the remote machine has hydra_service running?&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;No output is produced; it just hangs.&lt;/P&gt;
&lt;P&gt;hydra_service.exe is running on the remote PC.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If I open a new cmd window and type mpiexec -n 1 test_mpi, the original console returns:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;C:\Users\CFD&amp;gt;mpiexec -v -n 1 -host cfd-pc2 test_mpi : -n 1 -host localhost test_mpi&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=init pmi_version=1 pmi_subversion=1&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get_maxes&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get_appnum&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=appnum appnum=1&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get_my_kvsname&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=my_kvsname kvsname=kvs_4312_0&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get_my_kvsname&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=my_kvsname kvsname=kvs_4312_0&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get kvsname=kvs_4312_0 key=PMI_process_mapping&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,1))&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get kvsname=kvs_4312_0 key=PMI_active_process_mapping&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,0))&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get_my_kvsname&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=my_kvsname kvsname=kvs_4312_0&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=put kvsname=kvs_4312_0 key=OFI-1 value=OFI#0200CEC6C0A801330000000000000000$&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=put_result rc=0 msg=success&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get kvsname=kvs_4312_0 key=OFI-0&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=get_result rc=0 msg=success value=OFI#0200CEC5C0A801330000000000000000$&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=get kvsname=kvs_4312_0 key=OFI-1&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=get_result rc=0 msg=success value=OFI#0200CEC6C0A801330000000000000000$&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=barrier_in&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=barrier_out&lt;BR /&gt;&amp;nbsp;Hello world: rank &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0 &amp;nbsp;of &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2 &amp;nbsp;running on&lt;BR /&gt;&amp;nbsp;CFD-PC11&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&amp;nbsp;Hello world: rank &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1 &amp;nbsp;of &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2 &amp;nbsp;running on&lt;BR /&gt;&amp;nbsp;CFD-PC11&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;[proxy:0:1@CFD-PC11] pmi cmd from fd 372: cmd=finalize&lt;BR /&gt;[proxy:0:1@CFD-PC11] PMI response: cmd=finalize_ack&lt;/P&gt;
&lt;P&gt;C:\Users\CFD&amp;gt;&lt;/P&gt;
&lt;P&gt;Many thanks,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Apr 2019 10:23:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Mpiexec-issue-order-of-machines/m-p/1133960#M5707</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2019-04-25T10:23:04Z</dc:date>
    </item>
  </channel>
</rss>

