<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Bizarre authenticity of host issue when running across multiple nodes with Intel MPI in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031680#M4178</link>
    <description>&lt;P&gt;I am attempting to run a job across three nodes.&amp;nbsp; I have configured passwordless ssh and it definitely works between every pair of nodes (each node can ssh to the other two without a password).&amp;nbsp; The known_hosts file is definitely correct and all 3 nodes have identical .ssh directories.&amp;nbsp; I have also tried adding the keys to ssh-agent, although I'm not sure that was necessary, as I didn't specify a passphrase when generating the id_rsa key (I know this is terrible security, but it's temporary for the sake of testing).&lt;/P&gt;

&lt;P&gt;I can run a job across nodes 1 and 2 simultaneously without any difficulty; however, if I try to use node 3 as well (or just nodes 1 and 3, or nodes 2 and 3), the terminal is spammed with "The authenticity of host 'node3 (IP of node 3)' can't be established." and there is no way to enter "yes" (even though I shouldn't have to in the first place, as node 3's key is already in the known_hosts file of nodes 1 and 2).&lt;/P&gt;

&lt;P&gt;If I try to launch the job from node 3, I receive the same messages in the terminal, but with the hostnames/IPs of nodes 1 and 2.&amp;nbsp; I am able to run the job solely on node 3.&lt;/P&gt;

&lt;P&gt;Any help would be greatly appreciated, as this has been a real headache.&amp;nbsp; Clearly there is something I have overlooked, even though the configuration and hardware of these three nodes are almost identical.&amp;nbsp; I am using Intel MPI 5.0.0.028 and CentOS 6.6.&amp;nbsp; The nodes are communicating over an InfiniBand interface.&amp;nbsp; Thanks for any input.&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jul 2015 18:09:25 GMT</pubDate>
    <dc:creator>Greg_S_1</dc:creator>
    <dc:date>2015-07-29T18:09:25Z</dc:date>
    <item>
      <title>Bizarre authenticity of host issue when running across multiple nodes with Intel MPI</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031680#M4178</link>
      <description>&lt;P&gt;I am attempting to run a job across three nodes.&amp;nbsp; I have configured passwordless ssh and it definitely works between every pair of nodes (each node can ssh to the other two without a password).&amp;nbsp; The known_hosts file is definitely correct and all 3 nodes have identical .ssh directories.&amp;nbsp; I have also tried adding the keys to ssh-agent, although I'm not sure that was necessary, as I didn't specify a passphrase when generating the id_rsa key (I know this is terrible security, but it's temporary for the sake of testing).&lt;/P&gt;

&lt;P&gt;I can run a job across nodes 1 and 2 simultaneously without any difficulty; however, if I try to use node 3 as well (or just nodes 1 and 3, or nodes 2 and 3), the terminal is spammed with "The authenticity of host 'node3 (IP of node 3)' can't be established." and there is no way to enter "yes" (even though I shouldn't have to in the first place, as node 3's key is already in the known_hosts file of nodes 1 and 2).&lt;/P&gt;
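&lt;P&gt;[A generic SSH-level workaround for this prompt, independent of Intel MPI: pre-populate known_hosts with ssh-keyscan on every node.&amp;nbsp; The hostnames below are placeholders for whatever names the launcher actually contacts:]&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;# Collect the RSA host keys of all nodes and append them to known_hosts
# (run on each node).  Include any alternate names/IPs the launcher might
# use (e.g. the IB interface), since known_hosts matching is exact.
ssh-keyscan -t rsa node1 node2 node3 &gt;&gt; ~/.ssh/known_hosts&lt;/PRE&gt;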

&lt;P&gt;If I try to launch the job from node 3, I receive the same messages in the terminal, but with the hostnames/IPs of nodes 1 and 2.&amp;nbsp; I am able to run the job solely on node 3.&lt;/P&gt;

&lt;P&gt;Any help would be greatly appreciated, as this has been a real headache.&amp;nbsp; Clearly there is something I have overlooked, even though the configuration and hardware of these three nodes are almost identical.&amp;nbsp; I am using Intel MPI 5.0.0.028 and CentOS 6.6.&amp;nbsp; The nodes are communicating over an InfiniBand interface.&amp;nbsp; Thanks for any input.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jul 2015 18:09:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031680#M4178</guid>
      <dc:creator>Greg_S_1</dc:creator>
      <dc:date>2015-07-29T18:09:25Z</dc:date>
    </item>
    <item>
      <title>Hey Greg,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031681#M4179</link>
      <description>&lt;P&gt;Hey Greg,&lt;/P&gt;

&lt;P&gt;Interesting, it seems like you're doing all the correct things.&amp;nbsp; We ship an sshconnectivity script with the Intel MPI Library install files.&amp;nbsp; Have you tried running that on your nodes?&amp;nbsp; It should do all the steps necessary for passwordless ssh setup.&lt;/P&gt;

&lt;P&gt;After you untar the l_mpi_p_5.0.0.028.tgz package, in the &lt;STRONG&gt;l_mpi_p_5.0.0.028/&lt;/STRONG&gt; directory, you should see '&lt;STRONG&gt;sshconnectivity.exp&lt;/STRONG&gt;'.&amp;nbsp; You'll need a file that contains a list of all your nodes:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ cat machines.LINUX
node1
node2
node3&lt;/PRE&gt;

&lt;P&gt;and you need to provide that to the ssh script:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ sshconnectivity.exp machines.LINUX&lt;/PRE&gt;
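&lt;P&gt;[As a quick sanity check after the script finishes (a generic suggestion, not part of the original reply), you can loop over the machine file with BatchMode enabled, which makes ssh fail with an error instead of prompting if anything is still misconfigured:]&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;# BatchMode=yes disables all interactive prompts, so any remaining
# host-key or password problem surfaces immediately as an error.
for h in $(cat machines.LINUX); do ssh -o BatchMode=yes $h hostname; done&lt;/PRE&gt;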

&lt;P&gt;It'll prompt you in the appropriate places for a passphrase (you can leave it blank).&lt;/P&gt;

&lt;P&gt;Let me know how this works.&lt;/P&gt;

&lt;P&gt;~Gergana&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jul 2015 22:47:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031681#M4179</guid>
      <dc:creator>Gergana_S_Intel</dc:creator>
      <dc:date>2015-07-29T22:47:18Z</dc:date>
    </item>
    <item>
      <title>Hi Gergana,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031682#M4180</link>
      <description>&lt;P&gt;Hi Gergana,&lt;/P&gt;

&lt;P&gt;Running that script did the trick, and I am able to launch a job across all 3 nodes now!&amp;nbsp; Thanks very much for your help!&lt;/P&gt;

&lt;P&gt;Best regards,&lt;/P&gt;

&lt;P&gt;Greg&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jul 2015 16:28:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031682#M4180</guid>
      <dc:creator>Greg_S_1</dc:creator>
      <dc:date>2015-07-30T16:28:25Z</dc:date>
    </item>
    <item>
      <title>Glad to hear it worked :) </title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031683#M4181</link>
      <description>&lt;P&gt;Glad to hear it worked :)&amp;nbsp; Have fun with MPI!&lt;/P&gt;

&lt;P&gt;~Gergana&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jul 2015 18:22:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Bizarre-authenticity-of-host-issue-when-running-across-multiple/m-p/1031683#M4181</guid>
      <dc:creator>Gergana_S_Intel</dc:creator>
      <dc:date>2015-07-30T18:22:36Z</dc:date>
    </item>
  </channel>
</rss>

