<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Unable to run bstrap_proxy error with intel-oneapi-mpi on Ubuntu 22.04.1 in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Unable-to-run-bstrap-proxy-error-with-intel-oneapi-mpi-on-Ubuntu/m-p/1627483#M11886</link>
    <description>A two-node Ubuntu 22.04.1 cluster (PC8 master, PC9 slave) fails to launch VASP through Intel MPI's mpirun with an "unable to run bstrap_proxy" error; the full post with the error log, /etc/fstab, and /etc/exports contents follows in the first item.</description>
    <pubDate>Fri, 30 Aug 2024 19:47:36 GMT</pubDate>
    <dc:creator>RMK-lfmd</dc:creator>
    <dc:date>2024-08-30T19:47:36Z</dc:date>
    <item>
      <title>Unable to run bstrap_proxy error with intel-oneapi-mpi on Ubuntu 22.04.1</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Unable-to-run-bstrap-proxy-error-with-intel-oneapi-mpi-on-Ubuntu/m-p/1627483#M11886</link>
      <description>&lt;P&gt;&lt;SPAN&gt;We are developing a simple cluster of two nodes: PC8 (master node) and PC9 (slave node).&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;A passwordless login is enabled from PC8 to PC9.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;We don't have any job scheduler at this stage.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Our OS is Ubuntu 22.04.1 LTS.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;We have shared /nfstest with NFS and successfully mounted it on PC8.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;We copied the Intel toolkit and VASP software to the shared directory.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Then we initialized the setvars.sh environment on both nodes individually.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;After that we tried to run VASP with the command mpirun -np 2 -host&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://10.0.0.8/" target="_blank" rel="noopener"&gt;10.0.0.8&lt;/A&gt;&lt;SPAN&gt;: -host&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://10.0.0.9/" target="_blank" rel="noopener"&gt;10.0.0.9&lt;/A&gt;&lt;SPAN&gt;: /nfstest/vasp/vasp.5.4.4/bin/&lt;/SPAN&gt;&lt;SPAN&gt;vasp_std&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;But it failed with the following error. 
Can someone help us please?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;mpirun -np 2 -host&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://10.0.0.8/" target="_blank" rel="noopener"&gt;10.0.0.8&lt;/A&gt;&lt;SPAN&gt;: -host&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://10.0.0.9/" target="_blank" rel="noopener"&gt;10.0.0.9&lt;/A&gt;&lt;SPAN&gt;: /nfstest/vasp/vasp.5.4.4/bin/&lt;/SPAN&gt;&lt;SPAN&gt;vasp_std&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] check_exit_codes (../../../../../src/pm/i_&lt;/SPAN&gt;&lt;SPAN&gt;hydra/libhydra/demux/hydra_&lt;/SPAN&gt;&lt;SPAN&gt;demux_poll.c:117): unable to run bstrap_proxy on&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://10.0.0.8/" target="_blank" rel="noopener"&gt;10.0.0.8&lt;/A&gt;&lt;SPAN&gt;: (pid 2853, exit code 65280)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] poll_for_event (../../../../../src/pm/i_&lt;/SPAN&gt;&lt;SPAN&gt;hydra/libhydra/demux/hydra_&lt;/SPAN&gt;&lt;SPAN&gt;demux_poll.c:159): check exit codes error&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] HYD_dmx_poll_wait_for_proxy_&lt;/SPAN&gt;&lt;SPAN&gt;event (../../../../../src/pm/i_&lt;/SPAN&gt;&lt;SPAN&gt;hydra/libhydra/demux/hydra_&lt;/SPAN&gt;&lt;SPAN&gt;demux_poll.c:212): poll for event error&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] HYD_bstrap_setup (../../../../../src/pm/i_&lt;/SPAN&gt;&lt;SPAN&gt;hydra/libhydra/bstrap/src/&lt;/SPAN&gt;&lt;SPAN&gt;intel/i_hydra_bstrap.c:1065): error waiting for event&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] HYD_print_bstrap_setup_error_&lt;/SPAN&gt;&lt;SPAN&gt;message (../../../../../src/pm/i_&lt;/SPAN&gt;&lt;SPAN&gt;hydra/mpiexec/intel/i_mpiexec.&lt;/SPAN&gt;&lt;SPAN&gt;c:1026): error setting up the bootstrap proxies&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] Possible reasons:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] 1. Host is unavailable. Please check that all hosts are available.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] 2. 
Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] 4. Ssh bootstrap cannot launch processes on remote host. Make sure that passwordless ssh connection is established across compute hosts.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;[mpiexec@PC8] &amp;nbsp; &amp;nbsp;You may try using -bootstrap option to select alternative launcher.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Our fstab file for NFS on PC9 is:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /etc/fstab: static file system information.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# Use 'blkid' to print the universally unique identifier for a&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# device; this may be used with UUID= as a more robust way to name devices&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# that works even if disks are added and removed. 
See fstab(5).&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# &amp;lt;file system&amp;gt; &amp;lt;mount point&amp;gt; &amp;nbsp; &amp;lt;type&amp;gt; &amp;nbsp;&amp;lt;options&amp;gt; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;lt;dump&amp;gt; &amp;nbsp;&amp;lt;pass&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# / was on /dev/nvme0n1p3 during installation&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;UUID=57a614ce-bbd0-4190-9695-&lt;/SPAN&gt;&lt;SPAN&gt;bf8bdcbe21c2 / &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ext4 &amp;nbsp; &amp;nbsp;errors=remount-ro 0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /boot/efi was on /dev/nvme0n1p1 during installation&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;UUID=2A53-01A6 &amp;nbsp;/boot/efi &amp;nbsp; &amp;nbsp; &amp;nbsp; vfat &amp;nbsp; &amp;nbsp;umask=0077 &amp;nbsp; &amp;nbsp; &amp;nbsp;0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;/swapfile &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; none &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;swap &amp;nbsp; &amp;nbsp;sw &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;10.0.0.8:/nfsshare /nfsshare nfs auto,noatime,nolock,bg,&lt;/SPAN&gt;&lt;SPAN&gt;nfsvers=3,intr,tcp,actimeo=&lt;/SPAN&gt;&lt;SPAN&gt;1800 0 0&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;while in /etc/exports on PC8 we have:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /etc/exports: the access control list for filesystems which may be exported&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# to NFS clients.&amp;nbsp; See exports(5).&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# Example for NFSv2 and 
NFSv3:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /srv/homes &amp;nbsp; &amp;nbsp; &amp;nbsp; hostname1(rw,sync,no_subtree_&lt;/SPAN&gt;&lt;SPAN&gt;check) hostname2(ro,sync,no_subtree_&lt;/SPAN&gt;&lt;SPAN&gt;check)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# Example for NFSv4:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /srv/nfs4 &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;gss/krb5i(rw,sync,fsid=0,&lt;/SPAN&gt;&lt;SPAN&gt;crossmnt,no_subtree_check)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;# /srv/nfs4/homes &amp;nbsp;gss/krb5i(rw,sync,no_subtree_&lt;/SPAN&gt;&lt;SPAN&gt;check)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;#/nfstest 10.0.0.9(rw,sync)&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 30 Aug 2024 19:47:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Unable-to-run-bstrap-proxy-error-with-intel-oneapi-mpi-on-Ubuntu/m-p/1627483#M11886</guid>
      <dc:creator>RMK-lfmd</dc:creator>
      <dc:date>2024-08-30T19:47:36Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run bstrap_proxy error with intel-oneapi-mpi on Ubuntu 22.04.1</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Unable-to-run-bstrap-proxy-error-with-intel-oneapi-mpi-on-Ubuntu/m-p/1630131#M11890</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/337466"&gt;@RMK-lfmd&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;Please change your command line.&lt;BR /&gt;&lt;SPAN class="sub_section_element_selectors"&gt;mpirun -np 2 -host&amp;nbsp;&lt;/SPAN&gt;&lt;A class="sub_section_element_selectors" href="http://10.0.0.8/" target="_blank" rel="noopener nofollow noreferrer"&gt;&lt;SPAN class="sub_section_element_selectors"&gt;10.0.0.8&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN class="sub_section_element_selectors"&gt;: -host&amp;nbsp;&lt;/SPAN&gt;&lt;A class="sub_section_element_selectors" href="http://10.0.0.9/" target="_blank" rel="noopener nofollow noreferrer"&gt;&lt;SPAN class="sub_section_element_selectors"&gt;10.0.0.9&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN class="sub_section_element_selectors"&gt;: /nfstest/vasp/vasp.5.4.4/bin/&lt;/SPAN&gt;&lt;SPAN class="sub_section_element_selectors"&gt;vasp_std&lt;/SPAN&gt;&lt;BR /&gt;The ":" enables MPMD launch mode, where you need to specify a binary for each set:&lt;BR /&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-13/mpmd-launch-mode.html" target="_blank"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-13/mpmd-launch-mode.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;In your case, you don't need that; just use a comma-separated list of hosts:&lt;BR /&gt;&lt;SPAN class="sub_section_element_selectors"&gt;mpirun -np 2 -host&amp;nbsp;&lt;/SPAN&gt;&lt;A class="sub_section_element_selectors" href="http://10.0.0.8/" target="_blank" rel="noopener nofollow noreferrer"&gt;&lt;SPAN class="sub_section_element_selectors"&gt;10.0.0.8&lt;/SPAN&gt;&lt;/A&gt;,&lt;A class="sub_section_element_selectors" href="http://10.0.0.9/" target="_blank" rel="noopener nofollow noreferrer"&gt;&lt;SPAN class="sub_section_element_selectors"&gt;10.0.0.9&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN 
class="sub_section_element_selectors"&gt;&amp;nbsp;/nfstest/vasp/vasp.5.4.4/bin/&lt;/SPAN&gt;&lt;SPAN class="sub_section_element_selectors"&gt;vasp_std&lt;BR /&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-13/controlling-process-placement.html" target="_blank"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-13/controlling-process-placement.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;Please make sure that you set up correct FQDNs; IP addresses might not be enough. Also make sure that you can connect not only from .8 to .9 but also from .9 to .8.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Initializing the environment on the remote node is of no use; the environment is local to your shell unless you put it into the shell's initialization files. mpirun will just propagate the environment, so the paths have to be identical.&lt;BR /&gt;&lt;BR /&gt;Please first test basic MPI functionality with something simple like&lt;BR /&gt;mpirun -np 2 -ppn 1 -hosts a,b IMB-MPI1&lt;/P&gt;</description>
      <pubDate>Tue, 10 Sep 2024 14:10:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Unable-to-run-bstrap-proxy-error-with-intel-oneapi-mpi-on-Ubuntu/m-p/1630131#M11890</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-09-10T14:10:20Z</dc:date>
    </item>
  </channel>
</rss>

