<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic MPI error on Windows cluster in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1713605#M12200</link>
    <description>MPI_BCAST hangs when a simple Intel MPI Fortran test program is run across two Windows hosts, while single-host runs succeed.</description>
    <pubDate>Fri, 29 Aug 2025 13:23:42 GMT</pubDate>
    <dc:creator>ako2</dc:creator>
    <dc:date>2025-08-29T13:23:42Z</dc:date>
    <item>
      <title>MPI error on Windows cluster</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1713605#M12200</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;&lt;P&gt;I installed the ifx compiler and MPI libraries using the package intel-fortran-essentials-2025.2.1.6_offline_20250825_115647.&lt;BR /&gt;I am trying to run tests on a Windows cluster. The program runs fine on a single host,&lt;BR /&gt;but hangs at MPI_BCAST when run on 2 hosts.&lt;/P&gt;&lt;P&gt;Test program:&lt;/P&gt;&lt;P&gt;PROGRAM winct&lt;BR /&gt;use mpi&lt;BR /&gt;implicit none&lt;BR /&gt;integer::ival,ierr,my_id,num_proc,len1&lt;BR /&gt;character(len=MPI_MAX_PROCESSOR_NAME)::procName&lt;BR /&gt;call MPI_Init(ierr)&lt;BR /&gt;call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)&lt;BR /&gt;call MPI_COMM_SIZE(MPI_COMM_WORLD, num_proc, ierr)&lt;BR /&gt;call MPI_GET_PROCESSOR_NAME(procName,len1,ierr)&lt;BR /&gt;ival=0&lt;BR /&gt;if (my_id==0)then&lt;BR /&gt;ival=10&lt;BR /&gt;endif&lt;BR /&gt;write(*,*)'Before BCAST: PE,procName:',my_id,procName,ival&lt;BR /&gt;! call mpi_barrier(MPI_COMM_WORLD, ierr)&lt;BR /&gt;call MPI_BCAST(ival,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)&lt;BR /&gt;write(*,*)'After BCAST: PE,procName:',my_id,procName,ival&lt;BR /&gt;call MPI_FINALIZE(ierr)&lt;BR /&gt;END PROGRAM winct&lt;/P&gt;&lt;P&gt;Compilation:&lt;/P&gt;&lt;P&gt;"C:\Program Files (x86)\Intel\oneAPI\compiler\2025.2\bin\ifx" /nologo /O3 /I"C:\Program Files (x86)\Intel\oneAPI\mpi\2021.16\include\mpi" /traceback /libs:dll /threads /c /Qlocation,link,"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" /Qm64 .\a.f90&lt;/P&gt;&lt;P&gt;Linking:&lt;/P&gt;&lt;P&gt;"C:\Program Files (x86)\Intel\oneAPI\compiler\2025.2\bin\ifx" /exe:"winct.exe" ... /Qoption,link,/LIBPATH:"C:\Program Files (x86)\Intel\oneAPI\mpi\2021.16\lib" /Qoption,link,/LIBPATH:"C:\Program Files (x86)\Intel\oneAPI\compiler\2022.1.0\windows\compiler\lib\intel64_win" ... 
/Qoption,link,/SUBSYSTEM:CONSOLE /IMPLIB: impi.lib /Qm64 "a.obj"&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;running (on HOST1)&lt;BR /&gt;mpiexec -n 4 -ppn 2 -hosts HOST1 -genv I_MPI_DEBUG=+2 winct.exe&lt;BR /&gt;is OK.&lt;/P&gt;&lt;P&gt;running (on HOST1)&lt;BR /&gt;mpiexec -n 4 -ppn 2 -hosts HOST2 -genv I_MPI_DEBUG=+2 winct.exe&lt;BR /&gt;is also OK.&lt;/P&gt;&lt;P&gt;running (on HOST1)&lt;BR /&gt;mpiexec -n 4 -ppn 2 -hosts HOST1,HOST2 -genv I_MPI_DEBUG=+2 winct.exe&lt;BR /&gt;produces the following output and hangs:&lt;/P&gt;&lt;P&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2021.6 Build 20220227&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2022 Intel Corporation. All rights reserved.&lt;BR /&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 2.1.0-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: tcp;ofi_rxm&lt;BR /&gt;[0] MPI startup(): File "/tuning_skx_shm-ofi_tcp-ofi-rxm.dat" not found&lt;BR /&gt;[0] MPI startup(): Load tuning file: "/tuning_skx_shm-ofi.dat"&lt;BR /&gt;[0] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found&lt;BR /&gt;[0] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found&lt;BR /&gt;[0] MPI startup(): File "" not found&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for ch4 level&lt;BR /&gt;[0] MPI startup(): File "" not found&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for net level&lt;BR /&gt;[0] MPI startup(): File "" not found&lt;BR /&gt;[0] MPI startup(): Unable to read tuning file for shm level&lt;BR /&gt;[3#53840:8176@HOST2] MPI startup(): Imported environment partly inaccesible. Map=0 Info=0&lt;BR /&gt;[2#46416:23368@HOST2] MPI startup(): Imported environment partly inaccesible. Map=0 Info=0&lt;BR /&gt;[0#48236:24688@HOST1] MPI startup(): Imported environment partly inaccesible. Map=0 Info=0&lt;BR /&gt;[1#44180:20120@HOST1] MPI startup(): Imported environment partly inaccesible. 
Map=0 Info=0&lt;BR /&gt;Before BCAST: PE,procName: 3 HOST2 0&lt;BR /&gt;Before BCAST: PE,procName: 0 HOST1 10&lt;BR /&gt;Before BCAST: PE,procName: 1 HOST1 0&lt;BR /&gt;Before BCAST: PE,procName: 2 HOST2 0&lt;/P&gt;&lt;P&gt;A similar problem occurs with other collective operations such as mpi_barrier, mpi_allgather, etc.&lt;/P&gt;&lt;P&gt;Any suggestion would be greatly appreciated.&lt;BR /&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Aug 2025 13:23:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1713605#M12200</guid>
      <dc:creator>ako2</dc:creator>
      <dc:date>2025-08-29T13:23:42Z</dc:date>
    </item>
    <item>
      <title>Re: MPI error on Windows cluster</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1713798#M12201</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/442188"&gt;@ako2&lt;/a&gt;&amp;nbsp;please follow the steps to enable user authorization:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-windows/2021-16/user-authorization.html" target="_blank"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-windows/2021-16/user-authorization.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-windows/2021-16/user-authorization.html" target="_blank"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-windows/2021-16/user-authorization.html&lt;/A&gt;&lt;/P&gt;
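&lt;P&gt;For reference, a minimal sketch of the PowerShell-remoting setup those pages describe, run in an elevated PowerShell on every host involved (the session name "mpi_profile" and the Get-Credential prompt here are illustrative assumptions, not quoted from the docs):&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Enable PowerShell remoting (WinRM listener and firewall rules)
Enable-PSRemoting -Force
# Register a session configuration that runs remote commands under your cluster account
Register-PSSessionConfiguration -Name mpi_profile -RunAsCredential (Get-Credential) -Force
# Verify the configuration is registered
Get-PSSessionConfiguration mpi_profile&lt;/LI-CODE&gt;
&lt;P&gt;Afterwards, pass the session name to the hydra bootstrap, e.g. mpiexec -genv I_MPI_HYDRA_BOOTSTRAP_POWERSHELL_PSCNAME=mpi_profile ...&lt;/P&gt;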
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 29 Aug 2025 18:48:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1713798#M12201</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2025-08-29T18:48:03Z</dc:date>
    </item>
    <item>
      <title>Re: MPI error on Windows cluster</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714413#M12202</link>
      <description>&lt;P&gt;Thank you for your response.&lt;/P&gt;&lt;P&gt;Initially the executable was not run from a shared location, but to try your suggestion I placed the executable in a shared folder that is accessible from both hosts (HOST1 and HOST2). However, it did not change anything. After creating the "mpi_profile" session on HOST2, running (on HOST1)&lt;/P&gt;&lt;P&gt;mpiexec -n 4 -ppn 2 -hosts HOST1,HOST2 -genv I_MPI_DEBUG=+2 -genv I_MPI_HYDRA_BOOTSTRAP_POWERSHELL_PSCNAME=mpi_profile I_MPI_AUTH_METHOD=delegate \\shared_folder_on_another_machine\winct.exe&lt;/P&gt;&lt;P&gt;we have the following error:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[mpiexec@HOST1] HYD_sock_connect (..\windows\src\hydra_sock.c:240): Retrying connection, retry_count=1, retries=0
[mpiexec@HOST1] HYD_connect_to_service (bstrap\service\service_launch.c:76): assert (!closed) failed
[mpiexec@HOST1] HYDI_bstrap_service_launch (bstrap\service\service_launch.c:319): unable to connect to hydra service (HOST2:8680)
[mpiexec@HOST1] remote_launch (bstrap\src\intel\i_hydra_bstrap.c:609): error launching bstrap proxy
[mpiexec@HOST1] single_launch (bstrap\src\intel\i_hydra_bstrap.c:667): remote launch error
[mpiexec@HOST1] launch_bstrap_proxies (bstrap\src\intel\i_hydra_bstrap.c:851): single launch error
[mpiexec@HOST1] HYD_bstrap_setup (bstrap\src\intel\i_hydra_bstrap.c:1045): unable to launch bstrap proxy
[mpiexec@HOST1] Error setting up the bootstrap proxies
[mpiexec@HOST1] Possible reasons:
[mpiexec@HOST1] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec@HOST1] 2. Cannot launch hydra_bstrap_proxy.exe or it crashed on one of the hosts.
[mpiexec@HOST1] Make sure hydra_bstrap_proxy.exe is available on all hosts and it has right permissions.
[mpiexec@HOST1] 3. Firewall refused connection.
[mpiexec@HOST1] Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec@HOST1] 4. service bootstrap cannot launch processes on remote host.
[mpiexec@HOST1] You may try using -bootstrap option to select alternative launcher.&lt;/LI-CODE&gt;&lt;P&gt;Running &lt;STRONG&gt;WITHOUT&lt;/STRONG&gt; "-genv I_MPI_AUTH_METHOD=delegate", the program executes, but once again hangs at MPI_BCAST:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[0#8920:46648@HOST1] MPI startup(): Intel(R) MPI Library, Version 2021.16 Build 20250722
[0#8920:46648@HOST1] MPI startup(): Copyright (C) 2003-2025 Intel Corporation. All rights reserved.
[0#8920:46648@HOST1] MPI startup(): library kind: release
[0#8920:46648@HOST1] MPI startup(): libfabric version: 2.1.0-impi
[0#8920:46648@HOST1] MPI startup(): libfabric provider: tcp
[0#8920:46648@HOST1] MPI startup(): File "/tuning_skx_shm-ofi_tcp.dat" not found
[0#8920:46648@HOST1] MPI startup(): Load tuning file: "/tuning_skx_shm-ofi.dat"
[0#8920:46648@HOST1] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found
[0#8920:46648@HOST1] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found
[0#8920:46648@HOST1] MPI startup(): File "/skx_shm-ofi.json" not found
[0#8920:46648@HOST1] MPI startup(): Unable to read tuning file for ch4 level
[0#8920:46648@HOST1] MPI startup(): File "/skx_shm-ofi_network.json" not found
[0#8920:46648@HOST1] MPI startup(): Unable to read tuning file for net level
[0#8920:46648@HOST1] MPI startup(): File "/skx_shm-ofi_node.json" not found
[0#8920:46648@HOST1] MPI startup(): Unable to read tuning file for shm level
Before BCAST: PE,procName: 3 HOST2 0
Before BCAST: PE,procName: 2 HOST2 0
Before BCAST: PE,procName: 1 HOST1 0
Before BCAST: PE,procName: 0 HOST1 10&lt;/LI-CODE&gt;&lt;P&gt;In an earlier post (&lt;A href="https://community.intel.com/t5/Intel-MPI-Library/running-on-MPI-cluster/m-p/1439764#M10159" target="_self"&gt;community.intel.com/t5/Intel-MPI-Library/running-on-MPI-cluster/m-p/1439764#M10159&lt;/A&gt;),&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/63968"&gt;@jimdempseyatthecove&lt;/a&gt;&amp;nbsp;recommended experimenting with the fabric selection.&lt;/P&gt;&lt;P&gt;When I use libfabric.dll from an older version of Intel MPI, MPI_BCAST works as expected on 2 hosts, but this time MPI_FINALIZE fails.&lt;/P&gt;&lt;P&gt;Running (on HOST1)&lt;/P&gt;&lt;P&gt;mpiexec -n 4 -ppn 2 -hosts HOST1,HOST2 -genv I_MPI_DEBUG=+2 -genv I_MPI_HYDRA_BOOTSTRAP_POWERSHELL_PSCNAME=mpi_profile \\shared_folder_on_another_machine\winct.exe&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[0#39100:19156@HOST1] MPI startup(): Intel(R) MPI Library, Version 2021.16 Build 20250722
[0#39100:19156@HOST1] MPI startup(): Copyright (C) 2003-2025 Intel Corporation. All rights reserved.
[0#39100:19156@HOST1] MPI startup(): library kind: release
[0#39100:19156@HOST1] MPI startup(): libfabric version: 1.11.1a1-impi
[0#39100:19156@HOST1] MPI startup(): libfabric provider: tcp;ofi_rxm
[0#39100:19156@HOST1] MPI startup(): File "/tuning_skx_shm-ofi_tcp-ofi-rxm.dat" not found
[0#39100:19156@HOST1] MPI startup(): Load tuning file: "/tuning_skx_shm-ofi.dat"
[0#39100:19156@HOST1] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found
[0#39100:19156@HOST1] MPI startup(): File "/tuning_skx_shm-ofi.dat" not found
[0#39100:19156@HOST1] MPI startup(): File "/skx_shm-ofi.json" not found
[0#39100:19156@HOST1] MPI startup(): Unable to read tuning file for ch4 level
[0#39100:19156@HOST1] MPI startup(): File "/skx_shm-ofi_network.json" not found
[0#39100:19156@HOST1] MPI startup(): Unable to read tuning file for net level
[0#39100:19156@HOST1] MPI startup(): File "/skx_shm-ofi_node.json" not found
[0#39100:19156@HOST1] MPI startup(): Unable to read tuning file for shm level
Before BCAST: PE,procName: 1 HOST1 0
Before BCAST: PE,procName: 3 HOST2 0
Before BCAST: PE,procName: 0 HOST1 10
Before BCAST: PE,procName: 2 HOST2 0
After BCAST: PE,procName: 0 HOST1 10
After BCAST: PE,procName: 2 HOST2 10
After BCAST: PE,procName: 1 HOST1 10
After BCAST: PE,procName: 3 HOST2 10
Abort(810649615) on node 2 (rank 2 in comm 0): Fatal error in internal_Finalize: Other MPI error, error stack:
internal_Finalize(39706).........: MPI_Finalize failed
MPII_Finalize(436)...............:
MPID_Finalize(1927)..............:
MPIDI_OFI_mpi_finalize_hook(1999):
MPIR_Reduce_intra_binomial(152)..:
MPIC_Send(129)...................:
MPID_Send(817)...................:
MPIDI_send_unsafe(109)...........:
MPIDI_OFI_send_normal(261).......:
MPIDI_OFI_send_handler_vni(502)..: OFI tagged send failed (ofi\ofi_impl.h:502:MPIDI_OFI_send_handler_vni:Unknown error)
[mpiexec@HOST1] HYD_sock_write (..\windows\src\hydra_sock.c:387): write error (errno = 2)
[mpiexec@HOST1] control_cb (mpiexec.c:1436): unable to send confirmation code
[mpiexec@HOST1] HYD_dmx_wait_for_event (..\windows\src\hydra_demux.c:216): callback returned error
[mpiexec@HOST1] wmain (mpiexec.c:1968): error waiting for event&lt;/LI-CODE&gt;&lt;P&gt;This configuration also fails when, for instance, mpi_allgather is called in the same test program.&lt;/P&gt;&lt;P&gt;Please note that fabric selection does not matter when run on a single host. Everything works fine on a single host (i.e., mpiexec -n 4 -ppn 2 -hosts HOST1 \\shared_folder_on_another_machine\winct.exe).&lt;/P&gt;</description>
      <pubDate>Mon, 01 Sep 2025 08:38:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714413#M12202</guid>
      <dc:creator>ako2</dc:creator>
      <dc:date>2025-09-01T08:38:12Z</dc:date>
    </item>
    <item>
      <title>Re: MPI error on Windows cluster</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714491#M12203</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/442188"&gt;@ako2&lt;/a&gt;&amp;nbsp;please don't mix -delegate with PS remoting. You need to create the mpi_profile session on every host that is involved. Please also don't mix DLLs.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Sep 2025 10:19:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714491#M12203</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2025-09-01T10:19:02Z</dc:date>
    </item>
    <item>
      <title>Re: MPI error on Windows cluster</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714527#M12205</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/245425"&gt;@TobiasK&lt;/a&gt;&amp;nbsp;thank you for your response.&lt;/P&gt;&lt;P&gt;I only mentioned that the mpi_profile session was created on HOST2 because the requirement seems to be that it should be created on the remote host only.&lt;/P&gt;&lt;P&gt;The mpi_profile session was actually created on both hosts in my previous reply.&lt;/P&gt;&lt;P&gt;On HOST1 and HOST2&lt;/P&gt;&lt;P&gt;PS&amp;gt;Get-PSSessionConfiguration $sessionName&lt;/P&gt;&lt;P&gt;returns&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Name : mpi_profile
PSVersion : 5.1
StartupScript :
RunAsUser : myusername
Permission : MYDOMAIN\myusername AccessAllowed&lt;/LI-CODE&gt;&lt;P&gt;The program is launched on HOST1 using:&lt;/P&gt;&lt;P&gt;mpiexec -n 4 -ppn 2 -hosts HOST1,HOST2 -genv I_MPI_DEBUG=+2 -genv I_MPI_HYDRA_BOOTSTRAP_POWERSHELL_PSCNAME=mpi_profile \\shared_folder_on_another_machine\winct.exe&lt;/P&gt;&lt;P&gt;and hangs at mpi_bcast. I am using the latest MPI libraries in this call. Mixing the libfabric.dll was simply a side note I wanted to mention, hoping that it might help resolve the issue.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;</description>
      <pubDate>Mon, 01 Sep 2025 12:47:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-error-on-Windows-cluster/m-p/1714527#M12205</guid>
      <dc:creator>ako2</dc:creator>
      <dc:date>2025-09-01T12:47:29Z</dc:date>
    </item>
  </channel>
</rss>

