<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Errors using Intel MPI distributed coarrays over InfiniBand with MLX in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453441#M10341</link>
    <description>Errors using Intel MPI distributed coarrays over InfiniBand with MLX in Intel® MPI Library</description>
    <pubDate>Sun, 05 Feb 2023 01:27:46 GMT</pubDate>
    <dc:creator>as14</dc:creator>
    <dc:date>2023-02-05T01:27:46Z</dc:date>
    <item>
      <title>Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453441#M10341</link>
      <description>&lt;P&gt;Hi, I am having problems using mlx to communicate over InfiniBand with Intel Coarray Fortran, as described here: &lt;A href="https://www.intel.com/content/www/us/en/developer/articles/technical/improve-performance-and-stability-with-intel-mpi-library-on-infiniband.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/developer/articles/technical/improve-performance-and-stability-with-intel-mpi-library-on-infiniband.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;My code works for&amp;nbsp;I_MPI_OFI_PROVIDER=verbs, but hangs when using&amp;nbsp;I_MPI_OFI_PROVIDER=mlx. It also hangs when using&amp;nbsp;FI_PROVIDER=mlx.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'm using&amp;nbsp;intel-oneapi-mpi/2021.4.0,&amp;nbsp;intel-oneapi-compilers/2022.0.2 and&amp;nbsp;ucx/1.12.1.&lt;/P&gt;
&lt;P&gt;Setup:&lt;/P&gt;
&lt;PRE&gt;echo '-n 2 ./a.out' &amp;gt; config.caf
ifort -coarray=distributed -coarray-config-file=config.caf -o a.out main.f90&lt;/PRE&gt;
&lt;P&gt;Using SLURM, I execute:&lt;/P&gt;
&lt;PRE&gt;ucx_info -v
ucx_info -d | grep Transport
ibv_devinfo
lspci | grep Mellanox
export I_MPI_OFI_PROVIDER=mlx   # same result for FI_PROVIDER=mlx
./a.out&lt;/PRE&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
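&lt;P&gt;For reference, any program that simply declares a coarray would likely be enough to reproduce this, since (as the backtrace below shows) the failure happens inside for_rtl_ICAF_INIT / MPI_Win_create during start-up, before user code runs. A minimal sketch of such a program:&lt;/P&gt;
&lt;PRE&gt;program caf_min
  implicit none
  ! Hypothetical minimal coarray program: declaring any coarray makes the
  ! coarray runtime create MPI windows at start-up, which is where the
  ! segfault is reported in the backtrace below.
  integer :: x[*]
  x = this_image()
  sync all
  if (this_image() == 1) print *, 'num_images =', num_images()
end program caf_min&lt;/PRE&gt;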
&lt;DIV&gt;&lt;SPAN&gt;I saw it mentioned somewhere that this is a known bug with some workarounds. Is this still the case? Is there anything else I am doing wrong?&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;Using&amp;nbsp;I_MPI_DEBUG=100, I get:&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;P&gt;# UCT version=1.12.1 revision dc92435&lt;BR /&gt;# configured with: --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --disable-optimizations --disable-logging --disable-debug --disable-assertions --disable-params-check --without-java --enable-cma --without-cuda --without-gdrcopy --with-verbs --without-knem --with-rdmacm --without-rocm --without-xpmem --without-fuse3 --without-ugni&lt;BR /&gt;# Transport: posix&lt;BR /&gt;# Transport: sysv&lt;BR /&gt;# Transport: self&lt;BR /&gt;# Transport: tcp&lt;BR /&gt;# Transport: tcp&lt;BR /&gt;# Transport: tcp&lt;BR /&gt;# Transport: rc_verbs&lt;BR /&gt;# Transport: rc_mlx5&lt;BR /&gt;# Transport: dc_mlx5&lt;BR /&gt;# Transport: ud_verbs&lt;BR /&gt;# Transport: ud_mlx5&lt;BR /&gt;# Transport: cma&lt;BR /&gt;hca_id: mlx5_0&lt;BR /&gt;transport: InfiniBand (0)&lt;BR /&gt;fw_ver: 20.32.1010&lt;BR /&gt;node_guid: 08c0:eb03:002c:f98c&lt;BR /&gt;sys_image_guid: 08c0:eb03:002c:f98c&lt;BR /&gt;vendor_id: 0x02c9&lt;BR /&gt;vendor_part_id: 4123&lt;BR /&gt;hw_ver: 0x0&lt;BR /&gt;board_id: MT_0000000223&lt;BR /&gt;phys_port_cnt: 1&lt;BR /&gt;port: 1&lt;BR /&gt;state: PORT_ACTIVE (4)&lt;BR /&gt;max_mtu: 4096 (5)&lt;BR /&gt;active_mtu: 4096 (5)&lt;BR /&gt;sm_lid: 91&lt;BR /&gt;port_lid: 67&lt;BR /&gt;port_lmc: 0x00&lt;BR /&gt;link_layer: InfiniBand&lt;/P&gt;
&lt;P&gt;a1:00.0 Infiniband controller: Mellanox Technologies MT28908 Family [ConnectX-6]&lt;BR /&gt;IPL WARN&amp;gt; Not all cpus are available, switch to I_MPI_PIN_ORDER=compact. (Total: 256 Available: 1)&lt;BR /&gt;[0] MPI startup(): Run 'pmi_process_mapping' nodemap algorithm&lt;BR /&gt;[0] MPI startup(): Intel(R) MPI Library, Version 2021.4 Build 20210831 (id: 758087adf)&lt;BR /&gt;[0] MPI startup(): Copyright (C) 2003-2021 Intel Corporation. All rights reserved.IPL WARN&amp;gt; Not all cpus are available, switch to I_MPI_PIN_ORDER=compact. (Total: 256 Available: 1)&lt;/P&gt;
&lt;P&gt;[0] MPI startup(): library kind: release&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.13.0-impi&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_CUDA not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ROCR not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ZE not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: sockets (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "sockets" filtered by provider include/exclude list, skipping&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: psm2 (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "psm2" filtered by provider include/exclude list, skipping&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: ofi_rxm (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_CUDA not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ROCR not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ZE not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: tcp (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "tcp" filtered by provider include/exclude list, skipping&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_CUDA not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ROCR not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ZE not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: shm (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "shm" filtered by provider include/exclude list, skipping&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_CUDA not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ROCR not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_hmem_init():209&amp;lt;info&amp;gt; Hmem iface FI_HMEM_ZE not supported&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: verbs (113.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "verbs" filtered by provider include/exclude list, skipping&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: mlx (1.4)&lt;BR /&gt;libfabric:1779943:psm3:core:fi_prov_ini():680&amp;lt;info&amp;gt; build options: VERSION=1101.0=11.1.0.0, HAVE_PSM3_src=1, PSM3_CUDA=0&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: psm3 (1101.0)&lt;BR /&gt;libfabric:1779943:core:core:ofi_register_provider():502&amp;lt;info&amp;gt; "psm3" filtered by provider include/exclude list, skipping&lt;BR 
/&gt;libfabric:1779943:core:core:ofi_register_provider():474&amp;lt;info&amp;gt; registering provider: ofi_hook_noop (113.0)&lt;BR /&gt;libfabric:1779943:core:core:fi_getinfo_():1138&amp;lt;info&amp;gt; Found provider with the highest priority mlx, must_use_util_prov = 0&lt;BR /&gt;libfabric:1779943:core:core:fi_getinfo_():1138&amp;lt;info&amp;gt; Found provider with the highest priority mlx, must_use_util_prov = 0&lt;BR /&gt;[0] MPI startup(): libfabric provider: mlx&lt;BR /&gt;libfabric:1779943:core:core:fi_fabric_():1423&amp;lt;info&amp;gt; Opened fabric: mlx&lt;BR /&gt;[0] MPI startup(): max_ch4_vcis: 1, max_reg_eps 64, enable_sep 0, enable_shared_ctxs 0, do_av_insert 1&lt;BR /&gt;[0] MPI startup(): addrnamelen: 1024&lt;BR /&gt;[0] MPI startup(): File "" not found&lt;BR /&gt;[0] MPI startup(): Load tuning file: "/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0/etc/tuning_generic_shm-ofi.dat"&lt;BR /&gt;[0] MPI startup(): Rank Pid Node name Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 1779943 n3071-003 {0}&lt;BR /&gt;[0] MPI startup(): 1 3984445 n3071-004 {0}&lt;BR /&gt;[0] MPI startup(): I_MPI_ROOT=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0&lt;BR /&gt;[0] MPI startup(): I_MPI_FAULT_CONTINUE=1&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_PIN_DOMAIN=1&lt;BR /&gt;[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=100&lt;BR /&gt;[0] MPI startup(): I_MPI_REMOVED_VAR_WARNING=0&lt;BR /&gt;[0] MPI startup(): I_MPI_VAR_CHECK_SPELLING=0&lt;BR /&gt;[0] MPI startup(): I_MPI_SPIN_COUNT=1&lt;BR /&gt;[0] MPI startup(): I_MPI_THREAD_YIELD=2&lt;BR /&gt;[0] MPI startup(): I_MPI_SILENT_ABORT=1&lt;BR /&gt;[n3071-003:1779943:0:1779943] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x31)&lt;BR /&gt;[n3071-004:3984445:0:3984445] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x31)&lt;BR /&gt;==== backtrace (tid:1779943) ====&lt;BR /&gt;0 /lib64/libucs.so.0(ucs_handle_error+0x2dc) [0x14deacde64fc]&lt;BR /&gt;==== backtrace (tid:3984445) ====&lt;BR /&gt;1 /lib64/libucs.so.0(+0x2a6dc) [0x14deacde66dc]&lt;BR /&gt;0 /lib64/libucs.so.0(ucs_handle_error+0x2dc) [0x150673aa34fc]&lt;BR /&gt;2 /lib64/libucs.so.0(+0x2a8aa) [0x14deacde68aa]&lt;BR /&gt;1 /lib64/libucs.so.0(+0x2a6dc) [0x150673aa36dc]&lt;BR /&gt;3 /lib64/libpthread.so.0(+0x12c20) [0x14deb399ec20]&lt;BR /&gt;2 /lib64/libucs.so.0(+0x2a8aa) [0x150673aa38aa]&lt;BR /&gt;4 /lib64/ucx/libuct_ib.so.0(+0x294cd) [0x14deac3e24cd]&lt;BR /&gt;3 /lib64/libpthread.so.0(+0x12c20) [0x15067a65bc20]&lt;BR /&gt;5 /lib64/ucx/libuct_ib.so.0(+0x29918) [0x14deac3e2918]&lt;BR /&gt;4 /lib64/ucx/libuct_ib.so.0(+0x294cd) [0x15067309f4cd]&lt;BR /&gt;6 /lib64/ucx/libuct_ib.so.0(+0x2266d) [0x14deac3db66d]&lt;BR /&gt;5 /lib64/ucx/libuct_ib.so.0(+0x29918) [0x15067309f918]&lt;BR /&gt;7 /lib64/ucx/libuct_ib.so.0(+0x22ca8) [0x14deac3dbca8]&lt;BR /&gt;6 /lib64/ucx/libuct_ib.so.0(+0x2266d) [0x15067309866d]&lt;BR /&gt;8 /lib64/libucs.so.0(ucs_rcache_get+0x2a6) [0x14deacdeca66]&lt;BR /&gt;7 /lib64/ucx/libuct_ib.so.0(+0x22ca8) [0x150673098ca8]&lt;BR /&gt;9 /lib64/ucx/libuct_ib.so.0(+0x230d8) [0x14deac3dc0d8]&lt;BR /&gt;8 /lib64/libucs.so.0(ucs_rcache_get+0x2a6) [0x150673aa9a66]&lt;BR /&gt;10 /lib64/libucp.so.0(ucp_mem_rereg_mds+0x31f) [0x14dead495bcf]&lt;BR /&gt;11 
/lib64/libucp.so.0(+0x2f432) [0x14dead496432]&lt;BR /&gt;12 /lib64/libucp.so.0(ucp_mem_map+0x13e) [0x14dead4967ee]&lt;BR /&gt;13 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x940d) [0x14dead72240d]&lt;BR /&gt;14 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x94e8) [0x14dead7224e8]&lt;BR /&gt;15 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x952a) [0x14dead72252a]&lt;BR /&gt;16 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x614ee4) [0x14deb20e6ee4]&lt;BR /&gt;17 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x614773) [0x14deb20e6773]&lt;BR /&gt;18 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x233483) [0x14deb1d05483]&lt;BR /&gt;19 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x2502e2) [0x14deb1d222e2]&lt;BR /&gt;20 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(MPI_Win_create+0x3c2) [0x14deb2278c32]&lt;BR /&gt;21 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-compilers-2022.0.2-yzi4tsud2tqh4s6ykg2ulr7pp7guyiej/compiler/2022.0.2/linux/compiler/lib/intel64_lin/libicaf.so(for_rtl_ICAF_INIT+0xba2) [0x14deb4101162]&lt;BR /&gt;22 ./a.out() [0x407ee4]&lt;BR /&gt;23 ./a.out() [0x40441d]&lt;BR /&gt;24 /lib64/libc.so.6(__libc_start_main+0xf3) [0x14deb35ea493]&lt;BR /&gt;25 ./a.out() [0x40432e]&lt;BR /&gt;=================================&lt;BR /&gt;9 /lib64/ucx/libuct_ib.so.0(+0x230d8) [0x1506730990d8]&lt;BR /&gt;10 /lib64/libucp.so.0(ucp_mem_rereg_mds+0x31f) [0x150674152bcf]&lt;BR /&gt;11 /lib64/libucp.so.0(+0x2f432) [0x150674153432]&lt;BR /&gt;12 /lib64/libucp.so.0(ucp_mem_map+0x13e) [0x1506741537ee]&lt;BR /&gt;13 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x940d) [0x1506743df40d]&lt;BR /&gt;14 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x94e8) [0x1506743df4e8]&lt;BR /&gt;15 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//libfabric/lib/prov/libmlx-fi.so(+0x952a) [0x1506743df52a]&lt;BR /&gt;16 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x614ee4) [0x150678da3ee4]&lt;BR /&gt;17 
/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x614773) [0x150678da3773]&lt;BR /&gt;18 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x233483) [0x1506789c2483]&lt;BR /&gt;19 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(+0x2502e2) [0x1506789df2e2]&lt;BR /&gt;20 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-mpi-2021.4.0-2e7zm7zu5t7iqbzr7xhjkwivxg3ry5bh/mpi/2021.4.0//lib/release/libmpi.so.12(MPI_Win_create+0x3c2) [0x150678f35c32]&lt;BR /&gt;21 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-compilers-2022.0.2-yzi4tsud2tqh4s6ykg2ulr7pp7guyiej/compiler/2022.0.2/linux/compiler/lib/intel64_lin/libicaf.so(for_rtl_ICAF_INIT+0xba2) [0x15067adbe162]&lt;BR /&gt;22 ./a.out() [0x407ee4]&lt;BR /&gt;23 ./a.out() [0x40441d]&lt;BR /&gt;24 /lib64/libc.so.6(__libc_start_main+0xf3) [0x15067a2a7493]&lt;BR /&gt;25 ./a.out() [0x40432e]&lt;BR /&gt;=================================&lt;/P&gt;
&lt;P&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 0 PID 1779943 RUNNING AT n3071-003&lt;BR /&gt;= KILLED BY SIGNAL: 11 (Segmentation fault)&lt;BR /&gt;===================================================================================&lt;/P&gt;
&lt;P&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 1 PID 3984445 RUNNING AT n3071-004&lt;BR /&gt;= KILLED BY SIGNAL: 11 (Segmentation fault)&lt;BR /&gt;===================================================================================&lt;/P&gt;
&lt;/DIV&gt;</description>
      <pubDate>Sun, 05 Feb 2023 01:27:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453441#M10341</guid>
      <dc:creator>as14</dc:creator>
      <dc:date>2023-02-05T01:27:46Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453606#M10342</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for posting in the Intel communities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From your debug log, we can see that you are using Intel MPI 2021.4 &amp;amp; trying to run your application on 2 nodes using the MLX fabric provider.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could you please provide the following details to help us investigate your issue?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Operating system &amp;amp; its version&lt;/LI&gt;
&lt;LI&gt;CPU details&lt;/LI&gt;
&lt;LI&gt;Sample reproducer code&lt;/LI&gt;
&lt;LI&gt;Expected output, or the complete output log when FI_PROVIDER=verbs.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#808080"&gt;&lt;EM&gt;&amp;gt;&amp;gt;"I saw it mentioned somewhere that this is a known bug with some workarounds. Is this still the case? Is there anything else I am doing wrong?"&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Could you please provide us the link that you are referring to, or the exact workaround that you are referring to?&lt;BR /&gt;&lt;BR /&gt;Since most of the known issues of Intel MPI 2021.4 were fixed in later releases, could you please try the latest Intel MPI 2021.8 &amp;amp; get back to us if the problem persists?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;
&lt;P&gt;Santosh&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 06:49:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453606#M10342</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2023-02-06T06:49:18Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453651#M10344</link>
      <description>&lt;P&gt;Thanks for getting back to me!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;"1. Operating system &amp;amp; its version"&lt;/P&gt;
&lt;P&gt;$&amp;nbsp;&lt;SPAN&gt;cat &lt;/SPAN&gt;&lt;SPAN&gt;/etc/os-release&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;NAME="AlmaLinux"&lt;BR /&gt;VERSION="8.5 (Arctic Sphynx)"&lt;BR /&gt;ID="almalinux"&lt;BR /&gt;ID_LIKE="rhel centos fedora"&lt;BR /&gt;VERSION_ID="8.5"&lt;BR /&gt;PLATFORM_ID="platform:el8"&lt;BR /&gt;PRETTY_NAME="AlmaLinux 8.5 (Arctic Sphynx)"&lt;BR /&gt;ANSI_COLOR="0;34"&lt;BR /&gt;CPE_NAME="cpe:/o:almalinux:almalinux:8::baseos"&lt;BR /&gt;HOME_URL="&lt;A href="https://almalinux.org/" target="_blank"&gt;https://almalinux.org/&lt;/A&gt;"&lt;BR /&gt;DOCUMENTATION_URL="&lt;A href="https://wiki.almalinux.org/" target="_blank"&gt;https://wiki.almalinux.org/&lt;/A&gt;"&lt;BR /&gt;BUG_REPORT_URL="&lt;A href="https://bugs.almalinux.org/" target="_blank"&gt;https://bugs.almalinux.org/&lt;/A&gt;"&lt;/P&gt;
&lt;P&gt;ALMALINUX_MANTISBT_PROJECT="AlmaLinux-8"&lt;BR /&gt;ALMALINUX_MANTISBT_PROJECT_VERSION="8.5"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;"2. CPU details"&lt;/P&gt;
&lt;P&gt;$&amp;nbsp;lscpu&lt;/P&gt;
&lt;P&gt;Architecture: x86_64&lt;BR /&gt;CPU op-mode(s): 32-bit, 64-bit&lt;BR /&gt;Byte Order: Little Endian&lt;BR /&gt;CPU(s): 256&lt;BR /&gt;On-line CPU(s) list: 0-255&lt;BR /&gt;Thread(s) per core: 2&lt;BR /&gt;Core(s) per socket: 64&lt;BR /&gt;Socket(s): 2&lt;BR /&gt;NUMA node(s): 8&lt;BR /&gt;Vendor ID: AuthenticAMD&lt;BR /&gt;CPU family: 25&lt;BR /&gt;Model: 1&lt;BR /&gt;Model name: AMD EPYC 7713 64-Core Processor&lt;BR /&gt;Stepping: 1&lt;BR /&gt;CPU MHz: 2000.000&lt;BR /&gt;CPU max MHz: 3720.7029&lt;BR /&gt;CPU min MHz: 1500.0000&lt;BR /&gt;BogoMIPS: 4000.16&lt;BR /&gt;Virtualization: AMD-V&lt;BR /&gt;L1d cache: 32K&lt;BR /&gt;L1i cache: 32K&lt;BR /&gt;L2 cache: 512K&lt;BR /&gt;L3 cache: 32768K&lt;BR /&gt;NUMA node0 CPU(s): 0-15,128-143&lt;BR /&gt;NUMA node1 CPU(s): 16-31,144-159&lt;BR /&gt;NUMA node2 CPU(s): 32-47,160-175&lt;BR /&gt;NUMA node3 CPU(s): 48-63,176-191&lt;BR /&gt;NUMA node4 CPU(s): 64-79,192-207&lt;BR /&gt;NUMA node5 CPU(s): 80-95,208-223&lt;BR /&gt;NUMA node6 CPU(s): 96-111,224-239&lt;BR /&gt;NUMA node7 CPU(s): 112-127,240-255&lt;BR /&gt;Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;"3.&amp;nbsp;Sample reproducer code"&lt;/P&gt;
&lt;P&gt;Attached.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;"4. Expected output or provide us the complete output log when FI_PROVIDER=verbs."&lt;/P&gt;
&lt;P&gt;The output log when using verbs (which is the same as what is expected when using mlx) is attached.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;gt;&amp;gt;"Workaround link"&lt;/P&gt;
&lt;P&gt;&lt;A href="https://blog.hpc.qmul.ac.uk/intel-release-2020_4.html" target="_blank"&gt;https://blog.hpc.qmul.ac.uk/intel-release-2020_4.html&lt;/A&gt;. Look under "known issues using fortran coarrays"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sadly, I cannot test with that newer version of Intel MPI, as it is not installed on our cluster.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks very much again for the help!&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 10:46:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453651#M10344</guid>
      <dc:creator>as14</dc:creator>
      <dc:date>2023-02-06T10:46:21Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453653#M10345</link>
      <description>&lt;P&gt;Additionally, I am attaching the output file obtained when running with&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;export&lt;/SPAN&gt; &lt;SPAN&gt;FI_PROVIDER&lt;/SPAN&gt;&lt;SPAN&gt;=&lt;/SPAN&gt;&lt;SPAN&gt;mlx&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;export&lt;/SPAN&gt; &lt;SPAN&gt;I_MPI_OFI_PROVIDER&lt;/SPAN&gt;&lt;SPAN&gt;=&lt;/SPAN&gt;&lt;SPAN&gt;mlx&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;instead of "export I_MPI_OFI_PROVIDER=verbs".&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Thanks again!!&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Mon, 06 Feb 2023 10:55:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1453653#M10345</guid>
      <dc:creator>as14</dc:creator>
      <dc:date>2023-02-06T10:55:29Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1454023#M10354</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for your inquiry. We can only offer direct support for Intel hardware platforms that the Intel® oneAPI product supports. Intel provides instructions on how to compile oneAPI code for both CPU and a wide range of GPU accelerators. &lt;A href="https://intel.github.io/llvm-docs/GetStartedGuide.html" target="_blank"&gt;https://intel.github.io/llvm-docs/GetStartedGuide.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;&lt;P&gt;Santosh&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 07 Feb 2023 07:16:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1454023#M10354</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2023-02-07T07:16:21Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using Intel MPI distributed coarrays over InfiniBand with MLX</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1459062#M10416</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;SPAN&gt;Santosh,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The problem is not that, actually; it is with Intel's glitchy support of coarrays over InfiniBand interconnects.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Intel officially supports MPI over InfiniBand. See here:&amp;nbsp;&lt;A href="https://www.intel.com/content/www/us/en/developer/articles/technical/improve-performance-and-stability-with-intel-mpi-library-on-infiniband.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/developer/articles/technical/improve-performance-and-stability-with-intel-mpi-library-on-infiniband.html&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;and here:&amp;nbsp;&lt;A href="https://www.intel.com/content/www/us/en/developer/articles/technical/mpi-compatibility-nvidia-mellanox-ofed-infiniband.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/developer/articles/technical/mpi-compatibility-nvidia-mellanox-ofed-infiniband.html&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;However, a glitch in the coarray implementation has been found, with a suggested fix being to set&amp;nbsp;MPIR_CVAR_CH4_OFI_ENABLE_RMA=0 (mentioned, for example, here: &lt;A href="https://blog.hpc.qmul.ac.uk/intel-release-2020_4.html" target="_blank" rel="noopener"&gt;https://blog.hpc.qmul.ac.uk/intel-release-2020_4.html&lt;/A&gt;). This does allow the communication to work, but it is unusably slow. Firstly, could you let me know why this allows it to work, why it makes it so much slower, and what it actually means?&lt;/SPAN&gt;&lt;/P&gt;
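&lt;P&gt;&lt;SPAN&gt;For reference, the workaround is applied simply by setting the variable in the environment before launching the coarray binary; a sketch (nothing else in the job is changed):&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;# Workaround from the QMUL release notes linked above (a sketch of how it is applied):
export I_MPI_OFI_PROVIDER=mlx
export MPIR_CVAR_CH4_OFI_ENABLE_RMA=0   # communication then works, but unusably slowly
./a.out&lt;/PRE&gt;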
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Does Intel plan to apply a proper fix for this bug in future releases? If not, I will be forced to rewrite my whole code in MPI, as it cannot be ported anywhere beyond Omni-Path interconnects (which are being superseded by InfiniBand interconnects on HPC systems).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks for any help,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;James&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Feb 2023 15:33:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Errors-using-Intel-MPI-distributed-coarrays-over-InfiniBand-with/m-p/1459062#M10416</guid>
      <dc:creator>as14</dc:creator>
      <dc:date>2023-02-24T15:33:16Z</dc:date>
    </item>
  </channel>
</rss>

