<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: New MPI error with Intel 2019.1, unable to run MPI hello world in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158394#M6253</link>
    <description>Intel Community forum topic in the Intel® MPI Library board: New MPI error with Intel 2019.1, unable to run MPI hello world.</description>
    <pubDate>Tue, 20 Nov 2018 16:17:18 GMT</pubDate>
    <dc:creator>campbell__scott</dc:creator>
    <dc:date>2018-11-20T16:17:18Z</dc:date>
    <item>
      <title>New MPI error with Intel 2019.1, unable to run MPI hello world</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158382#M6241</link>
      <description>&lt;P&gt;After upgrading to Update 1 of Intel 2019 we are not able to run even an MPI hello world example. This is new behavior; for example, a Spack-installed GCC 8.2.0 and OpenMPI have no trouble on this system. This is a single workstation and only shm needs to work. For non-MPI use the compilers work correctly. Presumably dependencies have changed slightly in this new update?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;$ cat /etc/redhat-release
Red Hat Enterprise Linux Workstation release 7.5 (Maipo)
$ source /opt/intel2019/bin/compilervars.sh intel64
$ mpiicc -v
mpiicc for the Intel(R) MPI Library 2019 Update 1 for Linux*
Copyright 2003-2018, Intel Corporation.
icc version 19.0.1.144 (gcc version 4.8.5 compatibility)
$ cat mpi_hello_world.c
#include &amp;lt;mpi.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

int main(int argc, char** argv) {
  // Initialize the MPI environment
  MPI_Init(NULL, NULL);

  // Get the number of processes
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);

  // Get the rank of the process
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &amp;amp;name_len);

  // Print off a hello world message
  printf("Hello world from processor %s, rank %d out of %d processors\n",
	 processor_name, world_rank, world_size);

  // Finalize the MPI environment.
  MPI_Finalize();
}
$ mpiicc ./mpi_hello_world.c
$ ./a.out
Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(639)......:
MPID_Init(860).............:
MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)
$ export I_MPI_FABRICS=shm:ofi
$ export I_MPI_DEBUG=666
$ ./a.out
[0] MPI startup(): Imported environment partly inaccesible. Map=0 Info=0
[0] MPI startup(): libfabric version: 1.7.0a1-impi
Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(639)......:
MPID_Init(860).............:
MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)
&lt;/PRE&gt;
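&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For reference, a sketch of additional information that could be collected here (assuming the fi_info utility bundled with this libfabric is available; exact paths differ between installations):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# Which libfabric does the binary actually resolve to?
ldd ./a.out | grep -i fabric

# List the providers this libfabric build exposes (fi_info ships with libfabric,
# typically under the libfabric/bin directory of the MPI installation)
fi_info -l
&lt;/PRE&gt;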

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 13 Nov 2018 16:51:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158382#M6241</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-13T16:51:58Z</dc:date>
    </item>
    <item>
      <title>I have encountered the same</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158383#M6242</link>
      <description>&lt;P&gt;I have encountered the same problem. Have you got any solutions yet?&lt;/P&gt;</description>
      <pubDate>Wed, 14 Nov 2018 02:53:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158383#M6242</guid>
      <dc:creator>Liang__C</dc:creator>
      <dc:date>2018-11-14T02:53:49Z</dc:date>
    </item>
    <item>
      <title>Hi Paul,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158384#M6243</link>
      <description>&lt;P&gt;Hi Paul,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you send us the output of the "ifconfig" command, please?&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Wed, 14 Nov 2018 08:56:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158384#M6243</guid>
      <dc:creator>Dmitry_G_Intel</dc:creator>
      <dc:date>2018-11-14T08:56:39Z</dc:date>
    </item>
    <item>
      <title>$ ifconfig docker0: flags=4099</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158385#M6244</link>
      <description>&lt;P&gt;$ ifconfig&lt;BR /&gt;docker0: flags=4099&amp;lt;UP,BROADCAST,MULTICAST&amp;gt;&amp;nbsp; mtu 1500&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet 172.17.0.1&amp;nbsp; netmask 255.255.0.0&amp;nbsp; broadcast 0.0.0.0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet6 fe80::42:77ff:fed7:8a4c&amp;nbsp; prefixlen 64&amp;nbsp; scopeid 0x20&amp;lt;link&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ether 02:42:77:d7:8a:4c&amp;nbsp; txqueuelen 0&amp;nbsp; (Ethernet)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX packets 126235&amp;nbsp; bytes 5187732 (4.9 MiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX errors 0&amp;nbsp; dropped 0&amp;nbsp; overruns 0&amp;nbsp; frame 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX packets 174599&amp;nbsp; bytes 478222947 (456.0 MiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX errors 0&amp;nbsp; dropped 0 overruns 0&amp;nbsp; carrier 0&amp;nbsp; collisions 0&lt;/P&gt;&lt;P&gt;eth0: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt;&amp;nbsp; mtu 1500&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet 128.219.166.53&amp;nbsp; netmask 255.255.252.0&amp;nbsp; broadcast 128.219.167.255&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet6 fe80::225:90ff:fee1:835a&amp;nbsp; prefixlen 64&amp;nbsp; scopeid 0x20&amp;lt;link&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ether 00:25:90:e1:83:5a&amp;nbsp; txqueuelen 1000&amp;nbsp; (Ethernet)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX packets 58556671&amp;nbsp; bytes 16033320775 (14.9 GiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX errors 0&amp;nbsp; dropped 33204&amp;nbsp; overruns 0&amp;nbsp; frame 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX packets 13740853&amp;nbsp; bytes 6787935989 (6.3 GiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX errors 0&amp;nbsp; dropped 0 overruns 0&amp;nbsp; carrier 0&amp;nbsp; collisions 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; device memory 0xe3920000-e393ffff&lt;/P&gt;&lt;P&gt;eth1: flags=4099&amp;lt;UP,BROADCAST,MULTICAST&amp;gt;&amp;nbsp; mtu 1500&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ether 00:25:90:e1:83:5b&amp;nbsp; txqueuelen 1000&amp;nbsp; (Ethernet)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX packets 0&amp;nbsp; bytes 0 (0.0 B)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX errors 0&amp;nbsp; dropped 0&amp;nbsp; overruns 0&amp;nbsp; frame 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX packets 0&amp;nbsp; bytes 0 (0.0 B)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX errors 0&amp;nbsp; dropped 0 overruns 0&amp;nbsp; carrier 0&amp;nbsp; collisions 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; device memory 0xe3900000-e391ffff&lt;/P&gt;&lt;P&gt;lo: flags=73&amp;lt;UP,LOOPBACK,RUNNING&amp;gt;&amp;nbsp; mtu 65536&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet 127.0.0.1&amp;nbsp; netmask 255.0.0.0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; inet6 ::1&amp;nbsp; prefixlen 128&amp;nbsp; scopeid 0x10&amp;lt;host&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; loop&amp;nbsp; txqueuelen 1000&amp;nbsp; (Local Loopback)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX packets 4183437&amp;nbsp; bytes 4132051465 (3.8 GiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; RX errors 0&amp;nbsp; dropped 0&amp;nbsp; overruns 0&amp;nbsp; frame 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX packets 4183437&amp;nbsp; bytes 4132051465 (3.8 GiB)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; TX errors 0&amp;nbsp; dropped 0 overruns 0&amp;nbsp; carrier 0&amp;nbsp; collisions 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 14 Nov 2018 15:22:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158385#M6244</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-14T15:22:06Z</dc:date>
    </item>
    <item>
      <title>I have encountered the same</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158386#M6245</link>
      <description>&lt;P&gt;I have encountered the same problem / same error message running an application between nodes.&lt;/P&gt;&lt;P&gt;Configuration: Scientific Linux 7.5, latest Intel version&lt;/P&gt;</description>
      <pubDate>Fri, 16 Nov 2018 15:36:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158386#M6245</guid>
      <dc:creator>Rudolf_Berrendorf</dc:creator>
      <dc:date>2018-11-16T15:36:00Z</dc:date>
    </item>
    <item>
      <title>Installing Intel Parallel</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158387#M6246</link>
      <description>&lt;P&gt;Installing Intel Parallel Studio XE Cluster Edition from scratch (clean install) did not change this problem or error; i.e., it is unrelated to the update procedure 2019.0 -&amp;gt; 2019.1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 19 Nov 2018 16:23:57 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158387#M6246</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-19T16:23:57Z</dc:date>
    </item>
    <item>
      <title>Hi Paul,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158388#M6247</link>
      <description>&lt;P&gt;Hi Paul,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Did you see the same error with IMPI 2019 Gold?&lt;/P&gt;&lt;P&gt;There are two possible workarounds that could help you:&lt;/P&gt;&lt;P&gt;- set the FI_SOCKETS_IFACE=eth0 environment variable (or any IP interface that works correctly).&lt;/P&gt;&lt;P&gt;- set the FI_PROVIDER=tcp environment variable (only applicable to IMPI 2019 U1). This switches to another OFI provider (i.e. the way IMPI accesses the network); this provider is available as a Technical Preview and will replace the OFI/sockets provider in future releases.&lt;/P&gt;&lt;P&gt;If you still see the same problem, please collect logs with FI_LOG_LEVEL=debug set. The logs will be printed to standard error (stderr).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;P&gt;--&lt;/P&gt;&lt;P&gt;Dmitry&lt;/P&gt;
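&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To spell these suggestions out as shell commands (a minimal sketch; adjust the interface name to whatever works on your system):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# Option 1: pin the OFI/sockets provider to a working IP interface
export FI_SOCKETS_IFACE=eth0

# Option 2 (IMPI 2019 U1 only): switch to the tcp OFI provider (Technical Preview)
export FI_PROVIDER=tcp

# If the failure persists, collect debug logs (written to stderr)
export FI_LOG_LEVEL=debug
./a.out 2&gt; fi_debug.log
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>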
      <pubDate>Mon, 19 Nov 2018 17:49:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158388#M6247</guid>
      <dc:creator>Dmitry_G_Intel</dc:creator>
      <dc:date>2018-11-19T17:49:50Z</dc:date>
    </item>
    <item>
      <title>Thanks Dmitry. Setting FI_LOG</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158389#M6248</link>
      <description>&lt;P&gt;Thanks Dmitry. Setting FI_LOG_LEVEL=debug was very helpful:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;$ ./a.out&lt;BR /&gt;Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:&lt;BR /&gt;MPIR_Init_thread(639)......:&lt;BR /&gt;MPID_Init(860).............:&lt;BR /&gt;MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)&lt;BR /&gt;$ export FI_LOG_LEVEL=debug&lt;BR /&gt;$ ./a.out&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var perf_cntr&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable perf_cntr=&amp;lt;not set&amp;gt;&lt;BR /&gt;...&lt;/P&gt;&lt;P&gt;libfabric:core:core:fi_getinfo_():899&amp;lt;warn&amp;gt; fi_getinfo: provider psm2 returned -61 (No data available)&lt;BR /&gt;libfabric:psm2:core:psmx2_getinfo():341&amp;lt;info&amp;gt;&lt;BR /&gt;libfabric:psm2:core:psmx2_init_prov_info():201&amp;lt;info&amp;gt; Unsupported endpoint type&lt;BR /&gt;libfabric:psm2:core:psmx2_init_prov_info():203&amp;lt;info&amp;gt; Supported: FI_EP_RDM&lt;BR /&gt;libfabric:psm2:core:psmx2_init_prov_info():205&amp;lt;info&amp;gt; Supported: FI_EP_DGRAM&lt;BR /&gt;libfabric:psm2:core:psmx2_init_prov_info():207&amp;lt;info&amp;gt; Requested: FI_EP_MSG&lt;BR /&gt;libfabric:core:core:fi_getinfo_():899&amp;lt;warn&amp;gt; fi_getinfo: provider psm2 returned -61 (No data available)&lt;BR /&gt;libfabric:core:core:ofi_layering_ok():776&amp;lt;info&amp;gt; Need core provider, skipping util ofi_rxm&lt;BR /&gt;libfabric:core:core:fi_getinfo_():877&amp;lt;info&amp;gt; Since psm2 can be used, sockets has been skipped. To use sockets, please, set FI_PROVIDER=sockets&lt;BR /&gt;libfabric:core:core:fi_getinfo_():877&amp;lt;info&amp;gt; Since psm2 can be used, tcp has been skipped. To use tcp, please, set FI_PROVIDER=tcp&lt;BR /&gt;libfabric:core:core:fi_getinfo_():899&amp;lt;warn&amp;gt; fi_getinfo: provider ofi_rxm returned -61 (No data available)&lt;BR /&gt;libfabric:core:core:fi_getinfo_():877&amp;lt;info&amp;gt; Since psm2 can be used, sockets has been skipped. To use sockets, please, set FI_PROVIDER=sockets&lt;BR /&gt;libfabric:core:core:fi_getinfo_():877&amp;lt;info&amp;gt; Since psm2 can be used, tcp has been skipped. To use tcp, please, set FI_PROVIDER=tcp&lt;BR /&gt;Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:&lt;BR /&gt;MPIR_Init_thread(639)......:&lt;BR /&gt;MPID_Init(860).............:&lt;BR /&gt;MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)&lt;BR /&gt;libfabric:psm2:core:psmx2_fini():476&amp;lt;info&amp;gt;&lt;/P&gt;&lt;P&gt;$export FI_PROVIDER=tcp&lt;BR /&gt;$ ./a.out&lt;/P&gt;&lt;P&gt;...&lt;/P&gt;&lt;P&gt;Hello world from processor system.place.com, rank 0 out of 1 processors&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Using export FI_PROVIDER=tcp solves the crash for us. Hopefully this has no performance impact for on-node messages. I also hope update 2 can avoid this environment/configuration variable requirement.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 19 Nov 2018 18:12:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158389#M6248</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-19T18:12:12Z</dc:date>
    </item>
    <item>
      <title>Hi Paul,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158390#M6249</link>
      <description>&lt;P&gt;Hi Paul,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry for the inconvenience. Yes, we are going to identify the root cause of the problem.&lt;/P&gt;&lt;P&gt;FI_PROVIDER=tcp has better performance numbers compared to FI_PROVIDER=sockets (the current default OFI provider for Intel MPI 2019 Gold and U1), but FI_PROVIDER=tcp is a technical preview due to some stability issues.&lt;/P&gt;&lt;P&gt;You are right. The tcp provider doesn't impact performance for intra-node communication, because the SHM transport is used by default. You can ensure that shm is used for intra-node and OFI for inter-node communication by setting I_MPI_FABRICS=shm:ofi.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;By the way, I have one more question for you that can help our team identify the root cause: did you try setting FI_SOCKETS_IFACE to any interface? If not, could you try it, please (please unset FI_PROVIDER or set FI_PROVIDER=sockets; we should ensure that the OFI/sockets provider is used in your test)?&lt;/P&gt;&lt;P&gt;It would be great to check it for all values: docker0; eth0; eth1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you in advance!&lt;/P&gt;&lt;P&gt;--&lt;/P&gt;&lt;P&gt;Dmitry&lt;/P&gt;
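&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A minimal sketch of one such test run (repeat with FI_SOCKETS_IFACE set to docker0 and eth1; I_MPI_DEBUG is just a verbosity level and its output format is version-dependent):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# shm for intra-node, OFI for inter-node communication
export I_MPI_FABRICS=shm:ofi
# Force the OFI/sockets provider and pin it to one interface
export FI_PROVIDER=sockets
export FI_SOCKETS_IFACE=eth0
# Ask Intel MPI to print startup/transport information
export I_MPI_DEBUG=5
mpiexec -n 2 ./a.out
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>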
      <pubDate>Mon, 19 Nov 2018 20:12:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158390#M6249</guid>
      <dc:creator>Dmitry_G_Intel</dc:creator>
      <dc:date>2018-11-19T20:12:35Z</dc:date>
    </item>
    <item>
      <title>$ source /opt/intel2019/bin</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158391#M6250</link>
      <description>&lt;PRE class="brush:bash; class-name:dark;"&gt;$ source /opt/intel2019/bin/compilervars.sh intel64
$ ./a.out
Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(639)......:
MPID_Init(860).............:
MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)
$ export FI_SOCKETS_IFACE=eth0
$ ./a.out
Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(639)......:
MPID_Init(860).............:
MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)
$ export FI_PROVIDER=sockets
$ ./a.out
Hello world from processor thing.machine.com, rank 0 out of 1 processors
$ export FI_SOCKETS_IFACE=eth1
[pk7@oxygen t]$ ./a.out
Hello world from processor thing.machine.com, rank 0 out of 1 processors
$ export FI_SOCKETS_IFACE=docker0
[pk7@oxygen t]$ ./a.out
Hello world from processor thing.machine.com, rank 0 out of 1 processors
$ export FI_PROVIDER=""
[pk7@oxygen t]$ ./a.out
Abort(1094543) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(639)......:
MPID_Init(860).............:
MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)
$ export FI_PROVIDER=tcp
$ ./a.out
Hello world from processor thing.machine.com, rank 0 out of 1 processors
&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looks like it "lost" the default sockets provider in the update.&lt;/P&gt;</description>
      <pubDate>Mon, 19 Nov 2018 21:08:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158391#M6250</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-19T21:08:27Z</dc:date>
    </item>
    <item>
      <title>Hi Paul,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158392#M6251</link>
      <description>&lt;P&gt;Hi Paul,&lt;/P&gt;&lt;P&gt;Thank you! We appreciate your help.&lt;/P&gt;&lt;P&gt;It looks like setting either FI_PROVIDER=sockets or FI_PROVIDER=tcp solves your problem, doesn't it?&lt;/P&gt;&lt;P&gt;IMPI 2019 should use the sockets OFI provider (i.e. FI_PROVIDER=sockets) by default, but for some reason this is not the case.&lt;/P&gt;&lt;P&gt;--&lt;/P&gt;&lt;P&gt;Dmitry&lt;/P&gt;</description>
      <pubDate>Tue, 20 Nov 2018 04:29:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158392#M6251</guid>
      <dc:creator>Dmitry_G_Intel</dc:creator>
      <dc:date>2018-11-20T04:29:51Z</dc:date>
    </item>
    <item>
      <title>Hello, I am seeing what</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158393#M6252</link>
      <description>&lt;P&gt;Hello, I am seeing what appears to be the same problem, but none of the suggest environment variable settings are working for me.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is on CentOS Linux release 7.2.1511.&amp;nbsp; The test program is /opt/intel/compilers_and_libraries_2019.1.144/linux/mpi/test/test.c compiled with&amp;nbsp;mpigcc.&amp;nbsp; Trying to launch just on a single host for now.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[user1@centos7-2 tmp]$ export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/:$LD_LIBRARY_PATH&lt;BR /&gt;[user1@centos7-2 tmp]$ env | grep FI_&lt;BR /&gt;FI_SOCKETS_IFACE=enp0s3&lt;BR /&gt;FI_LOG_LEVEL=debug&lt;BR /&gt;FI_PROVIDER=tcp&lt;BR /&gt;[user1@centos7-2 tmp]$ mpiexec ./mpitest&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var perf_cntr&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable perf_cntr=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var hook&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable hook=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var provider&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var fork_unsafe&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var universe_size&lt;BR /&gt;libfabric:core:core:fi_param_get_():281&amp;lt;info&amp;gt; read string var provider=tcp&lt;BR /&gt;libfabric:core:core:ofi_create_filter():322&amp;lt;warn&amp;gt; unable to parse filter from: tcp&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var provider_path&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable provider_path=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var rxd_enable&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable rxd_enable=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR 
/&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;Abort(1618831) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:&lt;BR /&gt;MPIR_Init_thread(639)......:&lt;BR /&gt;MPID_Init(860).............:&lt;BR /&gt;MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;[user1@centos7-2 tmp]$ export FI_PROVIDER=sockets&lt;BR /&gt;[user1@centos7-2 tmp]$ env | grep FI_&lt;BR /&gt;FI_SOCKETS_IFACE=enp0s3&lt;BR /&gt;FI_LOG_LEVEL=debug&lt;BR /&gt;FI_PROVIDER=sockets&lt;BR /&gt;[user1@centos7-2 tmp]$ mpiexec ./mpitest&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var perf_cntr&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable perf_cntr=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var hook&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable hook=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var provider&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var fork_unsafe&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var universe_size&lt;BR /&gt;libfabric:core:core:fi_param_get_():281&amp;lt;info&amp;gt; read string var provider=sockets&lt;BR /&gt;libfabric:core:core:ofi_create_filter():322&amp;lt;warn&amp;gt; unable to parse filter from: sockets&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var provider_path&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable provider_path=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:fi_param_define_():223&amp;lt;info&amp;gt; registered var rxd_enable&lt;BR /&gt;libfabric:core:core:fi_param_get_():272&amp;lt;info&amp;gt; variable rxd_enable=&amp;lt;not set&amp;gt;&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR 
/&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;libfabric:core:core:ofi_register_provider():194&amp;lt;warn&amp;gt; no provider structure or name&lt;BR /&gt;Abort(1618831) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:&lt;BR /&gt;MPIR_Init_thread(639)......:&lt;BR /&gt;MPID_Init(860).............:&lt;BR /&gt;MPIDI_NM_mpi_init_hook(689): OFI addrinfo() failed (ofi_init.h:689:MPIDI_NM_mpi_init_hook:No data available)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[root@centos7-2 tmp]# ifconfig&lt;BR /&gt;enp0s3: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt; &amp;nbsp;mtu 1500&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet 10.10.10.9 &amp;nbsp;netmask 255.0.0.0 &amp;nbsp;broadcast 10.255.255.255&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet6 fe80::a00:27ff:fe0b:6565 &amp;nbsp;prefixlen 64 &amp;nbsp;scopeid 0x20&amp;lt;link&amp;gt;&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ether 08:00:27:0b:65:65 &amp;nbsp;txqueuelen 1000 &amp;nbsp;(Ethernet)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX packets 405685 &amp;nbsp;bytes 534299467 (509.5 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX errors 0 &amp;nbsp;dropped 0 &amp;nbsp;overruns 0 &amp;nbsp;frame 0&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX packets 441363 &amp;nbsp;bytes 675447638 (644.1 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX errors 0 &amp;nbsp;dropped 0 overruns 0 &amp;nbsp;carrier 0 &amp;nbsp;collisions 0&lt;/P&gt;&lt;P&gt;enp0s8: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt; &amp;nbsp;mtu 1500&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet 10.0.3.15 &amp;nbsp;netmask 255.255.255.0 &amp;nbsp;broadcast 10.0.3.255&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet6 fe80::a00:27ff:fee4:5499 &amp;nbsp;prefixlen 64 &amp;nbsp;scopeid 0x20&amp;lt;link&amp;gt;&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ether 08:00:27:e4:54:99 &amp;nbsp;txqueuelen 1000 &amp;nbsp;(Ethernet)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX packets 102700 &amp;nbsp;bytes 133004229 (126.8 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX errors 0 &amp;nbsp;dropped 0 &amp;nbsp;overruns 0 &amp;nbsp;frame 0&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX packets 51572 &amp;nbsp;bytes 3477801 (3.3 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX errors 0 &amp;nbsp;dropped 0 overruns 0 &amp;nbsp;carrier 0 &amp;nbsp;collisions 0&lt;/P&gt;&lt;P&gt;lo: flags=73&amp;lt;UP,LOOPBACK,RUNNING&amp;gt; &amp;nbsp;mtu 65536&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet 127.0.0.1 &amp;nbsp;netmask 255.0.0.0&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet6 ::1 &amp;nbsp;prefixlen 128 &amp;nbsp;scopeid 0x10&amp;lt;host&amp;gt;&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; loop &amp;nbsp;txqueuelen 0 &amp;nbsp;(Local Loopback)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX packets 48387 &amp;nbsp;bytes 8166614 (7.7 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX errors 0 &amp;nbsp;dropped 0 &amp;nbsp;overruns 0 &amp;nbsp;frame 0&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX packets 48387 &amp;nbsp;bytes 8166614 (7.7 MiB)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX errors 0 &amp;nbsp;dropped 0 overruns 0 &amp;nbsp;carrier 0 &amp;nbsp;collisions 0&lt;/P&gt;&lt;P&gt;virbr0: 
flags=4099&amp;lt;UP,BROADCAST,MULTICAST&amp;gt; &amp;nbsp;mtu 1500&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; inet 192.168.122.1 &amp;nbsp;netmask 255.255.255.0 &amp;nbsp;broadcast 192.168.122.255&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ether 52:54:00:86:b9:f7 &amp;nbsp;txqueuelen 0 &amp;nbsp;(Ethernet)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX packets 0 &amp;nbsp;bytes 0 (0.0 B)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; RX errors 0 &amp;nbsp;dropped 0 &amp;nbsp;overruns 0 &amp;nbsp;frame 0&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX packets 0 &amp;nbsp;bytes 0 (0.0 B)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; TX errors 0 &amp;nbsp;dropped 0 overruns 0 &amp;nbsp;carrier 0 &amp;nbsp;collisions 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I get the same errors if I set&amp;nbsp;FI_SOCKETS_IFACE to&amp;nbsp;enp0s8 as well.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any suggestions?&lt;/P&gt;</description>
      <pubDate>Tue, 20 Nov 2018 16:02:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158393#M6252</guid>
      <dc:creator>campbell__scott</dc:creator>
      <dc:date>2018-11-20T16:02:36Z</dc:date>
    </item>
    <item>
      <title>Yes, setting either FI</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158395#M6254</link>
      <description>&lt;P&gt;Yes, setting either FI_PROVIDER=sockets or FI_PROVIDER=tcp solves the problem. Thanks.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Nov 2018 18:07:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158395#M6254</guid>
      <dc:creator>Paul_K_2</dc:creator>
      <dc:date>2018-11-20T18:07:47Z</dc:date>
    </item>
    <item>
      <title>I had made a post here</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158396#M6255</link>
      <description>&lt;P&gt;I had made a post here yesterday that never made it through moderation. I have since solved the problem with "source /opt/intel/bin/compilervars.sh intel64", so there is no need to push my original post through.&amp;nbsp; Thanks.&lt;/P&gt;
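&lt;P&gt;For anyone hitting the same thing, the sequence that worked for me was roughly the following (a sketch; ./hello_mpi is just a placeholder for whatever test binary you built):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# load the Intel compiler and MPI environment for 64-bit targets
source /opt/intel/bin/compilervars.sh intel64
# then launch as usual on a single host
mpiexec -n 2 ./hello_mpi&lt;/PRE&gt;</description>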
      <pubDate>Wed, 21 Nov 2018 14:05:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158396#M6255</guid>
      <dc:creator>campbell__scott</dc:creator>
      <dc:date>2018-11-21T14:05:43Z</dc:date>
    </item>
    <item>
      <title>you need FI_PROVIDER_PATH set</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158397#M6256</link>
      <description>&lt;P&gt;You need FI_PROVIDER_PATH set, e.g.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;export FI_PROVIDER_PATH=$MPI_HOME/compilers_and_libraries_2019.2.187/linux/mpi/intel64/libfabric/lib/prov&lt;/P&gt;
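&lt;P&gt;A quick sanity check that the path really points at the provider plugins (a sketch; $MPI_HOME here is whatever prefix your installation uses):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# after exporting it, the directory should contain libfabric provider plugins (e.g. libsockets-fi.so, libtcp-fi.so)
ls "$FI_PROVIDER_PATH"&lt;/PRE&gt;</description>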
      <pubDate>Mon, 25 Feb 2019 22:24:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158397#M6256</guid>
      <dc:creator>Matt_H_3</dc:creator>
      <dc:date>2019-02-25T22:24:32Z</dc:date>
    </item>
    <item>
      <title>Finding the documentation for</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158398#M6257</link>
      <description>&lt;P&gt;Finding the documentation for Intel MPI 2019 environment variables is not easy. I found that setting FI_PROVIDER to tcp or sockets works in my non-OFI setting. I would like to know more about this variable.&amp;nbsp; What are the valid values for FI_PROVIDER?&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;-Rashawn&lt;/P&gt;
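&lt;P&gt;P.S. One way to see which provider names are available on a given node is the libfabric fi_info utility (a rough check, assuming fi_info is installed and on PATH; the names it prints are the values FI_PROVIDER accepts):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# list the libfabric providers visible on this node (e.g. sockets, tcp, verbs)
fi_info -l&lt;/PRE&gt;</description>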
      <pubDate>Tue, 09 Jul 2019 19:24:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158398#M6257</guid>
      <dc:creator>Rashawn_K_Intel1</dc:creator>
      <dc:date>2019-07-09T19:24:42Z</dc:date>
    </item>
    <item>
      <title>I noticed that instead of</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158399#M6258</link>
      <description>&lt;P&gt;I noticed that instead of setting FI_PROVIDER, setting the following environment variable also works:&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;I_MPI_FABRICS=shm&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;
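&lt;P&gt;For a single-host run this amounts to something like the following (a sketch; ./hello_mpi is a placeholder for your own binary, and shm on its own restricts the job to ranks on a single node):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# skip the OFI path entirely and use shared memory between local ranks
export I_MPI_FABRICS=shm
mpiexec -n 4 ./hello_mpi&lt;/PRE&gt;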
&lt;P&gt;Whereas setting I_MPI_FABRICS=shm:ofi results in the same error as above.&lt;/P&gt;</description>
      <pubDate>Sat, 28 Sep 2019 21:08:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158399#M6258</guid>
      <dc:creator>subham_m_</dc:creator>
      <dc:date>2019-09-28T21:08:10Z</dc:date>
    </item>
    <item>
      <title>I seem to have the same (or</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158400#M6259</link>
      <description>&lt;P&gt;I seem to have the same (or similar) problem (2019 Update 5), except that it only occurs when run on an E5410. It runs fine with E5-2660, Gold 6130 or Gold 6138, and also runs fine with comp2015/impi/5.0.2.044. The various environment-variable settings suggested don't work (for me).&lt;/P&gt;&lt;P&gt;&amp;nbsp;mpirun -np 8 -machinefile .machine0 ./Hello&lt;BR /&gt;forrtl: severe (168): Program Exception - illegal instruction&lt;BR /&gt;Image &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;PC &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Routine &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Line &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Source &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;BR /&gt;Hello &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0000000000405EA4 &amp;nbsp;Unknown &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;libpthread-2.17.s &amp;nbsp;00002AD66501F5D0 &amp;nbsp;Unknown &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;libmpi.so.12.0.0 &amp;nbsp; 00002AD664705252 &amp;nbsp;MPL_dbg_pre_init &amp;nbsp; &amp;nbsp; &amp;nbsp;Unknown &amp;nbsp;Unknown&lt;BR /&gt;libmpi.so.12.0.0 &amp;nbsp; 00002AD66421B0FE &amp;nbsp;MPI_Init &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Unknown &amp;nbsp;Unknown&lt;BR /&gt;libmpifort.so.12. &amp;nbsp;00002AD663937D2B &amp;nbsp;MPI_INIT &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Unknown &amp;nbsp;Unknown&lt;BR /&gt;Hello &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0000000000404F40 &amp;nbsp;Unknown &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;Hello &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0000000000404EE2 &amp;nbsp;Unknown &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;libc-2.17.so &amp;nbsp; &amp;nbsp; &amp;nbsp; 00002AD6655503D5 &amp;nbsp;__libc_start_main &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;Hello &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;0000000000404DE9 &amp;nbsp;Unknown &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Unknown &amp;nbsp;Unknown&lt;BR /&gt;forrtl: severe (168): Program Exception - illegal instruction&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Sep 2019 19:05:57 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158400#M6259</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2019-09-30T19:05:57Z</dc:date>
    </item>
    <item>
      <title>Similar thing happens to me.</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158401#M6260</link>
      <description>&lt;P&gt;A similar thing happens to me. I just installed the 2019.5.281 version of the MPI Library, and I use Intel Parallel Studio XE 2019 (the latest version) with Visual Studio.&amp;nbsp;Running the MPI code results in the following message:&lt;/P&gt;&lt;P&gt;[mpiexec@Sebastian-PC] bstrap\service\service_launch.c (305): server rejected credentials&lt;BR /&gt;[mpiexec@Sebastian-PC] bstrap\src\hydra_bstrap.c (371): error launching bstrap proxy&lt;BR /&gt;[mpiexec@Sebastian-PC] mpiexec.c (1898): error setting up the boostrap proxies&lt;/P&gt;&lt;P&gt;With -localonly I can run the code, but every rank executes the same thing as the master code (everybody runs the same) and reports the same id.&amp;nbsp;&amp;nbsp;Any ideas how to fix this?&lt;/P&gt;</description>
      <pubDate>Thu, 03 Oct 2019 16:56:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/New-MPI-error-with-Intel-2019-1-unable-to-run-MPI-hello-world/m-p/1158401#M6260</guid>
      <dc:creator>sebastian_d_</dc:creator>
      <dc:date>2019-10-03T16:56:00Z</dc:date>
    </item>
  </channel>
</rss>

