<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi Dong, in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085771#M4976</link>
    <description>&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Hi Dong,&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Could you please try to set the I_MPI_SHM_FBOX/I_MPI_SHM_LMT (https://software.intel.com/en-us/node/528902?language=es), does this help on the hang-up?&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Best Regards,&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Zhuowei&lt;/P&gt;</description>
    <pubDate>Thu, 04 May 2017 05:24:39 GMT</pubDate>
    <dc:creator>James_S</dc:creator>
    <dc:date>2017-05-04T05:24:39Z</dc:date>
    <item>
      <title>When I_MPI_FABRICS=shm, the MPI_Bcast message size can't be larger than 64 KB</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085766#M4971</link>
      <description>I run MPI on a single workstation (2 x E5 2690).

When I export I_MPI_FABRICS=shm, the MPI_Bcast message size can't be larger than 64 KB.

But when I export I_MPI_FABRICS={shm,tcp}, everything is OK.

Is there a limit for shm? Can I adjust it?</description>
      <pubDate>Mon, 17 Apr 2017 09:37:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085766#M4971</guid>
      <dc:creator>杨_栋_</dc:creator>
      <dc:date>2017-04-17T09:37:38Z</dc:date>
    </item>
    <item>
      <title>Dear customer,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085767#M4972</link>
      <description>&lt;P&gt;Dear customer,&lt;/P&gt;

&lt;P&gt;Your question is more relevant to MPI than to MKL. I will transfer your thread to the MPI forum zone. Thank you.&lt;/P&gt;

&lt;P&gt;Best regards,&lt;BR /&gt;
	Fiona&lt;/P&gt;</description>
      <pubDate>Tue, 18 Apr 2017 01:24:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085767#M4972</guid>
      <dc:creator>Zhen_Z_Intel</dc:creator>
      <dc:date>2017-04-18T01:24:54Z</dc:date>
    </item>
    <item>
      <title>Quote:Fiona Z. (Intel) wrote:</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085768#M4973</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Fiona Z. (Intel) wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;

&lt;P&gt;Dear customer,&lt;/P&gt;

&lt;P&gt;Your question is more relevant to MPI than to MKL. I will transfer your thread to the MPI forum zone. Thank you.&lt;/P&gt;

&lt;P&gt;Best regards,&lt;BR /&gt;
	Fiona&lt;/P&gt;

&lt;/BLOCKQUOTE&gt;

&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Tue, 18 Apr 2017 01:39:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085768#M4973</guid>
      <dc:creator>杨_栋_</dc:creator>
      <dc:date>2017-04-18T01:39:29Z</dc:date>
    </item>
    <item>
      <title>Hi Dong,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085769#M4974</link>
      <description>&lt;P&gt;Hi Dong,&lt;/P&gt;

&lt;P&gt;What OS and Intel MPI version are you using? Could you please send me the output of your MPI environment, and the debug results when exporting I_MPI_DEBUG=6? Thanks.&lt;/P&gt;

&lt;P&gt;Best Regards,&lt;/P&gt;

&lt;P&gt;Zhuowei&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 18 Apr 2017 03:29:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085769#M4974</guid>
      <dc:creator>James_S</dc:creator>
      <dc:date>2017-04-18T03:29:11Z</dc:date>
    </item>
    <item>
      <title>Quote:Si, Zhuowei wrote:</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085770#M4975</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Si, Zhuowei wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;

&lt;P&gt;Hi Dong,&lt;/P&gt;

&lt;P&gt;What OS and Intel MPI version are you using? Could you please send me the output of your MPI environment, and the debug results when exporting I_MPI_DEBUG=6? Thanks.&lt;/P&gt;

&lt;P&gt;Best Regards,&lt;/P&gt;

&lt;P&gt;Zhuowei&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;/BLOCKQUOTE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;Hello Zhuowei,&lt;/P&gt;

&lt;P&gt;Thanks for your help!&lt;/P&gt;

&lt;P&gt;I tested my code on two workstations. One runs Ubuntu 16.04 LTS, and the other runs Debian GNU/Linux 8.&lt;/P&gt;

&lt;P&gt;The Intel MPI version on both workstations is Intel(R) MPI Library 2017 Update 2 for Linux.&lt;/P&gt;

&lt;P&gt;My MPI environment is set in .bashrc like this:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;export PATH=$PATH:/opt/intel/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/mkl/lib/intel64:/opt/intel/lib/intel64
source /opt/intel/bin/compilervars.sh intel64
source /opt/intel/mkl/bin/mklvars.sh intel64
export INTEL_LICENSE_FILE=/opt/intel/licenses&lt;/PRE&gt;

&lt;P&gt;This is my c++ code:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;# include &amp;lt;mpi.h&amp;gt;
# include &amp;lt;iostream&amp;gt;
# include &amp;lt;unistd.h&amp;gt;

void main(int argc,char *argv[])
{
  MPI_Init(&amp;amp;argc,&amp;amp;argv);
  
  int processor_id_temp;
  MPI_Comm_rank(MPI_COMM_WORLD,&amp;amp;processor_id_temp);
  const int processor_id = processor_id_temp;

  char*const buf = new char[BCAST_SIZE];
  sprintf(buf, "Hello! (from processor id %d)", processor_id);

  const int color = (processor_id&amp;gt;0 ? 1 : 0);

  MPI_Comm MPI_COMM_TEST;
  MPI_Comm_split(MPI_COMM_WORLD,
		 color,
		 processor_id,
		 &amp;amp;MPI_COMM_TEST);
  
  MPI_Bcast(buf,
	    BCAST_SIZE,
	    MPI_CHAR,
	    0,
	    MPI_COMM_TEST);

  usleep(processor_id * 10000);
    
  std::cout&amp;lt;&amp;lt;"processor id "
	   &amp;lt;&amp;lt;processor_id
	   &amp;lt;&amp;lt;", color "
	   &amp;lt;&amp;lt;color
	   &amp;lt;&amp;lt;": "
	   &amp;lt;&amp;lt;buf
	   &amp;lt;&amp;lt;std::endl;

  delete [] buf;
    
  MPI_Finalize();
}
&lt;/PRE&gt;

&lt;P&gt;This is the result on the workstation with Ubuntu:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ export I_MPI_FABRICS=shm
$ export I_MPI_DEBUG=6
$ for size in 32768 131072; do mpiicpc -DBCAST_SIZE=${size} mpi_comm_split.cpp; mpirun -n 3 ./a.out; echo; done
[0] MPI startup(): Intel(R) MPI Library, Version 2017 Update 2  Build 20170125 (id: 16752)
[0] MPI startup(): Copyright (C) 2003-2017 Intel Corporation.  All rights reserved.
[0] MPI startup(): Multi-threaded optimized library
[0] MPI startup(): shm data transfer mode
[1] MPI startup(): shm data transfer mode
[2] MPI startup(): shm data transfer mode
[0] MPI startup(): Device_reset_idx=8
[0] MPI startup(): Allgather: 3: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 1: 1-6459 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 5: 6460-14628 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 1: 14629-25466 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 3: 25467-36131 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 5: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Allgatherv: 1: 0-7199 &amp;amp; 0-2147483647
[0] MPI startup(): Allgatherv: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 0-4 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 5-8 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 9-32 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 33-64 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 65-341 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 342-6656 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 6657-8192 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 2: 8193-113595 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 113596-132320 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 2: 132321-1318322 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 3: 0-25 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 26-37 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 3: 38-1024 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 1025-4096 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 2: 4097-70577 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoallv: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoallw: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Barrier: 2: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 8: 1-12746 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 1: 12747-42366 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 7: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Exscan: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Gather: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Gather: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Gatherv: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 4: 0-5 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 1: 6-128 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 3: 129-89367 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 2: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 7: 1-39679 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scan: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scatter: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Scatter: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scatterv: 0: 0-2147483647 &amp;amp; 0-2147483647
[1] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[2] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[0] MPI startup(): Rank    Pid      Node name  Pin cpu
[0] MPI startup(): 0       4440     yd-ws1     {0,4}
[0] MPI startup(): 1       4441     yd-ws1     {1,5}
[0] MPI startup(): 2       4442     yd-ws1     {2,6}
[0] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[0] MPI startup(): I_MPI_DEBUG=6
[0] MPI startup(): I_MPI_FABRICS=shm
[0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=1
[0] MPI startup(): I_MPI_PIN_MAPPING=3:0 0,1 1,2 2
processor id 0, color 0: Hello! (from processor id 0)
processor id 1, color 1: Hello! (from processor id 1)
processor id 2, color 1: Hello! (from processor id 1)

[0] MPI startup(): Intel(R) MPI Library, Version 2017 Update 2  Build 20170125 (id: 16752)
[0] MPI startup(): Copyright (C) 2003-2017 Intel Corporation.  All rights reserved.
[0] MPI startup(): Multi-threaded optimized library
[0] MPI startup(): shm data transfer mode
[1] MPI startup(): shm data transfer mode
[2] MPI startup(): shm data transfer mode
[0] MPI startup(): Device_reset_idx=8
[0] MPI startup(): Allgather: 3: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 1: 1-6459 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 5: 6460-14628 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 1: 14629-25466 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 3: 25467-36131 &amp;amp; 0-2147483647
[0] MPI startup(): Allgather: 5: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Allgatherv: 1: 0-7199 &amp;amp; 0-2147483647
[0] MPI startup(): Allgatherv: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 0-4 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 5-8 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 9-32 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 33-64 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 65-341 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 1: 342-6656 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 6657-8192 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 2: 8193-113595 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 113596-132320 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 2: 132321-1318322 &amp;amp; 0-2147483647
[0] MPI startup(): Allreduce: 7: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 3: 0-25 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 26-37 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 3: 38-1024 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 1025-4096 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 2: 4097-70577 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoall: 4: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoallv: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Alltoallw: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Barrier: 2: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 8: 1-12746 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 1: 12747-42366 &amp;amp; 0-2147483647
[0] MPI startup(): Bcast: 7: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Exscan: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Gather: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Gather: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Gatherv: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 4: 0-5 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 1: 6-128 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 3: 129-89367 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce_scatter: 2: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 7: 1-39679 &amp;amp; 0-2147483647
[0] MPI startup(): Reduce: 1: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scan: 0: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scatter: 1: 0-0 &amp;amp; 0-2147483647
[0] MPI startup(): Scatter: 3: 0-2147483647 &amp;amp; 0-2147483647
[0] MPI startup(): Scatterv: 0: 0-2147483647 &amp;amp; 0-2147483647
[1] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[2] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[0] MPI startup(): Rank    Pid      Node name  Pin cpu
[0] MPI startup(): 0       4468     yd-ws1     {0,4}
[0] MPI startup(): 1       4469     yd-ws1     {1,5}
[0] MPI startup(): 2       4470     yd-ws1     {2,6}
[0] MPI startup(): Recognition=2 Platform(code=32 ippn=2 dev=1) Fabric(intra=1 inter=1 flags=0x0)
[0] MPI startup(): I_MPI_DEBUG=6
[0] MPI startup(): I_MPI_FABRICS=shm
[0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=1
[0] MPI startup(): I_MPI_PIN_MAPPING=3:0 0,1 1,2 2
processor id 0, color 0: Hello! (from processor id 0)
&lt;/PRE&gt;

&lt;P&gt;When &lt;CODE class="plain"&gt;BCAST_SIZE=131072&lt;/CODE&gt;, ranks 1 and 2 never produced the output from the std::cout statement at the end of the code, and they had to be stopped with Ctrl+C.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Apr 2017 03:40:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085770#M4975</guid>
      <dc:creator>杨_栋_</dc:creator>
      <dc:date>2017-04-19T03:40:23Z</dc:date>
    </item>
    <item>
      <title>Hi Dong,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085771#M4976</link>
      <description>&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Hi Dong,&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Could you please try to set the I_MPI_SHM_FBOX/I_MPI_SHM_LMT (https://software.intel.com/en-us/node/528902?language=es), does this help on the hang-up?&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Best Regards,&lt;/P&gt;

&lt;P style="word-wrap: break-word; font-size: 12px;"&gt;Zhuowei&lt;/P&gt;</description>
      <pubDate>Thu, 04 May 2017 05:24:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/When-I-MPI-FABRICS-shm-the-size-of-MPI-Bcast-can-t-larger-than/m-p/1085771#M4976</guid>
      <dc:creator>James_S</dc:creator>
      <dc:date>2017-05-04T05:24:39Z</dc:date>
    </item>
  </channel>
</rss>

