<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi, in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953853#M3052</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I have compiled espresso with Intel MPI and the MKL library, but I am getting a "Failure during collective" error, whereas it works fine with OpenMPI.&lt;/P&gt;

&lt;P&gt;Is there a problem with Intel MPI?&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x516f460, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x5300310, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x6b295c0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x67183d0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x4f794c0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	[0:n125] unexpected disconnect completion event from [22:n122]&lt;BR /&gt;
	Assertion failed in file ../../dapl_conn_rc.c at line 1128: 0&lt;BR /&gt;
	internal ABORT - process 0&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x56bfe30, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	/var/spool/PBS/mom_priv/epilogue: line 30: kill: (5089) - No such process&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Kindly help us resolve this.&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Thanks,&lt;BR /&gt;
	Sanjiv&lt;/P&gt;</description>
    <pubDate>Wed, 03 Jun 2015 09:36:04 GMT</pubDate>
    <dc:creator>Sanjiv_T_</dc:creator>
    <dc:date>2015-06-03T09:36:04Z</dc:date>
    <item>
      <title>PMPI_Bcast: Message truncated,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953850#M3049</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I am trying to debug some problems with getting an exe developed by another group in our company to run on Intel MPI. I am using the Linux version of the Intel MPI Library, 4.1.&lt;/P&gt;

&lt;P&gt;The debug output is below.&lt;/P&gt;

&lt;P&gt;Does the error indicate a "programming error" on their part (buffers not sized correctly?), or some other issue?&lt;/P&gt;
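
&lt;P&gt;(For illustration, a minimal hypothetical sketch, not the actual application code: MPI_Bcast requires every rank to specify the same amount of data, so a non-root rank that supplies a smaller buffer or count than the root broadcasts hits exactly this kind of "Message truncated" failure, as in the log below.)&lt;/P&gt;

&lt;PRE&gt;
/* Hypothetical sketch: mismatched MPI_Bcast counts across ranks.
 * The root sends 50,000,000 bytes while the other rank only provides
 * room for 20,000,000, mirroring the sizes in the log below. */
#include &amp;lt;mpi.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;

int main(void)
{
    MPI_Init(NULL, NULL);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);

    /* Erroneous: the count passed to MPI_Bcast must match on every rank. */
    int count = (rank == 0) ? 50000000 : 20000000;
    char *buf = malloc((size_t)count);

    /* The non-root rank reports "Message truncated" because it receives
     * more data than its buffer/count allows. */
    MPI_Bcast(buf, count, MPI_BYTE, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}
&lt;/PRE&gt;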

&lt;P&gt;Thanks&lt;/P&gt;

&lt;P&gt;[0] MPI startup(): Intel(R) MPI Library, Version 4.1 Update 2&amp;nbsp; Build 20131023&lt;BR /&gt;
	[0] MPI startup(): Copyright (C) 2003-2013 Intel Corporation.&amp;nbsp; All rights reserved.&lt;BR /&gt;
	[0] MPI startup(): shm and tcp data transfer modes&lt;BR /&gt;
	[1] MPI startup(): shm and tcp data transfer modes&lt;BR /&gt;
	[0] MPI startup(): Recognition mode: 2, selected platform: 8 own platform: 8&lt;BR /&gt;
	[1] MPI startup(): Recognition mode: 2, selected platform: 8 own platform: 8&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;[0] MPI startup(): Rank&amp;nbsp;&amp;nbsp;&amp;nbsp; Pid&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Node name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Pin cpu&lt;BR /&gt;
	[0] MPI startup(): 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30601&amp;nbsp;&amp;nbsp;&amp;nbsp; linuxdev&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {0,1,2,3}&lt;BR /&gt;
	[0] MPI startup(): 1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 15240&amp;nbsp;&amp;nbsp;&amp;nbsp; centosserver&amp;nbsp; {0,1,2,3}&lt;BR /&gt;
	[0] MPI startup(): Recognition=2 Platform(code=8 ippn=0 dev=5) Fabric(intra=1 inter=6 flags=0x0)&lt;BR /&gt;
	[0] MPI startup(): Topology split mode = 1&lt;/P&gt;

&lt;P&gt;[1] MPI startup(): Recognition=2 Platform(code=8 ippn=0 dev=5) Fabric(intra=1 inter=6 flags=0x0)&lt;BR /&gt;
	| rank | node | space=2&lt;BR /&gt;
	|&amp;nbsp; 0&amp;nbsp; |&amp;nbsp; 0&amp;nbsp; |&lt;BR /&gt;
	|&amp;nbsp; 1&amp;nbsp; |&amp;nbsp; 1&amp;nbsp; |&lt;BR /&gt;
	[0] MPI startup(): I_MPI_DEBUG=100&lt;BR /&gt;
	[0] MPI startup(): I_MPI_FABRICS=shm:tcp&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_BRAND=Intel(R) Xeon(R)&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHE1=0,1,2,3&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHE2=0,1,2,3&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHE3=0,0,0,0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHES=3&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHE_SHARE=2,2,16&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CACHE_SIZE=32768,262144,6291456&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_CORE=0,1,2,3&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_C_NAME=Wolfdale&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_DESC=1342208505&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_FLGB=0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_FLGC=398124031&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_FLGD=-1075053569&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_LCPU=4&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_MODE=263&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_PACK=0,0,0,0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_SERIAL=E31225&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_SIGN=132775&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_STATE=0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_THREAD=0,0,0,0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_VEND=1&lt;BR /&gt;
	[0] MPI startup(): I_MPI_PIN_INFO=x0,1,2,3&lt;BR /&gt;
	[0] MPI startup(): I_MPI_PIN_MAPPING=1:0 0&lt;/P&gt;

&lt;P&gt;.....&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Message truncated, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)......................: MPI_Bcast(buf=0x2ae6d2ef9010, count=1, dtype=USER&amp;lt;vector&amp;gt;, root=0, comm=0x84000000) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670).................:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887)..............: Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)................: Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1510)................:&lt;BR /&gt;
	MPIR_Bcast_scatter_ring_allgather(841):&lt;BR /&gt;
	MPIDI_CH3U_Receive_data_found(129)....: Message from rank 0 and tag 2 truncated; 50000000 bytes received but buffer size is 20000000&lt;BR /&gt;
	MPIR_Bcast_scatter_ring_allgather(789):&lt;BR /&gt;
	scatter_for_bcast(301)................:&lt;BR /&gt;
	MPIDI_CH3U_Receive_data_found(129)....: Message from rank 0 and tag 2 truncated; 50000000 bytes received but buffer size is 20000000&lt;BR /&gt;
	rank = 1, revents = 8, state = 8&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jan 2014 15:59:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953850#M3049</guid>
      <dc:creator>AndrewC</dc:creator>
      <dc:date>2014-01-09T15:59:56Z</dc:date>
    </item>
    <item>
      <title>That error indicates that the</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953851#M3050</link>
      <description>&lt;P&gt;That error indicates that the MPI_Bcast call is trying to send too large a message. Keep the message under 2 GB and it should work.&lt;/P&gt;
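
&lt;P&gt;If the application really needs to move more than that in one logical broadcast, a common workaround (sketched below as hypothetical code, not a specific Intel MPI feature) is to split the buffer into chunks so that each individual MPI_Bcast call stays safely below 2 GB:&lt;/P&gt;

&lt;PRE&gt;
/* Hypothetical sketch: broadcast a large buffer in 1 GiB chunks so that
 * every individual MPI_Bcast stays well under the 2 GB limit.
 * Assumes nbytes is identical on every rank in comm. */
#include &amp;lt;mpi.h&amp;gt;
#include &amp;lt;stddef.h&amp;gt;

static void bcast_large(void *buf, size_t nbytes, int root, MPI_Comm comm)
{
    const size_t chunk = (size_t)1 &amp;lt;&amp;lt; 30;   /* 1 GiB per call */
    char *p = (char *)buf;
    size_t done = 0;

    while (done &amp;lt; nbytes) {
        size_t n = nbytes - done;
        if (n &amp;gt; chunk)
            n = chunk;
        MPI_Bcast(p + done, (int)n, MPI_BYTE, root, comm);
        done += n;
    }
}
&lt;/PRE&gt;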

&lt;P&gt;Sincerely,&lt;BR /&gt;
	James Tullos&lt;BR /&gt;
	Technical Consulting Engineer&lt;BR /&gt;
	Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2014 17:12:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953851#M3050</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2014-01-10T17:12:46Z</dc:date>
    </item>
    <item>
      <title>Excellent, thanks for  the</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953852#M3051</link>
      <description>&lt;P&gt;Excellent, thanks for the tip!&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2014 17:30:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953852#M3051</guid>
      <dc:creator>AndrewC</dc:creator>
      <dc:date>2014-01-10T17:30:05Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953853#M3052</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I have compiled espresso with Intel MPI and the MKL library, but I am getting a "Failure during collective" error, whereas it works fine with OpenMPI.&lt;/P&gt;

&lt;P&gt;Is there a problem with Intel MPI?&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x516f460, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x5300310, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x6b295c0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x67183d0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x4f794c0, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	[0:n125] unexpected disconnect completion event from [22:n122]&lt;BR /&gt;
	Assertion failed in file ../../dapl_conn_rc.c at line 1128: 0&lt;BR /&gt;
	internal ABORT - process 0&lt;BR /&gt;
	Fatal error in PMPI_Bcast: Other MPI error, error stack:&lt;BR /&gt;
	PMPI_Bcast(2112)........: MPI_Bcast(buf=0x56bfe30, count=96, MPI_DOUBLE_PRECISION, root=4, comm=0x84000004) failed&lt;BR /&gt;
	MPIR_Bcast_impl(1670)...:&lt;BR /&gt;
	I_MPIR_Bcast_intra(1887): Failure during collective&lt;BR /&gt;
	MPIR_Bcast_intra(1524)..: Failure during collective&lt;BR /&gt;
	/var/spool/PBS/mom_priv/epilogue: line 30: kill: (5089) - No such process&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Kindly help us resolve this.&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Thanks,&lt;BR /&gt;
	Sanjiv&lt;/P&gt;</description>
      <pubDate>Wed, 03 Jun 2015 09:36:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/PMPI-Bcast-Message-truncated/m-p/953853#M3052</guid>
      <dc:creator>Sanjiv_T_</dc:creator>
      <dc:date>2015-06-03T09:36:04Z</dc:date>
    </item>
  </channel>
</rss>

