<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Intel MPI BenchMarks in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834116#M1368</link>
    <description>Oops, sorry!&lt;BR /&gt;The first issue will be fixed in IMB 3.2.3&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;---Dmitry</description>
    <pubDate>Thu, 08 Sep 2011 12:49:07 GMT</pubDate>
    <dc:creator>Dmitry_K_Intel2</dc:creator>
    <dc:date>2011-09-08T12:49:07Z</dc:date>
    <item>
      <title>Intel MPI BenchMarks</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834113#M1365</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I downloaded IMB 3.2.2 and installed it. It comes with three sets of tests: IMB_MPI1, IMB_IO, and IMB_EXT. I tried to execute these three tests over the OFED stack. IMB_MPI1 worked fine, but when I tried to execute the remaining two tests I got errors like the ones below.&lt;BR /&gt;&lt;BR /&gt;For IMB_IO:&lt;BR /&gt;&lt;BR /&gt;[root@localhost src]# mpirun --prefix /usr/local/ -np 2 -mca btl_openib_if_include "mthca0:1" -H 192.168.2.92 IMB-IO&lt;BR /&gt;root@192.168.2.92's password: &lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 1&lt;BR /&gt;#---------------------------------------------------&lt;BR /&gt;# Intel  MPI Benchmark Suite V3.2.2, MPI-IO part &lt;BR /&gt;#---------------------------------------------------&lt;BR /&gt;# Date : Thu Sep 8 10:59:04 2011&lt;BR /&gt;# Machine : x86_64&lt;BR /&gt;# System : Linux&lt;BR /&gt;# Release : 2.6.30&lt;BR /&gt;# Version : #2 SMP Wed Sep 7 13:53:29 IST 2011&lt;BR /&gt;# MPI Version : 2.1&lt;BR /&gt;# MPI Thread Environment: MPI_THREAD_SINGLE&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# New default behavior from Version 3.2 on:&lt;BR /&gt;&lt;BR /&gt;# the number of iterations per message size is cut down &lt;BR /&gt;# dynamically when a certain run time (per message size sample) &lt;BR /&gt;# is expected to be exceeded. Time limit is defined by variable &lt;BR /&gt;# "SECS_PER_SAMPLE" (=&amp;gt; IMB_settings.h) &lt;BR /&gt;# or through the flag =&amp;gt; -time &lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# Calling sequence was: &lt;BR /&gt;&lt;BR /&gt;# IMB-IO&lt;BR /&gt;&lt;BR /&gt;# Minimum io portion in bytes: 0&lt;BR /&gt;# Maximum io portion in bytes: 16777216&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;# List of Benchmarks to run:&lt;BR /&gt;&lt;BR /&gt;# S_Write_Indv&lt;BR /&gt;# S_IWrite_Indv&lt;BR /&gt;# S_Write_Expl&lt;BR /&gt;# S_IWrite_Expl&lt;BR /&gt;# P_Write_Indv&lt;BR /&gt;# P_IWrite_Indv&lt;BR /&gt;# P_Write_Shared&lt;BR /&gt;# P_IWrite_Shared&lt;BR /&gt;# P_Write_Priv&lt;BR /&gt;# P_IWrite_Priv&lt;BR /&gt;# P_Write_Expl&lt;BR /&gt;# P_IWrite_Expl&lt;BR /&gt;# C_Write_Indv&lt;BR /&gt;# C_IWrite_Indv&lt;BR /&gt;# C_Write_Shared&lt;BR /&gt;# C_IWrite_Shared&lt;BR /&gt;# C_Write_Expl&lt;BR /&gt;# C_IWrite_Expl&lt;BR /&gt;# S_Read_Indv&lt;BR /&gt;# S_IRead_Indv&lt;BR /&gt;# S_Read_Expl&lt;BR /&gt;# S_IRead_Expl&lt;BR /&gt;# P_Read_Indv&lt;BR /&gt;# P_IRead_Indv&lt;BR /&gt;# P_Read_Shared&lt;BR /&gt;# P_IRead_Shared&lt;BR /&gt;# P_Read_Priv&lt;BR /&gt;# P_IRead_Priv&lt;BR /&gt;# P_Read_Expl&lt;BR /&gt;# P_IRead_Expl&lt;BR /&gt;# C_Read_Indv&lt;BR /&gt;# C_IRead_Indv&lt;BR /&gt;# C_Read_Shared&lt;BR /&gt;# C_IRead_Shared&lt;BR /&gt;# C_Read_Expl&lt;BR /&gt;# C_IRead_Expl&lt;BR /&gt;# Open_Close&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# For nonblocking benchmarks:&lt;BR /&gt;&lt;BR /&gt;# Function CPU_Exploit obtains an undisturbed&lt;BR /&gt;# performance of 434.38 MFlops&lt;BR /&gt;[localhost:04711] *** Process received signal ***&lt;BR /&gt;[localhost:04711] Signal: Segmentation fault (11)&lt;BR /&gt;[localhost:04711] Signal code: Address not mapped (1)&lt;BR /&gt;[localhost:04711] Failing at address: 0x10&lt;BR /&gt;[localhost:04711] [ 0] /lib64/libpthread.so.0 [0x364da0e7c0]&lt;BR /&gt;[localhost:04711] [ 1] 
/usr/local/lib/libmpi.so.0(MPI_Barrier+0x62) [0x7fbe5317a822]&lt;BR /&gt;[localhost:04711] [ 2] IMB-IO(IMB_write_ij+0xbf) [0x40dc5c]&lt;BR /&gt;[localhost:04711] [ 3] IMB-IO(IMB_write_indv+0x6f) [0x40d936]&lt;BR /&gt;[localhost:04711] [ 4] IMB-IO(IMB_init_buffers_iter+0x109f) [0x40900e]&lt;BR /&gt;[localhost:04711] [ 5] IMB-IO(main+0x42d) [0x404675]&lt;BR /&gt;[localhost:04711] [ 6] /lib64/libc.so.6(__libc_start_main+0xf4) [0x364ce1d994]&lt;BR /&gt;[localhost:04711] [ 7] IMB-IO(MPI_File_write_all+0x121) [0x404199]&lt;BR /&gt;[localhost:04711] *** End of error message ***&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;mpirun noticed that process rank 0 with PID 4711 on node 192.168.2.92 exited on signal 11 (Segmentation fault).&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;For IMB_EXT:&lt;BR /&gt;&lt;BR /&gt;[root@localhost src]# mpirun --prefix /usr/local/ -np 2 -mca btl_openib_if_include "mthca0:1" -H 192.168.2.92 IMB-EXT&lt;BR /&gt;root@192.168.2.92's password: &lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 1&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 0&lt;BR /&gt;rdma_create_id2: line: 469 id created: 1&lt;BR /&gt;#---------------------------------------------------&lt;BR /&gt;# Intel  MPI Benchmark Suite V3.2.2, MPI-2 part &lt;BR /&gt;#---------------------------------------------------&lt;BR /&gt;# Date : Thu Sep 8 11:01:24 2011&lt;BR /&gt;# Machine : x86_64&lt;BR /&gt;# System : Linux&lt;BR /&gt;# Release : 2.6.30&lt;BR /&gt;# Version : #2 SMP Wed Sep 7 13:53:29 IST 2011&lt;BR /&gt;# MPI Version : 2.1&lt;BR /&gt;# MPI Thread Environment: MPI_THREAD_SINGLE&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# New default behavior from Version 3.2 on:&lt;BR /&gt;&lt;BR /&gt;# the number of iterations per message size is cut down &lt;BR /&gt;# dynamically when a certain run time (per message size sample) &lt;BR /&gt;# is expected to be exceeded. Time limit is defined by variable &lt;BR /&gt;# "SECS_PER_SAMPLE" (=&amp;gt; IMB_settings.h) &lt;BR /&gt;# or through the flag =&amp;gt; -time &lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# Calling sequence was: &lt;BR /&gt;&lt;BR /&gt;# IMB-EXT&lt;BR /&gt;&lt;BR /&gt;# Minimum message length in bytes: 0&lt;BR /&gt;# Maximum message length in bytes: 4194304&lt;BR /&gt;#&lt;BR /&gt;# MPI_Datatype : MPI_BYTE &lt;BR /&gt;# MPI_Datatype for reductions : MPI_FLOAT&lt;BR /&gt;# MPI_Op : MPI_SUM &lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;# List of Benchmarks to run:&lt;BR /&gt;&lt;BR /&gt;# Window&lt;BR /&gt;# Unidir_Get&lt;BR /&gt;# Unidir_Put&lt;BR /&gt;# Bidir_Get&lt;BR /&gt;# Bidir_Put&lt;BR /&gt;# Accumulate&lt;BR /&gt;[localhost.localdomain:4797] *** An error occurred in MPI_Win_free&lt;BR /&gt;[localhost.localdomain:4797] *** on win &lt;BR /&gt;[localhost.localdomain:4797] *** MPI_ERR_RMA_SYNC: error while executing rma sync&lt;BR /&gt;[localhost.localdomain:4797] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;mpirun has exited due to process rank 1 with PID 4798 on&lt;BR /&gt;node 192.168.2.92 exiting without calling "finalize". 
This may&lt;BR /&gt;have caused other processes in the application to be&lt;BR /&gt;terminated by signals sent by mpirun (as reported here).&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;[localhost.localdomain:11359] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal&lt;BR /&gt;[localhost.localdomain:11359] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages&lt;BR /&gt;&lt;BR /&gt;May I know why I am getting these errors with those tests?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Venkateswara Rao Dokku.&lt;BR /&gt;</description>
      <pubDate>Thu, 08 Sep 2011 05:30:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834113#M1365</guid>
      <dc:creator>dvrao_584</dc:creator>
      <dc:date>2011-09-08T05:30:59Z</dc:date>
    </item>
    <item>
      <title>Intel MPI BenchMarks</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834114#M1366</link>
      <description>Hi Venkateswara,&lt;BR /&gt;&lt;BR /&gt;The first issue is probably related to a bug in IMB_window.c - MPI_Win_fence() should be called before MPI_Win_free(). This issue has been fixed and will be available in IMB 3.2.2.&lt;BR /&gt;&lt;BR /&gt;The second issue looks more like an issue in the MPI library itself. It seems to me that you are using Open MPI - can you try the same test case with the Intel MPI Library (or any other implementation)?&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt; Dmitry&lt;BR /&gt;&lt;BR /&gt;
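P.S. For illustration, here is a minimal standalone sketch of that fence-before-free pattern (my own example, not the actual IMB_window.c source; the RMA calls in the middle are placeholders):&lt;BR /&gt;&lt;PRE&gt;
/* An access epoch opened with MPI_Win_fence() must be closed by a
 * matching fence before MPI_Win_free(); freeing a window while an
 * epoch is still open can abort with MPI_ERR_RMA_SYNC, as above. */
#include &lt;mpi.h&gt;

int main(int argc, char **argv)
{
    MPI_Win win;
    char *buf;

    MPI_Init(&amp;argc, &amp;argv);
    MPI_Alloc_mem(1 &lt;&lt; 20, MPI_INFO_NULL, &amp;buf);
    MPI_Win_create(buf, 1 &lt;&lt; 20, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &amp;win);

    MPI_Win_fence(0, win);  /* open the access epoch */
    /* ... MPI_Put / MPI_Get / MPI_Accumulate calls go here ... */
    MPI_Win_fence(0, win);  /* close the epoch BEFORE freeing the window */

    MPI_Win_free(&amp;win);
    MPI_Free_mem(buf);
    MPI_Finalize();
    return 0;
}
&lt;/PRE&gt;Compiled with mpicc and launched with e.g. "mpirun -np 2 ./a.out", this should finish silently on a correct MPI implementation.&lt;BR /&gt;</description>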
      <pubDate>Thu, 08 Sep 2011 10:45:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834114#M1366</guid>
      <dc:creator>Dmitry_K_Intel2</dc:creator>
      <dc:date>2011-09-08T10:45:21Z</dc:date>
    </item>
    <item>
      <title>Intel MPI BenchMarks</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834115#M1367</link>
      <description>Hi Dmitry,&lt;BR /&gt;&lt;BR /&gt;Thank you for the response. In your reply you mentioned two issues. For the first issue you said the problem is fixed in IMB 3.2.2, but I am using that same version. So does my problem fall into the second category only, or are there other issues?&lt;BR /&gt;&lt;BR /&gt;Thanks &amp;amp; Regards,&lt;BR /&gt;&lt;BR /&gt;Venkateswara Rao Dokku</description>
      <pubDate>Thu, 08 Sep 2011 12:46:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834115#M1367</guid>
      <dc:creator>dvrao_584</dc:creator>
      <dc:date>2011-09-08T12:46:14Z</dc:date>
    </item>
    <item>
      <title>Intel MPI BenchMarks</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834116#M1368</link>
      <description>Oops, sorry!&lt;BR /&gt;The first issue will be fixed in IMB 3.2.3&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;---Dmitry</description>
      <pubDate>Thu, 08 Sep 2011 12:49:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-BenchMarks/m-p/834116#M1368</guid>
      <dc:creator>Dmitry_K_Intel2</dc:creator>
      <dc:date>2011-09-08T12:49:07Z</dc:date>
    </item>
  </channel>
</rss>