Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel MPI Benchmarks

dvrao_584
Beginner
Hi,

I downloaded and installed IMB 3.2.2. It comes with three sets of tests: IMB-MPI1, IMB-IO, and IMB-EXT. I tried to run all three over the OFED stack. IMB-MPI1 worked fine, but when I tried to run the other two tests they failed with the errors below.

For IMB-IO:

[root@localhost src]# mpirun --prefix /usr/local/ -np 2 -mca btl_openib_if_include "mthca0:1" -H 192.168.2.92 IMB-IO
root@192.168.2.92's password:
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 1
#---------------------------------------------------
# Intel MPI Benchmark Suite V3.2.2, MPI-IO part
#---------------------------------------------------
# Date : Thu Sep 8 10:59:04 2011
# Machine : x86_64
# System : Linux
# Release : 2.6.30
# Version : #2 SMP Wed Sep 7 13:53:29 IST 2011
# MPI Version : 2.1
# MPI Thread Environment: MPI_THREAD_SINGLE


# New default behavior from Version 3.2 on:

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time



# Calling sequence was:

# IMB-IO

# Minimum io portion in bytes: 0
# Maximum io portion in bytes: 16777216
#
#
#

# List of Benchmarks to run:

# S_Write_Indv
# S_IWrite_Indv
# S_Write_Expl
# S_IWrite_Expl
# P_Write_Indv
# P_IWrite_Indv
# P_Write_Shared
# P_IWrite_Shared
# P_Write_Priv
# P_IWrite_Priv
# P_Write_Expl
# P_IWrite_Expl
# C_Write_Indv
# C_IWrite_Indv
# C_Write_Shared
# C_IWrite_Shared
# C_Write_Expl
# C_IWrite_Expl
# S_Read_Indv
# S_IRead_Indv
# S_Read_Expl
# S_IRead_Expl
# P_Read_Indv
# P_IRead_Indv
# P_Read_Shared
# P_IRead_Shared
# P_Read_Priv
# P_IRead_Priv
# P_Read_Expl
# P_IRead_Expl
# C_Read_Indv
# C_IRead_Indv
# C_Read_Shared
# C_IRead_Shared
# C_Read_Expl
# C_IRead_Expl
# Open_Close


# For nonblocking benchmarks:

# Function CPU_Exploit obtains an undisturbed
# performance of 434.38 MFlops
[localhost:04711] *** Process received signal ***
[localhost:04711] Signal: Segmentation fault (11)
[localhost:04711] Signal code: Address not mapped (1)
[localhost:04711] Failing at address: 0x10
[localhost:04711] [ 0] /lib64/libpthread.so.0 [0x364da0e7c0]
[localhost:04711] [ 1] /usr/local/lib/libmpi.so.0(MPI_Barrier+0x62) [0x7fbe5317a822]
[localhost:04711] [ 2] IMB-IO(IMB_write_ij+0xbf) [0x40dc5c]
[localhost:04711] [ 3] IMB-IO(IMB_write_indv+0x6f) [0x40d936]
[localhost:04711] [ 4] IMB-IO(IMB_init_buffers_iter+0x109f) [0x40900e]
[localhost:04711] [ 5] IMB-IO(main+0x42d) [0x404675]
[localhost:04711] [ 6] /lib64/libc.so.6(__libc_start_main+0xf4) [0x364ce1d994]
[localhost:04711] [ 7] IMB-IO(MPI_File_write_all+0x121) [0x404199]
[localhost:04711] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 4711 on node 192.168.2.92 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

For IMB-EXT:

[root@localhost src]# mpirun --prefix /usr/local/ -np 2 -mca btl_openib_if_include "mthca0:1" -H 192.168.2.92 IMB-EXT
root@192.168.2.92's password:
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 1
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 0
rdma_create_id2: line: 469 id created: 1
#---------------------------------------------------
# Intel MPI Benchmark Suite V3.2.2, MPI-2 part
#---------------------------------------------------
# Date : Thu Sep 8 11:01:24 2011
# Machine : x86_64
# System : Linux
# Release : 2.6.30
# Version : #2 SMP Wed Sep 7 13:53:29 IST 2011
# MPI Version : 2.1
# MPI Thread Environment: MPI_THREAD_SINGLE


# New default behavior from Version 3.2 on:

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time



# Calling sequence was:

# IMB-EXT

# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#

# List of Benchmarks to run:

# Window
# Unidir_Get
# Unidir_Put
# Bidir_Get
# Bidir_Put
# Accumulate
[localhost.localdomain:4797] *** An error occurred in MPI_Win_free
[localhost.localdomain:4797] *** on win
[localhost.localdomain:4797] *** MPI_ERR_RMA_SYNC: error while executing rma sync
[localhost.localdomain:4797] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 4798 on
node 192.168.2.92 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[localhost.localdomain:11359] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[localhost.localdomain:11359] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

May I know why I am getting these errors with these two tests?

Thanks,

Venkateswara Rao Dokku.
Dmitry_K_Intel2
Employee
Hi Venkateswara,

The first issue is probably related to a bug in IMB_window.c: MPI_Win_fence() should be called before MPI_Win_free(). This issue has been fixed, and the fix will be available in IMB 3.2.2.
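
For illustration, here is a minimal standalone sketch (not the actual IMB_window.c source) of the ordering in question: every process must close the RMA access epoch with MPI_Win_fence() before the window is freed, otherwise an implementation may report MPI_ERR_RMA_SYNC like the one in your output.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char *buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    buf = malloc(1024);
    MPI_Win_create(buf, 1024, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);   /* open the access epoch                     */
    /* ... one-sided MPI_Put / MPI_Get / MPI_Accumulate calls here ...    */
    MPI_Win_fence(0, win);   /* close the epoch BEFORE freeing the window */

    MPI_Win_free(&win);      /* freeing inside an open epoch is what can  */
                             /* trigger MPI_ERR_RMA_SYNC, as seen above   */
    free(buf);
    MPI_Finalize();
    return 0;
}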

The second issue is more likely an issue in the MPI library itself. It seems to me that you are using Open MPI - can you try the same test case with the Intel MPI Library (or any other implementation)?
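
For reference, a roughly equivalent launch with the Intel MPI Library would look like the lines below (the installation path is an assumption; adjust it to your system):

source /opt/intel/impi/<version>/bin64/mpivars.sh   # assumed install path
mpirun -n 2 -hosts 192.168.2.92 ./IMB-EXT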

Regards!
Dmitry

dvrao_584
Beginner
Hi Dmitry,

Thank you for the response. In your reply you mentioned two issues. For the first issue, you said the problem has been fixed and the fix is available in IMB 3.2.2, but that is the version I am using. So does my problem fall into the second category only, or are there other issues?

Thanks & Regards,

Venkateswara Rao Dokku
Dmitry_K_Intel2
Employee
Oops, sorry!
The first issue will be fixed in IMB 3.2.3.

Regards!
---Dmitry