Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

Problem with MPI_BUFFER_DETACH

ghillo_ptiscali_it
Hi everyone,
I'm experiencing a problem with the Intel implementation of MPI (version 4.0.0.027) used in conjunction with the Intel compiler (version 11.1.072) on a Red Hat Linux cluster. The code is written in Fortran 90.
The problem arises when I execute a buffered send (MPI_BSEND) from the master (rank 0) to all processes, including the master itself. This requires attaching a buffer of sufficient size (MPI_BUFFER_ATTACH) and then detaching it (MPI_BUFFER_DETACH) once the messages have been sent.
Unfortunately, the code hangs inside MPI_BUFFER_DETACH for no apparent reason.
When I run the same code with Open MPI (and the same Intel compiler), it executes correctly.

I know that in this case I could use other kinds of send/receive or, in many cases, a broadcast. Alternatively, I could exclude the master and send only to the other processes, but there are reasons for keeping the chosen structure.

The problem can be reproduced with the following Fortran code:

program main

  implicit none

  include 'mpif.h'

  integer :: i, send_buf, recv_buf, nprocs, rank, namelen, ierr
  character (len=MPI_MAX_PROCESSOR_NAME) :: name
  integer :: stat(MPI_STATUS_SIZE)
  integer :: bsend_size
  character, dimension(:), allocatable :: bsend_buffer

  call MPI_INIT(ierr)

  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_GET_PROCESSOR_NAME(name, namelen, ierr)

  if (rank.eq.0) then

    ! size the attach buffer for one buffered send per rank plus MPI_BSEND_OVERHEAD each
    bsend_size = nprocs*(1*MPI_INTEGER + MPI_BSEND_OVERHEAD)

    allocate(bsend_buffer(bsend_size))
    call MPI_BUFFER_ATTACH(bsend_buffer, bsend_size, ierr)

    ! the master sends one integer to every rank, including itself
    do i = 0, nprocs - 1
      send_buf = i*10
      call MPI_BSEND(send_buf, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, ierr)
    enddo

    ! the attached buffer always needs to be detached after use;
    ! this is the call that hangs with Intel MPI
    call MPI_BUFFER_DETACH(bsend_buffer, bsend_size, ierr)
    deallocate(bsend_buffer)

  end if

  call MPI_RECV(recv_buf, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, stat, ierr)

  print *, 'Hello world: rank ', rank, 'received', recv_buf

  call MPI_BARRIER(MPI_COMM_WORLD, ierr)

  call MPI_FINALIZE(ierr)

end

Is anyone aware of a known issue with this type of call?

Thank you!

Pietro Ghillani

2 Replies
TimP
Honored Contributor III
You would be more likely to get an expert answer on the HPC forum. I'm not totally surprised that you may not be able to release the buffer until MPI_RECV is complete. Open MPI seems to be more tolerant of such things.
ghillo_ptiscali_it
Thank you. I simply wasn't thinking about this aspect.
I think you're right: Intel MPI is less tolerant of this.
I tried detaching the buffer after the receive and everything worked fine.
So there is no need to post it again.
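For reference, a minimal sketch of that reordering (assuming the reproducer above, with bsend_buffer and bsend_size unchanged; the detach and deallocate simply move to after the MPI_RECV on rank 0):

  call MPI_RECV(recv_buf, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, stat, ierr)

  ! detach only after rank 0 has received its own buffered message,
  ! so the send-to-self no longer keeps the attached buffer busy
  if (rank.eq.0) then
    call MPI_BUFFER_DETACH(bsend_buffer, bsend_size, ierr)
    deallocate(bsend_buffer)
  end if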

Regards